diff --git a/spaces/109peko/DeepDanbooru_string/README.md b/spaces/109peko/DeepDanbooru_string/README.md deleted file mode 100644 index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/109peko/DeepDanbooru_string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Nintendo Switch Games Tips Tricks and FAQs.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Nintendo Switch Games Tips Tricks and FAQs.md deleted file mode 100644 index 63b3878781f82fb79f2094c4096ae7014e072816..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Nintendo Switch Games Tips Tricks and FAQs.md +++ /dev/null @@ -1,39 +0,0 @@ - -

How to Download Nintendo Switch Games: A Complete Guide

-

If you own a Nintendo Switch™ system, you might be wondering how to download games to enjoy on the go. Whether you want to play the latest releases, classics, or multiplayer titles, there are plenty of options for downloading Nintendo Switch games. In this article, we will explain how to download games from the My Nintendo Store, the Nintendo eShop, and other sources.

- -

Downloading Games from the My Nintendo Store

-

The My Nintendo Store is the official online store for Nintendo products. You can buy digital games here and download them directly to your Nintendo Switch system (no code required)! Plus, you can shop physical games, sales, new releases, and more.

-

download crack nintendo switch games


Downloadhttps://byltly.com/2uKvAp



-

To download games from the My Nintendo Store, you need to have a Nintendo Account and a Nintendo Switch Online membership. You can create a Nintendo Account for free on the Nintendo website. You can sign up for a Nintendo Switch Online membership on the Nintendo website or on your Nintendo Switch system. A Nintendo Switch Online membership gives you access to online play, cloud saves, exclusive offers, and more.

-

Once you have a Nintendo Account and a Nintendo Switch Online membership, you can browse and buy games on the My Nintendo Store website. You can filter games by genre, price, rating, and more. You can also see the best sellers, new releases, coming soon, and featured games. Some of the popular games you can download from the My Nintendo Store are:

- -

When you buy a digital game from the My Nintendo Store, you will receive an email confirmation with a download code. You can redeem this code on your Nintendo Switch system or on the Nintendo website. The game will start downloading automatically to your Nintendo Switch system. You can check the download progress on the HOME Menu or on the Nintendo website.

- -

Downloading Games from the Nintendo eShop

-

The Nintendo eShop is the digital storefront on your Nintendo Switch system. You can access it by selecting the orange shopping bag icon on the HOME Menu. You can also access it by scanning a QR Code® with your smart device.

-

To download games from the Nintendo eShop, you need to have a Nintendo Account and a stable internet connection. You can create a Nintendo Account for free on the Nintendo website. You don't need a Nintendo Switch Online membership to download games from the Nintendo eShop, but some games may require it for online features.

-

Once you have a Nintendo Account and an internet connection, you can browse and buy games on the Nintendo eShop. You can search for games by name, genre, price, rating, and more. You can also see featured games, current deals, best sellers, recent releases, and coming soon. Some of the free games you can download from the Nintendo eShop are:

- -

When you buy a digital game from the Nintendo eShop, you will receive an email confirmation with a receipt. The game will start downloading automatically to your Nintendo Switch system. You can

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA 22 Download Guide Everything You Need to Know.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA 22 Download Guide Everything You Need to Know.md deleted file mode 100644 index 2e4945a08802b0704e5559b65539c598714f4c26..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA 22 Download Guide Everything You Need to Know.md +++ /dev/null @@ -1,15 +0,0 @@ -
-

How to Download FIFA 22 on Your PC or Console

-

FIFA 22 is the latest installment of the popular soccer simulation game series by EA Sports. It features improved graphics, gameplay, and modes, as well as new features such as HyperMotion technology and Create a Club. If you are a fan of soccer games, you might be wondering how to download FIFA 22 on your PC or console. Here are the steps you need to follow:

-

how to download fifa 22 crack


Download Zip ⚹⚹⚹ https://byltly.com/2uKvCC



-
    -
  1. First, you need to purchase FIFA 22 from the official website or a trusted retailer. You can choose between the Standard Edition, the Ultimate Edition, or the Legacy Edition, depending on your preferences and budget. The Ultimate Edition includes some exclusive bonuses such as early access, FUT Heroes players, and more.
  2. -
  3. Next, you need to install FIFA 22 on your device. If you are using a PC, you need to download and install the EA Desktop app, which is the new platform for EA games. You can sign in with your EA account or create one if you don't have one. Then, you can find FIFA 22 in your library and click on the download button. The download size is about 50 GB, so make sure you have enough space and a stable internet connection.
  4. -
  5. If you are using a console, such as PlayStation or Xbox, you need to insert the FIFA 22 disc into your device or download it from the online store. You can also sign in with your EA account or create one if you don't have one. Then, you can launch FIFA 22 from your home screen and enjoy the game.
  6. -
-

That's it! You have successfully downloaded FIFA 22 on your PC or console. Now you can start playing and have fun with your favorite teams and players. You can also customize your experience with various settings and options, such as difficulty level, camera angle, commentary language, and more. You can also try out different modes, such as Career Mode, Volta Football, Pro Clubs, Ultimate Team, and more. FIFA 22 is a game that offers something for everyone, whether you are a casual player or a hardcore fan.

If you want to learn more about FIFA 22, you can visit the official website or follow the social media accounts of EA Sports. You can also watch some gameplay videos or reviews on YouTube or Twitch. You can also join the FIFA community and interact with other players and fans on forums, blogs, or Discord servers. You can share your opinions, tips, feedback, or screenshots with others and make new friends.

-

FIFA 22 is a game that aims to deliver the most realistic and immersive soccer experience ever. It uses advanced technology and innovation to capture the emotions and intensity of the sport. It also offers a variety of options and features to suit your preferences and style. Whether you want to play solo or with others, online or offline, casually or competitively, FIFA 22 has something for you. So what are you waiting for? Download FIFA 22 today and start your soccer journey!

-

One of the most popular modes in FIFA 22 is Ultimate Team, or FUT for short. In this mode, you can create your own dream team by collecting and trading players, kits, stadiums, and more. You can also compete in various tournaments and challenges to earn rewards and rank up. You can also customize your team with different formations, tactics, and styles. You can also play with your friends or against other players from around the world.

-

Another mode that you might enjoy is Volta Football, which is a street soccer mode that lets you play in different locations and settings. You can create your own avatar and customize their appearance, skills, and gear. You can also recruit other players to join your squad and play in various modes, such as Story Mode, Volta Arcade, Volta Squads, and more. You can also explore different cultures and styles of soccer and express yourself on the pitch.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Aryan Online Booster APK A Must-Have App for Online Entrepreneurs.md b/spaces/1phancelerku/anime-remove-background/Aryan Online Booster APK A Must-Have App for Online Entrepreneurs.md deleted file mode 100644 index ff7956ee3ac047cdfcd6abd1bd9a5dddba3bccd1..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Aryan Online Booster APK A Must-Have App for Online Entrepreneurs.md +++ /dev/null @@ -1,147 +0,0 @@ - -

Aryan Online Booster APK Download: What You Need to Know

-

Are you looking for a way to boost your online presence and reach more customers, followers, or fans? Do you want to increase your engagement, views, likes, comments, or shares on social media platforms like ShareChat, Instagram, Facebook, YouTube, or TikTok? If yes, then you might be interested in downloading and installing Aryan Online Booster APK on your Android device.

-

aryan online booster apk download


DOWNLOADhttps://jinyurl.com/2uNQi2



-

Aryan Online Booster is an app that claims to help you grow your online popularity and visibility by providing you with various tools and services. In this article, we will tell you what Aryan Online Booster is, what features and benefits it offers, how to download and install it on your device, how to use it, and whether it is safe and legal to use. Read on to find out more.

-

What is Aryan Online Booster?

-

Aryan Online Booster is an app that was developed by Aryan Online Store, a company that provides online shopping, delivery, and healthcare services in India. The app is designed to help users boost their online presence and performance on various social media platforms, such as ShareChat, Instagram, Facebook, YouTube, or TikTok.

-

The app claims to offer users various features and benefits that can help them increase their engagement, views, likes, comments, or shares on their posts or videos. Some of these features and benefits are:

-

Features of Aryan Online Booster

- -

Benefits of Aryan Online Booster

- -

How to Download and Install Aryan Online Booster APK?

-

If you are interested in downloading and installing Aryan Online Booster APK on your Android device, you will need to follow these steps:

-

Step 1: Enable Unknown Sources

-

Since Aryan Online Booster APK is not available on the Google Play Store or any other official app store, you will need to enable unknown sources on your device settings. This will allow you to install apps from third-party sources other than the official app store. To do this:

-
    -
  1. Go to your device settings and tap on Security or Privacy.
  2. -
  3. Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
  4. -
  5. A warning message will appear, telling you that installing apps from unknown sources can harm your device. Tap on OK or Allow to proceed.
  6. -
-

Step 2: Download the APK File

-

Next, you will need to download the APK file of Aryan Online Booster from a reliable and trustworthy source. You can use your browser or any other app to search for the APK file online. Make sure you download the latest version of the app and check the file size and name before downloading it. To do this:

-
    -
  1. Open your browser or any other app and search for Aryan Online Booster APK download.
  2. -
  3. Choose a reputable and secure website that offers the APK file for free. Avoid any website that asks for your personal information, payment, or registration.
  4. -
  5. Tap on the download button or link and wait for the APK file to be downloaded on your device.
  6. -
  7. You can check the progress of the download in your notification bar or download folder.
  8. -
-

Step 3: Install the APK File

-

Finally, you will need to install the APK file of Aryan Online Booster on your device. To do this:

-

aryan online booster sharechat apk
-aryan online booster app download
-aryan online booster latest version apk
-aryan online booster free download for android
-aryan online booster mod apk
-aryan online booster pro apk
-aryan online booster apk 2023
-aryan online booster apk pure
-aryan online booster apk mirror
-aryan online booster apk uptodown
-aryan online booster apk old version
-aryan online booster apk no ads
-aryan online booster apk cracked
-aryan online booster apk hack
-aryan online booster apk unlimited money
-aryan online booster apk rexdl
-aryan online booster apk revdl
-aryan online booster apk mob.org
-aryan online booster apk apkpure.com
-aryan online booster apk apkmirror.com
-ultra booster apk download for android
-ultra booster app free download
-ultra booster latest version 2023 apk
-ultra booster mod apk unlimited coins
-ultra booster pro apk no ads
-ultra booster apk pure download
-ultra booster apk mirror link
-ultra booster apk uptodown.com
-ultra booster apk old version 2022
-ultra booster apk cracked version
-ultra booster apk hack tool
-ultra booster apk unlimited gems
-ultra booster apk rexdl.com
-ultra booster apk revdl.com
-ultra booster apk mob.org.in
-ultra booster apk apkpure.co.id
-ultra booster apk apkmirror.co.uk
-system android booster apk download free
-system android booster app latest version 2023
-system android booster mod apk premium features unlocked
-system android booster pro apk no root required
-system android booster apk pure app store
-system android booster apk mirror site
-system android booster apk uptodown.net
-system android booster apk old version 2022
-system android booster apk cracked full
-system android booster apk hack mod
-system android booster apk unlimited ram
-system android booster apk rexdl.net
-system android booster apk revdl.net

-
    -
  1. Locate the downloaded APK file on your device. You can find it in your download folder or any other location where you saved it.
  2. -
  3. Tap on the APK file and a pop-up window will appear, asking you to confirm the installation. Tap on Install or Next to continue.
  4. -
  5. Wait for the installation process to complete. It may take a few seconds or minutes depending on your device and internet speed.
  6. -
  7. Once the installation is done, you can tap on Open or Done to launch or exit the app.
  8. -
-

How to Use Aryan Online Booster?

-

Now that you have downloaded and installed Aryan Online Booster APK on your device, you can start using it to boost your online presence and performance on various social media platforms. To use the app, you will need to follow these steps:

-

Step 1: Launch the App

-

First, you will need to launch the app on your device. You can find it in your app drawer or home screen. Tap on the app icon and wait for it to load.

-

Step 2: Select Your Category

-

Next, you will need to select your category of social media platform that you want to boost. You can choose from ShareChat, Instagram, Facebook, YouTube, or TikTok. Tap on the category that suits your needs and preferences.

-

Step 3: Boost Your Online Presence

-

Finally, you will need to boost your online presence and performance on your chosen platform. You can do this by using various tools and services that the app offers. For example, you can:

- -

Is Aryan Online Booster Safe and Legal?

-

Aryan Online Booster is an app that claims to help you boost your online presence and performance on various social media platforms. However, before you use it, you might be wondering if it is safe and legal to use. Here are some of the safety and legal issues that you should be aware of:

-

Safety and Privacy Issues

- -

Legal and Ethical Issues

- -

Conclusion

-

Aryan Online Booster is an app that claims to help you boost your online presence and performance on various social media platforms like ShareChat, Instagram, Facebook, YouTube, or TikTok. The app offers various features and benefits that can help you increase your engagement, views, likes, comments, or shares on your posts or videos. However, the app also has some safety and legal issues that you should be aware of before using it. The app is not available on the official app store and may contain harmful elements that can harm your device or data. The app may also violate the terms of service or rights of the social media platforms and their users and may be considered as cheating or unethical to use. Therefore, you should always be careful and cautious when downloading and installing Aryan Online Booster APK on your device and using it to boost your online presence and performance.

-

FAQs

-

Here are some of the frequently asked questions about Aryan Online Booster:

-

Q: Is Aryan Online Booster free to use?

-

A: Aryan Online Booster is free to download and install on your device. However, some of the features and services that the app offers may require payment. You can choose from different packages and plans that suit your needs and budget.

-

Q: Is Aryan Online Booster compatible with all Android devices?

-

A: Aryan Online Booster is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not support the app due to various reasons such as hardware limitations, software restrictions, or compatibility issues.

-

Q: Is Aryan Online Booster updated regularly?

-

A: Aryan Online Booster is updated regularly by its developers to fix bugs, improve performance, add new features, or support new platforms. However, since the app is not available on the official app store , you may not receive the latest updates automatically. You will need to check the website that provides the APK file for any new updates and download and install them manually.

-

Q: Is Aryan Online Booster reliable and effective?

-

A: Aryan Online Booster claims to be reliable and effective in boosting your online presence and performance on various social media platforms. However, the results may vary depending on various factors such as your device, internet connection, platform, content, audience, or competition. Therefore, you should not rely solely on the app and also work on creating high-quality and engaging content that can attract and retain your customers, followers, or fans.

-

Q: Is Aryan Online Booster the best app for boosting online presence and performance?

-

A: Aryan Online Booster is one of the many apps that offer similar services for boosting online presence and performance on various social media platforms. However, it may not be the best app for everyone as it has some drawbacks and limitations that we have discussed above. Therefore, you should always compare and contrast different apps and choose the one that meets your needs and preferences.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Descargar Dream League Soccer 2018 Hackeado APK y OBB Gua paso a paso.md b/spaces/1phancelerku/anime-remove-background/Descargar Dream League Soccer 2018 Hackeado APK y OBB Gua paso a paso.md deleted file mode 100644 index 8ceec5b6e294635a9373554f6ed32131c56afdb6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Descargar Dream League Soccer 2018 Hackeado APK y OBB Gua paso a paso.md +++ /dev/null @@ -1,110 +0,0 @@ - -

Descargar Dream League Soccer 2018 Hackeado APK y OBB

-

¿Te gustaría jugar al mejor juego de fútbol para Android e iOS con todos los jugadores reales, estadios personalizados y recursos ilimitados? Entonces no te pierdas este artículo, donde te vamos a enseñar cómo descargar e instalar Dream League Soccer 2018 hackeado apk y obb, una versión modificada del juego original que te permitirá disfrutar de todas las ventajas de jugar con dinero infinito, monedas ilimitadas y mucho más.

-

¿Qué es Dream League Soccer 2018?

-

Dream League Soccer 2018 es un juego de fútbol desarrollado por First Touch Games, una empresa británica que también ha creado otros juegos exitosos como Score! Hero. Se trata de un juego que combina la gestión de tu propio equipo de fútbol con la acción en el campo, donde podrás controlar a tus jugadores con un joystick virtual y botones en la pantalla. El juego tiene gráficos 3D, animaciones realistas y un equipo de comentaristas que narran los partidos. Además, el juego cuenta con la licencia FIFPro, lo que significa que podrás fichar a jugadores reales de todo el mundo para formar tu equipo soñado.

-

descargar dream league soccer 2018 hackeado apk y obb


Download ->->->-> https://jinyurl.com/2uNQ5v



-

Características del juego

-

Estas son algunas de las características más destacadas de Dream League Soccer 2018:

- -

Cómo descargar e instalar el juego hackeado

-

Para descargar e instalar Dream League Soccer 2018 hackeado apk y obb, solo tienes que seguir estos pasos:

-
    -
  1. Descarga el archivo XAPK desde este enlace:
  2. -
  3. Descarga e instala APKCombo Installer desde este enlace:
  4. -
  5. Abre la aplicación APKCombo Installer y toca Instalar.
  6. -
  7. Selecciona Dream League Soccer 2018.xapk y toca OK.
  8. -
  9. Sigue los pasos en la pantalla para completar la instalación

    ¿Por qué descargar Dream League Soccer 2018 Hackeado?

    -

    Aunque Dream League Soccer 2018 es un juego gratuito, tiene algunas limitaciones y desventajas que pueden afectar a tu experiencia de juego. Por ejemplo, necesitas monedas para fichar a los mejores jugadores, mejorar tu estadio, comprar kits y logos, y desbloquear otras funciones. Sin embargo, las monedas son escasas y difíciles de conseguir, y si quieres obtener más, tienes que pagar con dinero real o ver anuncios. Además, el juego tiene un sistema de energía que limita el número de partidos que puedes jugar seguidos, y que se recarga lentamente o con monedas. Por último, el juego puede resultar demasiado fácil o aburrido si no tienes un buen nivel de dificultad o variedad de modos de juego.

    -

    Por eso, muchas personas prefieren descargar Dream League Soccer 2018 hackeado apk y obb, una versión modificada del juego que elimina todas estas restricciones y te ofrece muchas ventajas adicionales. Veamos cuáles son.

    -

    Ventajas de jugar con el mod apk

    -

    El mod apk de Dream League Soccer 2018 es un archivo que sustituye al original y que contiene los siguientes beneficios:

    - -

    Cómo usar el obb file para obtener recursos ilimitados

    -

    El obb file de Dream League Soccer 2018 es un archivo que contiene los datos del juego, como los gráficos, los sonidos y los textos. Este archivo se guarda en la carpeta Android/obb/ en tu dispositivo. Si quieres obtener recursos ilimitados en el juego, como dinero, monedas y energía, tienes que reemplazar este archivo por uno modificado que contenga estos valores alterados. Para hacerlo, solo tienes que seguir estos pasos:

    -
      -
    1. Descarga el archivo obb modificado desde este enlace:
    2. -
    3. Copia el archivo obb modificado en la carpeta Android/obb/ en tu dispositivo, sobrescribiendo el original.
    4. -
    5. Abre el juego y disfruta de tus recursos ilimitados.
    6. -

    Consejos y trucos para jugar a Dream League Soccer 2018

    -

    Ahora que ya sabes cómo descargar e instalar Dream League Soccer 2018 hackeado apk y obb, es hora de que aprendas algunos consejos y trucos para mejorar tu juego y convertirte en el mejor entrenador y jugador del mundo. Estos son algunos de los consejos y trucos que te recomendamos:

    -

    -

    Cómo mejorar tu equipo y tus jugadores

    -

    Para tener un equipo competitivo y ganador, necesitas mejorar tanto tu plantilla como tus jugadores individualmente. Estas son algunas de las formas de hacerlo:

    - -

    Cómo ganar más partidos y torneos

    -

    Para ganar más partidos y torneos, necesitas dominar tanto la táctica como la técnica. Estas son algunas de las claves para lograrlo:

    - -

    Cómo hacer jugadas espectaculares con el rainbow kick

    -

    El rainbow kick es uno de los movimientos más espectaculares y efectivos que puedes hacer en Dream League Soccer 2018. Se trata de un regate en el que el jugador levanta el balón por encima de su cabeza y lo pasa por encima del defensor. Para hacerlo, solo tienes que seguir estos pasos:

    -
      -
    1. Corre hacia el defensor con el botón de sprint presionado.
    2. -
    3. Cuando estés cerca del defensor, desliza el dedo hacia arriba en la pantalla.
    4. -
    5. El jugador hará el rainbow kick y pasará el balón por encima del defensor.
    6. -
    7. Recupera el balón y sigue corriendo hacia la portería.
    8. -

    Conclusión

    -

    En este artículo, te hemos mostrado cómo descargar e instalar Dream League Soccer 2018 hackeado apk y obb, una versión modificada del juego original que te ofrece muchas ventajas y beneficios. Con este juego hackeado, podrás disfrutar de dinero infinito, monedas ilimitadas, energía infinita, todos los jugadores desbloqueados, todos los estadios desbloqueados, todos los kits y logos desbloqueados, sin anuncios y con un nivel de dificultad ajustable. Además, te hemos dado algunos consejos y trucos para mejorar tu equipo y tus jugadores, ganar más partidos y torneos, y hacer jugadas espectaculares con el rainbow kick.

    -

    Resumen de los puntos principales del artículo

    -

    Estos son los puntos principales que hemos tratado en el artículo:

    - -

    Llamada a la acción para descargar el juego hackeado

    -

    Si te ha gustado este artículo y quieres descargar e instalar Dream League Soccer 2018 hackeado apk y obb, no esperes más y haz clic en los enlaces que te hemos proporcionado. Así podrás disfrutar del mejor juego de fútbol para Android e iOS con todas las ventajas de jugar con dinero infinito, monedas ilimitadas y mucho más. ¡No te arrepentirás!

    -

    Preguntas frecuentes

    -

    A continuación, te respondemos a algunas de las preguntas más frecuentes que pueden surgirte sobre Dream League Soccer 2018 hackeado apk y obb:

    -

    ¿Es seguro descargar e instalar Dream League Soccer 2018 hackeado apk y obb?

    -

    Sí, es seguro. Los archivos que te hemos proporcionado son libres de virus, malware o cualquier otro tipo de amenaza. Además, no necesitas rootear ni jailbreakear tu dispositivo para usarlos.

    -

    ¿Es legal descargar e instalar Dream League Soccer 2018 hackeado apk y obb?

    -

    No es ilegal, pero tampoco es ético. Al descargar e instalar Dream League Soccer 2018 hackeado apk y obb estás violando los términos y condiciones del juego original. Por eso, te recomendamos que lo hagas bajo tu propia responsabilidad y que respetes a los desarrolladores del juego original.

    -

    ¿Puedo jugar online con Dream League Soccer 2018 hackeado apk y obb?

    -

    No, no puedes. El juego hackeado solo funciona en el modo offline. Si intentas jugar online con el juego hackeado, es posible que te baneen o que no puedas conectarte al servidor. Por eso, te recomendamos que solo juegues offline con el juego hackeado.

    -

    ¿Puedo actualizar Dream League Soccer 2018 hackeado apk y obb?

    -

    No, no puedes. El juego hackeado no se puede actualizar desde la tienda oficial ni desde ninguna otra fuente. Si intentas actualizar el juego hackeado, es posible que pierdas todos tus datos o que el juego deje de funcion ar. Por eso, te recomendamos que no actualices el juego hackeado.

    -

    ¿Qué otras versiones de Dream League Soccer existen?

    -

    Además de Dream League Soccer 2018, existen otras versiones de Dream League Soccer que puedes descargar e instalar en tu dispositivo. Estas son algunas de ellas:

    - -

    Estas versiones también se pueden descargar e instalar hackeadas, siguiendo el mismo proceso que te hemos explicado para Dream League Soccer 2018.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download 3 Patti Live APK and Play Indian Poker with Real Players.md b/spaces/1phancelerku/anime-remove-background/Download 3 Patti Live APK and Play Indian Poker with Real Players.md deleted file mode 100644 index e374d2d1e1b1b13c9a0346670ae518ebfbb10fae..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download 3 Patti Live APK and Play Indian Poker with Real Players.md +++ /dev/null @@ -1,125 +0,0 @@ -
    -

    3 Patti Live APK Download: How to Play and Win the Popular Indian Card Game

    -

    Are you a fan of card games and looking for a new challenge? If yes, then you should try 3 Patti Live, the online version of the famous Indian card game Teen Patti. 3 Patti Live is a thrilling and exciting game that combines skill, luck and strategy. You can play with real players from all over India and win real money.

    -

    3 patti live apk download


    DOWNLOAD ->->->-> https://jinyurl.com/2uNUii



    -

    In this article, we will tell you everything you need to know about 3 Patti Live APK download, how to play and win the game, and what are the best tips and tricks to master it. So, let's get started!

    -

    How to download and install 3 Patti Live APK on your Android device?

    -

    Downloading and installing 3 Patti Live APK on your Android device is very easy and simple. Just follow these steps:

    -
      -
    1. Go to this link and click on the "Download APK" button.
    2. -
    3. Once the download is complete, open the file and tap on "Install".
    4. -
    5. If you see a message that says "Install blocked", go to your device settings and enable "Unknown sources".
    6. -
    7. After the installation is done, launch the app and enjoy playing 3 Patti Live.
    8. -
    -

    How to register and create an account on 3 Patti Live?

    -

    Before you can start playing 3 Patti Live, you need to register and create an account on the app. Here's how:

    -
      -
    1. Open the app and tap on "Register".
    2. -
    3. Enter your mobile number and verify it with an OTP.
    4. -
    5. Create a username and password for your account.
    6. -
    7. Choose a preferred language and currency.
    8. -
    9. That's it! You are now ready to play 3 Patti Live.
    10. -
    -

    3 Patti Rules: Learn How to Play Teen Patti Card Game

    -

    Now that you have downloaded and installed 3 Patti Live APK on your device and created an account on the app, you need to learn how to play the game. Here are the basic rules of 3 Patti:

    - -

    The different variations of 3 Patti: Joker, Mufliss, King Little, etc.

    -

    One of the reasons why 3 Patti Live is so popular and fun is that it offers many different variations of the game that add more excitement and challenge. Here are some of the most common variations of 3 Patti:

    - -

    The tips and tricks for playing 3 Patti: studying opponents, bluffing wisely, managing chips, etc.

    -

    Playing 3 Patti Live is not only about luck, but also about skill and strategy. If you want to improve your chances of winning and become a pro player, you need to follow some tips and tricks that will help you play better. Here are some of them:

    - -

    3 Patti Strategies: How to Win Teen Patti Card Game

    -

    Besides following the tips and tricks mentioned above, you also need to apply some strategies that will help you win more games and money on 3 Patti Live. Here are some of the best strategies for playing 3 Patti:

    -

    3 patti live game download for android
    -3 patti live online play with real players
    -3 patti live casino apk free download
    -3 patti live mod apk unlimited chips
    -3 patti live hack apk download latest version
    -3 patti live app download for pc
    -3 patti live indian poker apk download
    -3 patti live flush card game download
    -3 patti live variations joker ak47 apk
    -3 patti live royal war apk download
    -3 patti live private table apk download
    -3 patti love - 3 patti apk download[^1^]
    -3 patti gold live with real players apk download
    -3 patti star live indian poker apk download
    -3 patti ultimate plus live apk download
    -3 patti superstar live teen patti apk download
    -3 patti power - live indian poker apk download
    -3 patti champion - live card game apk download
    -3 patti pro - live teenpatti flush poker apk download
    -3 patti king - live indian poker game apk download
    -3 patti master - live online card game apk download
    -3 patti legend - live teenpatti flush rummy apk download
    -3 patti diamond - live poker card game apk download
    -3 patti classic - live indian poker flush apk download
    -3 patti deluxe - live teenpatti card game apk download
    -3 patti express - live online poker game apk download
    -3 patti fantasy - live teenpatti rummy game apk download
    -3 patti frenzy - live indian poker flush game apk download
    -3 patti glory - live teenpatti card game apk download
    -3 patti grand - live online poker flush game apk download
    -3 patti joy - live teenpatti rummy game apk download
    -3 patti magic - live indian poker card game apk download
    -3 patti marvel - live teenpatti flush rummy game apk download
    -3 patti mega - live online poker card game apk download
    -3 patti miracle - live teenpatti rummy game apk download
    -3 patti platinum - live indian poker flush game apk download
    -3 patti premium - live teenpatti card game apk download
    -3 patti prime - live online poker flush game apk download
    -3 patti quest - live teenpatti rummy game apk download
    -3 patti royal - live indian poker card game apk download
    -3 patti silver - live teenpatti flush rummy game apk download
    -3 patti starlight - live online poker card game apk download
    -3 patti supreme - live teenpatti rummy game apk download
    -3 patti turbo - live indian poker flush game apk download
    -3 patti ultimate - live teenpatti card game apk download
    -3 patti wonder - live online poker flush game apk download
    -how to play and win in 3 patti live online casino games
    -best tips and tricks for playing and winning in the latest version of the popular Indian card game, Teen Pati Live

    - -

    Conclusion

    -

    3 Patti Live is a great way to enjoy the popular Indian card game Teen Patti online. You can download and install 3 Patti Live APK on your Android device easily and play with real players from all over India. You can also learn how to play and win the game by following the rules, variations, tips, tricks and strategies that we have shared in this article. So, what are you waiting for? Download 3 Patti Live APK today and start playing and winning!

    -

    FAQs

    -

    Here are some of the frequently asked questions about 3 Patti Live APK download:

    -
      -
    1. Q: Is 3 Patti Live APK safe and secure?
    2. -
    3. A: Yes, 3 Patti Live APK is safe and secure to download and install on your device. The app uses advanced encryption and security measures to protect your personal and financial information. You can also contact the customer support team anytime if you have any issues or queries.
    4. -
    5. Q: How can I deposit and withdraw money on 3 Patti Live?
    6. -
    7. A: You can deposit and withdraw money on 3 Patti Live using various methods such as credit cards, debit cards, net banking, UPI, Paytm, etc. The transactions are fast and hassle-free, and you can withdraw your winnings anytime you want.
    8. -
    9. Q: What are the bonuses and rewards on 3 Patti Live?
    10. -
    11. A: 3 Patti Live offers many bonuses and rewards for its players, such as welcome bonus, referral bonus, loyalty bonus, daily bonus, etc. You can also participate in various tournaments and events on the app and win big prizes.
    12. -
    13. Q: Can I play 3 Patti Live with my friends?
    14. -
    15. A: Yes, you can play 3 Patti Live with your friends by inviting them to join the app using your referral code. You can also create private tables on the app and play with your friends exclusively.
    16. -
    17. Q: Can I play 3 Patti Live offline?
    18. -
    19. A: No, you cannot play 3 Patti Live offline, as it is an online game that requires an internet connection. However, you can play 3 Patti Live with low data consumption and enjoy a smooth gaming experience.
    20. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1vash/demo-flask-docker-template/Dockerfile b/spaces/1vash/demo-flask-docker-template/Dockerfile deleted file mode 100644 index 82303454b623349d2001b3db04d1f2f2482a2f06..0000000000000000000000000000000000000000 --- a/spaces/1vash/demo-flask-docker-template/Dockerfile +++ /dev/null @@ -1,32 +0,0 @@ -# Use the official Python base image -FROM python:3.9 - -# Set the working directory in the container -WORKDIR /app - -# Copy the requirements.txt file and install the Python dependencies -COPY requirements.txt . -RUN pip install --no-cache-dir -r requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app - -# Expose the port on which the Flask application will run -EXPOSE 5000 - -# Set the environment variable for Flask -ENV FLASK_APP=api_server.py - -# Run the Flask application -CMD ["flask", "run", "--host=0.0.0.0"] diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/field.css b/spaces/AchyuthGamer/OpenGPT/client/css/field.css deleted file mode 100644 index 914425a75d9e62e6428bdb8f5de2c66c91f10d33..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/client/css/field.css +++ /dev/null @@ -1,11 +0,0 @@ -.field { - display: flex; - align-items: center; - padding: 4px; -} - -@media screen and (max-width: 990px) { - .field { - flex-wrap: nowrap; - } -} diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Bing.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Bing.py deleted file mode 100644 index f4275a5f54d23bedf2392aad143058c6245bbb00..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Bing.py +++ /dev/null @@ -1,300 +0,0 @@ -from __future__ import annotations - -import random -import uuid -import json -import os -import uuid -import urllib.parse -from aiohttp import ClientSession, ClientTimeout -from ..typing import AsyncGenerator -from .base_provider import AsyncGeneratorProvider - -class Tones(): - creative = "Creative" - balanced = "Balanced" - precise = "Precise" - -default_cookies = { - 'SRCHD' : 'AF=NOFORM', - 'PPLState' : '1', - 'KievRPSSecAuth': '', - 'SUID' : '', - 'SRCHUSR' : '', - 'SRCHHPGUSR' : '', -} - -class Bing(AsyncGeneratorProvider): - url = "https://bing.com/chat" - working = True - supports_gpt_4 = True - - @staticmethod - def create_async_generator( - model: str, - messages: list[dict[str, str]], - cookies: dict = None, - tone: str = Tones.creative, - **kwargs - ) -> AsyncGenerator: - if len(messages) < 2: - prompt = messages[0]["content"] - context = None - else: - prompt = messages[-1]["content"] - context = create_context(messages[:-1]) - - if not cookies or "SRCHD" not in cookies: - cookies = default_cookies - return stream_generate(prompt, tone, context, cookies) - -def create_context(messages: list[dict[str, str]]): - context = "".join(f"[{message['role']}](#message)\n{message['content']}\n\n" for message in messages) - - return context - -class Conversation(): - def __init__(self, conversationId: str, clientId: str, conversationSignature: str) -> None: - self.conversationId = conversationId - self.clientId = clientId - self.conversationSignature = conversationSignature - -async def create_conversation(session: ClientSession) -> Conversation: - url = 'https://www.bing.com/turing/conversation/create?bundleVersion=1.1150.3' - - async with await session.get(url) as response: - data = await response.json() - - conversationId = data.get('conversationId') - clientId = data.get('clientId') - conversationSignature = response.headers.get('X-Sydney-Encryptedconversationsignature') - - if not conversationId or not clientId or not conversationSignature: - raise Exception('Failed to create conversation.') - - return Conversation(conversationId, clientId, conversationSignature) - -async def list_conversations(session: ClientSession) -> list: - url = "https://www.bing.com/turing/conversation/chats" - async with session.get(url) as response: - response = await response.json() - return response["chats"] - -async def delete_conversation(session: ClientSession, conversation: Conversation) -> list: - url = "https://sydney.bing.com/sydney/DeleteSingleConversation" - json = { - "conversationId": conversation.conversationId, - "conversationSignature": conversation.conversationSignature, - "participant": {"id": conversation.clientId}, - "source": "cib", - "optionsSets": ["autosave"] - } - async with session.post(url, json=json) as response: - response = await response.json() - return response["result"]["value"] == "Success" - -class Defaults: - delimiter = "\x1e" - ip_address = f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}" - - allowedMessageTypes = [ - "Chat", - "Disengaged", - "AdsQuery", - "SemanticSerp", - "GenerateContentQuery", - "SearchQuery", - "ActionRequest", - "Context", - "Progress", - "AdsQuery", - "SemanticSerp", - ] - - sliceIds = [ - "winmuid3tf", - "osbsdusgreccf", - "ttstmout", - "crchatrev", - "winlongmsgtf", - "ctrlworkpay", - "norespwtf", - "tempcacheread", - "temptacache", - "505scss0", - "508jbcars0", - "515enbotdets0", - "5082tsports", - "515vaoprvs", - "424dagslnv1s0", - "kcimgattcf", - "427startpms0", - ] - - location = { - "locale": "en-US", - "market": "en-US", - "region": "US", - "locationHints": [ - { - "country": "United States", - "state": "California", - "city": "Los Angeles", - "timezoneoffset": 8, - "countryConfidence": 8, - "Center": {"Latitude": 34.0536909, "Longitude": -118.242766}, - "RegionType": 2, - "SourceType": 1, - } - ], - } - - headers = { - 'accept': '*/*', - 'accept-language': 'en-US,en;q=0.9', - 'cache-control': 'max-age=0', - 'sec-ch-ua': '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - 'sec-ch-ua-arch': '"x86"', - 'sec-ch-ua-bitness': '"64"', - 'sec-ch-ua-full-version': '"110.0.1587.69"', - 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-model': '""', - 'sec-ch-ua-platform': '"Windows"', - 'sec-ch-ua-platform-version': '"15.0.0"', - 'sec-fetch-dest': 'document', - 'sec-fetch-mode': 'navigate', - 'sec-fetch-site': 'none', - 'sec-fetch-user': '?1', - 'upgrade-insecure-requests': '1', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69', - 'x-edge-shopping-flag': '1', - 'x-forwarded-for': ip_address, - } - - optionsSets = [ - 'saharasugg', - 'enablenewsfc', - 'clgalileo', - 'gencontentv3', - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3precise" - "dtappid", - "cricinfo", - "cricinfov2", - "dv3sugg", - "nojbfedge" - ] - -def format_message(msg: dict) -> str: - return json.dumps(msg, ensure_ascii=False) + Defaults.delimiter - -def create_message(conversation: Conversation, prompt: str, tone: str, context: str=None) -> str: - request_id = str(uuid.uuid4()) - struct = { - 'arguments': [ - { - 'source': 'cib', - 'optionsSets': Defaults.optionsSets, - 'allowedMessageTypes': Defaults.allowedMessageTypes, - 'sliceIds': Defaults.sliceIds, - 'traceId': os.urandom(16).hex(), - 'isStartOfSession': True, - 'requestId': request_id, - 'message': Defaults.location | { - 'author': 'user', - 'inputMethod': 'Keyboard', - 'text': prompt, - 'messageType': 'Chat', - 'requestId': request_id, - 'messageId': request_id, - }, - 'tone': tone, - 'spokenTextMode': 'None', - 'conversationId': conversation.conversationId, - 'participant': { - 'id': conversation.clientId - }, - } - ], - 'invocationId': '1', - 'target': 'chat', - 'type': 4 - } - - if context: - struct['arguments'][0]['previousMessages'] = [{ - "author": "user", - "description": context, - "contextType": "WebPage", - "messageType": "Context", - "messageId": "discover-web--page-ping-mriduna-----" - }] - return format_message(struct) - -async def stream_generate( - prompt: str, - tone: str, - context: str=None, - cookies: dict=None, - ): - async with ClientSession( - timeout=ClientTimeout(total=900), - cookies=cookies, - headers=Defaults.headers, - ) as session: - conversation = await create_conversation(session) - try: - async with session.ws_connect( - f'wss://sydney.bing.com/sydney/ChatHub', - autoping=False, - params={'sec_access_token': conversation.conversationSignature} - ) as wss: - - await wss.send_str(format_message({'protocol': 'json', 'version': 1})) - await wss.receive(timeout=900) - await wss.send_str(create_message(conversation, prompt, tone, context)) - - response_txt = '' - returned_text = '' - final = False - - while not final: - msg = await wss.receive(timeout=900) - objects = msg.data.split(Defaults.delimiter) - for obj in objects: - if obj is None or not obj: - continue - - response = json.loads(obj) - if response.get('type') == 1 and response['arguments'][0].get('messages'): - message = response['arguments'][0]['messages'][0] - if (message['contentOrigin'] != 'Apology'): - if 'adaptiveCards' in message: - card = message['adaptiveCards'][0]['body'][0] - if "text" in card: - response_txt = card.get('text') - if message.get('messageType'): - inline_txt = card['inlines'][0].get('text') - response_txt += inline_txt + '\n' - elif message.get('contentType') == "IMAGE": - query = urllib.parse.quote(message.get('text')) - url = f"\nhttps://www.bing.com/images/create?q={query}" - response_txt += url - final = True - if response_txt.startswith(returned_text): - new = response_txt[len(returned_text):] - if new != "\n": - yield new - returned_text = response_txt - elif response.get('type') == 2: - result = response['item']['result'] - if result.get('error'): - raise Exception(f"{result['value']}: {result['message']}") - return - finally: - await delete_conversation(session, conversation) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/PerplexityAi.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/PerplexityAi.py deleted file mode 100644 index f4f7171219664c50e0c90e214276c9b226c16d17..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/PerplexityAi.py +++ /dev/null @@ -1,101 +0,0 @@ -from __future__ import annotations - -import json -import time -import base64 -from curl_cffi.requests import AsyncSession - -from ..base_provider import AsyncProvider, format_prompt, get_cookies - - -class PerplexityAi(AsyncProvider): - url = "https://www.perplexity.ai" - working = False - supports_gpt_35_turbo = True - _sources = [] - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - **kwargs - ) -> str: - url = cls.url + "/socket.io/?EIO=4&transport=polling" - headers = { - "Referer": f"{cls.url}/" - } - async with AsyncSession(headers=headers, proxies={"https": proxy}, impersonate="chrome107") as session: - url_session = "https://www.perplexity.ai/api/auth/session" - response = await session.get(url_session) - response.raise_for_status() - - url_session = "https://www.perplexity.ai/api/auth/session" - response = await session.get(url_session) - response.raise_for_status() - - response = await session.get(url, params={"t": timestamp()}) - response.raise_for_status() - sid = json.loads(response.text[1:])["sid"] - - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - - data = '40{"jwt":"anonymous-ask-user"}' - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - - data = "424" + json.dumps([ - "perplexity_ask", - format_prompt(messages), - { - "version":"2.1", - "source":"default", - "language":"en", - "timezone": time.tzname[0], - "search_focus":"internet", - "mode":"concise" - } - ]) - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - while True: - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - for line in response.text.splitlines(): - if line.startswith("434"): - result = json.loads(json.loads(line[3:])[0]["text"]) - - cls._sources = [{ - "title": source["name"], - "url": source["url"], - "snippet": source["snippet"] - } for source in result["web_results"]] - - return result["answer"] - - @classmethod - def get_sources(cls): - return cls._sources - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("proxy", "str"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" - - -def timestamp() -> str: - return base64.urlsafe_b64encode(int(time.time()-1407782612).to_bytes(4, 'big')).decode() \ No newline at end of file diff --git a/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan_light.py b/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan_light.py deleted file mode 100644 index 808c7f882cb75e2ba2340d5b55881d11927351f0..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan_light.py +++ /dev/null @@ -1,651 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - - wd2 = wd2/4 - wd = wd/4 - - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random()) - img = ndimage.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(80, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] # nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - # elif i == 1: - # image = add_blur(image, sf=sf) - - if i == 0: - pass - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.8: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] # nearest downsampling - - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - # - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - if up: - image = cv2.resize(image, (w1, h1), interpolation=cv2.INTER_CUBIC) # todo: random, as above? want to condition on it then - example = {"image": image} - return example - - - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_hq = img - img_lq = deg_fn(img)["image"] - img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), - (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/CreatePerspectiveCardMesh.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/CreatePerspectiveCardMesh.js deleted file mode 100644 index 4e3d5cb9de0f648601878efa18601106b8154f69..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/CreatePerspectiveCardMesh.js +++ /dev/null @@ -1,39 +0,0 @@ -import { PerspectiveCard } from '../../../plugins/perspectiveimage.js'; -import Clone from '../../../plugins/utils/object/Clone.js'; - -const GetValue = Phaser.Utils.Objects.GetValue; - -var CreatePerspectiveCardMesh = function (config) { - var scene = this.scene; - - this.setSnapshotPadding(GetValue(config, 'snapshotPadding', 0)); - - config = Clone(config); - // Remove size config - delete config.width; - delete config.height; - // Initial size of render-texture is 1x1 - config.front = { width: 1, height: 1 }; - config.back = { width: 1, height: 1 }; - // Create PerspectiveCard as card-behavior - var card = new PerspectiveCard(scene, config); - scene.add.existing(card); - - var flip = card.flip; - if (flip) { - var parent = this; - flip - .on('start', function () { - // Before flipping - parent.enterPerspectiveMode(); - }) - .on('complete', function () { - // After flipping - parent.exitPerspectiveMode(); - }) - } - - return card; -} - -export default CreatePerspectiveCardMesh; \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_retinanet_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_retinanet_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index b97b6720f0522ee19e3f8353bf490b74a5835308..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_retinanet_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = '../retinanet/retinanet_x101_32x4d_fpn_1x_coco.py' - -model = dict( - bbox_head=dict( - type='PISARetinaHead', - loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)), - train_cfg=dict(isr=dict(k=2., bias=0.), carl=dict(k=1., bias=0.2))) diff --git a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/get_flops.py b/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/get_flops.py deleted file mode 100644 index e3cfe8e826fb39de2eec3be0ccbc1ae2a9b3e965..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/get_flops.py +++ /dev/null @@ -1,81 +0,0 @@ -import argparse - -import torch -from mmcv import Config, DictAction - -from mmdet.models import build_detector - -try: - from mmcv.cnn import get_model_complexity_info -except ImportError: - raise ImportError('Please upgrade mmcv to >0.6.2') - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[1280, 800], - help='input image size') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - args = parser.parse_args() - return args - - -def main(): - - args = parse_args() - - if len(args.shape) == 1: - input_shape = (3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = (3, ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - # import modules from string list. - if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - if torch.cuda.is_available(): - model.cuda() - model.eval() - - if hasattr(model, 'forward_dummy'): - model.forward = model.forward_dummy - else: - raise NotImplementedError( - 'FLOPs counter is currently not currently supported with {}'. - format(model.__class__.__name__)) - - flops, params = get_model_complexity_info(model, input_shape) - split_line = '=' * 30 - print(f'{split_line}\nInput shape: {input_shape}\n' - f'Flops: {flops}\nParams: {params}\n{split_line}') - print('!!!Please be cautious if you use the results in papers. ' - 'You may need to check if all ops are supported and verify that the ' - 'flops computation is correct.') - - -if __name__ == '__main__': - main() diff --git a/spaces/Anustup/NS_AI_LABS/src/utils.py b/spaces/Anustup/NS_AI_LABS/src/utils.py deleted file mode 100644 index b85a7f3ff5c2e3e94823f4e1bf181e54edb1ddf9..0000000000000000000000000000000000000000 --- a/spaces/Anustup/NS_AI_LABS/src/utils.py +++ /dev/null @@ -1,115 +0,0 @@ -import textwrap -import unicodedata -import re - -import zlib -from typing import Iterator, TextIO - - -def exact_div(x, y): - assert x % y == 0 - return x // y - - -def str2bool(string): - str2val = {"True": True, "False": False} - if string in str2val: - return str2val[string] - else: - raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}") - - -def optional_int(string): - return None if string == "None" else int(string) - - -def optional_float(string): - return None if string == "None" else float(string) - - -def compression_ratio(text) -> float: - return len(text) / len(zlib.compress(text.encode("utf-8"))) - - -def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'): - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}" - - -def write_txt(transcript: Iterator[dict], file: TextIO): - for segment in transcript: - print(segment['text'].strip(), file=file, flush=True) - - -def write_vtt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - print("WEBVTT\n", file=file) - for segment in transcript: - text = process_text(segment['text'], maxLineWidth).replace('-->', '->') - - print( - f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - f"{text}\n", - file=file, - flush=True, - ) - - -def write_srt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - """ - Write a transcript to a file in SRT format. - Example usage: - from pathlib import Path - from whisper.utils import write_srt - result = transcribe(model, audio_path, temperature=temperature, **args) - # save SRT - audio_basename = Path(audio_path).stem - with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt: - write_srt(result["segments"], file=srt) - """ - for i, segment in enumerate(transcript, start=1): - text = process_text(segment['text'].strip(), maxLineWidth).replace('-->', '->') - - # write srt lines - print( - f"{i}\n" - f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> " - f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n" - f"{text}\n", - file=file, - flush=True, - ) - -def process_text(text: str, maxLineWidth=None): - if (maxLineWidth is None or maxLineWidth < 0): - return text - - lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4) - return '\n'.join(lines) - -def slugify(value, allow_unicode=False): - """ - Taken from https://github.com/django/django/blob/master/django/utils/text.py - Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated - dashes to single dashes. Remove characters that aren't alphanumerics, - underscores, or hyphens. Convert to lowercase. Also strip leading and - trailing whitespace, dashes, and underscores. - """ - value = str(value) - if allow_unicode: - value = unicodedata.normalize('NFKC', value) - else: - value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') - value = re.sub(r'[^\w\s-]', '', value.lower()) - return re.sub(r'[-\s]+', '-', value).strip('-_') \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/direct_url_helpers.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/direct_url_helpers.py deleted file mode 100644 index 0e8e5e1608b911e789a3d346ebe48aa7cc54b79e..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/direct_url_helpers.py +++ /dev/null @@ -1,87 +0,0 @@ -from typing import Optional - -from pip._internal.models.direct_url import ArchiveInfo, DirectUrl, DirInfo, VcsInfo -from pip._internal.models.link import Link -from pip._internal.utils.urls import path_to_url -from pip._internal.vcs import vcs - - -def direct_url_as_pep440_direct_reference(direct_url: DirectUrl, name: str) -> str: - """Convert a DirectUrl to a pip requirement string.""" - direct_url.validate() # if invalid, this is a pip bug - requirement = name + " @ " - fragments = [] - if isinstance(direct_url.info, VcsInfo): - requirement += "{}+{}@{}".format( - direct_url.info.vcs, direct_url.url, direct_url.info.commit_id - ) - elif isinstance(direct_url.info, ArchiveInfo): - requirement += direct_url.url - if direct_url.info.hash: - fragments.append(direct_url.info.hash) - else: - assert isinstance(direct_url.info, DirInfo) - requirement += direct_url.url - if direct_url.subdirectory: - fragments.append("subdirectory=" + direct_url.subdirectory) - if fragments: - requirement += "#" + "&".join(fragments) - return requirement - - -def direct_url_for_editable(source_dir: str) -> DirectUrl: - return DirectUrl( - url=path_to_url(source_dir), - info=DirInfo(editable=True), - ) - - -def direct_url_from_link( - link: Link, source_dir: Optional[str] = None, link_is_in_wheel_cache: bool = False -) -> DirectUrl: - if link.is_vcs: - vcs_backend = vcs.get_backend_for_scheme(link.scheme) - assert vcs_backend - url, requested_revision, _ = vcs_backend.get_url_rev_and_auth( - link.url_without_fragment - ) - # For VCS links, we need to find out and add commit_id. - if link_is_in_wheel_cache: - # If the requested VCS link corresponds to a cached - # wheel, it means the requested revision was an - # immutable commit hash, otherwise it would not have - # been cached. In that case we don't have a source_dir - # with the VCS checkout. - assert requested_revision - commit_id = requested_revision - else: - # If the wheel was not in cache, it means we have - # had to checkout from VCS to build and we have a source_dir - # which we can inspect to find out the commit id. - assert source_dir - commit_id = vcs_backend.get_revision(source_dir) - return DirectUrl( - url=url, - info=VcsInfo( - vcs=vcs_backend.name, - commit_id=commit_id, - requested_revision=requested_revision, - ), - subdirectory=link.subdirectory_fragment, - ) - elif link.is_existing_dir(): - return DirectUrl( - url=link.url_without_fragment, - info=DirInfo(), - subdirectory=link.subdirectory_fragment, - ) - else: - hash = None - hash_name = link.hash_name - if hash_name: - hash = f"{hash_name}={link.hash}" - return DirectUrl( - url=link.url_without_fragment, - info=ArchiveInfo(hash=hash), - subdirectory=link.subdirectory_fragment, - ) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnety_4gf_dds_fpn_1x.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnety_4gf_dds_fpn_1x.py deleted file mode 100644 index 72c6b7a5c8939970bd0e1e4a3c1155695943b19a..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnety_4gf_dds_fpn_1x.py +++ /dev/null @@ -1,35 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from ..common.models.mask_rcnn_fpn import model -from ..common.train import train - -from detectron2.config import LazyCall as L -from detectron2.modeling.backbone import RegNet -from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock - - -# Replace default ResNet with RegNetY-4GF from the DDS paper. Config source: -# https://github.com/facebookresearch/pycls/blob/2c152a6e5d913e898cca4f0a758f41e6b976714d/configs/dds_baselines/regnety/RegNetY-4.0GF_dds_8gpu.yaml#L4-L10 # noqa -model.backbone.bottom_up = L(RegNet)( - stem_class=SimpleStem, - stem_width=32, - block_class=ResBottleneckBlock, - depth=22, - w_a=31.41, - w_0=96, - w_m=2.24, - group_width=64, - se_ratio=0.25, - freeze_at=2, - norm="FrozenBN", - out_features=["s1", "s2", "s3", "s4"], -) -model.pixel_std = [57.375, 57.120, 58.395] - -optimizer.weight_decay = 5e-5 -train.init_checkpoint = ( - "https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906838/RegNetY-4.0GF_dds_8gpu.pyth" -) -# RegNets benefit from enabling cudnn benchmark mode -train.cudnn_benchmark = True diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/shared.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/shared.py deleted file mode 100644 index 2d0f7bf3999064a68f28a1207d65a2de7ae98c0a..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/shared.py +++ /dev/null @@ -1,1034 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import collections -import contextlib -import copy -import functools -import logging -import numpy as np -import os -from typing import Any, Callable, Dict, List, Optional, Tuple, Union -from unittest import mock -import caffe2.python.utils as putils -import torch -import torch.nn.functional as F -from caffe2.proto import caffe2_pb2 -from caffe2.python import core, net_drawer, workspace -from torch.nn.functional import interpolate as interp - -logger = logging.getLogger(__name__) - - -# ==== torch/utils_toffee/cast.py ======================================= - - -def to_device(t, device_str): - """ - This function is a replacement of .to(another_device) such that it allows the - casting to be traced properly by explicitly calling the underlying copy ops. - It also avoids introducing unncessary op when casting to the same device. - """ - src = t.device - dst = torch.device(device_str) - - if src == dst: - return t - elif src.type == "cuda" and dst.type == "cpu": - return torch.ops._caffe2.CopyGPUToCPU(t) - elif src.type == "cpu" and dst.type == "cuda": - return torch.ops._caffe2.CopyCPUToGPU(t) - else: - raise RuntimeError("Can't cast tensor from device {} to device {}".format(src, dst)) - - -# ==== torch/utils_toffee/interpolate.py ======================================= - - -# Note: borrowed from vision/detection/fair/detectron/detectron/modeling/detector.py -def BilinearInterpolation(tensor_in, up_scale): - assert up_scale % 2 == 0, "Scale should be even" - - def upsample_filt(size): - factor = (size + 1) // 2 - if size % 2 == 1: - center = factor - 1 - else: - center = factor - 0.5 - - og = np.ogrid[:size, :size] - return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor) - - kernel_size = int(up_scale) * 2 - bil_filt = upsample_filt(kernel_size) - - dim = int(tensor_in.shape[1]) - kernel = np.zeros((dim, dim, kernel_size, kernel_size), dtype=np.float32) - kernel[range(dim), range(dim), :, :] = bil_filt - - tensor_out = F.conv_transpose2d( - tensor_in, - weight=to_device(torch.Tensor(kernel), tensor_in.device), - bias=None, - stride=int(up_scale), - padding=int(up_scale / 2), - ) - - return tensor_out - - -# NOTE: ONNX is incompatible with traced torch.nn.functional.interpolate if -# using dynamic `scale_factor` rather than static `size`. (T43166860) -# NOTE: Caffe2 Int8 conversion might not be able to quantize `size` properly. -def onnx_compatibale_interpolate( - input, size=None, scale_factor=None, mode="nearest", align_corners=None -): - # NOTE: The input dimensions are interpreted in the form: - # `mini-batch x channels x [optional depth] x [optional height] x width`. - if size is None and scale_factor is not None: - if input.dim() == 4: - if isinstance(scale_factor, (int, float)): - height_scale, width_scale = (scale_factor, scale_factor) - else: - assert isinstance(scale_factor, (tuple, list)) - assert len(scale_factor) == 2 - height_scale, width_scale = scale_factor - - assert not align_corners, "No matching C2 op for align_corners == True" - if mode == "nearest": - return torch.ops._caffe2.ResizeNearest( - input, order="NCHW", width_scale=width_scale, height_scale=height_scale - ) - elif mode == "bilinear": - logger.warning( - "Use F.conv_transpose2d for bilinear interpolate" - " because there's no such C2 op, this may cause significant" - " slowdown and the boundary pixels won't be as same as" - " using F.interpolate due to padding." - ) - assert height_scale == width_scale - return BilinearInterpolation(input, up_scale=height_scale) - logger.warning("Output size is not static, it might cause ONNX conversion issue") - - return interp(input, size, scale_factor, mode, align_corners) - - -@contextlib.contextmanager -def mock_torch_nn_functional_interpolate(): - if torch.onnx.is_in_onnx_export(): - with mock.patch( - "torch.nn.functional.interpolate", side_effect=onnx_compatibale_interpolate - ): - yield - else: - yield - - -# ==== torch/utils_caffe2/ws_utils.py ========================================== - - -class ScopedWS(object): - def __init__(self, ws_name, is_reset, is_cleanup=False): - self.ws_name = ws_name - self.is_reset = is_reset - self.is_cleanup = is_cleanup - self.org_ws = "" - - def __enter__(self): - self.org_ws = workspace.CurrentWorkspace() - if self.ws_name is not None: - workspace.SwitchWorkspace(self.ws_name, True) - if self.is_reset: - workspace.ResetWorkspace() - - return workspace - - def __exit__(self, *args): - if self.is_cleanup: - workspace.ResetWorkspace() - if self.ws_name is not None: - workspace.SwitchWorkspace(self.org_ws) - - -def fetch_any_blob(name): - bb = None - try: - bb = workspace.FetchBlob(name) - except TypeError: - bb = workspace.FetchInt8Blob(name) - except Exception as e: - logger.error("Get blob {} error: {}".format(name, e)) - - return bb - - -# ==== torch/utils_caffe2/protobuf.py ========================================== - - -def get_pb_arg(pb, arg_name): - for x in pb.arg: - if x.name == arg_name: - return x - return None - - -def get_pb_arg_valf(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return arg.f if arg is not None else default_val - - -def get_pb_arg_floats(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return list(map(float, arg.floats)) if arg is not None else default_val - - -def get_pb_arg_ints(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return list(map(int, arg.ints)) if arg is not None else default_val - - -def get_pb_arg_vali(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return arg.i if arg is not None else default_val - - -def get_pb_arg_vals(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return arg.s if arg is not None else default_val - - -def get_pb_arg_valstrings(pb, arg_name, default_val): - arg = get_pb_arg(pb, arg_name) - return list(arg.strings) if arg is not None else default_val - - -def check_set_pb_arg(pb, arg_name, arg_attr, arg_value, allow_override=False): - arg = get_pb_arg(pb, arg_name) - if arg is None: - arg = putils.MakeArgument(arg_name, arg_value) - assert hasattr(arg, arg_attr) - pb.arg.extend([arg]) - if allow_override and getattr(arg, arg_attr) != arg_value: - logger.warning( - "Override argument {}: {} -> {}".format(arg_name, getattr(arg, arg_attr), arg_value) - ) - setattr(arg, arg_attr, arg_value) - else: - assert arg is not None - assert getattr(arg, arg_attr) == arg_value, "Existing value {}, new value {}".format( - getattr(arg, arg_attr), arg_value - ) - - -def _create_const_fill_op_from_numpy(name, tensor, device_option=None): - assert type(tensor) == np.ndarray - kTypeNameMapper = { - np.dtype("float32"): "GivenTensorFill", - np.dtype("int32"): "GivenTensorIntFill", - np.dtype("int64"): "GivenTensorInt64Fill", - np.dtype("uint8"): "GivenTensorStringFill", - } - - args_dict = {} - if tensor.dtype == np.dtype("uint8"): - args_dict.update({"values": [str(tensor.data)], "shape": [1]}) - else: - args_dict.update({"values": tensor, "shape": tensor.shape}) - - if device_option is not None: - args_dict["device_option"] = device_option - - return core.CreateOperator(kTypeNameMapper[tensor.dtype], [], [name], **args_dict) - - -def _create_const_fill_op_from_c2_int8_tensor(name, int8_tensor): - assert type(int8_tensor) == workspace.Int8Tensor - kTypeNameMapper = { - np.dtype("int32"): "Int8GivenIntTensorFill", - np.dtype("uint8"): "Int8GivenTensorFill", - } - - tensor = int8_tensor.data - assert tensor.dtype in [np.dtype("uint8"), np.dtype("int32")] - values = tensor.tobytes() if tensor.dtype == np.dtype("uint8") else tensor - - return core.CreateOperator( - kTypeNameMapper[tensor.dtype], - [], - [name], - values=values, - shape=tensor.shape, - Y_scale=int8_tensor.scale, - Y_zero_point=int8_tensor.zero_point, - ) - - -def create_const_fill_op( - name: str, - blob: Union[np.ndarray, workspace.Int8Tensor], - device_option: Optional[caffe2_pb2.DeviceOption] = None, -) -> caffe2_pb2.OperatorDef: - """ - Given a blob object, return the Caffe2 operator that creates this blob - as constant. Currently support NumPy tensor and Caffe2 Int8Tensor. - """ - - tensor_type = type(blob) - assert tensor_type in [ - np.ndarray, - workspace.Int8Tensor, - ], 'Error when creating const fill op for "{}", unsupported blob type: {}'.format( - name, type(blob) - ) - - if tensor_type == np.ndarray: - return _create_const_fill_op_from_numpy(name, blob, device_option) - elif tensor_type == workspace.Int8Tensor: - assert device_option is None - return _create_const_fill_op_from_c2_int8_tensor(name, blob) - - -def construct_init_net_from_params( - params: Dict[str, Any], device_options: Optional[Dict[str, caffe2_pb2.DeviceOption]] = None -) -> caffe2_pb2.NetDef: - """ - Construct the init_net from params dictionary - """ - init_net = caffe2_pb2.NetDef() - device_options = device_options or {} - for name, blob in params.items(): - if isinstance(blob, str): - logger.warning( - ( - "Blob {} with type {} is not supported in generating init net," - " skipped.".format(name, type(blob)) - ) - ) - continue - init_net.op.extend( - [create_const_fill_op(name, blob, device_option=device_options.get(name, None))] - ) - init_net.external_output.append(name) - return init_net - - -def get_producer_map(ssa): - """ - Return dict from versioned blob to (i, j), - where i is index of producer op, j is the index of output of that op. - """ - producer_map = {} - for i in range(len(ssa)): - outputs = ssa[i][1] - for j, outp in enumerate(outputs): - producer_map[outp] = (i, j) - return producer_map - - -def get_consumer_map(ssa): - """ - Return dict from versioned blob to list of (i, j), - where i is index of consumer op, j is the index of input of that op. - """ - consumer_map = collections.defaultdict(list) - for i in range(len(ssa)): - inputs = ssa[i][0] - for j, inp in enumerate(inputs): - consumer_map[inp].append((i, j)) - return consumer_map - - -def get_params_from_init_net( - init_net: caffe2_pb2.NetDef, -) -> [Dict[str, Any], Dict[str, caffe2_pb2.DeviceOption]]: - """ - Take the output blobs from init_net by running it. - Outputs: - params: dict from blob name to numpy array - device_options: dict from blob name to the device option of its creating op - """ - # NOTE: this assumes that the params is determined by producer op with the - # only exception be CopyGPUToCPU which is CUDA op but returns CPU tensor. - def _get_device_option(producer_op): - if producer_op.type == "CopyGPUToCPU": - return caffe2_pb2.DeviceOption() - else: - return producer_op.device_option - - with ScopedWS("__get_params_from_init_net__", is_reset=True, is_cleanup=True) as ws: - ws.RunNetOnce(init_net) - params = {b: fetch_any_blob(b) for b in init_net.external_output} - ssa, versions = core.get_ssa(init_net) - producer_map = get_producer_map(ssa) - device_options = { - b: _get_device_option(init_net.op[producer_map[(b, versions[b])][0]]) - for b in init_net.external_output - } - return params, device_options - - -def _updater_raise(op, input_types, output_types): - raise RuntimeError( - "Failed to apply updater for op {} given input_types {} and" - " output_types {}".format(op, input_types, output_types) - ) - - -def _generic_status_identifier( - predict_net: caffe2_pb2.NetDef, - status_updater: Callable, - known_status: Dict[Tuple[str, int], Any], -) -> Dict[Tuple[str, int], Any]: - """ - Statically infer the status of each blob, the status can be such as device type - (CPU/GPU), layout (NCHW/NHWC), data type (float32/int8), etc. "Blob" here - is versioned blob (Tuple[str, int]) in the format compatible with ssa. - Inputs: - predict_net: the caffe2 network - status_updater: a callable, given an op and the status of its input/output, - it returns the updated status of input/output. `None` is used for - representing unknown status. - known_status: a dict containing known status, used as initialization. - Outputs: - A dict mapping from versioned blob to its status - """ - ssa, versions = core.get_ssa(predict_net) - versioned_ext_input = [(b, 0) for b in predict_net.external_input] - versioned_ext_output = [(b, versions[b]) for b in predict_net.external_output] - all_versioned_blobs = set().union(*[set(x[0] + x[1]) for x in ssa]) - - allowed_vbs = all_versioned_blobs.union(versioned_ext_input).union(versioned_ext_output) - assert all(k in allowed_vbs for k in known_status) - assert all(v is not None for v in known_status.values()) - _known_status = copy.deepcopy(known_status) - - def _check_and_update(key, value): - assert value is not None - if key in _known_status: - if not _known_status[key] == value: - raise RuntimeError( - "Confilict status for {}, existing status {}, new status {}".format( - key, _known_status[key], value - ) - ) - _known_status[key] = value - - def _update_i(op, ssa_i): - versioned_inputs = ssa_i[0] - versioned_outputs = ssa_i[1] - - inputs_status = [_known_status.get(b, None) for b in versioned_inputs] - outputs_status = [_known_status.get(b, None) for b in versioned_outputs] - - new_inputs_status, new_outputs_status = status_updater(op, inputs_status, outputs_status) - - for versioned_blob, status in zip( - versioned_inputs + versioned_outputs, new_inputs_status + new_outputs_status - ): - if status is not None: - _check_and_update(versioned_blob, status) - - for op, ssa_i in zip(predict_net.op, ssa): - _update_i(op, ssa_i) - for op, ssa_i in zip(reversed(predict_net.op), reversed(ssa)): - _update_i(op, ssa_i) - - # NOTE: This strictly checks all the blob from predict_net must be assgined - # a known status. However sometimes it's impossible (eg. having deadend op), - # we may relax this constraint if - for k in all_versioned_blobs: - if k not in _known_status: - raise NotImplementedError( - "Can not infer the status for {}. Currently only support the case where" - " a single forward and backward pass can identify status for all blobs.".format(k) - ) - - return _known_status - - -def infer_device_type( - predict_net: caffe2_pb2.NetDef, - known_status: Dict[Tuple[str, int], Any], - device_name_style: str = "caffe2", -) -> Dict[Tuple[str, int], str]: - """Return the device type ("cpu" or "gpu"/"cuda") of each (versioned) blob""" - - assert device_name_style in ["caffe2", "pytorch"] - _CPU_STR = "cpu" - _GPU_STR = "gpu" if device_name_style == "caffe2" else "cuda" - - def _copy_cpu_to_gpu_updater(op, input_types, output_types): - if input_types[0] == _GPU_STR or output_types[0] == _CPU_STR: - _updater_raise(op, input_types, output_types) - return ([_CPU_STR], [_GPU_STR]) - - def _copy_gpu_to_cpu_updater(op, input_types, output_types): - if input_types[0] == _CPU_STR or output_types[0] == _GPU_STR: - _updater_raise(op, input_types, output_types) - return ([_GPU_STR], [_CPU_STR]) - - def _other_ops_updater(op, input_types, output_types): - non_none_types = [x for x in input_types + output_types if x is not None] - if len(non_none_types) > 0: - the_type = non_none_types[0] - if not all(x == the_type for x in non_none_types): - _updater_raise(op, input_types, output_types) - else: - the_type = None - return ([the_type for _ in op.input], [the_type for _ in op.output]) - - def _device_updater(op, *args, **kwargs): - return { - "CopyCPUToGPU": _copy_cpu_to_gpu_updater, - "CopyGPUToCPU": _copy_gpu_to_cpu_updater, - }.get(op.type, _other_ops_updater)(op, *args, **kwargs) - - return _generic_status_identifier(predict_net, _device_updater, known_status) - - -# ==== torch/utils_caffe2/vis.py =============================================== - - -def _modify_blob_names(ops, blob_rename_f): - ret = [] - - def _replace_list(blob_list, replaced_list): - del blob_list[:] - blob_list.extend(replaced_list) - - for x in ops: - cur = copy.deepcopy(x) - _replace_list(cur.input, list(map(blob_rename_f, cur.input))) - _replace_list(cur.output, list(map(blob_rename_f, cur.output))) - ret.append(cur) - - return ret - - -def _rename_blob(name, blob_sizes, blob_ranges): - def _list_to_str(bsize): - ret = ", ".join([str(x) for x in bsize]) - ret = "[" + ret + "]" - return ret - - ret = name - if blob_sizes is not None and name in blob_sizes: - ret += "\n" + _list_to_str(blob_sizes[name]) - if blob_ranges is not None and name in blob_ranges: - ret += "\n" + _list_to_str(blob_ranges[name]) - - return ret - - -# graph_name could not contain word 'graph' -def save_graph(net, file_name, graph_name="net", op_only=True, blob_sizes=None, blob_ranges=None): - blob_rename_f = functools.partial(_rename_blob, blob_sizes=blob_sizes, blob_ranges=blob_ranges) - return save_graph_base(net, file_name, graph_name, op_only, blob_rename_f) - - -def save_graph_base(net, file_name, graph_name="net", op_only=True, blob_rename_func=None): - graph = None - ops = net.op - if blob_rename_func is not None: - ops = _modify_blob_names(ops, blob_rename_func) - if not op_only: - graph = net_drawer.GetPydotGraph(ops, graph_name, rankdir="TB") - else: - graph = net_drawer.GetPydotGraphMinimal( - ops, graph_name, rankdir="TB", minimal_dependency=True - ) - - try: - par_dir = os.path.dirname(file_name) - if not os.path.exists(par_dir): - os.makedirs(par_dir) - - format = os.path.splitext(os.path.basename(file_name))[-1] - if format == ".png": - graph.write_png(file_name) - elif format == ".pdf": - graph.write_pdf(file_name) - elif format == ".svg": - graph.write_svg(file_name) - else: - print("Incorrect format {}".format(format)) - except Exception as e: - print("Error when writing graph to image {}".format(e)) - - return graph - - -# ==== torch/utils_toffee/aten_to_caffe2.py ==================================== - - -def group_norm_replace_aten_with_caffe2(predict_net: caffe2_pb2.NetDef): - """ - For ONNX exported model, GroupNorm will be represented as ATen op, - this can be a drop in replacement from ATen to GroupNorm - """ - count = 0 - for op in predict_net.op: - if op.type == "ATen": - op_name = get_pb_arg_vals(op, "operator", None) # return byte in py3 - if op_name and op_name.decode() == "group_norm": - op.arg.remove(get_pb_arg(op, "operator")) - - if get_pb_arg_vali(op, "cudnn_enabled", None): - op.arg.remove(get_pb_arg(op, "cudnn_enabled")) - - num_groups = get_pb_arg_vali(op, "num_groups", None) - if num_groups is not None: - op.arg.remove(get_pb_arg(op, "num_groups")) - check_set_pb_arg(op, "group", "i", num_groups) - - op.type = "GroupNorm" - count += 1 - if count > 1: - logger.info("Replaced {} ATen operator to GroupNormOp".format(count)) - - -# ==== torch/utils_toffee/alias.py ============================================= - - -def alias(x, name, is_backward=False): - if not torch.onnx.is_in_onnx_export(): - return x - assert isinstance(x, torch.Tensor) - return torch.ops._caffe2.AliasWithName(x, name, is_backward=is_backward) - - -def fuse_alias_placeholder(predict_net, init_net): - """Remove AliasWithName placeholder and rename the input/output of it""" - # First we finish all the re-naming - for i, op in enumerate(predict_net.op): - if op.type == "AliasWithName": - assert len(op.input) == 1 - assert len(op.output) == 1 - name = get_pb_arg_vals(op, "name", None).decode() - is_backward = bool(get_pb_arg_vali(op, "is_backward", 0)) - rename_op_input(predict_net, init_net, i, 0, name, from_producer=is_backward) - rename_op_output(predict_net, i, 0, name) - - # Remove AliasWithName, should be very safe since it's a non-op - new_ops = [] - for op in predict_net.op: - if op.type != "AliasWithName": - new_ops.append(op) - else: - # safety check - assert op.input == op.output - assert op.input[0] == op.arg[0].s.decode() - del predict_net.op[:] - predict_net.op.extend(new_ops) - - -# ==== torch/utils_caffe2/graph_transform.py =================================== - - -class IllegalGraphTransformError(ValueError): - """When a graph transform function call can't be executed.""" - - -def _rename_versioned_blob_in_proto( - proto: caffe2_pb2.NetDef, - old_name: str, - new_name: str, - version: int, - ssa: List[Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]], - start_versions: Dict[str, int], - end_versions: Dict[str, int], -): - """In given proto, rename all blobs with matched version""" - # Operater list - for op, i_th_ssa in zip(proto.op, ssa): - versioned_inputs, versioned_outputs = i_th_ssa - for i in range(len(op.input)): - if versioned_inputs[i] == (old_name, version): - op.input[i] = new_name - for i in range(len(op.output)): - if versioned_outputs[i] == (old_name, version): - op.output[i] = new_name - # external_input - if start_versions.get(old_name, 0) == version: - for i in range(len(proto.external_input)): - if proto.external_input[i] == old_name: - proto.external_input[i] = new_name - # external_output - if end_versions.get(old_name, 0) == version: - for i in range(len(proto.external_output)): - if proto.external_output[i] == old_name: - proto.external_output[i] = new_name - - -def rename_op_input( - predict_net: caffe2_pb2.NetDef, - init_net: caffe2_pb2.NetDef, - op_id: int, - input_id: int, - new_name: str, - from_producer: bool = False, -): - """ - Rename the op_id-th operator in predict_net, change it's input_id-th input's - name to the new_name. It also does automatic re-route and change - external_input and init_net if necessary. - - It requires the input is only consumed by this op. - - This function modifies predict_net and init_net in-place. - - When from_producer is enable, this also updates other operators that consumes - the same input. Be cautious because may trigger unintended behavior. - """ - assert isinstance(predict_net, caffe2_pb2.NetDef) - assert isinstance(init_net, caffe2_pb2.NetDef) - - init_net_ssa, init_net_versions = core.get_ssa(init_net) - predict_net_ssa, predict_net_versions = core.get_ssa( - predict_net, copy.deepcopy(init_net_versions) - ) - - versioned_inputs, versioned_outputs = predict_net_ssa[op_id] - old_name, version = versioned_inputs[input_id] - - if from_producer: - producer_map = get_producer_map(predict_net_ssa) - if not (old_name, version) in producer_map: - raise NotImplementedError( - "Can't find producer, the input {} is probably from" - " init_net, this is not supported yet.".format(old_name) - ) - producer = producer_map[(old_name, version)] - rename_op_output(predict_net, producer[0], producer[1], new_name) - return - - def contain_targets(op_ssa): - return (old_name, version) in op_ssa[0] - - is_consumer = [contain_targets(op_ssa) for op_ssa in predict_net_ssa] - if sum(is_consumer) > 1: - raise IllegalGraphTransformError( - ( - "Input '{}' of operator(#{}) are consumed by other ops, please use" - + " rename_op_output on the producer instead. Offending op: \n{}" - ).format(old_name, op_id, predict_net.op[op_id]) - ) - - # update init_net - _rename_versioned_blob_in_proto( - init_net, old_name, new_name, version, init_net_ssa, {}, init_net_versions - ) - # update predict_net - _rename_versioned_blob_in_proto( - predict_net, - old_name, - new_name, - version, - predict_net_ssa, - init_net_versions, - predict_net_versions, - ) - - -def rename_op_output(predict_net: caffe2_pb2.NetDef, op_id: int, output_id: int, new_name: str): - """ - Rename the op_id-th operator in predict_net, change it's output_id-th input's - name to the new_name. It also does automatic re-route and change - external_output and if necessary. - - It allows multiple consumers of its output. - - This function modifies predict_net in-place, doesn't need init_net. - """ - assert isinstance(predict_net, caffe2_pb2.NetDef) - - ssa, blob_versions = core.get_ssa(predict_net) - - versioned_inputs, versioned_outputs = ssa[op_id] - old_name, version = versioned_outputs[output_id] - - # update predict_net - _rename_versioned_blob_in_proto( - predict_net, old_name, new_name, version, ssa, {}, blob_versions - ) - - -def get_sub_graph_external_input_output( - predict_net: caffe2_pb2.NetDef, sub_graph_op_indices: List[int] -) -> Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]: - """ - Return the list of external input/output of sub-graph, - each element is tuple of the name and corresponding version in predict_net. - - external input/output is defined the same way as caffe2 NetDef. - """ - ssa, versions = core.get_ssa(predict_net) - - all_inputs = [] - all_outputs = [] - for op_id in sub_graph_op_indices: - all_inputs += [inp for inp in ssa[op_id][0] if inp not in all_inputs] - all_outputs += list(ssa[op_id][1]) # ssa output won't repeat - - # for versioned blobs, external inputs are just those blob in all_inputs - # but not in all_outputs - ext_inputs = [inp for inp in all_inputs if inp not in all_outputs] - - # external outputs are essentially outputs of this subgraph that are used - # outside of this sub-graph (including predict_net.external_output) - all_other_inputs = sum( - (ssa[i][0] for i in range(len(ssa)) if i not in sub_graph_op_indices), - [(outp, versions[outp]) for outp in predict_net.external_output], - ) - ext_outputs = [outp for outp in all_outputs if outp in set(all_other_inputs)] - - return ext_inputs, ext_outputs - - -class DiGraph: - """A DAG representation of caffe2 graph, each vertice is a versioned blob.""" - - def __init__(self): - self.vertices = set() - self.graph = collections.defaultdict(list) - - def add_edge(self, u, v): - self.graph[u].append(v) - self.vertices.add(u) - self.vertices.add(v) - - # grab from https://www.geeksforgeeks.org/find-paths-given-source-destination/ - def get_all_paths(self, s, d): - visited = {k: False for k in self.vertices} - path = [] - all_paths = [] - - def _get_all_paths_util(graph, u, d, visited, path): - visited[u] = True - path.append(u) - if u == d: - all_paths.append(copy.deepcopy(path)) - else: - for i in graph[u]: - if not visited[i]: - _get_all_paths_util(graph, i, d, visited, path) - path.pop() - visited[u] = False - - _get_all_paths_util(self.graph, s, d, visited, path) - return all_paths - - @staticmethod - def from_ssa(ssa): - graph = DiGraph() - for op_id in range(len(ssa)): - for inp in ssa[op_id][0]: - for outp in ssa[op_id][1]: - graph.add_edge(inp, outp) - return graph - - -def _get_dependency_chain(ssa, versioned_target, versioned_source): - """ - Return the index list of relevant operator to produce target blob from source blob, - if there's no dependency, return empty list. - """ - - # finding all paths between nodes can be O(N!), thus we can only search - # in the subgraph using the op starting from the first consumer of source blob - # to the producer of the target blob. - consumer_map = get_consumer_map(ssa) - producer_map = get_producer_map(ssa) - start_op = min(x[0] for x in consumer_map[versioned_source]) - 15 - end_op = ( - producer_map[versioned_target][0] + 15 if versioned_target in producer_map else start_op - ) - sub_graph_ssa = ssa[start_op : end_op + 1] - if len(sub_graph_ssa) > 30: - logger.warning( - "Subgraph bebetween {} and {} is large (from op#{} to op#{}), it" - " might take non-trival time to find all paths between them.".format( - versioned_source, versioned_target, start_op, end_op - ) - ) - - dag = DiGraph.from_ssa(sub_graph_ssa) - paths = dag.get_all_paths(versioned_source, versioned_target) # include two ends - ops_in_paths = [[producer_map[blob][0] for blob in path[1:]] for path in paths] - return sorted(set().union(*[set(ops) for ops in ops_in_paths])) - - -def identify_reshape_sub_graph(predict_net: caffe2_pb2.NetDef) -> List[List[int]]: - """ - Idenfity the reshape sub-graph in a protobuf. - The reshape sub-graph is defined as matching the following pattern: - - (input_blob) -> Op_1 -> ... -> Op_N -> (new_shape) -─┐ - └-------------------------------------------> Reshape -> (output_blob) - - Return: - List of sub-graphs, each sub-graph is represented as a list of indices - of the relavent ops, [Op_1, Op_2, ..., Op_N, Reshape] - """ - - ssa, _ = core.get_ssa(predict_net) - - ret = [] - for i, op in enumerate(predict_net.op): - if op.type == "Reshape": - assert len(op.input) == 2 - input_ssa = ssa[i][0] - data_source = input_ssa[0] - shape_source = input_ssa[1] - op_indices = _get_dependency_chain(ssa, shape_source, data_source) - ret.append(op_indices + [i]) - return ret - - -def remove_reshape_for_fc(predict_net, params): - """ - In PyTorch nn.Linear has to take 2D tensor, this often leads to reshape - a 4D tensor to 2D by calling .view(). However this (dynamic) reshaping - doesn't work well with ONNX and Int8 tools, and cause using extra - ops (eg. ExpandDims) that might not be available on mobile. - Luckily Caffe2 supports 4D tensor for FC, so we can remove those reshape - after exporting ONNX model. - """ - from caffe2.python import core - - # find all reshape sub-graph that can be removed, which is now all Reshape - # sub-graph whose output is only consumed by FC. - # TODO: to make it safer, we may need the actually value to better determine - # if a Reshape before FC is removable. - reshape_sub_graphs = identify_reshape_sub_graph(predict_net) - sub_graphs_to_remove = [] - for reshape_sub_graph in reshape_sub_graphs: - reshape_op_id = reshape_sub_graph[-1] - assert predict_net.op[reshape_op_id].type == "Reshape" - ssa, _ = core.get_ssa(predict_net) - reshape_output = ssa[reshape_op_id][1][0] - consumers = [i for i in range(len(ssa)) if reshape_output in ssa[i][0]] - if all(predict_net.op[consumer].type == "FC" for consumer in consumers): - # safety check if the sub-graph is isolated, for this reshape sub-graph, - # it means it has one non-param external input and one external output. - ext_inputs, ext_outputs = get_sub_graph_external_input_output( - predict_net, reshape_sub_graph - ) - non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0] - if len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1: - sub_graphs_to_remove.append(reshape_sub_graph) - - # perform removing subgraph by: - # 1: rename the Reshape's output to its input, then the graph can be - # seen as in-place itentify, meaning whose external input/output are the same. - # 2: simply remove those ops. - remove_op_ids = [] - params_to_remove = [] - for sub_graph in sub_graphs_to_remove: - logger.info( - "Remove Reshape sub-graph:\n{}".format( - "".join(["(#{:>4})\n{}".format(i, predict_net.op[i]) for i in sub_graph]) - ) - ) - reshape_op_id = sub_graph[-1] - new_reshap_output = predict_net.op[reshape_op_id].input[0] - rename_op_output(predict_net, reshape_op_id, 0, new_reshap_output) - ext_inputs, ext_outputs = get_sub_graph_external_input_output(predict_net, sub_graph) - non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0] - params_ext_inputs = [inp for inp in ext_inputs if inp[1] == 0] - assert len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1 - assert ext_outputs[0][0] == non_params_ext_inputs[0][0] - assert ext_outputs[0][1] == non_params_ext_inputs[0][1] + 1 - remove_op_ids.extend(sub_graph) - params_to_remove.extend(params_ext_inputs) - - predict_net = copy.deepcopy(predict_net) - new_ops = [op for i, op in enumerate(predict_net.op) if i not in remove_op_ids] - del predict_net.op[:] - predict_net.op.extend(new_ops) - for versioned_params in params_to_remove: - name = versioned_params[0] - logger.info("Remove params: {} from init_net and predict_net.external_input".format(name)) - del params[name] - predict_net.external_input.remove(name) - - return predict_net, params - - -def fuse_copy_between_cpu_and_gpu(predict_net: caffe2_pb2.NetDef): - """ - In-place fuse extra copy ops between cpu/gpu for the following case: - a -CopyAToB-> b -CopyBToA> c1 -NextOp1-> d1 - -CopyBToA> c2 -NextOp2-> d2 - The fused network will look like: - a -NextOp1-> d1 - -NextOp2-> d2 - """ - - _COPY_OPS = ["CopyCPUToGPU", "CopyGPUToCPU"] - - def _fuse_once(predict_net): - ssa, blob_versions = core.get_ssa(predict_net) - consumer_map = get_consumer_map(ssa) - versioned_external_output = [ - (name, blob_versions[name]) for name in predict_net.external_output - ] - - for op_id, op in enumerate(predict_net.op): - if op.type in _COPY_OPS: - fw_copy_versioned_output = ssa[op_id][1][0] - consumer_ids = [x[0] for x in consumer_map[fw_copy_versioned_output]] - reverse_op_type = _COPY_OPS[1 - _COPY_OPS.index(op.type)] - - is_fusable = ( - len(consumer_ids) > 0 - and fw_copy_versioned_output not in versioned_external_output - and all( - predict_net.op[_op_id].type == reverse_op_type - and ssa[_op_id][1][0] not in versioned_external_output - for _op_id in consumer_ids - ) - ) - - if is_fusable: - for rv_copy_op_id in consumer_ids: - # making each NextOp uses "a" directly and removing Copy ops - rs_copy_versioned_output = ssa[rv_copy_op_id][1][0] - next_op_id, inp_id = consumer_map[rs_copy_versioned_output][0] - predict_net.op[next_op_id].input[inp_id] = op.input[0] - # remove CopyOps - new_ops = [ - op - for i, op in enumerate(predict_net.op) - if i != op_id and i not in consumer_ids - ] - del predict_net.op[:] - predict_net.op.extend(new_ops) - return True - - return False - - # _fuse_once returns False is nothing can be fused - while _fuse_once(predict_net): - pass - - -def remove_dead_end_ops(net_def: caffe2_pb2.NetDef): - """remove ops if its output is not used or not in external_output""" - ssa, versions = core.get_ssa(net_def) - versioned_external_output = [(name, versions[name]) for name in net_def.external_output] - consumer_map = get_consumer_map(ssa) - removed_op_ids = set() - - def _is_dead_end(versioned_blob): - return not ( - versioned_blob in versioned_external_output - or ( - len(consumer_map[versioned_blob]) > 0 - and all(x[0] not in removed_op_ids for x in consumer_map[versioned_blob]) - ) - ) - - for i, ssa_i in reversed(list(enumerate(ssa))): - versioned_outputs = ssa_i[1] - if all(_is_dead_end(outp) for outp in versioned_outputs): - removed_op_ids.add(i) - - # simply removing those deadend ops should have no effect to external_output - new_ops = [op for i, op in enumerate(net_def.op) if i not in removed_op_ids] - del net_def.op[:] - net_def.op.extend(new_ops) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/evaluation.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/evaluation.md deleted file mode 100644 index bd924a3b1d9bb1e0dacc53306d30f938a724135e..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/evaluation.md +++ /dev/null @@ -1,68 +0,0 @@ - -# Evaluation - -Evaluation is a process that takes a number of inputs/outputs pairs and aggregate them. -You can always [use the model](./models.md) directly and just parse its inputs/outputs manually to perform -evaluation. -Alternatively, evaluation is implemented in detectron2 using the [DatasetEvaluator](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluator) -interface. - -Detectron2 includes a few `DatasetEvaluator` that computes metrics using standard dataset-specific -APIs (e.g., COCO, LVIS). -You can also implement your own `DatasetEvaluator` that performs some other jobs -using the inputs/outputs pairs. -For example, to count how many instances are detected on the validation set: - -``` -class Counter(DatasetEvaluator): - def reset(self): - self.count = 0 - def process(self, inputs, outputs): - for output in outputs: - self.count += len(output["instances"]) - def evaluate(self): - # save self.count somewhere, or print it, or return it. - return {"count": self.count} -``` - -## Use evaluators - -To evaluate using the methods of evaluators manually: -``` -def get_all_inputs_outputs(): - for data in data_loader: - yield data, model(data) - -evaluator.reset() -for inputs, outputs in get_all_inputs_outputs(): - evaluator.process(inputs, outputs) -eval_results = evaluator.evaluate() -``` - -Evaluators can also be used with [inference_on_dataset](../modules/evaluation.html#detectron2.evaluation.inference_on_dataset). -For example, - -```python -eval_results = inference_on_dataset( - model, - data_loader, - DatasetEvaluators([COCOEvaluator(...), Counter()])) -``` -This will execute `model` on all inputs from `data_loader`, and call evaluator to process them. - -Compared to running the evaluation manually using the model, the benefit of this function is that -evaluators can be merged together using [DatasetEvaluators](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluators), -and all the evaluation can finish in one forward pass over the dataset. -This function also provides accurate speed benchmarks for the given model and dataset. - -## Evaluators for custom dataset - -Many evaluators in detectron2 are made for specific datasets, -in order to obtain scores using each dataset's official API. -In addition to that, two evaluators are able to evaluate any generic dataset -that follows detectron2's [standard dataset format](./datasets.md), so they -can be used to evaluate custom datasets: - -* [COCOEvaluator](../modules/evaluation.html#detectron2.evaluation.COCOEvaluator) is able to evaluate AP (Average Precision) for box detection, - instance segmentation, keypoint detection on any custom dataset. -* [SemSegEvaluator](../modules/evaluation.html#detectron2.evaluation.SemSegEvaluator) is able to evaluate semantic segmentation metrics on any custom dataset. diff --git a/spaces/BennoKrojer/imagecode-demo/app.py b/spaces/BennoKrojer/imagecode-demo/app.py deleted file mode 100644 index 7e601ed0cad5306ea47042fbbb7438148ecd3b93..0000000000000000000000000000000000000000 --- a/spaces/BennoKrojer/imagecode-demo/app.py +++ /dev/null @@ -1,69 +0,0 @@ -from turtle import color, onclick -import streamlit as st -from PIL import Image, ImageOps -import glob -import json -import requests -import random -import io - -random.seed(10) - -if 'show' not in st.session_state: - st.session_state.show = False - -if 'example_idx' not in st.session_state: - st.session_state.example_idx = 0 - -st.set_page_config(layout="wide") -st.markdown("**This is a demo of the *ImageCoDe* benchmark. What is the task? You are given a description and you have to pick the image it describes, out of 10 images total.**") -st.markdown("**If you click the Sample button, you will get a new text and images. More details of ImageCoDe can be found in our ACL 2022 paper.**") - -col1, col2 = st.columns(2) - -prefix = 'https://raw.githubusercontent.com/BennoKrojer/imagecode-val-set/main/image-sets-val/' -set2ids = json.load(open('set2ids.json', 'r')) -descriptions = json.load(open('valid_list.json', 'r')) - -#example_idx = int(col1.number_input('Sample an example (description + corresponding images) from the validation set', value=0, min_value=0, max_value=len(descriptions)-1)) -if col1.button('Sample a description + 10 images from the validation set'): - st.session_state.example_idx += 1 -# st.session_state.example_idx = random.randint(0, len(descriptions)-1) - -img_set, true_idx, descr = descriptions[st.session_state.example_idx] -true_idx = int(true_idx) -images = [prefix+'/'+img_set+'/'+i for i in set2ids[img_set]] -img_urls = images.copy() -index = int(col2.number_input('Image Index from 0 to 9', value=0, min_value=0, max_value=9)) - -if col1.button('Toggle to reveal/hide the correct image, try to guess yourself before giving up!'): - st.session_state.show = not st.session_state.show - -col1.markdown(f'**Description for {img_set}**:') -col1.markdown(f'**{descr}**') - -big_img = images[index] -img = Image.open(io.BytesIO(requests.get(images[index], stream=True).content)) -img_width, img_height = img.size -smaller = min(img_width, img_height) -images[index]= ImageOps.expand(img,border=smaller//18,fill='blue') - -caps = list(range(10)) -cap = str(index) - -if st.session_state.show: - caps[true_idx] = f'{true_idx} (TARGET IMAGE)' - img = Image.open(io.BytesIO(requests.get(img_urls[true_idx], stream=True).content)) - img_width, img_height = img.size - smaller = min(img_width, img_height) - images[true_idx] = ImageOps.expand(img,border=smaller//8,fill='green') - if true_idx == index: - cap = f'{true_idx} (TARGET IMAGE)' -else: - caps[true_idx] = f'{true_idx}' - if true_idx == index: - cap = f'{true_idx}' - -col1.image(big_img, use_column_width=True, caption=cap) -col2.image(images, width=175, caption=caps) -col1.markdown(f'{st.session_state.example_idx}') diff --git a/spaces/Benson/text-generation/Examples/Backrooms Apk.md b/spaces/Benson/text-generation/Examples/Backrooms Apk.md deleted file mode 100644 index d4e48fa387706da303a6820f9b3ca8fabd6ae208..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Backrooms Apk.md +++ /dev/null @@ -1,40 +0,0 @@ - -

    Backrooms APK: Cinco juegos que exploran la nueva sensación de Internet espeluznante

    -

    Las salas traseras son una leyenda urbana en línea que se originó a partir de un creepypasta publicado en un 2018 4chan hilo. Los cuartos traseros se describen como un laberinto de salas de oficina vacías que solo se pueden entrar por "noclipping" fuera de la realidad. Las habitaciones están llenas de viejas alfombras húmedas, papel pintado amarillo y luces fluorescentes que crean una sensación de temor y aislamiento. Algunas historias también incluyen criaturas malévolas que acechan en las sombras. Las salas traseras se han convertido en uno de los ejemplos más conocidos de la estética de Internet de los espacios liminales, que representan lugares generalmente ocupados como vacíos anormales.

    -

    backrooms apk


    DOWNLOADhttps://bltlly.com/2v6MVl



    -

    Las salas traseras han inspirado a muchos fans y creadores a expandir el concepto original creando diferentes niveles, entidades e historias que exploran el horror y el misterio de esta dimensión alternativa. Una de las formas más populares de experimentar las salas traseras es a través de los videojuegos, que permiten a los jugadores sumergirse en el entorno aterrador y tratar de sobrevivir o escapar. En este artículo, revisaremos cinco juegos que exploran el concepto de salas traseras de diferentes maneras. Estos juegos están disponibles como archivos APK para dispositivos Android, para que pueda descargarlos y reproducirlos en su teléfono o tableta.

    -

    Entrar en las salas traseras

    -

    Enter the Backrooms es un juego muy lo-fi que nos recuerda a uno de esos viejos juegos shareware de la década de 1990. El juego se mantiene fiel a la creepypasta original con solo un nivel: Nivel 0, que es la clásica sala de oficina con alfombra, papel pintado y luces. El juego no tiene armas, ni entidades, y no tiene otros objetivos que sobrevivir el mayor tiempo posible sin perder la cordura. El juego mide tu cordura revisando tu reloj cada 30 segundos. Si te olvidas de hacerlo, tu cordura caerá y empezarás a alucinar. El juego también tiene un sistema de generación de niveles aleatorios que crea más de 600 millones de millas cuadradas de habitaciones para explorar.

    - -

    El juego Backrooms Edición GRATUITA

    -

    The Backrooms Game FREE Edition es un juego de Steam que cuenta con niveles infinitos y un sistema de locura. El juego se basa en la foto original de los cuartos traseros, pero también incluye otros niveles que se inspiran en diferentes espacios liminales, como áreas industriales, túneles de servicio y sótanos. El juego tiene un ser que vaga por los cuartos traseros y puede oírte si haces ruido. Tienes que evitarlo o esconderte de él si lo encuentras. El juego también tiene un sistema de locura que afecta tu visión y audición a medida que exploras más profundo en los cuartos traseros.

    -

    The Backrooms Game FREE Edition es un juego más pulido y variado que Enter the Backrooms. Tiene mejores gráficos, efectos de sonido y mecánica de juego. También ofrece más desafío y variedad al introducir diferentes niveles y enemigos. Sin embargo, a algunos puristas puede que no les guste el hecho de que se desvíe del concepto original de backrooms añadiendo nuevos elementos.

    -

    BACKROOMS

    -

    BACKROOMS

    -

    BACKROOMS es un juego de terror que combina supervivencia, rompecabezas y acción. El juego se basa en el icónico juego de terror de supervivencia con desafiantes puzzles no euclidianos y acción en primera persona. El juego también tiene un viaje narrativo convincente que se desarrolla a medida que juegas. El juego cuenta con nueve niveles de los cuartos traseros, cada uno con su propio entorno, rompecabezas y enemigos. El juego también tiene un sistema de iluminación dinámico que crea una atmósfera realista e inmersiva.

    -

    -

    BACKROOMS es un juego que atrae a los fans de juegos clásicos de terror como Silent Hill, Resident Evil y Amnesia. Tiene un alto nivel de dificultad y tensión que te mantendrá en el borde de tu asiento. También tiene una historia rica e intrigante que te hará querer descubrir más sobre las trassalas y tu propia identidad.

    -

    Escapar de las salas traseras

    - -

    Escape the Backrooms es un juego que se disfruta mejor con los amigos. Tiene un juego divertido y cooperativo que requiere trabajo en equipo y comunicación. También tiene mucho valor de repetición debido a la generación de mapas aleatorios y los diferentes elementos y herramientas. Sin embargo, el juego también puede ser muy frustrante y aterrador si juegas solo o con extraños.

    -

    Las salas traseras 1998

    -

    The Backrooms 1998 es un juego retro que mezcla horror y humor. El juego está inspirado en los viejos juegos de PlayStation 1 y tiene un estilo de gráficos low-poly y una actuación de voz cursi. El juego sigue las aventuras de Bob, un repartidor de pizza que accidentalmente no habla en los cuartos traseros mientras entrega una pizza. El juego tiene cuatro niveles de los cuartos traseros, cada uno con su propio tema y enemigos. El juego también tiene muchos chistes y referencias a la cultura pop y memes.

    -

    The Backrooms 1998 es un juego que no se toma demasiado en serio. Es una parodia del concepto de backrooms y los viejos juegos de terror. Tiene mucho humor y encanto que te hará reír y sonreír. Sin embargo, también puede ser bastante aterrador a veces, especialmente si no estás familiarizado con las referencias y chistes.

    -

    Conclusión

    -

    Las salas traseras son un fenómeno interesante que ha capturado la imaginación de muchas personas en línea. Son una fuente de horror, misterio y creatividad para fans y creadores por igual. Hay muchos juegos que exploran el concepto de salas traseras de diferentes maneras, de simple a complejo, de serio a divertido, de solo a cooperativo. Cada juego tiene sus propias fortalezas y debilidades, pero todos comparten una cosa en común: son divertidos y atractivos para jugar.

    - -

    Preguntas frecuentes

    -

    ¿Qué son los cuartos traseros?

    -

    Las salas traseras son una leyenda urbana en línea que se originó a partir de un creepypasta publicado en un 2018 4chan hilo. Los cuartos traseros son descritos como un laberinto de salas de oficina vacías que solo pueden ser ingresadas por "noclipping" fuera de la realidad.

    -

    ¿Cómo se entra en los cuartos traseros?

    -

    De acuerdo con el creepypasta original, puede entrar en los cuartos traseros noclipping fuera de la realidad en áreas donde no quieren que estés. Esto significa cortar paredes o pisos en lugares que no están diseñados para el acceso humano o la ocupación.

    -

    ¿Son reales los cuartos traseros?

    -

    Las trassalas no son reales en ningún sentido físico o científico. Son un concepto ficticio creado por los usuarios de Internet para fines de entretenimiento. Sin embargo, algunas personas pueden creer en ellos como parte de sus creencias o experiencias personales.

    -

    ¿Hay entidades en los cuartos traseros?

    -

    ¿Cómo escapar de los cuartos traseros?

    -

    No hay una respuesta definitiva a cómo escapar de los cuartos traseros, ya que diferentes historias y juegos tienen diferentes reglas y mecanismos. Algunas formas posibles de escapar de los cuartos traseros son encontrar una puerta de salida, alcanzar un cierto nivel o despertar de un sueño. Sin embargo, algunas historias y juegos también implican que no hay escape de los cuartos traseros, o que escapar de los cuartos traseros conducirá a peores consecuencias.

    -

    ¿Cuáles son algunos otros juegos que exploran el concepto de trastienda?

    -

    Algunos otros juegos que exploran el concepto de salas traseras son The Backrooms VR, The Backrooms Simulator, The Backrooms: Level 2 y The Backrooms: SCP-3008. Estos juegos también están disponibles como archivos APK para dispositivos Android.

    -

    Espero que hayas disfrutado de este artículo y hayas aprendido algo nuevo sobre los backrooms y los juegos que los exploran. Si tiene alguna pregunta o comentario, no dude en dejarlos abajo. ¡Gracias por leer!

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Clash Of Clans Elmas Hilesi Apk Indir.md b/spaces/Benson/text-generation/Examples/Clash Of Clans Elmas Hilesi Apk Indir.md deleted file mode 100644 index e38c33a24b0a93a31964fd309de803f10b82b0b6..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Clash Of Clans Elmas Hilesi Apk Indir.md +++ /dev/null @@ -1,61 +0,0 @@ - -

    Clash of Clans Diamond Cheat Apk Download: Enjoy the Game

    -

    Clash of Clans is one of the most popular mobile strategy games in the world. Millions of players play this game to build their own village, train their troops and fight other players. Resources such as diamonds, gold and potions are very important in the game. With these resources, you can develop your village, produce stronger troops and gain more victory. However, collecting these resources can take time and sometimes be inadequate. In this case, you may want to download Clash of Clans diamond trick apk.

    -

    Clash of Clans diamond trick apk is a modified version of the game. In this version, you can have an unlimited amount of diamonds, gold and potions. Thus, you can develop your village as you wish, train the strongest troops and easily defeat your opponents. You can also access all the features of the game and participate in clan battles. In this article, we will describe how you can download Clash of Clans diamond trick apk, its advantages and disadvantages.

    -

    clash of clans elmas hilesi apk indir


    Download File ……… https://bltlly.com/2v6IXp



    -What is

    Clash of Clans?

    -

    Clash of Clans is a mobile strategy game developed and published by Supercell in 2012. In the game, you have duties such as building your own village, collecting resources, training military troops and fighting other players. You can also set up your own clan or join existing clans. By collaborating with other players in your clan, you can fight against enemy clans and organize strategic attacks to achieve victory.

    -

    Clash of Clans Game Features

    -

    Clash of Clans offers its features as a mobile game that can also be downloaded as an apk on the Android platform. Here are some features of Clash of Clans game:

    - -

    Clash of Clans Game Tips

    -You may need some tips to succeed in

    Clash of Clans. Here are some tips for Clash of Clans game:

    - -

    Clash of Clans Diamond Cheat How to Download Apk?

    -

    Clash of Clans diamond trick apk is a modified version of the game. In this version, you can have an unlimited amount of diamonds, gold and potions. Thus, you can develop your village as you wish, train the strongest troops and easily defeat your opponents. You can also access all the features of the game and participate in clan battles. So how to download Clash of Clans diamond trick apk? Here are the steps:

    -
      -
    1. First, remove the original version of the Clash of Clans game on your device. This is required to download the modified version of the game.
    2. -
    3. Then find a reliable source to download the Clash of Clans diamond trick apk file. You can find this file on the internet from search engines or social media platforms. However, be careful and be sure to download a virus-free file.
    4. -
    5. Then enter your device's settings and enable the option to install applications from unknown sources. This will allow you to upload the apk file to your device.
    6. -
    7. Then click on the Clash of Clans diamond trick apk file you downloaded and start the installation process. This process can take several minutes.
    8. - -
    -

    Clash of Clans Diamond Trick Apk Advantages

    -Downloading

    Clash of Clans diamond trick apk has some advantages. Here are some of them:

    -

    Clash of Clans Diamond Trick Apk Disadvantages

    -There are also some disadvantages to downloading

    Clash of Clans diamond trick apk. Here are some of them:

    - -

    Clash of Clans Diamond Cheat Is Apk Safe?

    - -

    Result

    -Downloading

    Clash of Clans diamond trick apk can make you play the game more easily and fun. However, this method has some advantages, as well as some disadvantages and risks. So before using Clash of Clans diamond trick apk, you need to evaluate them well and make decisions. Our advice is to play the game with its original version and succeed with your own labor. In this way, you both respect the rules of the game and enjoy the game more.

    -

    Frequently Asked Questions

    -You can reach the frequently asked questions and answers about

    Clash of Clans diamond trick apk from below.

    -

    -
      -Where can I download

    1. Answer: You can find the Clash of Clans diamond trick apk file from search engines or social media platforms on the internet. However, be careful and be sure to download a virus-free file.
    2. -Is it legal to use

    3. Answer: No, it is illegal to use Clash of Clans diamond trick apk. It is against the rules of the game and can cause your account to be ban.
    4. -Can you join clan battles with < >>b
      Answer: Yes, you can participate in clan battles with Clash of Clans diamond trick apk. However, if this is noticed by other players, it can damage your clan's reputation and cause your account to be ban. -Can I get updates of the game with

    5. Answer: No, you cannot get updates of the game with Clash of Clans diamond trick apk. Since you are using the modified version of the game, you cannot connect with the game's official servers. So you can't take advantage of the new features and fixes of the game.
    6. - -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/unicode.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/unicode.py deleted file mode 100644 index 06526203911de55da3c2a8c5ae73f48024c3f018..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/unicode.py +++ /dev/null @@ -1,352 +0,0 @@ -# unicode.py - -import sys -from itertools import filterfalse -from typing import List, Tuple, Union - - -class _lazyclassproperty: - def __init__(self, fn): - self.fn = fn - self.__doc__ = fn.__doc__ - self.__name__ = fn.__name__ - - def __get__(self, obj, cls): - if cls is None: - cls = type(obj) - if not hasattr(cls, "_intern") or any( - cls._intern is getattr(superclass, "_intern", []) - for superclass in cls.__mro__[1:] - ): - cls._intern = {} - attrname = self.fn.__name__ - if attrname not in cls._intern: - cls._intern[attrname] = self.fn(cls) - return cls._intern[attrname] - - -UnicodeRangeList = List[Union[Tuple[int, int], Tuple[int]]] - - -class unicode_set: - """ - A set of Unicode characters, for language-specific strings for - ``alphas``, ``nums``, ``alphanums``, and ``printables``. - A unicode_set is defined by a list of ranges in the Unicode character - set, in a class attribute ``_ranges``. Ranges can be specified using - 2-tuples or a 1-tuple, such as:: - - _ranges = [ - (0x0020, 0x007e), - (0x00a0, 0x00ff), - (0x0100,), - ] - - Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x). - - A unicode set can also be defined using multiple inheritance of other unicode sets:: - - class CJK(Chinese, Japanese, Korean): - pass - """ - - _ranges: UnicodeRangeList = [] - - @_lazyclassproperty - def _chars_for_ranges(cls): - ret = [] - for cc in cls.__mro__: - if cc is unicode_set: - break - for rr in getattr(cc, "_ranges", ()): - ret.extend(range(rr[0], rr[-1] + 1)) - return [chr(c) for c in sorted(set(ret))] - - @_lazyclassproperty - def printables(cls): - "all non-whitespace characters in this range" - return "".join(filterfalse(str.isspace, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphas(cls): - "all alphabetic characters in this range" - return "".join(filter(str.isalpha, cls._chars_for_ranges)) - - @_lazyclassproperty - def nums(cls): - "all numeric digit characters in this range" - return "".join(filter(str.isdigit, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphanums(cls): - "all alphanumeric characters in this range" - return cls.alphas + cls.nums - - @_lazyclassproperty - def identchars(cls): - "all characters in this range that are valid identifier characters, plus underscore '_'" - return "".join( - sorted( - set( - "".join(filter(str.isidentifier, cls._chars_for_ranges)) - + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµº" - + "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ" - + "_" - ) - ) - ) - - @_lazyclassproperty - def identbodychars(cls): - """ - all characters in this range that are valid identifier body characters, - plus the digits 0-9 - """ - return "".join( - sorted( - set( - cls.identchars - + "0123456789" - + "".join( - [c for c in cls._chars_for_ranges if ("_" + c).isidentifier()] - ) - ) - ) - ) - - -class pyparsing_unicode(unicode_set): - """ - A namespace class for defining common language unicode_sets. - """ - - # fmt: off - - # define ranges in language character sets - _ranges: UnicodeRangeList = [ - (0x0020, sys.maxunicode), - ] - - class BasicMultilingualPlane(unicode_set): - "Unicode set for the Basic Multilingual Plane" - _ranges: UnicodeRangeList = [ - (0x0020, 0xFFFF), - ] - - class Latin1(unicode_set): - "Unicode set for Latin-1 Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0020, 0x007E), - (0x00A0, 0x00FF), - ] - - class LatinA(unicode_set): - "Unicode set for Latin-A Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0100, 0x017F), - ] - - class LatinB(unicode_set): - "Unicode set for Latin-B Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0180, 0x024F), - ] - - class Greek(unicode_set): - "Unicode set for Greek Unicode Character Ranges" - _ranges: UnicodeRangeList = [ - (0x0342, 0x0345), - (0x0370, 0x0377), - (0x037A, 0x037F), - (0x0384, 0x038A), - (0x038C,), - (0x038E, 0x03A1), - (0x03A3, 0x03E1), - (0x03F0, 0x03FF), - (0x1D26, 0x1D2A), - (0x1D5E,), - (0x1D60,), - (0x1D66, 0x1D6A), - (0x1F00, 0x1F15), - (0x1F18, 0x1F1D), - (0x1F20, 0x1F45), - (0x1F48, 0x1F4D), - (0x1F50, 0x1F57), - (0x1F59,), - (0x1F5B,), - (0x1F5D,), - (0x1F5F, 0x1F7D), - (0x1F80, 0x1FB4), - (0x1FB6, 0x1FC4), - (0x1FC6, 0x1FD3), - (0x1FD6, 0x1FDB), - (0x1FDD, 0x1FEF), - (0x1FF2, 0x1FF4), - (0x1FF6, 0x1FFE), - (0x2129,), - (0x2719, 0x271A), - (0xAB65,), - (0x10140, 0x1018D), - (0x101A0,), - (0x1D200, 0x1D245), - (0x1F7A1, 0x1F7A7), - ] - - class Cyrillic(unicode_set): - "Unicode set for Cyrillic Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0400, 0x052F), - (0x1C80, 0x1C88), - (0x1D2B,), - (0x1D78,), - (0x2DE0, 0x2DFF), - (0xA640, 0xA672), - (0xA674, 0xA69F), - (0xFE2E, 0xFE2F), - ] - - class Chinese(unicode_set): - "Unicode set for Chinese Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x2E80, 0x2E99), - (0x2E9B, 0x2EF3), - (0x31C0, 0x31E3), - (0x3400, 0x4DB5), - (0x4E00, 0x9FEF), - (0xA700, 0xA707), - (0xF900, 0xFA6D), - (0xFA70, 0xFAD9), - (0x16FE2, 0x16FE3), - (0x1F210, 0x1F212), - (0x1F214, 0x1F23B), - (0x1F240, 0x1F248), - (0x20000, 0x2A6D6), - (0x2A700, 0x2B734), - (0x2B740, 0x2B81D), - (0x2B820, 0x2CEA1), - (0x2CEB0, 0x2EBE0), - (0x2F800, 0x2FA1D), - ] - - class Japanese(unicode_set): - "Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges" - _ranges: UnicodeRangeList = [] - - class Kanji(unicode_set): - "Unicode set for Kanji Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x4E00, 0x9FBF), - (0x3000, 0x303F), - ] - - class Hiragana(unicode_set): - "Unicode set for Hiragana Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x3041, 0x3096), - (0x3099, 0x30A0), - (0x30FC,), - (0xFF70,), - (0x1B001,), - (0x1B150, 0x1B152), - (0x1F200,), - ] - - class Katakana(unicode_set): - "Unicode set for Katakana Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x3099, 0x309C), - (0x30A0, 0x30FF), - (0x31F0, 0x31FF), - (0x32D0, 0x32FE), - (0xFF65, 0xFF9F), - (0x1B000,), - (0x1B164, 0x1B167), - (0x1F201, 0x1F202), - (0x1F213,), - ] - - class Hangul(unicode_set): - "Unicode set for Hangul (Korean) Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x1100, 0x11FF), - (0x302E, 0x302F), - (0x3131, 0x318E), - (0x3200, 0x321C), - (0x3260, 0x327B), - (0x327E,), - (0xA960, 0xA97C), - (0xAC00, 0xD7A3), - (0xD7B0, 0xD7C6), - (0xD7CB, 0xD7FB), - (0xFFA0, 0xFFBE), - (0xFFC2, 0xFFC7), - (0xFFCA, 0xFFCF), - (0xFFD2, 0xFFD7), - (0xFFDA, 0xFFDC), - ] - - Korean = Hangul - - class CJK(Chinese, Japanese, Hangul): - "Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range" - - class Thai(unicode_set): - "Unicode set for Thai Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0E01, 0x0E3A), - (0x0E3F, 0x0E5B) - ] - - class Arabic(unicode_set): - "Unicode set for Arabic Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0600, 0x061B), - (0x061E, 0x06FF), - (0x0700, 0x077F), - ] - - class Hebrew(unicode_set): - "Unicode set for Hebrew Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0591, 0x05C7), - (0x05D0, 0x05EA), - (0x05EF, 0x05F4), - (0xFB1D, 0xFB36), - (0xFB38, 0xFB3C), - (0xFB3E,), - (0xFB40, 0xFB41), - (0xFB43, 0xFB44), - (0xFB46, 0xFB4F), - ] - - class Devanagari(unicode_set): - "Unicode set for Devanagari Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0900, 0x097F), - (0xA8E0, 0xA8FF) - ] - - # fmt: on - - -pyparsing_unicode.Japanese._ranges = ( - pyparsing_unicode.Japanese.Kanji._ranges - + pyparsing_unicode.Japanese.Hiragana._ranges - + pyparsing_unicode.Japanese.Katakana._ranges -) - -pyparsing_unicode.BMP = pyparsing_unicode.BasicMultilingualPlane - -# add language identifiers using language Unicode -pyparsing_unicode.العربية = pyparsing_unicode.Arabic -pyparsing_unicode.中文 = pyparsing_unicode.Chinese -pyparsing_unicode.кириллица = pyparsing_unicode.Cyrillic -pyparsing_unicode.Ελληνικά = pyparsing_unicode.Greek -pyparsing_unicode.עִברִית = pyparsing_unicode.Hebrew -pyparsing_unicode.日本語 = pyparsing_unicode.Japanese -pyparsing_unicode.Japanese.漢字 = pyparsing_unicode.Japanese.Kanji -pyparsing_unicode.Japanese.カタカナ = pyparsing_unicode.Japanese.Katakana -pyparsing_unicode.Japanese.ひらがな = pyparsing_unicode.Japanese.Hiragana -pyparsing_unicode.한국어 = pyparsing_unicode.Korean -pyparsing_unicode.ไทย = pyparsing_unicode.Thai -pyparsing_unicode.देवनागरी = pyparsing_unicode.Devanagari diff --git a/spaces/BreetheRun/mitchtech-vulcan-diffusion/README.md b/spaces/BreetheRun/mitchtech-vulcan-diffusion/README.md deleted file mode 100644 index a377094a0d3bb9ecdfbff8235787f98598c70bc7..0000000000000000000000000000000000000000 --- a/spaces/BreetheRun/mitchtech-vulcan-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mitchtech Vulcan Diffusion -emoji: 🏃 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_custom_type_casters.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_custom_type_casters.cpp deleted file mode 100644 index 9485d3cdb207b14fd74eb1d8afe1c31d92891b7b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_custom_type_casters.cpp +++ /dev/null @@ -1,125 +0,0 @@ -/* - tests/test_custom_type_casters.cpp -- tests type_caster - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" -#include "constructor_stats.h" - - -// py::arg/py::arg_v testing: these arguments just record their argument when invoked -class ArgInspector1 { public: std::string arg = "(default arg inspector 1)"; }; -class ArgInspector2 { public: std::string arg = "(default arg inspector 2)"; }; -class ArgAlwaysConverts { }; -namespace pybind11 { namespace detail { -template <> struct type_caster { -public: - PYBIND11_TYPE_CASTER(ArgInspector1, _("ArgInspector1")); - - bool load(handle src, bool convert) { - value.arg = "loading ArgInspector1 argument " + - std::string(convert ? "WITH" : "WITHOUT") + " conversion allowed. " - "Argument value = " + (std::string) str(src); - return true; - } - - static handle cast(const ArgInspector1 &src, return_value_policy, handle) { - return str(src.arg).release(); - } -}; -template <> struct type_caster { -public: - PYBIND11_TYPE_CASTER(ArgInspector2, _("ArgInspector2")); - - bool load(handle src, bool convert) { - value.arg = "loading ArgInspector2 argument " + - std::string(convert ? "WITH" : "WITHOUT") + " conversion allowed. " - "Argument value = " + (std::string) str(src); - return true; - } - - static handle cast(const ArgInspector2 &src, return_value_policy, handle) { - return str(src.arg).release(); - } -}; -template <> struct type_caster { -public: - PYBIND11_TYPE_CASTER(ArgAlwaysConverts, _("ArgAlwaysConverts")); - - bool load(handle, bool convert) { - return convert; - } - - static handle cast(const ArgAlwaysConverts &, return_value_policy, handle) { - return py::none().release(); - } -}; -}} - -// test_custom_caster_destruction -class DestructionTester { -public: - DestructionTester() { print_default_created(this); } - ~DestructionTester() { print_destroyed(this); } - DestructionTester(const DestructionTester &) { print_copy_created(this); } - DestructionTester(DestructionTester &&) { print_move_created(this); } - DestructionTester &operator=(const DestructionTester &) { print_copy_assigned(this); return *this; } - DestructionTester &operator=(DestructionTester &&) { print_move_assigned(this); return *this; } -}; -namespace pybind11 { namespace detail { -template <> struct type_caster { - PYBIND11_TYPE_CASTER(DestructionTester, _("DestructionTester")); - bool load(handle, bool) { return true; } - - static handle cast(const DestructionTester &, return_value_policy, handle) { - return py::bool_(true).release(); - } -}; -}} - -TEST_SUBMODULE(custom_type_casters, m) { - // test_custom_type_casters - - // test_noconvert_args - // - // Test converting. The ArgAlwaysConverts is just there to make the first no-conversion pass - // fail so that our call always ends up happening via the second dispatch (the one that allows - // some conversion). - class ArgInspector { - public: - ArgInspector1 f(ArgInspector1 a, ArgAlwaysConverts) { return a; } - std::string g(ArgInspector1 a, const ArgInspector1 &b, int c, ArgInspector2 *d, ArgAlwaysConverts) { - return a.arg + "\n" + b.arg + "\n" + std::to_string(c) + "\n" + d->arg; - } - static ArgInspector2 h(ArgInspector2 a, ArgAlwaysConverts) { return a; } - }; - py::class_(m, "ArgInspector") - .def(py::init<>()) - .def("f", &ArgInspector::f, py::arg(), py::arg() = ArgAlwaysConverts()) - .def("g", &ArgInspector::g, "a"_a.noconvert(), "b"_a, "c"_a.noconvert()=13, "d"_a=ArgInspector2(), py::arg() = ArgAlwaysConverts()) - .def_static("h", &ArgInspector::h, py::arg().noconvert(), py::arg() = ArgAlwaysConverts()) - ; - m.def("arg_inspect_func", [](ArgInspector2 a, ArgInspector1 b, ArgAlwaysConverts) { return a.arg + "\n" + b.arg; }, - py::arg().noconvert(false), py::arg_v(nullptr, ArgInspector1()).noconvert(true), py::arg() = ArgAlwaysConverts()); - - m.def("floats_preferred", [](double f) { return 0.5 * f; }, py::arg("f")); - m.def("floats_only", [](double f) { return 0.5 * f; }, py::arg("f").noconvert()); - m.def("ints_preferred", [](int i) { return i / 2; }, py::arg("i")); - m.def("ints_only", [](int i) { return i / 2; }, py::arg("i").noconvert()); - - // test_custom_caster_destruction - // Test that `take_ownership` works on types with a custom type caster when given a pointer - - // default policy: don't take ownership: - m.def("custom_caster_no_destroy", []() { static auto *dt = new DestructionTester(); return dt; }); - - m.def("custom_caster_destroy", []() { return new DestructionTester(); }, - py::return_value_policy::take_ownership); // Takes ownership: destroy when finished - m.def("custom_caster_destroy_const", []() -> const DestructionTester * { return new DestructionTester(); }, - py::return_value_policy::take_ownership); // Likewise (const doesn't inhibit destruction) - m.def("destruction_tester_cstats", &ConstructorStats::get, py::return_value_policy::reference); -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_category_to_traversal.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_category_to_traversal.h deleted file mode 100644 index 7596682e2ecaa42f0128b7d2c4b70707199b9b1a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_category_to_traversal.h +++ /dev/null @@ -1,131 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ - -namespace detail -{ - -// forward declarations -template struct is_iterator_system; -template struct is_iterator_traversal; - -template - struct host_system_category_to_traversal - : eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - void - > - > - > - > - > -{ -}; // end host_system_category_to_traversal - - - -template - struct device_system_category_to_traversal - : eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - eval_if< - is_convertible::value, - detail::identity_, - void - > - > - > - > - > -{ -}; // end device_system_category_to_traversal - - -template - struct category_to_traversal - // check for host system - : eval_if< - or_< - is_convertible, - is_convertible - >::value, - - host_system_category_to_traversal, - - // check for device system - eval_if< - or_< - is_convertible, - is_convertible - >::value, - - device_system_category_to_traversal, - - // unknown category - void - > - > -{}; - - -template - struct iterator_category_to_traversal - : eval_if< - is_iterator_traversal::value, - detail::identity_, - category_to_traversal - > -{ -}; // end iterator_category_to_traversal - - -} // end detail - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/get_value.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/get_value.h deleted file mode 100644 index 5f3f8eb040e01a381613661e13fe46c6d2de011e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/get_value.h +++ /dev/null @@ -1,46 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -template -__host__ __device__ - typename thrust::iterator_value::type - get_value(sequential::execution_policy &, Pointer ptr) -{ - return *thrust::raw_pointer_cast(ptr); -} // end get_value() - - -} // end sequential -} // end detail -} // end system -} // end thrust - diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/vfnet.py b/spaces/CVPR/WALT/mmdet/models/detectors/vfnet.py deleted file mode 100644 index e23f89674c919921219ffd3486587a2d3c318fbd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/vfnet.py +++ /dev/null @@ -1,18 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class VFNet(SingleStageDetector): - """Implementation of `VarifocalNet - (VFNet).`_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(VFNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/CVPR/lama-example/fetch_data/places_standard_test_val_prepare.sh b/spaces/CVPR/lama-example/fetch_data/places_standard_test_val_prepare.sh deleted file mode 100644 index 6017e29aa1593c1c66affa4b9081afac2b9fb000..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/fetch_data/places_standard_test_val_prepare.sh +++ /dev/null @@ -1,5 +0,0 @@ -mkdir -p places_standard_dataset/original/test/ -tar -xvf test_large.tar --transform='s/.*\///' -C places_standard_dataset/original/test/ - -mkdir -p places_standard_dataset/original/val/ -tar -xvf val_large.tar --transform='s/.*\///' -C places_standard_dataset/original/val/ diff --git a/spaces/ConceptArtHouse/webui-gameasset/app.py b/spaces/ConceptArtHouse/webui-gameasset/app.py deleted file mode 100644 index 630c8b966cf1aa7330697d668c4b176152557012..0000000000000000000000000000000000000000 --- a/spaces/ConceptArtHouse/webui-gameasset/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -#os.system(f"sed -i -e 's/inputs=\[component\],/&\\n queue=False,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -#os.system(f"sed -i -e 's/outputs=\[token_counter\]/outputs=[token_counter], queue=False/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/outputs=\[/queue=False, &/g' home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -#os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") -#os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") -#os.system(f"wget -q https://huggingface.co/phuson/shields-game-asset/resolve/main/model.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/model.ckpt") -#os.system(f"wget -q https://huggingface.co/phuson/shield-asset-model-sd-2-1/resolve/main/model.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/model2.ckpt") - -if "IS_SHARED_UI" in os.environ: - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test --disable-safe-unpickle") diff --git a/spaces/Cong723/gpt-academic-public/request_llm/bridge_chatglm.py b/spaces/Cong723/gpt-academic-public/request_llm/bridge_chatglm.py deleted file mode 100644 index 7c86a22316cda8d6568afbd27e7d6e652703fb7f..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/request_llm/bridge_chatglm.py +++ /dev/null @@ -1,160 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "ChatGLM尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,ChatGLM消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.chatglm_model = None - self.chatglm_tokenizer = None - self.info = "" - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import sentencepiece - self.info = "依赖检测通过" - self.success = True - except: - self.info = "缺少ChatGLM的依赖,如果要使用ChatGLM,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_chatglm.txt`安装ChatGLM的依赖。" - self.success = False - - def ready(self): - return self.chatglm_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - retry = 0 - while True: - try: - if self.chatglm_model is None: - self.chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) - device, = get_conf('LOCAL_MODEL_DEVICE') - if device=='cpu': - self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float() - else: - self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda() - self.chatglm_model = self.chatglm_model.eval() - break - else: - break - except: - retry += 1 - if retry > 3: - self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。') - raise RuntimeError("不能正常加载ChatGLM的参数!") - - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - # 收到消息,开始请求 - try: - for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs): - self.child.send(response) - # # 中途接收可能的终止指令(如果有的话) - # if self.child.poll(): - # command = self.child.recv() - # if command == '[Terminate]': break - except: - self.child.send('[Local Message] Call ChatGLM fail.') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global glm_handle -glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - observe_window[0] = load_message + "\n\n" + glm_handle.info - if not glm_handle.success: - error = glm_handle.info - glm_handle = None - raise RuntimeError(error) - - # chatglm 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - history_feedin.append(["What can I do?", sys_prompt]) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global glm_handle - if glm_handle is None: - glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not glm_handle.success: - glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - history_feedin.append(["What can I do?", system_prompt] ) - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收chatglm的回复 - response = "[Local Message]: 等待ChatGLM响应中 ..." - for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待ChatGLM响应中 ...": - response = "[Local Message]: ChatGLM响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_abnf.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_abnf.py deleted file mode 100644 index 933587fba22290d7eb7df4c88e12f1e61702b8ce..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_abnf.py +++ /dev/null @@ -1,132 +0,0 @@ -# We use native strings for all the re patterns, to take advantage of string -# formatting, and then convert to bytestrings when compiling the final re -# objects. - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#whitespace -# OWS = *( SP / HTAB ) -# ; optional whitespace -OWS = r"[ \t]*" - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#rule.token.separators -# token = 1*tchar -# -# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" -# / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" -# / DIGIT / ALPHA -# ; any VCHAR, except delimiters -token = r"[-!#$%&'*+.^_`|~0-9a-zA-Z]+" - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#header.fields -# field-name = token -field_name = token - -# The standard says: -# -# field-value = *( field-content / obs-fold ) -# field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ] -# field-vchar = VCHAR / obs-text -# obs-fold = CRLF 1*( SP / HTAB ) -# ; obsolete line folding -# ; see Section 3.2.4 -# -# https://tools.ietf.org/html/rfc5234#appendix-B.1 -# -# VCHAR = %x21-7E -# ; visible (printing) characters -# -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#rule.quoted-string -# obs-text = %x80-FF -# -# However, the standard definition of field-content is WRONG! It disallows -# fields containing a single visible character surrounded by whitespace, -# e.g. "foo a bar". -# -# See: https://www.rfc-editor.org/errata_search.php?rfc=7230&eid=4189 -# -# So our definition of field_content attempts to fix it up... -# -# Also, we allow lots of control characters, because apparently people assume -# that they're legal in practice (e.g., google analytics makes cookies with -# \x01 in them!): -# https://github.com/python-hyper/h11/issues/57 -# We still don't allow NUL or whitespace, because those are often treated as -# meta-characters and letting them through can lead to nasty issues like SSRF. -vchar = r"[\x21-\x7e]" -vchar_or_obs_text = r"[^\x00\s]" -field_vchar = vchar_or_obs_text -field_content = r"{field_vchar}+(?:[ \t]+{field_vchar}+)*".format(**globals()) - -# We handle obs-fold at a different level, and our fixed-up field_content -# already grows to swallow the whole value, so ? instead of * -field_value = r"({field_content})?".format(**globals()) - -# header-field = field-name ":" OWS field-value OWS -header_field = ( - r"(?P{field_name})" - r":" - r"{OWS}" - r"(?P{field_value})" - r"{OWS}".format(**globals()) -) - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#request.line -# -# request-line = method SP request-target SP HTTP-version CRLF -# method = token -# HTTP-version = HTTP-name "/" DIGIT "." DIGIT -# HTTP-name = %x48.54.54.50 ; "HTTP", case-sensitive -# -# request-target is complicated (see RFC 7230 sec 5.3) -- could be path, full -# URL, host+port (for connect), or even "*", but in any case we are guaranteed -# that it contists of the visible printing characters. -method = token -request_target = r"{vchar}+".format(**globals()) -http_version = r"HTTP/(?P[0-9]\.[0-9])" -request_line = ( - r"(?P{method})" - r" " - r"(?P{request_target})" - r" " - r"{http_version}".format(**globals()) -) - -# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#status.line -# -# status-line = HTTP-version SP status-code SP reason-phrase CRLF -# status-code = 3DIGIT -# reason-phrase = *( HTAB / SP / VCHAR / obs-text ) -status_code = r"[0-9]{3}" -reason_phrase = r"([ \t]|{vchar_or_obs_text})*".format(**globals()) -status_line = ( - r"{http_version}" - r" " - r"(?P{status_code})" - # However, there are apparently a few too many servers out there that just - # leave out the reason phrase: - # https://github.com/scrapy/scrapy/issues/345#issuecomment-281756036 - # https://github.com/seanmonstar/httparse/issues/29 - # so make it optional. ?: is a non-capturing group. - r"(?: (?P{reason_phrase}))?".format(**globals()) -) - -HEXDIG = r"[0-9A-Fa-f]" -# Actually -# -# chunk-size = 1*HEXDIG -# -# but we impose an upper-limit to avoid ridiculosity. len(str(2**64)) == 20 -chunk_size = r"({HEXDIG}){{1,20}}".format(**globals()) -# Actually -# -# chunk-ext = *( ";" chunk-ext-name [ "=" chunk-ext-val ] ) -# -# but we aren't parsing the things so we don't really care. -chunk_ext = r";.*" -chunk_header = ( - r"(?P{chunk_size})" - r"(?P{chunk_ext})?" - r"{OWS}\r\n".format( - **globals() - ) # Even though the specification does not allow for extra whitespaces, - # we are lenient with trailing whitespaces because some servers on the wild use it. -) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_headers.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_headers.py deleted file mode 100644 index 846cca3f1d3c3f000de92840a89fb11e35f2083f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_headers.py +++ /dev/null @@ -1,234 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to handle headers to send in calls to Huggingface Hub.""" -from typing import Dict, Optional, Union - -from .. import constants -from ._hf_folder import HfFolder -from ._runtime import ( - get_fastai_version, - get_fastcore_version, - get_hf_hub_version, - get_python_version, - get_tf_version, - get_torch_version, - is_fastai_available, - is_fastcore_available, - is_tf_available, - is_torch_available, -) -from ._validators import validate_hf_hub_args - - -class LocalTokenNotFoundError(EnvironmentError): - """Raised if local token is required but not found.""" - - -@validate_hf_hub_args -def build_hf_headers( - *, - token: Optional[Union[bool, str]] = None, - is_write_action: bool = False, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, -) -> Dict[str, str]: - """ - Build headers dictionary to send in a HF Hub call. - - By default, authorization token is always provided either from argument (explicit - use) or retrieved from the cache (implicit use). To explicitly avoid sending the - token to the Hub, set `token=False` or set the `HF_HUB_DISABLE_IMPLICIT_TOKEN` - environment variable. - - In case of an API call that requires write access, an error is thrown if token is - `None` or token is an organization token (starting with `"api_org***"`). - - In addition to the auth header, a user-agent is added to provide information about - the installed packages (versions of python, huggingface_hub, torch, tensorflow, - fastai and fastcore). - - Args: - token (`str`, `bool`, *optional*): - The token to be sent in authorization header for the Hub call: - - if a string, it is used as the Hugging Face token - - if `True`, the token is read from the machine (cache or env variable) - - if `False`, authorization header is not set - - if `None`, the token is read from the machine only except if - `HF_HUB_DISABLE_IMPLICIT_TOKEN` env variable is set. - is_write_action (`bool`, default to `False`): - Set to True if the API call requires a write access. If `True`, the token - will be validated (cannot be `None`, cannot start by `"api_org***"`). - library_name (`str`, *optional*): - The name of the library that is making the HTTP request. Will be added to - the user-agent header. - library_version (`str`, *optional*): - The version of the library that is making the HTTP request. Will be added - to the user-agent header. - user_agent (`str`, `dict`, *optional*): - The user agent info in the form of a dictionary or a single string. It will - be completed with information about the installed packages. - - Returns: - A `Dict` of headers to pass in your API call. - - Example: - ```py - >>> build_hf_headers(token="hf_***") # explicit token - {"authorization": "Bearer hf_***", "user-agent": ""} - - >>> build_hf_headers(token=True) # explicitly use cached token - {"authorization": "Bearer hf_***",...} - - >>> build_hf_headers(token=False) # explicitly don't use cached token - {"user-agent": ...} - - >>> build_hf_headers() # implicit use of the cached token - {"authorization": "Bearer hf_***",...} - - # HF_HUB_DISABLE_IMPLICIT_TOKEN=True # to set as env variable - >>> build_hf_headers() # token is not sent - {"user-agent": ...} - - >>> build_hf_headers(token="api_org_***", is_write_action=True) - ValueError: You must use your personal account token for write-access methods. - - >>> build_hf_headers(library_name="transformers", library_version="1.2.3") - {"authorization": ..., "user-agent": "transformers/1.2.3; hf_hub/0.10.2; python/3.10.4; tensorflow/1.55"} - ``` - - Raises: - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If organization token is passed and "write" access is required. - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If "write" access is required but token is not passed and not saved locally. - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - If `token=True` but token is not saved locally. - """ - # Get auth token to send - token_to_send = get_token_to_send(token) - _validate_token_to_send(token_to_send, is_write_action=is_write_action) - - # Combine headers - headers = { - "user-agent": _http_user_agent( - library_name=library_name, - library_version=library_version, - user_agent=user_agent, - ) - } - if token_to_send is not None: - headers["authorization"] = f"Bearer {token_to_send}" - return headers - - -def get_token_to_send(token: Optional[Union[bool, str]]) -> Optional[str]: - """Select the token to send from either `token` or the cache.""" - # Case token is explicitly provided - if isinstance(token, str): - return token - - # Case token is explicitly forbidden - if token is False: - return None - - # Token is not provided: we get it from local cache - cached_token = HfFolder().get_token() - - # Case token is explicitly required - if token is True: - if cached_token is None: - raise LocalTokenNotFoundError( - "Token is required (`token=True`), but no token found. You" - " need to provide a token or be logged in to Hugging Face with" - " `huggingface-cli login` or `huggingface_hub.login`. See" - " https://huggingface.co/settings/tokens." - ) - return cached_token - - # Case implicit use of the token is forbidden by env variable - if constants.HF_HUB_DISABLE_IMPLICIT_TOKEN: - return None - - # Otherwise: we use the cached token as the user has not explicitly forbidden it - return cached_token - - -def _validate_token_to_send(token: Optional[str], is_write_action: bool) -> None: - if is_write_action: - if token is None: - raise ValueError( - "Token is required (write-access action) but no token found. You need" - " to provide a token or be logged in to Hugging Face with" - " `huggingface-cli login` or `huggingface_hub.login`. See" - " https://huggingface.co/settings/tokens." - ) - if token.startswith("api_org"): - raise ValueError( - "You must use your personal account token for write-access methods. To" - " generate a write-access token, go to" - " https://huggingface.co/settings/tokens" - ) - - -def _http_user_agent( - *, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, -) -> str: - """Format a user-agent string containing information about the installed packages. - - Args: - library_name (`str`, *optional*): - The name of the library that is making the HTTP request. - library_version (`str`, *optional*): - The version of the library that is making the HTTP request. - user_agent (`str`, `dict`, *optional*): - The user agent info in the form of a dictionary or a single string. - - Returns: - The formatted user-agent string. - """ - if library_name is not None: - ua = f"{library_name}/{library_version}" - else: - ua = "unknown/None" - ua += f"; hf_hub/{get_hf_hub_version()}" - ua += f"; python/{get_python_version()}" - - if not constants.HF_HUB_DISABLE_TELEMETRY: - if is_torch_available(): - ua += f"; torch/{get_torch_version()}" - if is_tf_available(): - ua += f"; tensorflow/{get_tf_version()}" - if is_fastai_available(): - ua += f"; fastai/{get_fastai_version()}" - if is_fastcore_available(): - ua += f"; fastcore/{get_fastcore_version()}" - - if isinstance(user_agent, dict): - ua += "; " + "; ".join(f"{k}/{v}" for k, v in user_agent.items()) - elif isinstance(user_agent, str): - ua += "; " + user_agent - - return _deduplicate_user_agent(ua) - - -def _deduplicate_user_agent(user_agent: str) -> str: - """Deduplicate redundant information in the generated user-agent.""" - # Split around ";" > Strip whitespaces > Store as dict keys (ensure unicity) > format back as string - # Order is implicitly preserved by dictionary structure (see https://stackoverflow.com/a/53657523). - return "; ".join({key.strip(): None for key in user_agent.split(";")}.keys()) diff --git a/spaces/DShrimp/PoseMaker/start.py b/spaces/DShrimp/PoseMaker/start.py deleted file mode 100644 index e5d512289a4581dca4612d6aa2390ace7e534426..0000000000000000000000000000000000000000 --- a/spaces/DShrimp/PoseMaker/start.py +++ /dev/null @@ -1,3 +0,0 @@ -import subprocess - -subprocess.run("uvicorn app:app --host 0.0.0.0 --port 7860", shell=True) diff --git a/spaces/Deevyankar/Deep-AD/app.py b/spaces/Deevyankar/Deep-AD/app.py deleted file mode 100644 index 98bbafe36d6e82007337817cdb94c45f7d1f8f69..0000000000000000000000000000000000000000 --- a/spaces/Deevyankar/Deep-AD/app.py +++ /dev/null @@ -1,464 +0,0 @@ -import os -import ants -import monai -import torch -import shutil -import numpy as np -import pandas as pd -import altair as alt -import nibabel as nib -import streamlit as st -from random import randint -from itertools import chain -import antspynet as antspynet -from torch.utils.data import DataLoader -from monai.transforms import Compose, LoadImaged -from monai.networks.nets.efficientnet import EfficientNetBN - -import dicom2nifti - - -st.set_option('deprecation.showPyplotGlobalUse', False) -np.random.seed(0) -torch.manual_seed(0) - -template = ants.image_read('MNI152_T1_1mm_brain.nii.gz') - - -def pre_process(image): - with st.spinner('Reading the image...'): - y = ants.image_read(image) - with st.spinner('Bias field correction ongoing...'): - y = ants.utils.n4_bias_field_correction(y) - with st.spinner('Denoising the image...'): - yn = y + np.random.randn(*y.shape).astype('float32')*5 - y = ants.denoise_image(yn, ants.get_mask(y)) - with st.spinner('brain_extraction fn. running...'): - x = antspynet.utilities.brain_extraction( - y, modality='t1', antsxnet_cache_directory=None, verbose=True) - y = y*x - with st.spinner('Registering from template..'): - y1 = ants.registration(fixed=template, moving=y, - type_of_transform='AffineFast') - with st.spinner('Applying transforms...'): - y = ants.apply_transforms( - fixed=template, moving=y, transformlist=y1['fwdtransforms']) - st.success('Successfully Preprocessed the Image !') - return y - - -col1, col2, col3 = st.columns(3) - -with col1: - st.write(' ') - -with col2: - st.image("unilogo.png") - -with col3: - st.write(' ') - -st.markdown("

    Deep-AD: Deep Learning Model for Early Detection of Alzheimer’s

    ", unsafe_allow_html=True) -st.markdown("
    Developed by: Deevyankar Agarwal
    ", - unsafe_allow_html=True) -st.markdown("
    Part Time Ph.D. Student, UVA, Spain
    ", - unsafe_allow_html=True) -st.write('**Description**: Users can upload T1-W MRIs either in NifTI or DICOM format. After preprocessing (N4 bias field correction, noise removal, brain extraction, and registration in the MNI-152 template), the model will classify MRI scans into one of three groups.') - -st.markdown('- AD : Alzheimer’s') -st.markdown('- CN : Cognitively Normal') -st.markdown('- SMCI : stable MCI') - -st.write('This Application is based on ensemble learning. The output of multiclassification task AD vs. sMCI vs. CN will be validated further by binary classification models AD vs. CN and sMCI vs. AD implemented by end-to-end learning and 3D transfer learning, respectively. It will provide an extra layer of verification to make robust decisions.') -st.markdown('''
    ''', unsafe_allow_html=True) - -element1 = st.write(""" - # MRI Classification :brain: - """ - ) - -if 'key' not in st.session_state: - st.session_state.key = str( randint(1000, 100000000)) -file_upload = st.file_uploader("Upload the MRI scan (either a single NIfTI file or a folder containing multiple DICOM files)", type=[ - "nii", "gz", "dcm"], accept_multiple_files=True, key=st.session_state.key) -st.set_option('deprecation.showfileUploaderEncoding', False) - - -if file_upload == []: - st.text("No file uploaded !") - -st.text('Note : Please clear existing files before uploading new files') -if st.button('Clear Uploaded File(s)', help='Please clear existing files before uploading new files') and 'key' in st.session_state.keys(): - st.session_state.pop('key') - st.experimental_rerun() - -st.write("⚠️ [**Feedback form**](https://forms.gle/xuScGN6Cmf69bsUE9) ⚠️ ") - - -if len(file_upload) == 1: - - for file in file_upload: - file.name = file.name - with open(file.name, "wb") as f: - f.write(file.getbuffer()) - - saved_path = f"{file.name}" - - display_image = ants.image_read(saved_path) - element2 = st.pyplot(ants.plot(display_image)) - - processed_image = pre_process(saved_path) - a = processed_image.to_nibabel() - saved_preprocessed_path = 'input_image' - nib.save(a, saved_preprocessed_path) - element3 = st.text("Preprocessed Image") - element4 = st.pyplot(ants.plot(f"{saved_preprocessed_path}.nii", cmap="seismic")) - - transformsv = Compose( - [ - LoadImaged(keys=["img"]) - ] - ) - - test_files = [{"img": f"{saved_preprocessed_path}.nii", "label": "NA"}] - test_ds = monai.data.Dataset(data=test_files, transform=transformsv) - test_loader = DataLoader(test_ds, batch_size=1, - pin_memory=torch.cuda.is_available()) - - for test_data in test_loader: - test_images, test_labels = test_data["img"], test_data["label"] - - with st.spinner('Performing Inference...'): - model = EfficientNetBN( - "efficientnet-b0", spatial_dims=3, in_channels=1, num_classes=3) - model.load_state_dict(torch.load( - 'MCEBNfold3.pth', map_location='cpu')) - model.eval() - prediction = model(test_images.unsqueeze(1)) - pred = prediction.argmax(dim=1).item() - class_names = ["SMCI", "AD", "CN"] - predicted_label = class_names[pred] - - graph_input = list(chain.from_iterable(prediction.tolist())) - "Plot depicting Class Probabilities" - source = pd.DataFrame({ - 'Model output': graph_input, - 'class': ["SMCI", "AD", "CN"] - }) - - bar_chart = alt.Chart(source).mark_bar().encode( - y='Model output:Q', - x='class:O', - ) - - element5 = st.altair_chart(bar_chart, use_container_width=True) - - element6 = st.write( - f"The MRI Scan belong to the class **{predicted_label}**") - - - if pred == 0: - with st.spinner('Please wait...verifying the model output with another model'): - model_verify = monai.networks.nets.DenseNet264(spatial_dims=3, in_channels=1, out_channels=2) - model_verify.load_state_dict(torch.load( - 'DENSENET264ADvsCNbest_metric_model_classification3d_dict.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["CN", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if pred_verify == 0: - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - if pred_verify == 1 : - - model_verify = EfficientNetBN( - "efficientnet-b0", spatial_dims=3, in_channels=1, num_classes=2) - model_verify.load_state_dict(torch.load( - 'EBNfold3.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["SMCI", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - - - - - if pred == 1: - with st.spinner('Please wait...verifying the model output with another model'): - model_verify = EfficientNetBN( - "efficientnet-b0", spatial_dims=3, in_channels=1, num_classes=2) - model_verify.load_state_dict(torch.load( - 'EBNfold3.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["SMCI", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - - - if pred == 2: - with st.spinner('Please wait...verifying the model output with another model'): - model_verify = EfficientNetBN( - "efficientnet-b0", spatial_dims=3, in_channels=1, num_classes=2) - model_verify.load_state_dict(torch.load( - 'ENB0ADvsCNbest_metric_model_classification3d_dict.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["CN", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - - - - graph_input_1 = list(chain.from_iterable(prediction_verify.tolist())) - - "Plot depicting verifying model outputs" - source_1 = pd.DataFrame({ - 'Model output': graph_input_1, - 'class': class_names_verify - }) - - bar_chart_1 = alt.Chart(source_1).mark_bar().encode( - y='Model output:Q', - x='class:O', - ) - - st.altair_chart(bar_chart_1, use_container_width=True) - - -if len(file_upload) > 1: - - print(len(file_upload)) - - if os.path.exists('tmp') == True: - shutil.rmtree('tmp') - os.makedirs('tmp') - - for file in file_upload: - file.name = file.name - with open(file.name, "wb") as f: - f.write(file.getbuffer()) - shutil.copy(file.name, 'tmp') - print(len(file_upload)) - - display_image = st.empty() - # display_image = ants.core.ants_image_io.dicom_read('tmp') - saved_path = 'uploaded_image' - display_image = dicom2nifti.dicom_series_to_nifti('tmp', saved_path, reorient_nifti=True) - # nib.save(display_image, saved_path) - display_image = ants.image_read(f"{saved_path}.nii") - element2 = st.pyplot(ants.plot(display_image)) - - # b = display_image.to_nibabel() - # saved_path = 'uploaded_image' - # nib.save(b, saved_path) - - processed_image = pre_process(f"{saved_path}.nii") - a = processed_image.to_nibabel() - saved_preprocessed_path = 'input_image' - nib.save(a, saved_preprocessed_path) - element3 = st.text("Preprocessed Image") - element4 = st.pyplot(ants.plot(f"{saved_preprocessed_path}.nii", cmap="seismic")) - - transformsv = Compose( - [ - LoadImaged(keys=["img"]) - ] - ) - - test_files = [{"img": f"{saved_preprocessed_path}.nii", "label": 1}] - test_ds = monai.data.Dataset(data=test_files, transform=transformsv) - test_loader = DataLoader(test_ds, batch_size=1, - pin_memory=torch.cuda.is_available()) - - for test_data in test_loader: - test_images, test_labels = test_data["img"], test_data["label"] - with st.spinner('Performing Inference...'): - model = EfficientNetBN( - "efficientnet-b0", spatial_dims=3, in_channels=1, num_classes=3) - model.load_state_dict(torch.load( - 'MCEBNfold3.pth', map_location='cpu')) - model.eval() - prediction = model(test_images.unsqueeze(1)) - pred = prediction.argmax(dim=1).item() - class_names = ["SMCI", "AD", "CN"] - predicted_label = class_names[pred] - - graph_input = list(chain.from_iterable(prediction.tolist())) - "Plot depicting Class Probabilities" - source = pd.DataFrame({ - 'Model output': graph_input, - 'class': ["SMCI", "AD", "CN"] - }) - - bar_chart = alt.Chart(source).mark_bar().encode( - y='Model output:Q', - x='class:O', - ) - - element5 = st.altair_chart(bar_chart, use_container_width=True) - - element6 = st.write( - f"The MRI Scan belong to the class **{predicted_label}**") - - - - if pred == 0: - with st.spinner('Please wait...verifying the model output with another model'): - model_verify = monai.networks.nets.DenseNet264(spatial_dims=3, in_channels=1, out_channels=2) - model_verify.load_state_dict(torch.load( - 'DENSENET264ADvsCNbest_metric_model_classification3d_dict.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["CN", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if pred_verify == 0: - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - if pred_verify == 1 : - - model_verify = EfficientNetBN( - "efficientnet-b0", spatial_dims=3, in_channels=1, num_classes=2) - model_verify.load_state_dict(torch.load( - 'EBNfold3.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["SMCI", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - - - - if pred == 1: - with st.spinner('Please wait...verifying the model output with another model'): - model_verify = EfficientNetBN( - "efficientnet-b0", spatial_dims=3, in_channels=1, num_classes=2) - model_verify.load_state_dict(torch.load( - 'EBNfold3.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["SMCI", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - - - if pred == 2: - with st.spinner('Please wait...verifying the model output with another model'): - model_verify = monai.networks.nets.DenseNet264(spatial_dims=3, in_channels=1, out_channels=2) - model_verify.load_state_dict(torch.load( - 'F3DENSENET264ADvsCNbest_metric_model_classification3d_dict.pth', map_location='cpu')) - model_verify.eval() - prediction_verify = model_verify(test_images.unsqueeze(1)) - pred_verify = prediction_verify.argmax(dim=1).item() - class_names_verify = ["CN", "AD"] - predicted_label_verify = class_names_verify[pred_verify] - - if predicted_label_verify == predicted_label: - st.write( - f"Succesfully Verified the result, both models classified the scan as **{predicted_label_verify}**") - else: - st.write( - f"Verifying gave a different result ! **First model predicted as {predicted_label}, other predicted {predicted_label_verify}**") - - - - - - graph_input_1 = list(chain.from_iterable(prediction_verify.tolist())) - - "Plot depicting verifying model outputs" - source_1 = pd.DataFrame({ - 'Model output': graph_input_1, - 'class': class_names_verify - }) - - bar_chart_1 = alt.Chart(source_1).mark_bar().encode( - y='Model output:Q', - x='class:O', - ) - - st.altair_chart(bar_chart_1, use_container_width=True) - - -st.markdown('''

    ''', unsafe_allow_html=True) -st.markdown('''#### Publications :book:''', unsafe_allow_html=True) - -st.markdown("""1. [Transfer Learning for Alzheimer’s Disease through Neuroimaging Biomarkers: A Systematic Review](https://www.mdpi.com/1424-8220/21/21/7259 ) \n - Q1 Sensors

    - -2. [End-to-End Deep Learning Architectures Using 3D Neuroimaging Biomarkers for Early Alzheimer’s Diagnosis](https://www.mdpi.com/2227-7390/10/15/2575) \n - Q2 mathematics

    - - 3. [Automated Medical Diagnosis of Alzheimer´s Disease Using an Efficient Net Convolutional Neural Network](https://link.springer.com/article/10.1007/s10916-023-01941-4) \n - Q1 Springer Nature ,Journal of Medical Systems

    -
    """, unsafe_allow_html=True) - - - -st.markdown('''#### Contact details :mailbox:''', unsafe_allow_html=True) - -st.markdown(''' -Group :busts_in_silhouette:  :   http://www.sigte.tel.uva.es/index.php/en/homepage/ -The eHealth and Telemedicine Group (GTe) of the University of Valladolid is a multidisciplinary international group consisting of telecommunications, informatics and medical doctors from different specialties. \n - -
    - -Email :e-mail:   :   deevynkar@gmail.com''', unsafe_allow_html=True) - diff --git a/spaces/Div99/Chat-with-Div/polly_utils.py b/spaces/Div99/Chat-with-Div/polly_utils.py deleted file mode 100644 index 7cb38abff2aaac3c5b24f20914d464151173780d..0000000000000000000000000000000000000000 --- a/spaces/Div99/Chat-with-Div/polly_utils.py +++ /dev/null @@ -1,635 +0,0 @@ -# This class stores Polly voice data. Specifically, the class stores several records containing -# language, lang_code, gender, voice_id and engine. The class also has a method to return the -# voice_id, lang_code and engine given a language and gender. - -NEURAL_ENGINE = "neural" -STANDARD_ENGINE = "standard" - - -class PollyVoiceData: - def get_voice(self, language, gender): - for voice in self.voice_data: - if voice['language'] == language and voice['gender'] == gender: - if voice['neural'] == 'Yes': - return voice['voice_id'], voice['lang_code'], NEURAL_ENGINE - for voice in self.voice_data: - if voice['language'] == language and voice['gender'] == gender: - if voice['standard'] == 'Yes': - return voice['voice_id'], voice['lang_code'], STANDARD_ENGINE - return None, None, None - - def get_whisper_lang_code(self, language): - for voice in self.voice_data: - if voice['language'] == language: - return voice['whisper_lang_code'] - return "en" - - def __init__(self): - self.voice_data = [ - {'language': 'Arabic', - 'lang_code': 'arb', - 'whisper_lang_code': 'ar', - 'voice_id': 'Zeina', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Arabic (Gulf)', - 'lang_code': 'ar-AE', - 'whisper_lang_code': 'ar', - 'voice_id': 'Hala', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Catalan', - 'lang_code': 'ca-ES', - 'whisper_lang_code': 'ca', - 'voice_id': 'Arlet', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Chinese (Cantonese)', - 'lang_code': 'yue-CN', - 'whisper_lang_code': 'zh', - 'voice_id': 'Hiujin', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Chinese (Mandarin)', - 'lang_code': 'cmn-CN', - 'whisper_lang_code': 'zh', - 'voice_id': 'Zhiyu', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Danish', - 'lang_code': 'da-DK', - 'whisper_lang_code': 'da', - 'voice_id': 'Naja', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Danish', - 'lang_code': 'da-DK', - 'whisper_lang_code': 'da', - 'voice_id': 'Mads', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Dutch', - 'lang_code': 'nl-NL', - 'whisper_lang_code': 'nl', - 'voice_id': 'Laura', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Dutch', - 'lang_code': 'nl-NL', - 'whisper_lang_code': 'nl', - 'voice_id': 'Lotte', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Dutch', - 'lang_code': 'nl-NL', - 'whisper_lang_code': 'nl', - 'voice_id': 'Ruben', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'English (Australian)', - 'lang_code': 'en-AU', - 'whisper_lang_code': 'en', - 'voice_id': 'Nicole', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'English (Australian)', - 'lang_code': 'en-AU', - 'whisper_lang_code': 'en', - 'voice_id': 'Olivia', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'English (Australian)', - 'lang_code': 'en-AU', - 'whisper_lang_code': 'en', - 'voice_id': 'Russell', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'English (British)', - 'lang_code': 'en-GB', - 'whisper_lang_code': 'en', - 'voice_id': 'Amy', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (British)', - 'lang_code': 'en-GB', - 'whisper_lang_code': 'en', - 'voice_id': 'Emma', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (British)', - 'lang_code': 'en-GB', - 'whisper_lang_code': 'en', - 'voice_id': 'Brian', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (British)', - 'lang_code': 'en-GB', - 'whisper_lang_code': 'en', - 'voice_id': 'Arthur', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'English (Indian)', - 'lang_code': 'en-IN', - 'whisper_lang_code': 'en', - 'voice_id': 'Aditi', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'English (Indian)', - 'lang_code': 'en-IN', - 'whisper_lang_code': 'en', - 'voice_id': 'Raveena', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'English (Indian)', - 'lang_code': 'en-IN', - 'whisper_lang_code': 'en', - 'voice_id': 'Kajal', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'English (New Zealand)', - 'lang_code': 'en-NZ', - 'whisper_lang_code': 'en', - 'voice_id': 'Aria', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'English (South African)', - 'lang_code': 'en-ZA', - 'whisper_lang_code': 'en', - 'voice_id': 'Ayanda', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Ivy', - 'gender': 'Female (child)', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Joanna', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Kendra', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Kimberly', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Salli', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Joey', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Justin', - 'gender': 'Male (child)', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Kevin', - 'gender': 'Male (child)', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'English (US)', - 'lang_code': 'en-US', - 'whisper_lang_code': 'en', - 'voice_id': 'Matthew', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'English (Welsh)', - 'lang_code': 'en-GB-WLS', - 'whisper_lang_code': 'en', - 'voice_id': 'Geraint', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Finnish', - 'lang_code': 'fi-FI', - 'whisper_lang_code': 'fi', - 'voice_id': 'Suvi', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'French', - 'lang_code': 'fr-FR', - 'whisper_lang_code': 'fr', - 'voice_id': 'Celine', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'French', - 'lang_code': 'fr-FR', - 'whisper_lang_code': 'fr', - 'voice_id': 'Lea', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'French', - 'lang_code': 'fr-FR', - 'whisper_lang_code': 'fr', - 'voice_id': 'Mathieu', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'French (Canadian)', - 'lang_code': 'fr-CA', - 'whisper_lang_code': 'fr', - 'voice_id': 'Chantal', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'French (Canadian)', - 'lang_code': 'fr-CA', - 'whisper_lang_code': 'fr', - 'voice_id': 'Gabrielle', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'French (Canadian)', - 'lang_code': 'fr-CA', - 'whisper_lang_code': 'fr', - 'voice_id': 'Liam', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'German', - 'lang_code': 'de-DE', - 'whisper_lang_code': 'de', - 'voice_id': 'Marlene', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'German', - 'lang_code': 'de-DE', - 'whisper_lang_code': 'de', - 'voice_id': 'Vicki', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'German', - 'lang_code': 'de-DE', - 'whisper_lang_code': 'de', - 'voice_id': 'Hans', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'German', - 'lang_code': 'de-DE', - 'whisper_lang_code': 'de', - 'voice_id': 'Daniel', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'German (Austrian)', - 'lang_code': 'de-AT', - 'whisper_lang_code': 'de', - 'voice_id': 'Hannah', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Hindi', - 'lang_code': 'hi-IN', - 'whisper_lang_code': 'hi', - 'voice_id': 'Aditi', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Hindi', - 'lang_code': 'hi-IN', - 'whisper_lang_code': 'hi', - 'voice_id': 'Kajal', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Icelandic', - 'lang_code': 'is-IS', - 'whisper_lang_code': 'is', - 'voice_id': 'Dora', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Icelandic', - 'lang_code': 'is-IS', - 'whisper_lang_code': 'is', - 'voice_id': 'Karl', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Italian', - 'lang_code': 'it-IT', - 'whisper_lang_code': 'it', - 'voice_id': 'Carla', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Italian', - 'lang_code': 'it-IT', - 'whisper_lang_code': 'it', - 'voice_id': 'Bianca', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Japanese', - 'lang_code': 'ja-JP', - 'whisper_lang_code': 'ja', - 'voice_id': 'Mizuki', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Japanese', - 'lang_code': 'ja-JP', - 'whisper_lang_code': 'ja', - 'voice_id': 'Takumi', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Korean', - 'lang_code': 'ko-KR', - 'whisper_lang_code': 'ko', - 'voice_id': 'Seoyeon', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Norwegian', - 'lang_code': 'nb-NO', - 'whisper_lang_code': 'no', - 'voice_id': 'Liv', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Norwegian', - 'lang_code': 'nb-NO', - 'whisper_lang_code': 'no', - 'voice_id': 'Ida', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Polish', - 'lang_code': 'pl-PL', - 'whisper_lang_code': 'pl', - 'voice_id': 'Ewa', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Polish', - 'lang_code': 'pl-PL', - 'whisper_lang_code': 'pl', - 'voice_id': 'Maja', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Polish', - 'lang_code': 'pl-PL', - 'whisper_lang_code': 'pl', - 'voice_id': 'Jacek', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Polish', - 'lang_code': 'pl-PL', - 'whisper_lang_code': 'pl', - 'voice_id': 'Jan', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Polish', - 'lang_code': 'pl-PL', - 'whisper_lang_code': 'pl', - 'voice_id': 'Ola', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Portuguese (Brazilian)', - 'lang_code': 'pt-BR', - 'whisper_lang_code': 'pt', - 'voice_id': 'Camila', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Portuguese (Brazilian)', - 'lang_code': 'pt-BR', - 'whisper_lang_code': 'pt', - 'voice_id': 'Vitoria', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Portuguese (Brazilian)', - 'lang_code': 'pt-BR', - 'whisper_lang_code': 'pt', - 'voice_id': 'Ricardo', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Portuguese (European)', - 'lang_code': 'pt-PT', - 'whisper_lang_code': 'pt', - 'voice_id': 'Ines', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Portuguese (European)', - 'lang_code': 'pt-PT', - 'whisper_lang_code': 'pt', - 'voice_id': 'Cristiano', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Romanian', - 'lang_code': 'ro-RO', - 'whisper_lang_code': 'ro', - 'voice_id': 'Carmen', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Russian', - 'lang_code': 'ru-RU', - 'whisper_lang_code': 'ru', - 'voice_id': 'Tatyana', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Russian', - 'lang_code': 'ru-RU', - 'whisper_lang_code': 'ru', - 'voice_id': 'Maxim', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Spanish (European)', - 'lang_code': 'es-ES', - 'whisper_lang_code': 'es', - 'voice_id': 'Conchita', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Spanish (European)', - 'lang_code': 'es-ES', - 'whisper_lang_code': 'es', - 'voice_id': 'Lucia', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Spanish (European)', - 'lang_code': 'es-ES', - 'whisper_lang_code': 'es', - 'voice_id': 'Enrique', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Spanish (Mexican)', - 'lang_code': 'es-MX', - 'whisper_lang_code': 'es', - 'voice_id': 'Mia', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Spanish (US)', - 'lang_code': 'es-US', - 'whisper_lang_code': 'es', - 'voice_id': 'Lupe', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'Yes'}, - {'language': 'Spanish (US)', - 'lang_code': 'es-US', - 'whisper_lang_code': 'es', - 'voice_id': 'Penelope', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Spanish (US)', - 'lang_code': 'es-US', - 'whisper_lang_code': 'es', - 'voice_id': 'Miguel', - 'gender': 'Male', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Spanish (US)', - 'lang_code': 'es-US', - 'whisper_lang_code': 'es', - 'voice_id': 'Pedro', - 'gender': 'Male', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Swedish', - 'lang_code': 'sv-SE', - 'whisper_lang_code': 'sv', - 'voice_id': 'Astrid', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Swedish', - 'lang_code': 'sv-SE', - 'whisper_lang_code': 'sv', - 'voice_id': 'Elin', - 'gender': 'Female', - 'neural': 'Yes', - 'standard': 'No'}, - {'language': 'Turkish', - 'lang_code': 'tr-TR', - 'whisper_lang_code': 'tr', - 'voice_id': 'Filiz', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'}, - {'language': 'Welsh', - 'lang_code': 'cy-GB', - 'whisper_lang_code': 'cy', - 'voice_id': 'Gwyneth', - 'gender': 'Female', - 'neural': 'No', - 'standard': 'Yes'} - ] - - -# Run from the command-line -if __name__ == '__main__': - polly_voice_data = PollyVoiceData() - - voice_id, language_code, engine = polly_voice_data.get_voice('English (US)', 'Male') - print('English (US)', 'Male', voice_id, language_code, engine) - - voice_id, language_code, engine = polly_voice_data.get_voice('English (US)', 'Female') - print('English (US)', 'Female', voice_id, language_code, engine) - - voice_id, language_code, engine = polly_voice_data.get_voice('French', 'Female') - print('French', 'Female', voice_id, language_code, engine) - - voice_id, language_code, engine = polly_voice_data.get_voice('French', 'Male') - print('French', 'Male', voice_id, language_code, engine) - - voice_id, language_code, engine = polly_voice_data.get_voice('Japanese', 'Female') - print('Japanese', 'Female', voice_id, language_code, engine) - - voice_id, language_code, engine = polly_voice_data.get_voice('Japanese', 'Male') - print('Japanese', 'Male', voice_id, language_code, engine) - - voice_id, language_code, engine = polly_voice_data.get_voice('Hindi', 'Female') - print('Hindi', 'Female', voice_id, language_code, engine) - - voice_id, language_code, engine = polly_voice_data.get_voice('Hindi', 'Male') - print('Hindi', 'Male', voice_id, language_code, engine) - - whisper_lang_code = polly_voice_data.get_whisper_lang_code('English (US)') - print('English (US) whisper_lang_code:', whisper_lang_code) - - whisper_lang_code = polly_voice_data.get_whisper_lang_code('Chinese (Mandarin)') - print('Chinese (Mandarin) whisper_lang_code:', whisper_lang_code) - - whisper_lang_code = polly_voice_data.get_whisper_lang_code('Norwegian') - print('Norwegian whisper_lang_code:', whisper_lang_code) - - whisper_lang_code = polly_voice_data.get_whisper_lang_code('Dutch') - print('Dutch whisper_lang_code:', whisper_lang_code) - - whisper_lang_code = polly_voice_data.get_whisper_lang_code('Foo') - print('Foo whisper_lang_code:', whisper_lang_code) - - diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/openpose/src/model.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/openpose/src/model.py deleted file mode 100644 index e5f67d39e3f8b1068ec1c3d27cee07670acbce91..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/openpose/src/model.py +++ /dev/null @@ -1,218 +0,0 @@ -import torch -from collections import OrderedDict - -import torch -import torch.nn as nn - - -def make_layers(block, no_relu_layers): - layers = [] - for layer_name, v in block.items(): - if 'pool' in layer_name: - layer = nn.MaxPool2d(kernel_size=v[0], stride=v[1], - padding=v[2]) - layers.append((layer_name, layer)) - else: - conv2d = nn.Conv2d(in_channels=v[0], out_channels=v[1], - kernel_size=v[2], stride=v[3], - padding=v[4]) - layers.append((layer_name, conv2d)) - if layer_name not in no_relu_layers: - layers.append(('relu_'+layer_name, nn.ReLU(inplace=True))) - - return nn.Sequential(OrderedDict(layers)) - - -class bodypose_model(nn.Module): - def __init__(self): - super(bodypose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv5_5_CPM_L1', 'conv5_5_CPM_L2', 'Mconv7_stage2_L1', - 'Mconv7_stage2_L2', 'Mconv7_stage3_L1', 'Mconv7_stage3_L2', - 'Mconv7_stage4_L1', 'Mconv7_stage4_L2', 'Mconv7_stage5_L1', - 'Mconv7_stage5_L2', 'Mconv7_stage6_L1', 'Mconv7_stage6_L1'] - blocks = {} - block0 = OrderedDict([ - ('conv1_1', [3, 64, 3, 1, 1]), - ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), - ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3_CPM', [512, 256, 3, 1, 1]), - ('conv4_4_CPM', [256, 128, 3, 1, 1]) - ]) - - # Stage 1 - block1_1 = OrderedDict([ - ('conv5_1_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L1', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L1', [512, 38, 1, 1, 0]) - ]) - - block1_2 = OrderedDict([ - ('conv5_1_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L2', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L2', [512, 19, 1, 1, 0]) - ]) - blocks['block1_1'] = block1_1 - blocks['block1_2'] = block1_2 - - self.model0 = make_layers(block0, no_relu_layers) - - # Stages 2 - 6 - for i in range(2, 7): - blocks['block%d_1' % i] = OrderedDict([ - ('Mconv1_stage%d_L1' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L1' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L1' % i, [128, 38, 1, 1, 0]) - ]) - - blocks['block%d_2' % i] = OrderedDict([ - ('Mconv1_stage%d_L2' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L2' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L2' % i, [128, 19, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_1 = blocks['block1_1'] - self.model2_1 = blocks['block2_1'] - self.model3_1 = blocks['block3_1'] - self.model4_1 = blocks['block4_1'] - self.model5_1 = blocks['block5_1'] - self.model6_1 = blocks['block6_1'] - - self.model1_2 = blocks['block1_2'] - self.model2_2 = blocks['block2_2'] - self.model3_2 = blocks['block3_2'] - self.model4_2 = blocks['block4_2'] - self.model5_2 = blocks['block5_2'] - self.model6_2 = blocks['block6_2'] - - def forward(self, x): - - out1 = self.model0(x) - - out1_1 = self.model1_1(out1) - out1_2 = self.model1_2(out1) - out2 = torch.cat([out1_1, out1_2, out1], 1) - - out2_1 = self.model2_1(out2) - out2_2 = self.model2_2(out2) - out3 = torch.cat([out2_1, out2_2, out1], 1) - - out3_1 = self.model3_1(out3) - out3_2 = self.model3_2(out3) - out4 = torch.cat([out3_1, out3_2, out1], 1) - - out4_1 = self.model4_1(out4) - out4_2 = self.model4_2(out4) - out5 = torch.cat([out4_1, out4_2, out1], 1) - - out5_1 = self.model5_1(out5) - out5_2 = self.model5_2(out5) - out6 = torch.cat([out5_1, out5_2, out1], 1) - - out6_1 = self.model6_1(out6) - out6_2 = self.model6_2(out6) - - return out6_1, out6_2 - - -class handpose_model(nn.Module): - def __init__(self): - super(handpose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv6_2_CPM', 'Mconv7_stage2', 'Mconv7_stage3', - 'Mconv7_stage4', 'Mconv7_stage5', 'Mconv7_stage6'] - # stage 1 - block1_0 = OrderedDict([ - ('conv1_1', [3, 64, 3, 1, 1]), - ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), - ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3', [512, 512, 3, 1, 1]), - ('conv4_4', [512, 512, 3, 1, 1]), - ('conv5_1', [512, 512, 3, 1, 1]), - ('conv5_2', [512, 512, 3, 1, 1]), - ('conv5_3_CPM', [512, 128, 3, 1, 1]) - ]) - - block1_1 = OrderedDict([ - ('conv6_1_CPM', [128, 512, 1, 1, 0]), - ('conv6_2_CPM', [512, 22, 1, 1, 0]) - ]) - - blocks = {} - blocks['block1_0'] = block1_0 - blocks['block1_1'] = block1_1 - - # stage 2-6 - for i in range(2, 7): - blocks['block%d' % i] = OrderedDict([ - ('Mconv1_stage%d' % i, [150, 128, 7, 1, 3]), - ('Mconv2_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d' % i, [128, 22, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_0 = blocks['block1_0'] - self.model1_1 = blocks['block1_1'] - self.model2 = blocks['block2'] - self.model3 = blocks['block3'] - self.model4 = blocks['block4'] - self.model5 = blocks['block5'] - self.model6 = blocks['block6'] - - def forward(self, x): - out1_0 = self.model1_0(x) - out1_1 = self.model1_1(out1_0) - concat_stage2 = torch.cat([out1_1, out1_0], 1) - out_stage2 = self.model2(concat_stage2) - concat_stage3 = torch.cat([out_stage2, out1_0], 1) - out_stage3 = self.model3(concat_stage3) - concat_stage4 = torch.cat([out_stage3, out1_0], 1) - out_stage4 = self.model4(concat_stage4) - concat_stage5 = torch.cat([out_stage4, out1_0], 1) - out_stage5 = self.model5(concat_stage5) - concat_stage6 = torch.cat([out_stage5, out1_0], 1) - out_stage6 = self.model6(concat_stage6) - return out_stage6 diff --git a/spaces/DrishtiSharma/Whisper-Serbian-Transcriber/README.md b/spaces/DrishtiSharma/Whisper-Serbian-Transcriber/README.md deleted file mode 100644 index 8d5769baa1592ccd750cbc98918091a410210de8..0000000000000000000000000000000000000000 --- a/spaces/DrishtiSharma/Whisper-Serbian-Transcriber/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Whisper Serbian Transcriber -emoji: 🤫🇷🇸 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: whisper-event/whisper-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ECCV2022/bytetrack/tutorials/motr/motr.py b/spaces/ECCV2022/bytetrack/tutorials/motr/motr.py deleted file mode 100644 index 3e24b1d26318cd7d33a473198d743e9a9a69548f..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/motr/motr.py +++ /dev/null @@ -1,676 +0,0 @@ -# ------------------------------------------------------------------------ -# Copyright (c) 2021 megvii-model. All Rights Reserved. -# ------------------------------------------------------------------------ -# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR) -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# ------------------------------------------------------------------------ - -""" -DETR model and criterion classes. -""" -import copy -import math -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn, Tensor -from typing import List - -from util import box_ops -from util.misc import (NestedTensor, nested_tensor_from_tensor_list, - accuracy, get_world_size, interpolate, get_rank, - is_dist_avail_and_initialized, inverse_sigmoid) - -from models.structures import Instances, Boxes, pairwise_iou, matched_boxlist_iou - -from .backbone import build_backbone -from .matcher import build_matcher -from .deformable_transformer_plus import build_deforamble_transformer -from .qim import build as build_query_interaction_layer -from .memory_bank import build_memory_bank -from .deformable_detr import SetCriterion, MLP -from .segmentation import sigmoid_focal_loss - - -class ClipMatcher(SetCriterion): - def __init__(self, num_classes, - matcher, - weight_dict, - losses): - """ Create the criterion. - Parameters: - num_classes: number of object categories, omitting the special no-object category - matcher: module able to compute a matching between targets and proposals - weight_dict: dict containing as key the names of the losses and as values their relative weight. - eos_coef: relative classification weight applied to the no-object category - losses: list of all the losses to be applied. See get_loss for list of available losses. - """ - super().__init__(num_classes, matcher, weight_dict, losses) - self.num_classes = num_classes - self.matcher = matcher - self.weight_dict = weight_dict - self.losses = losses - self.focal_loss = True - self.losses_dict = {} - self._current_frame_idx = 0 - - def initialize_for_single_clip(self, gt_instances: List[Instances]): - self.gt_instances = gt_instances - self.num_samples = 0 - self.sample_device = None - self._current_frame_idx = 0 - self.losses_dict = {} - - def _step(self): - self._current_frame_idx += 1 - - def calc_loss_for_track_scores(self, track_instances: Instances): - frame_id = self._current_frame_idx - 1 - gt_instances = self.gt_instances[frame_id] - outputs = { - 'pred_logits': track_instances.track_scores[None], - } - device = track_instances.track_scores.device - - num_tracks = len(track_instances) - src_idx = torch.arange(num_tracks, dtype=torch.long, device=device) - tgt_idx = track_instances.matched_gt_idxes # -1 for FP tracks and disappeared tracks - - track_losses = self.get_loss('labels', - outputs=outputs, - gt_instances=[gt_instances], - indices=[(src_idx, tgt_idx)], - num_boxes=1) - self.losses_dict.update( - {'frame_{}_track_{}'.format(frame_id, key): value for key, value in - track_losses.items()}) - - def get_num_boxes(self, num_samples): - num_boxes = torch.as_tensor(num_samples, dtype=torch.float, device=self.sample_device) - if is_dist_avail_and_initialized(): - torch.distributed.all_reduce(num_boxes) - num_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item() - return num_boxes - - def get_loss(self, loss, outputs, gt_instances, indices, num_boxes, **kwargs): - loss_map = { - 'labels': self.loss_labels, - 'cardinality': self.loss_cardinality, - 'boxes': self.loss_boxes, - } - assert loss in loss_map, f'do you really want to compute {loss} loss?' - return loss_map[loss](outputs, gt_instances, indices, num_boxes, **kwargs) - - def loss_boxes(self, outputs, gt_instances: List[Instances], indices: List[tuple], num_boxes): - """Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss - targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4] - The target boxes are expected in format (center_x, center_y, h, w), normalized by the image size. - """ - # We ignore the regression loss of the track-disappear slots. - #TODO: Make this filter process more elegant. - filtered_idx = [] - for src_per_img, tgt_per_img in indices: - keep = tgt_per_img != -1 - filtered_idx.append((src_per_img[keep], tgt_per_img[keep])) - indices = filtered_idx - idx = self._get_src_permutation_idx(indices) - src_boxes = outputs['pred_boxes'][idx] - target_boxes = torch.cat([gt_per_img.boxes[i] for gt_per_img, (_, i) in zip(gt_instances, indices)], dim=0) - - # for pad target, don't calculate regression loss, judged by whether obj_id=-1 - target_obj_ids = torch.cat([gt_per_img.obj_ids[i] for gt_per_img, (_, i) in zip(gt_instances, indices)], dim=0) # size(16) - mask = (target_obj_ids != -1) - - loss_bbox = F.l1_loss(src_boxes[mask], target_boxes[mask], reduction='none') - loss_giou = 1 - torch.diag(box_ops.generalized_box_iou( - box_ops.box_cxcywh_to_xyxy(src_boxes[mask]), - box_ops.box_cxcywh_to_xyxy(target_boxes[mask]))) - - losses = {} - losses['loss_bbox'] = loss_bbox.sum() / num_boxes - losses['loss_giou'] = loss_giou.sum() / num_boxes - - return losses - - def loss_labels(self, outputs, gt_instances: List[Instances], indices, num_boxes, log=False): - """Classification loss (NLL) - targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes] - """ - src_logits = outputs['pred_logits'] - idx = self._get_src_permutation_idx(indices) - target_classes = torch.full(src_logits.shape[:2], self.num_classes, - dtype=torch.int64, device=src_logits.device) - # The matched gt for disappear track query is set -1. - labels = [] - for gt_per_img, (_, J) in zip(gt_instances, indices): - labels_per_img = torch.ones_like(J) - # set labels of track-appear slots to 0. - if len(gt_per_img) > 0: - labels_per_img[J != -1] = gt_per_img.labels[J[J != -1]] - labels.append(labels_per_img) - target_classes_o = torch.cat(labels) - target_classes[idx] = target_classes_o - if self.focal_loss: - gt_labels_target = F.one_hot(target_classes, num_classes=self.num_classes + 1)[:, :, :-1] # no loss for the last (background) class - gt_labels_target = gt_labels_target.to(src_logits) - loss_ce = sigmoid_focal_loss(src_logits.flatten(1), - gt_labels_target.flatten(1), - alpha=0.25, - gamma=2, - num_boxes=num_boxes, mean_in_dim1=False) - loss_ce = loss_ce.sum() - else: - loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight) - losses = {'loss_ce': loss_ce} - - if log: - # TODO this should probably be a separate loss, not hacked in this one here - losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0] - - return losses - - def match_for_single_frame(self, outputs: dict): - outputs_without_aux = {k: v for k, v in outputs.items() if k != 'aux_outputs'} - - gt_instances_i = self.gt_instances[self._current_frame_idx] # gt instances of i-th image. - track_instances: Instances = outputs_without_aux['track_instances'] - pred_logits_i = track_instances.pred_logits # predicted logits of i-th image. - pred_boxes_i = track_instances.pred_boxes # predicted boxes of i-th image. - - obj_idxes = gt_instances_i.obj_ids - obj_idxes_list = obj_idxes.detach().cpu().numpy().tolist() - obj_idx_to_gt_idx = {obj_idx: gt_idx for gt_idx, obj_idx in enumerate(obj_idxes_list)} - outputs_i = { - 'pred_logits': pred_logits_i.unsqueeze(0), - 'pred_boxes': pred_boxes_i.unsqueeze(0), - } - - # step1. inherit and update the previous tracks. - num_disappear_track = 0 - for j in range(len(track_instances)): - obj_id = track_instances.obj_idxes[j].item() - # set new target idx. - if obj_id >= 0: - if obj_id in obj_idx_to_gt_idx: - track_instances.matched_gt_idxes[j] = obj_idx_to_gt_idx[obj_id] - else: - num_disappear_track += 1 - track_instances.matched_gt_idxes[j] = -1 # track-disappear case. - else: - track_instances.matched_gt_idxes[j] = -1 - - full_track_idxes = torch.arange(len(track_instances), dtype=torch.long).to(pred_logits_i.device) - matched_track_idxes = (track_instances.obj_idxes >= 0) # occu - prev_matched_indices = torch.stack( - [full_track_idxes[matched_track_idxes], track_instances.matched_gt_idxes[matched_track_idxes]], dim=1).to( - pred_logits_i.device) - - # step2. select the unmatched slots. - # note that the FP tracks whose obj_idxes are -2 will not be selected here. - unmatched_track_idxes = full_track_idxes[track_instances.obj_idxes == -1] - - # step3. select the untracked gt instances (new tracks). - tgt_indexes = track_instances.matched_gt_idxes - tgt_indexes = tgt_indexes[tgt_indexes != -1] - - tgt_state = torch.zeros(len(gt_instances_i)).to(pred_logits_i.device) - tgt_state[tgt_indexes] = 1 - untracked_tgt_indexes = torch.arange(len(gt_instances_i)).to(pred_logits_i.device)[tgt_state == 0] - # untracked_tgt_indexes = select_unmatched_indexes(tgt_indexes, len(gt_instances_i)) - untracked_gt_instances = gt_instances_i[untracked_tgt_indexes] - - def match_for_single_decoder_layer(unmatched_outputs, matcher): - new_track_indices = matcher(unmatched_outputs, - [untracked_gt_instances]) # list[tuple(src_idx, tgt_idx)] - - src_idx = new_track_indices[0][0] - tgt_idx = new_track_indices[0][1] - # concat src and tgt. - new_matched_indices = torch.stack([unmatched_track_idxes[src_idx], untracked_tgt_indexes[tgt_idx]], - dim=1).to(pred_logits_i.device) - return new_matched_indices - - # step4. do matching between the unmatched slots and GTs. - unmatched_outputs = { - 'pred_logits': track_instances.pred_logits[unmatched_track_idxes].unsqueeze(0), - 'pred_boxes': track_instances.pred_boxes[unmatched_track_idxes].unsqueeze(0), - } - new_matched_indices = match_for_single_decoder_layer(unmatched_outputs, self.matcher) - - # step5. update obj_idxes according to the new matching result. - track_instances.obj_idxes[new_matched_indices[:, 0]] = gt_instances_i.obj_ids[new_matched_indices[:, 1]].long() - track_instances.matched_gt_idxes[new_matched_indices[:, 0]] = new_matched_indices[:, 1] - - # step6. calculate iou. - active_idxes = (track_instances.obj_idxes >= 0) & (track_instances.matched_gt_idxes >= 0) - active_track_boxes = track_instances.pred_boxes[active_idxes] - if len(active_track_boxes) > 0: - gt_boxes = gt_instances_i.boxes[track_instances.matched_gt_idxes[active_idxes]] - active_track_boxes = box_ops.box_cxcywh_to_xyxy(active_track_boxes) - gt_boxes = box_ops.box_cxcywh_to_xyxy(gt_boxes) - track_instances.iou[active_idxes] = matched_boxlist_iou(Boxes(active_track_boxes), Boxes(gt_boxes)) - - # step7. merge the unmatched pairs and the matched pairs. - matched_indices = torch.cat([new_matched_indices, prev_matched_indices], dim=0) - - # step8. calculate losses. - self.num_samples += len(gt_instances_i) + num_disappear_track - self.sample_device = pred_logits_i.device - for loss in self.losses: - new_track_loss = self.get_loss(loss, - outputs=outputs_i, - gt_instances=[gt_instances_i], - indices=[(matched_indices[:, 0], matched_indices[:, 1])], - num_boxes=1) - self.losses_dict.update( - {'frame_{}_{}'.format(self._current_frame_idx, key): value for key, value in new_track_loss.items()}) - - if 'aux_outputs' in outputs: - for i, aux_outputs in enumerate(outputs['aux_outputs']): - unmatched_outputs_layer = { - 'pred_logits': aux_outputs['pred_logits'][0, unmatched_track_idxes].unsqueeze(0), - 'pred_boxes': aux_outputs['pred_boxes'][0, unmatched_track_idxes].unsqueeze(0), - } - new_matched_indices_layer = match_for_single_decoder_layer(unmatched_outputs_layer, self.matcher) - matched_indices_layer = torch.cat([new_matched_indices_layer, prev_matched_indices], dim=0) - for loss in self.losses: - if loss == 'masks': - # Intermediate masks losses are too costly to compute, we ignore them. - continue - l_dict = self.get_loss(loss, - aux_outputs, - gt_instances=[gt_instances_i], - indices=[(matched_indices_layer[:, 0], matched_indices_layer[:, 1])], - num_boxes=1, ) - self.losses_dict.update( - {'frame_{}_aux{}_{}'.format(self._current_frame_idx, i, key): value for key, value in - l_dict.items()}) - self._step() - return track_instances - - def forward(self, outputs, input_data: dict): - # losses of each frame are calculated during the model's forwarding and are outputted by the model as outputs['losses_dict]. - losses = outputs.pop("losses_dict") - num_samples = self.get_num_boxes(self.num_samples) - for loss_name, loss in losses.items(): - losses[loss_name] /= num_samples - return losses - - -class RuntimeTrackerBase(object): - def __init__(self, score_thresh=0.8, filter_score_thresh=0.6, miss_tolerance=5): - self.score_thresh = score_thresh - self.filter_score_thresh = filter_score_thresh - self.miss_tolerance = miss_tolerance - self.max_obj_id = 0 - - def clear(self): - self.max_obj_id = 0 - - def update(self, track_instances: Instances): - track_instances.disappear_time[track_instances.scores >= self.score_thresh] = 0 - for i in range(len(track_instances)): - if track_instances.obj_idxes[i] == -1 and track_instances.scores[i] >= self.score_thresh: - # print("track {} has score {}, assign obj_id {}".format(i, track_instances.scores[i], self.max_obj_id)) - track_instances.obj_idxes[i] = self.max_obj_id - self.max_obj_id += 1 - elif track_instances.obj_idxes[i] >= 0 and track_instances.scores[i] < self.filter_score_thresh: - track_instances.disappear_time[i] += 1 - if track_instances.disappear_time[i] >= self.miss_tolerance: - # Set the obj_id to -1. - # Then this track will be removed by TrackEmbeddingLayer. - track_instances.obj_idxes[i] = -1 - - -class TrackerPostProcess(nn.Module): - """ This module converts the model's output into the format expected by the coco api""" - def __init__(self): - super().__init__() - - @torch.no_grad() - def forward(self, track_instances: Instances, target_size) -> Instances: - """ Perform the computation - Parameters: - outputs: raw outputs of the model - target_sizes: tensor of dimension [batch_size x 2] containing the size of each images of the batch - For evaluation, this must be the original image size (before any data augmentation) - For visualization, this should be the image size after data augment, but before padding - """ - out_logits = track_instances.pred_logits - out_bbox = track_instances.pred_boxes - - prob = out_logits.sigmoid() - # prob = out_logits[...,:1].sigmoid() - scores, labels = prob.max(-1) - - # convert to [x0, y0, x1, y1] format - boxes = box_ops.box_cxcywh_to_xyxy(out_bbox) - # and from relative [0, 1] to absolute [0, height] coordinates - img_h, img_w = target_size - scale_fct = torch.Tensor([img_w, img_h, img_w, img_h]).to(boxes) - boxes = boxes * scale_fct[None, :] - - track_instances.boxes = boxes - track_instances.scores = scores - track_instances.labels = labels -# track_instances.remove('pred_logits') -# track_instances.remove('pred_boxes') - return track_instances - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -class MOTR(nn.Module): - def __init__(self, backbone, transformer, num_classes, num_queries, num_feature_levels, criterion, track_embed, - aux_loss=True, with_box_refine=False, two_stage=False, memory_bank=None): - """ Initializes the model. - Parameters: - backbone: torch module of the backbone to be used. See backbone.py - transformer: torch module of the transformer architecture. See transformer.py - num_classes: number of object classes - num_queries: number of object queries, ie detection slot. This is the maximal number of objects - DETR can detect in a single image. For COCO, we recommend 100 queries. - aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used. - with_box_refine: iterative bounding box refinement - two_stage: two-stage Deformable DETR - """ - super().__init__() - self.num_queries = num_queries - self.track_embed = track_embed - self.transformer = transformer - hidden_dim = transformer.d_model - self.num_classes = num_classes - self.class_embed = nn.Linear(hidden_dim, num_classes) - self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3) - self.num_feature_levels = num_feature_levels - if not two_stage: - self.query_embed = nn.Embedding(num_queries, hidden_dim * 2) - if num_feature_levels > 1: - num_backbone_outs = len(backbone.strides) - input_proj_list = [] - for _ in range(num_backbone_outs): - in_channels = backbone.num_channels[_] - input_proj_list.append(nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - )) - for _ in range(num_feature_levels - num_backbone_outs): - input_proj_list.append(nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(32, hidden_dim), - )) - in_channels = hidden_dim - self.input_proj = nn.ModuleList(input_proj_list) - else: - self.input_proj = nn.ModuleList([ - nn.Sequential( - nn.Conv2d(backbone.num_channels[0], hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - )]) - self.backbone = backbone - self.aux_loss = aux_loss - self.with_box_refine = with_box_refine - self.two_stage = two_stage - - prior_prob = 0.01 - bias_value = -math.log((1 - prior_prob) / prior_prob) - self.class_embed.bias.data = torch.ones(num_classes) * bias_value - nn.init.constant_(self.bbox_embed.layers[-1].weight.data, 0) - nn.init.constant_(self.bbox_embed.layers[-1].bias.data, 0) - for proj in self.input_proj: - nn.init.xavier_uniform_(proj[0].weight, gain=1) - nn.init.constant_(proj[0].bias, 0) - - # if two-stage, the last class_embed and bbox_embed is for region proposal generation - num_pred = (transformer.decoder.num_layers + 1) if two_stage else transformer.decoder.num_layers - if with_box_refine: - self.class_embed = _get_clones(self.class_embed, num_pred) - self.bbox_embed = _get_clones(self.bbox_embed, num_pred) - nn.init.constant_(self.bbox_embed[0].layers[-1].bias.data[2:], -2.0) - # hack implementation for iterative bounding box refinement - self.transformer.decoder.bbox_embed = self.bbox_embed - else: - nn.init.constant_(self.bbox_embed.layers[-1].bias.data[2:], -2.0) - self.class_embed = nn.ModuleList([self.class_embed for _ in range(num_pred)]) - self.bbox_embed = nn.ModuleList([self.bbox_embed for _ in range(num_pred)]) - self.transformer.decoder.bbox_embed = None - if two_stage: - # hack implementation for two-stage - self.transformer.decoder.class_embed = self.class_embed - for box_embed in self.bbox_embed: - nn.init.constant_(box_embed.layers[-1].bias.data[2:], 0.0) - self.post_process = TrackerPostProcess() - self.track_base = RuntimeTrackerBase() - self.criterion = criterion - self.memory_bank = memory_bank - self.mem_bank_len = 0 if memory_bank is None else memory_bank.max_his_length - - def _generate_empty_tracks(self): - track_instances = Instances((1, 1)) - num_queries, dim = self.query_embed.weight.shape # (300, 512) - device = self.query_embed.weight.device - track_instances.ref_pts = self.transformer.reference_points(self.query_embed.weight[:, :dim // 2]) - track_instances.query_pos = self.query_embed.weight - track_instances.output_embedding = torch.zeros((num_queries, dim >> 1), device=device) - track_instances.obj_idxes = torch.full((len(track_instances),), -1, dtype=torch.long, device=device) - track_instances.matched_gt_idxes = torch.full((len(track_instances),), -1, dtype=torch.long, device=device) - track_instances.disappear_time = torch.zeros((len(track_instances), ), dtype=torch.long, device=device) - track_instances.iou = torch.zeros((len(track_instances),), dtype=torch.float, device=device) - track_instances.scores = torch.zeros((len(track_instances),), dtype=torch.float, device=device) - track_instances.track_scores = torch.zeros((len(track_instances),), dtype=torch.float, device=device) - track_instances.pred_boxes = torch.zeros((len(track_instances), 4), dtype=torch.float, device=device) - track_instances.pred_logits = torch.zeros((len(track_instances), self.num_classes), dtype=torch.float, device=device) - - mem_bank_len = self.mem_bank_len - track_instances.mem_bank = torch.zeros((len(track_instances), mem_bank_len, dim // 2), dtype=torch.float32, device=device) - track_instances.mem_padding_mask = torch.ones((len(track_instances), mem_bank_len), dtype=torch.bool, device=device) - track_instances.save_period = torch.zeros((len(track_instances), ), dtype=torch.float32, device=device) - - return track_instances.to(self.query_embed.weight.device) - - def clear(self): - self.track_base.clear() - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_coord): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - return [{'pred_logits': a, 'pred_boxes': b, } - for a, b in zip(outputs_class[:-1], outputs_coord[:-1])] - - def _forward_single_image(self, samples, track_instances: Instances): - features, pos = self.backbone(samples) - src, mask = features[-1].decompose() - assert mask is not None - - srcs = [] - masks = [] - for l, feat in enumerate(features): - src, mask = feat.decompose() - srcs.append(self.input_proj[l](src)) - masks.append(mask) - assert mask is not None - - if self.num_feature_levels > len(srcs): - _len_srcs = len(srcs) - for l in range(_len_srcs, self.num_feature_levels): - if l == _len_srcs: - src = self.input_proj[l](features[-1].tensors) - else: - src = self.input_proj[l](srcs[-1]) - m = samples.mask - mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0] - pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype) - srcs.append(src) - masks.append(mask) - pos.append(pos_l) - - hs, init_reference, inter_references, enc_outputs_class, enc_outputs_coord_unact = self.transformer(srcs, masks, pos, track_instances.query_pos, ref_pts=track_instances.ref_pts) - - outputs_classes = [] - outputs_coords = [] - for lvl in range(hs.shape[0]): - if lvl == 0: - reference = init_reference - else: - reference = inter_references[lvl - 1] - reference = inverse_sigmoid(reference) - outputs_class = self.class_embed[lvl](hs[lvl]) - tmp = self.bbox_embed[lvl](hs[lvl]) - if reference.shape[-1] == 4: - tmp += reference - else: - assert reference.shape[-1] == 2 - tmp[..., :2] += reference - outputs_coord = tmp.sigmoid() - outputs_classes.append(outputs_class) - outputs_coords.append(outputs_coord) - outputs_class = torch.stack(outputs_classes) - outputs_coord = torch.stack(outputs_coords) - - ref_pts_all = torch.cat([init_reference[None], inter_references[:, :, :, :2]], dim=0) - out = {'pred_logits': outputs_class[-1], 'pred_boxes': outputs_coord[-1], 'ref_pts': ref_pts_all[5]} - if self.aux_loss: - out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord) - - with torch.no_grad(): - if self.training: - track_scores = outputs_class[-1, 0, :].sigmoid().max(dim=-1).values - else: - track_scores = outputs_class[-1, 0, :, 0].sigmoid() - - track_instances.scores = track_scores - track_instances.pred_logits = outputs_class[-1, 0] - track_instances.pred_boxes = outputs_coord[-1, 0] - track_instances.output_embedding = hs[-1, 0] - if self.training: - # the track id will be assigned by the mather. - out['track_instances'] = track_instances - track_instances = self.criterion.match_for_single_frame(out) - else: - # each track will be assigned an unique global id by the track base. - self.track_base.update(track_instances) - if self.memory_bank is not None: - track_instances = self.memory_bank(track_instances) - # track_instances.track_scores = track_instances.track_scores[..., 0] - # track_instances.scores = track_instances.track_scores.sigmoid() - if self.training: - self.criterion.calc_loss_for_track_scores(track_instances) - tmp = {} - tmp['init_track_instances'] = self._generate_empty_tracks() - tmp['track_instances'] = track_instances - out_track_instances = self.track_embed(tmp) - out['track_instances'] = out_track_instances - return out - - @torch.no_grad() - def inference_single_image(self, img, ori_img_size, track_instances=None): - if not isinstance(img, NestedTensor): - img = nested_tensor_from_tensor_list(img) - if track_instances is None: - track_instances = self._generate_empty_tracks() - - res = self._forward_single_image(img, track_instances=track_instances) - - track_instances = res['track_instances'] - track_instances = self.post_process(track_instances, ori_img_size) - ret = {'track_instances': track_instances} - if 'ref_pts' in res: - ref_pts = res['ref_pts'] - img_h, img_w = ori_img_size - scale_fct = torch.Tensor([img_w, img_h]).to(ref_pts) - ref_pts = ref_pts * scale_fct[None] - ret['ref_pts'] = ref_pts - return ret - - def forward(self, data: dict): - if self.training: - self.criterion.initialize_for_single_clip(data['gt_instances']) - frames = data['imgs'] # list of Tensor. - outputs = { - 'pred_logits': [], - 'pred_boxes': [], - } - - track_instances = self._generate_empty_tracks() - for frame in frames: - if not isinstance(frame, NestedTensor): - frame = nested_tensor_from_tensor_list([frame]) - frame_res = self._forward_single_image(frame, track_instances) - track_instances = frame_res['track_instances'] - outputs['pred_logits'].append(frame_res['pred_logits']) - outputs['pred_boxes'].append(frame_res['pred_boxes']) - - if not self.training: - outputs['track_instances'] = track_instances - else: - outputs['losses_dict'] = self.criterion.losses_dict - return outputs - - -def build(args): - dataset_to_num_classes = { - 'coco': 91, - 'coco_panoptic': 250, - 'e2e_mot': 1, - 'e2e_joint': 1, - 'e2e_static_mot': 1 - } - assert args.dataset_file in dataset_to_num_classes - num_classes = dataset_to_num_classes[args.dataset_file] - device = torch.device(args.device) - - backbone = build_backbone(args) - - transformer = build_deforamble_transformer(args) - d_model = transformer.d_model - hidden_dim = args.dim_feedforward - query_interaction_layer = build_query_interaction_layer(args, args.query_interaction_layer, d_model, hidden_dim, d_model*2) - - img_matcher = build_matcher(args) - num_frames_per_batch = max(args.sampler_lengths) - weight_dict = {} - for i in range(num_frames_per_batch): - weight_dict.update({"frame_{}_loss_ce".format(i): args.cls_loss_coef, - 'frame_{}_loss_bbox'.format(i): args.bbox_loss_coef, - 'frame_{}_loss_giou'.format(i): args.giou_loss_coef, - }) - - # TODO this is a hack - if args.aux_loss: - for i in range(num_frames_per_batch): - for j in range(args.dec_layers - 1): - weight_dict.update({"frame_{}_aux{}_loss_ce".format(i, j): args.cls_loss_coef, - 'frame_{}_aux{}_loss_bbox'.format(i, j): args.bbox_loss_coef, - 'frame_{}_aux{}_loss_giou'.format(i, j): args.giou_loss_coef, - }) - if args.memory_bank_type is not None and len(args.memory_bank_type) > 0: - memory_bank = build_memory_bank(args, d_model, hidden_dim, d_model * 2) - for i in range(num_frames_per_batch): - weight_dict.update({"frame_{}_track_loss_ce".format(i): args.cls_loss_coef}) - else: - memory_bank = None - losses = ['labels', 'boxes'] - criterion = ClipMatcher(num_classes, matcher=img_matcher, weight_dict=weight_dict, losses=losses) - criterion.to(device) - postprocessors = {} - model = MOTR( - backbone, - transformer, - track_embed=query_interaction_layer, - num_feature_levels=args.num_feature_levels, - num_classes=num_classes, - num_queries=args.num_queries, - aux_loss=args.aux_loss, - criterion=criterion, - with_box_refine=args.with_box_refine, - two_stage=args.two_stage, - memory_bank=memory_bank, - ) - return model, criterion, postprocessors diff --git a/spaces/EleutherAI/magma/train.py b/spaces/EleutherAI/magma/train.py deleted file mode 100644 index a004bf122d4f423dcefb0051d66d69d3e323758c..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/magma/train.py +++ /dev/null @@ -1,192 +0,0 @@ -import torch -import os -import deepspeed -import wandb -from torch.utils.data import random_split, ConcatDataset -from torch.optim import AdamW -from tqdm import tqdm -from functools import partial -from magma.datasets import ( - collate_fn, - ImgCptDataset, -) -from magma.magma import ( - Magma, -) -from magma.utils import ( - is_main, - cycle, - parse_args, - wandb_log, - wandb_init, - save_model, - load_model, - print_main, - configure_param_groups, -) -from magma.train_loop import ( - eval_step, - inference_step, - train_step, -) - - -def _load_img_cpt_datasets(dataset_dir, tokenizer, transforms): - if isinstance(dataset_dir, (list, tuple)): - return ConcatDataset( - [_load_img_cpt_datasets(d, tokenizer, transforms) for d in dataset_dir] - ) - elif isinstance(dataset_dir, str): - return ImgCptDataset(dataset_dir, tokenizer=tokenizer, transforms=transforms) - else: - raise TypeError("dataset dir wrong type") - - -def get_pretraining_datasets(config, tokenizer, transforms): - # if config.train_dataset_dir is a list, load all datasets + join together - train_dataset = _load_img_cpt_datasets( - config.train_dataset_dir, tokenizer, transforms - ) - # if no dedicated eval sets are given, use a percentage of the train dataset - if config.eval_dataset_dir is None: - eval_len = int(len(train_dataset) * config.eval_dataset_pct) - train_len = len(train_dataset) - eval_len - print( - f"Randomly splitting train_dataset into two datasets of length {train_len} and {eval_len}" - ) - train_dataset, eval_dataset = random_split(train_dataset, [train_len, eval_len]) - else: - eval_dataset = _load_img_cpt_datasets( - config.eval_dataset_dir, tokenizer, transforms - ) - - print_main(f"Loaded train dataset with {len(train_dataset)} samples") - print_main(f"Loaded eval dataset with {len(eval_dataset)} samples") - - return train_dataset, eval_dataset - - -# tell tokenizers not to do parallelism -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -if __name__ == "__main__": - - # parse command line arguments: - args = parse_args() - deepspeed.init_distributed() - - # load model + tokenizer: - model = Magma( - args.config - ) # for finetuning one might want to load the model via Magma.from_checkpoint(...) here - tokenizer, config, transforms = model.tokenizer, model.config, model.transforms - - # filter frozen from trainable parameters: - trainable_parameters = configure_param_groups(model, config) - - # load data: - train_dataset, eval_dataset = get_pretraining_datasets( - config, tokenizer, transforms - ) - - print_main(f"Loaded train dataset with {len(train_dataset)} samples") - print_main(f"Loaded eval dataset with {len(eval_dataset)} samples") - - opt = AdamW( - trainable_parameters, - config.lr, - betas=(0.9, 0.95), - weight_decay=config.weight_decay, - ) - - model_engine, opt, train_loader, lr_scheduler = deepspeed.initialize( - args=args, - model=model, - optimizer=opt, - model_parameters=trainable_parameters, - training_data=train_dataset, - collate_fn=partial(collate_fn, seq_len=model.seq_len), - config_params=config.deepspeed_config_params, - ) - eval_loader = cycle(model_engine.deepspeed_io(eval_dataset)) - train_loader = cycle(train_loader) - - # initialize training - global_step = 0 - if config.load: - # loads a deepspeed checkpoint if provided. For finetuning, set load_optimizer to false - previous_global_step = load_model( - model_engine, - config.load, - load_optimizer_states=config.load_optimizer, - load_lr_scheduler_states=config.load_optimizer, - ) - - if config.load_optimizer: - global_step = previous_global_step - - pbar = tqdm( - range(0, config.train_steps), - desc="training...", - initial=global_step, - total=config.train_steps, - disable=not is_main(), - ) - wandb_init( - project=config.wandb_project, - name=config.name or wandb.util.generate_id(), - config=config, - ) - - # training loop - for i in pbar: - if global_step >= config.train_steps: - break - - ##### train step - loss = train_step(config, train_loader, model_engine) - - global_step += 1 - - if global_step % config.log_every == 0: - pbar.set_description(f"training... Step: {global_step} Loss: {loss}") - current_lr = ( - [lr for lr in lr_scheduler.get_lr()] - if lr_scheduler is not None - else config.lr - ) - to_log = {"train/loss": loss, "train/lr": current_lr} - wandb_log(to_log, step=global_step) - - ##### Evaluation phase - if global_step % config.eval_every == 0: - model_engine.eval() - with torch.no_grad(): - - ##### eval step: - eval_loss = eval_step(config, eval_loader, model_engine) - - wandb_log({"eval/loss": eval_loss}, step=global_step) - pbar.set_description( - f"evaluating... Step: {global_step} Eval Loss: {eval_loss}" - ) - - ##### inference: - image_grid, caption = inference_step(config, eval_loader, model_engine) - wandb_log( - {"inference/image": wandb.Image(image_grid, caption=caption)}, - step=global_step, - ) - - model_engine.train() - - ##### Save model - if global_step % config.save_every == 0: - if config.save is not None: - save_model(model_engine, config.save, global_step) - print_main(f"saving model at step {global_step}") - - ##### Save model after training is finished - if config.save is not None: - save_model(model_engine, config.save, global_step) - print_main(f"saving model at end of training (step {global_step})") diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/EuroPython2022/Fin-Eng-ASR-autosubtitles/README.md b/spaces/EuroPython2022/Fin-Eng-ASR-autosubtitles/README.md deleted file mode 100644 index 304690c62593beb6b66827a79e4c2ecca9e6804d..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/Fin-Eng-ASR-autosubtitles/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Fin Eng ASR Autosubtitles -emoji: 🌍 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -We use Opus-MT models in the code. Here is the citations -``` -@inproceedings{tiedemann-thottingal-2020-opus, - title = "{OPUS}-{MT} {--} Building open translation services for the World", - author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, - booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", - month = nov, - year = "2020", - address = "Lisboa, Portugal", - publisher = "European Association for Machine Translation", - url = "https://aclanthology.org/2020.eamt-1.61", - pages = "479--480", -} -@inproceedings{tiedemann-2020-tatoeba, - title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", - author = {Tiedemann, J{\"o}rg}, - booktitle = "Proceedings of the Fifth Conference on Machine Translation", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://aclanthology.org/2020.wmt-1.139", - pages = "1174--1182", -} - -Wav2vec2: - BAEVSKI, Alexei, et al. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 2020, 33: 12449-12460. - -T5: - RAFFEL, Colin, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 2020, 21.140: 1-67. -``` \ No newline at end of file diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/general.py b/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/general.py deleted file mode 100644 index 7da642409021ef1a2e785cd8aa1a4be467344df4..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/general.py +++ /dev/null @@ -1,1299 +0,0 @@ -import glob -import logging -import os -import platform -import random -import re -import shutil -import subprocess -import time -from contextlib import contextmanager -from copy import copy -from pathlib import Path - -import cv2 -import math -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import torch -import torch.nn as nn -import yaml -from PIL import Image -from scipy.cluster.vq import kmeans -from scipy.signal import butter, filtfilt -from tqdm import tqdm - -from metadata.predictor_yolo_detector.utils.google_utils import gsutil_getsize -from metadata.predictor_yolo_detector.utils.torch_utils import is_parallel, init_torch_seeds - -# Set printoptions -torch.set_printoptions(linewidth=320, precision=5, profile='long') -np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 -matplotlib.rc('font', **{'size': 11}) - -# Prevent OpenCV from multithreading (to use PyTorch DataLoader) -cv2.setNumThreads(0) - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - """ - Decorator to make all processes in distributed training wait for each local_master to do something. - """ - if local_rank not in [-1, 0]: - torch.distributed.barrier() - yield - if local_rank == 0: - torch.distributed.barrier() - - -def set_logging(rank=-1): - logging.basicConfig( - format="%(message)s", - level=logging.INFO if rank in [-1, 0] else logging.WARN) - - -def init_seeds(seed=0): - random.seed(seed) - np.random.seed(seed) - init_torch_seeds(seed) - - -def get_latest_run(search_dir='./runs'): - # Return path to most recent 'last.pt' in /runs (i.e. to --resume from) - last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) - return max(last_list, key=os.path.getctime) if last_list else '' - - -def check_git_status(): - # Suggest 'git pull' if repo is out of date - if platform.system() in ['Linux', 'Darwin'] and not os.path.isfile('/.dockerenv'): - s = subprocess.check_output('if [ -d .git ]; then git fetch && git status -uno; fi', shell=True).decode('utf-8') - if 'Your branch is behind' in s: - print(s[s.find('Your branch is behind'):s.find('\n\n')] + '\n') - - -def check_img_size(img_size, s=32): - # Verify img_size is a multiple of stride s - new_size = make_divisible(img_size, int(s)) # ceil gs-multiple - if new_size != img_size: - print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size)) - return new_size - - -def check_anchors(dataset, model, thr=4.0, imgsz=640): - # Check anchor fit to data, recompute if necessary - print('\nAnalyzing anchors... ', end='') - m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() - shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) - scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale - wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh - - def metric(k): # compute metric - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - best = x.max(1)[0] # best_x - aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold - bpr = (best > 1. / thr).float().mean() # best possible recall - return bpr, aat - - bpr, aat = metric(m.anchor_grid.clone().cpu().view(-1, 2)) - print('anchors/target = %.2f, Best Possible Recall (BPR) = %.4f' % (aat, bpr), end='') - if bpr < 0.98: # threshold to recompute - print('. Attempting to generate improved anchors, please wait...' % bpr) - na = m.anchor_grid.numel() // 2 # number of anchors - new_anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) - new_bpr = metric(new_anchors.reshape(-1, 2))[0] - if new_bpr > bpr: # replace anchors - new_anchors = torch.tensor(new_anchors, device=m.anchors.device).type_as(m.anchors) - m.anchor_grid[:] = new_anchors.clone().view_as(m.anchor_grid) # for inference - m.anchors[:] = new_anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss - check_anchor_order(m) - print('New anchors saved to model. Update model *.yaml to use these anchors in the future.') - else: - print('Original anchors better than new anchors. Proceeding with original anchors.') - print('') # newline - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary - a = m.anchor_grid.prod(-1).view(-1) # anchor area - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da.sign() != ds.sign(): # same order - print('Reversing anchor order') - m.anchors[:] = m.anchors.flip(0) - m.anchor_grid[:] = m.anchor_grid.flip(0) - - -def check_file(file): - # Search for file if not found - if os.path.isfile(file) or file == '': - return file - else: - files = glob.glob('./**/' + file, recursive=True) # find file - assert len(files), 'File Not Found: %s' % file # assert file was found - assert len(files) == 1, "Multiple files match '%s', specify exact path: %s" % (file, files) # assert unique - return files[0] # return file - - -def check_dataset(dict): - # Download dataset if not found - val, s = dict.get('val'), dict.get('download') - if val and len(val): - val = [os.path.abspath(x) for x in (val if isinstance(val, list) else [val])] # val path - if not all(os.path.exists(x) for x in val): - print('\nWARNING: Dataset not found, nonexistent paths: %s' % [*val]) - if s and len(s): # download script - print('Downloading %s ...' % s) - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - torch.hub.download_url_to_file(s, f) - r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip - else: # bash script - r = os.system(s) - print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value - else: - raise Exception('Dataset not found.') - - -def make_divisible(x, divisor): - # Returns x evenly divisible by divisor - return math.ceil(x / divisor) * divisor - - -def labels_to_class_weights(labels, nc=80): - # Get class weights (inverse frequency) from training labels - if labels[0] is None: # no labels loaded - return torch.Tensor() - - labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(np.int) # labels = [class xywh] - weights = np.bincount(classes, minlength=nc) # occurrences per class - - # Prepend gridpoint count (for uCE training) - # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image - # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start - - weights[weights == 0] = 1 # replace empty bins with 1 - weights = 1 / weights # number of targets per class - weights /= weights.sum() # normalize - return torch.from_numpy(weights) - - -def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): - # Produces image weights based on class mAPs - n = len(labels) - class_counts = np.array([np.bincount(labels[i][:, 0].astype(np.int), minlength=nc) for i in range(n)]) - image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) - # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample - return image_weights - - -def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') - # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') - # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco - # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - return x - - -def xyxy2xywh(x): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right - y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y - return y - - -def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): - # Rescale coords (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - coords[:, [0, 2]] -= pad[0] # x padding - coords[:, [1, 3]] -= pad[1] # y padding - coords[:, :4] /= gain - clip_coords(coords, img0_shape) - return coords - - -def clip_coords(boxes, img_shape): - # Clip bounding xyxy bounding boxes to image shape (height, width) - boxes[:, 0].clamp_(0, img_shape[1]) # x1 - boxes[:, 1].clamp_(0, img_shape[0]) # y1 - boxes[:, 2].clamp_(0, img_shape[1]) # x2 - boxes[:, 3].clamp_(0, img_shape[0]) # y2 - - -def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, fname='precision-recall_curve.png'): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. - # Arguments - tp: True positives (nparray, nx1 or nx10). - conf: Objectness value from 0-1 (nparray). - pred_cls: Predicted object classes (nparray). - target_cls: True object classes (nparray). - plot: Plot precision-recall curve at mAP@0.5 - fname: Plot filename - # Returns - The average precision as computed in py-faster-rcnn. - """ - - # Sort by objectness - i = np.argsort(-conf) - tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] - - # Find unique classes - unique_classes = np.unique(target_cls) - - # Create Precision-Recall curve and compute AP for each class - px, py = np.linspace(0, 1, 1000), [] # for plotting - pr_score = 0.1 # score to evaluate P and R https://github.com/ultralytics/yolov3/issues/898 - s = [unique_classes.shape[0], tp.shape[1]] # number class, number iou thresholds (i.e. 10 for mAP0.5...0.95) - ap, p, r = np.zeros(s), np.zeros(s), np.zeros(s) - for ci, c in enumerate(unique_classes): - i = pred_cls == c - n_gt = (target_cls == c).sum() # Number of ground truth objects - n_p = i.sum() # Number of predicted objects - - if n_p == 0 or n_gt == 0: - continue - else: - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) - - # Recall - recall = tpc / (n_gt + 1e-16) # recall curve - r[ci] = np.interp(-pr_score, -conf[i], recall[:, 0]) # r at pr_score, negative x, xp because xp decreases - - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-pr_score, -conf[i], precision[:, 0]) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) - if j == 0: - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 - - # Compute F1 score (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + 1e-16) - - if plot: - py = np.stack(py, axis=1) - fig, ax = plt.subplots(1, 1, figsize=(5, 5)) - ax.plot(px, py, linewidth=0.5, color='grey') # plot(recall, precision) - ax.plot(px, py.mean(1), linewidth=2, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) - ax.set_xlabel('Recall') - ax.set_ylabel('Precision') - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend() - fig.tight_layout() - fig.savefig(fname, dpi=200) - - return p, r, ap, f1, unique_classes.astype('int32') - - -def compute_ap(recall, precision): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rbgirshick/py-faster-rcnn. - # Arguments - recall: The recall curve (list). - precision: The precision curve (list). - # Returns - The average precision as computed in py-faster-rcnn. - """ - - # Append sentinel values to beginning and end - mrec = recall # np.concatenate(([0.], recall, [recall[-1] + 1E-3])) - mpre = precision # np.concatenate(([0.], precision, [0.])) - - # Compute the precision envelope - mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) - - # Integrate area under curve - method = 'interp' # methods: 'continuous', 'interp' - if method == 'interp': - x = np.linspace(0, 1, 101) # 101-point interp (COCO) - ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate - else: # 'continuous' - i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve - - return ap, mpre, mrec - - -def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-9): - # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - iou = inter / union - if GIoU or DIoU or CIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + - (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha = v / ((1 + eps) - iou + v) - return iou - (rho2 / c2 + v * alpha) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU - else: - return iou # IoU - - -def box_iou(box1, box2): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) - - -def wh_iou(wh1, wh2): - # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter) - - -class FocalLoss(nn.Module): - # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(FocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - # p_t = torch.exp(-loss) - # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability - - # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py - pred_prob = torch.sigmoid(pred) # prob from logits - p_t = true * pred_prob + (1 - true) * (1 - pred_prob) - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = (1.0 - p_t) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - - -def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441 - # return positive, negative label smoothing BCE targets - return 1.0 - 0.5 * eps, 0.5 * eps - - -class BCEBlurWithLogitsLoss(nn.Module): - # BCEwithLogitLoss() with reduced missing label effects. - def __init__(self, alpha=0.05): - super(BCEBlurWithLogitsLoss, self).__init__() - self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss() - self.alpha = alpha - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - pred = torch.sigmoid(pred) # prob from logits - dx = pred - true # reduce only missing label effects - # dx = (pred - true).abs() # reduce missing label and false label effects - alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4)) - loss *= alpha_factor - return loss.mean() - - -def compute_loss(p, targets, model): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - tcls, tbox, indices, anchors = build_targets(p, targets, model) # targets - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([h['cls_pw']])).to(device) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([h['obj_pw']])).to(device) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - cp, cn = smooth_BCE(eps=0.0) - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - # Losses - nt = 0 # number of targets - np = len(p) # number of outputs - balance = [4.0, 1.0, 0.4] if np == 3 else [4.0, 1.0, 0.4, 0.1] # P3-5 or P3-6 - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = indices[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - nt += n # cumulative targets - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - pxy = ps[:, :2].sigmoid() * 2. - 0.5 - pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1).to(device) # predicted box - iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - model.gr) + model.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - if model.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 5:], cn, device=device) # targets - t[range(n), tcls[i]] = cp - lcls += BCEcls(ps[:, 5:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - lobj += BCEobj(pi[..., 4], tobj) * balance[i] # obj loss - - s = 3 / np # output count scaling - lbox *= h['box'] * s - lobj *= h['obj'] * s * (1.4 if np == 4 else 1.) - lcls *= h['cls'] * s - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - -def build_targets(p, targets, model): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - na, nt = det.na, targets.shape[0] # number of anchors, targets - tcls, tbox, indices, anch = [], [], [], [] - gain = torch.ones(7, device=targets.device) # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(det.nl): - anchors = det.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < model.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj, gi)) # image, anchor, grid indices - tbox.append(torch.cat((gxy - gij, gwh), 1)) # box - anch.append(anchors[a]) # anchors - tcls.append(c) # class - - return tcls, tbox, indices, anch - - -def non_max_suppression(prediction, conf_thres=0.1, iou_thres=0.6, merge=False, classes=None, agnostic=False): - """Performs Non-Maximum Suppression (NMS) on inference results - - Returns: - detections with shape: nx6 (x1, y1, x2, y2, conf, cls) - """ - - nc = prediction[0].shape[1] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img) - - t = time.time() - output = [None] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # If none remain process next image - n = x.shape[0] # number of boxes - if not n: - continue - - # Sort by confidence - # x = x[x[:, 4].argsort(descending=True)] - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torch.ops.torchvision.nms(boxes, scores, iou_thres) - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - try: # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - except: # possible CUDA error https://github.com/ultralytics/yolov3/issues/1139 - print(x, i, x.shape, i.shape) - pass - - output[xi] = x[i] - if (time.time() - t) > time_limit: - break # time limit exceeded - - return output - - -def strip_optimizer(f='weights/best.pt', s=''): # from utils.general import *; strip_optimizer() - # Strip optimizer from 'f' to finalize training, optionally save as 's' - x = torch.load(f, map_location=torch.device('cpu')) - x['optimizer'] = None - x['training_results'] = None - x['epoch'] = -1 - x['model'].half() # to FP16 - for p in x['model'].parameters(): - p.requires_grad = False - torch.save(x, s or f) - mb = os.path.getsize(s or f) / 1E6 # filesize - print('Optimizer stripped from %s,%s %.1fMB' % (f, (' saved as %s,' % s) if s else '', mb)) - - -def coco_class_count(path='../coco/labels/train2014/'): - # Histogram of occurrences per class - nc = 80 # number classes - x = np.zeros(nc, dtype='int32') - files = sorted(glob.glob('%s/*.*' % path)) - for i, file in enumerate(files): - labels = np.loadtxt(file, dtype=np.float32).reshape(-1, 5) - x += np.bincount(labels[:, 0].astype('int32'), minlength=nc) - print(i, len(files)) - - -def coco_only_people(path='../coco/labels/train2017/'): # from utils.general import *; coco_only_people() - # Find images with only people - files = sorted(glob.glob('%s/*.*' % path)) - for i, file in enumerate(files): - labels = np.loadtxt(file, dtype=np.float32).reshape(-1, 5) - if all(labels[:, 0] == 0): - print(labels.shape[0], file) - - -def crop_images_random(path='../images/', scale=0.50): # from utils.general import *; crop_images_random() - # crops images into random squares up to scale fraction - # WARNING: overwrites images! - for file in tqdm(sorted(glob.glob('%s/*.*' % path))): - img = cv2.imread(file) # BGR - if img is not None: - h, w = img.shape[:2] - - # create random mask - a = 30 # minimum size (pixels) - mask_h = random.randint(a, int(max(a, h * scale))) # mask height - mask_w = mask_h # mask width - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - cv2.imwrite(file, img[ymin:ymax, xmin:xmax]) - - -def coco_single_class_labels(path='../coco/labels/train2014/', label_class=43): - # Makes single-class coco datasets. from utils.general import *; coco_single_class_labels() - if os.path.exists('new/'): - shutil.rmtree('new/') # delete output folder - os.makedirs('new/') # make new output folder - os.makedirs('new/labels/') - os.makedirs('new/images/') - for file in tqdm(sorted(glob.glob('%s/*.*' % path))): - with open(file, 'r') as f: - labels = np.array([x.split() for x in f.read().splitlines()], dtype=np.float32) - i = labels[:, 0] == label_class - if any(i): - img_file = file.replace('labels', 'images').replace('txt', 'jpg') - labels[:, 0] = 0 # reset class to 0 - with open('new/images.txt', 'a') as f: # add image to dataset list - f.write(img_file + '\n') - with open('new/labels/' + Path(file).name, 'a') as f: # write label - for l in labels[i]: - f.write('%g %.6f %.6f %.6f %.6f\n' % tuple(l)) - shutil.copyfile(src=img_file, dst='new/images/' + Path(file).name.replace('txt', 'jpg')) # copy images - - -def kmean_anchors(path='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): - """ Creates kmeans-evolved anchors from training dataset - - Arguments: - path: path to dataset *.yaml, or a loaded dataset - n: number of anchors - img_size: image size used for training - thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0 - gen: generations to evolve anchors using genetic algorithm - - Return: - k: kmeans evolved anchors - - Usage: - from utils.general import *; _ = kmean_anchors() - """ - thr = 1. / thr - - def metric(k, wh): # compute metrics - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - # x = wh_iou(wh, torch.tensor(k)) # iou metric - return x, x.max(1)[0] # x, best_x - - def fitness(k): # mutation fitness - _, best = metric(torch.tensor(k, dtype=torch.float32), wh) - return (best * (best > thr).float()).mean() # fitness - - def print_results(k): - k = k[np.argsort(k.prod(1))] # sort small to large - x, best = metric(k, wh0) - bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr - print('thr=%.2f: %.4f best possible recall, %.2f anchors past thr' % (thr, bpr, aat)) - print('n=%g, img_size=%s, metric_all=%.3f/%.3f-mean/best, past_thr=%.3f-mean: ' % - (n, img_size, x.mean(), best.mean(), x[x > thr].mean()), end='') - for i, x in enumerate(k): - print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg - return k - - if isinstance(path, str): # *.yaml file - with open(path) as f: - data_dict = yaml.load(f, Loader=yaml.FullLoader) # model dict - from metadata.predictor_yolo_detector.utils.datasets import LoadImagesAndLabels - dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) - else: - dataset = path # dataset - - # Get label wh - shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True) - wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh - - # Filter - i = (wh0 < 3.0).any(1).sum() - if i: - print('WARNING: Extremely small objects found. ' - '%g of %g labels are < 3 pixels in width or height.' % (i, len(wh0))) - wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels - - # Kmeans calculation - print('Running kmeans for %g anchors on %g points...' % (n, len(wh))) - s = wh.std(0) # sigmas for whitening - k, dist = kmeans(wh / s, n, iter=30) # points, mean distance - k *= s - wh = torch.tensor(wh, dtype=torch.float32) # filtered - wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered - k = print_results(k) - - # Plot - # k, d = [None] * 20, [None] * 20 - # for i in tqdm(range(1, 21)): - # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) - # ax = ax.ravel() - # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh - # ax[0].hist(wh[wh[:, 0]<100, 0],400) - # ax[1].hist(wh[wh[:, 1]<100, 1],400) - # fig.tight_layout() - # fig.savefig('wh.png', dpi=200) - - # Evolve - npr = np.random - f, sh, mp, s = fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), desc='Evolving anchors with Genetic Algorithm') # progress bar - for _ in pbar: - v = np.ones(sh) - while (v == 1).all(): # mutate until a change occurs (prevent duplicates) - v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) - kg = (k.copy() * v).clip(min=2.0) - fg = fitness(kg) - if fg > f: - f, k = fg, kg.copy() - pbar.desc = 'Evolving anchors with Genetic Algorithm: fitness = %.4f' % f - if verbose: - print_results(k) - - return print_results(k) - - -def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''): - # Print mutation results to evolve.txt (for use with train.py --evolve) - a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys - b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c)) - - if bucket: - url = 'gs://%s/evolve.txt' % bucket - if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0): - os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local - - with open('evolve.txt', 'a') as f: # append result - f.write(c + b + '\n') - x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows - x = x[np.argsort(-fitness(x))] # sort - np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness - - # Save yaml - for i, k in enumerate(hyp.keys()): - hyp[k] = float(x[0, i + 7]) - with open(yaml_file, 'w') as f: - results = tuple(x[0, :7]) - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n') - yaml.dump(hyp, f, sort_keys=False) - - if bucket: - os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload - - -def apply_classifier(x, model, img, im0): - # applies a second stage classifier to yolo outputs - im0 = [im0] if isinstance(im0, np.ndarray) else im0 - for i, d in enumerate(x): # per image - if d is not None and len(d): - d = d.clone() - - # Reshape and pad cutouts - b = xyxy2xywh(d[:, :4]) # boxes - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square - b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad - d[:, :4] = xywh2xyxy(b).long() - - # Rescale boxes from img_size to im0 size - scale_coords(img.shape[2:], d[:, :4], im0[i].shape) - - # Classes - pred_cls1 = d[:, 5].long() - ims = [] - for j, a in enumerate(d): # per item - cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] - im = cv2.resize(cutout, (224, 224)) # BGR - # cv2.imwrite('test%i.jpg' % j, cutout) - - im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 - im /= 255.0 # 0 - 255 to 0.0 - 1.0 - ims.append(im) - - pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction - x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections - - return x - - -def fitness(x): - # Returns fitness (for use with results.txt or evolve.txt) - w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] - return (x[:, :4] * w).sum(1) - - -def output_to_target(output, width, height): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - if isinstance(output, torch.Tensor): - output = output.cpu().numpy() - - targets = [] - for i, o in enumerate(output): - if o is not None: - for pred in o: - box = pred[:4] - w = (box[2] - box[0]) / width - h = (box[3] - box[1]) / height - x = box[0] / width + w / 2 - y = box[1] / height + h / 2 - conf = pred[4] - cls = int(pred[5]) - - targets.append([i, cls, x, y, w, h, conf]) - - return np.array(targets) - - -def increment_dir(dir, comment=''): - # Increments a directory runs/exp1 --> runs/exp2_comment - n = 0 # number - dir = str(Path(dir)) # os-agnostic - dirs = sorted(glob.glob(dir + '*')) # directories - if dirs: - matches = [re.search(r"exp(\d+)", d) for d in dirs] - idxs = [int(m.groups()[0]) for m in matches if m] - if idxs: - n = max(idxs) + 1 # increment - return dir + str(n) + ('_' + comment if comment else '') - - -# Plotting functions --------------------------------------------------------------------------------------------------- -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - b, a = butter(order, normal_cutoff, btype='low', analog=False) - return b, a - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def plot_one_box(x, img, color=None, label=None, line_thickness=None): - # Plots one bounding box on image img - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - color = color or [random.randint(0, 255) for _ in range(3)] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - -def plot_wh_methods(): # from utils.general import *; plot_wh_methods() - # Compares the two methods for width-height anchor multiplication - # https://github.com/ultralytics/yolov3/issues/168 - x = np.arange(-4.0, 4.0, .1) - ya = np.exp(x) - yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 - - fig = plt.figure(figsize=(6, 3), dpi=150) - plt.plot(x, ya, '.-', label='YOLOv3') - plt.plot(x, yb ** 2, '.-', label='YOLOv5 ^2') - plt.plot(x, yb ** 1.6, '.-', label='YOLOv5 ^1.6') - plt.xlim(left=-4, right=4) - plt.ylim(bottom=0, top=6) - plt.xlabel('input') - plt.ylabel('output') - plt.grid() - plt.legend() - fig.tight_layout() - fig.savefig('comparison.png', dpi=200) - - -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): - tl = 3 # line thickness - tf = max(tl - 1, 1) # font thickness - - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - # un-normalise - if np.max(images[0]) <= 1: - images *= 255 - - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Check if we should resize - scale_factor = max_size / max(h, w) - if scale_factor < 1: - h = math.ceil(scale_factor * h) - w = math.ceil(scale_factor * w) - - # Empty array for output - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) - - # Fix class - colour map - prop_cycle = plt.rcParams['axes.prop_cycle'] - # https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb - hex2rgb = lambda h: tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - color_lut = [hex2rgb(h) for h in prop_cycle.by_key()['color']] - - for i, img in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - - block_x = int(w * (i // ns)) - block_y = int(h * (i % ns)) - - img = img.transpose(1, 2, 0) - if scale_factor < 1: - img = cv2.resize(img, (w, h)) - - mosaic[block_y:block_y + h, block_x:block_x + w, :] = img - if len(targets) > 0: - image_targets = targets[targets[:, 0] == i] - boxes = xywh2xyxy(image_targets[:, 2:6]).T - classes = image_targets[:, 1].astype('int') - gt = image_targets.shape[1] == 6 # ground truth if no conf column - conf = None if gt else image_targets[:, 6] # check for confidence presence (gt vs pred) - - boxes[[0, 2]] *= w - boxes[[0, 2]] += block_x - boxes[[1, 3]] *= h - boxes[[1, 3]] += block_y - for j, box in enumerate(boxes.T): - cls = int(classes[j]) - color = color_lut[cls % len(color_lut)] - cls = names[cls] if names else cls - if gt or conf[j] > 0.3: # 0.3 conf thresh - label = '%s' % cls if gt else '%s %.1f' % (cls, conf[j]) - plot_one_box(box, mosaic, label=label, color=color, line_thickness=tl) - - # Draw image filename labels - if paths is not None: - label = os.path.basename(paths[i])[:40] # trim to 40 char - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, - lineType=cv2.LINE_AA) - - # Image border - cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) - - if fname is not None: - mosaic = cv2.resize(mosaic, (int(ns * w * 0.5), int(ns * h * 0.5)), interpolation=cv2.INTER_AREA) - # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save - Image.fromarray(mosaic).save(fname) # PIL save - return mosaic - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.tight_layout() - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - - -def plot_test_txt(): # from utils.general import *; plot_test() - # Plot test.txt histograms - x = np.loadtxt('test.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.general import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(f='study.txt', x=None): # from utils.general import *; plot_study_txt() - # Plot study.txt generated by test.py - fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) - ax = ax.ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - for f in ['study/study_coco_yolov5%s.txt' % x for x in ['s', 'm', 'l', 'x']]: - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] - for i in range(7): - ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[6, :j], y[3, :j] * 1E2, '.-', linewidth=2, markersize=8, - label=Path(f).stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid() - ax2.set_xlim(0, 30) - ax2.set_ylim(28, 50) - ax2.set_yticks(np.arange(30, 55, 5)) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig('study_mAP_latency.png', dpi=300) - plt.savefig(f.replace('.txt', '.png'), dpi=300) - - -def plot_labels(labels, save_dir=''): - # plot dataset labels - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_xlabel('classes') - ax[1].scatter(b[0], b[1], c=hist2d(b[0], b[1], 90), cmap='jet') - ax[1].set_xlabel('x') - ax[1].set_ylabel('y') - ax[2].scatter(b[2], b[3], c=hist2d(b[2], b[3], 90), cmap='jet') - ax[2].set_xlabel('width') - ax[2].set_ylabel('height') - plt.savefig(Path(save_dir) / 'labels.png', dpi=200) - plt.close() - - # seaborn correlogram - try: - import seaborn as sns - import pandas as pd - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - sns.pairplot(x, corner=True, diag_kind='hist', kind='scatter', markers='o', - plot_kws=dict(s=3, edgecolor=None, linewidth=1, alpha=0.02), - diag_kws=dict(bins=50)) - plt.savefig(Path(save_dir) / 'labels_correlogram.png', dpi=200) - plt.close() - except Exception as e: - pass - - -def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.general import *; plot_evolution() - # Plot hyperparameter evolution results in evolve.txt - with open(yaml_file) as f: - hyp = yaml.load(f, Loader=yaml.FullLoader) - x = np.loadtxt('evolve.txt', ndmin=2) - f = fitness(x) - # weights = (f - f.min()) ** 2 # for weighted results - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, (k, v) in enumerate(hyp.items()): - y = x[:, i + 7] - # mu = (y * weights).sum() / weights.sum() # best weighted result - mu = y[f.argmax()] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - plt.savefig('evolve.png', dpi=200) - print('\nPlot saved as evolve.png') - - -def plot_results_overlay(start=0, stop=0): # from utils.general import *; plot_results_overlay() - # Plot training 'results*.txt', overlaying train and val losses - s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends - t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles - for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) - ax = ax.ravel() - for i in range(5): - for j in [i, i + 5]: - y = results[j, x] - ax[i].plot(x, y, marker='.', label=s[j]) - # y_smooth = butter_lowpass_filtfilt(y) - # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) - - ax[i].set_title(t[i]) - ax[i].legend() - ax[i].set_ylabel(f) if i == 0 else None # add filename - fig.savefig(f.replace('.txt', '.png'), dpi=200) - - -def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): - # from utils.general import *; plot_results(save_dir='runs/exp0') - # Plot training 'results*.txt' as seen in https://github.com/ultralytics/yolov5#reproduce-our-training - fig, ax = plt.subplots(2, 5, figsize=(12, 6)) - ax = ax.ravel() - s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', - 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] - if bucket: - # os.system('rm -rf storage.googleapis.com') - # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] - files = ['results%g.txt' % x for x in id] - c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) - os.system(c) - else: - files = glob.glob(str(Path(save_dir) / 'results*.txt')) + glob.glob('../../Downloads/results*.txt') - assert len(files), 'No results.txt files found in %s, nothing to plot.' % os.path.abspath(save_dir) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - for i in range(10): - y = results[i, x] - if i in [0, 1, 2, 5, 6, 7]: - y[y == 0] = np.nan # don't show zero loss values - # y /= y[0] # normalize - label = labels[fi] if len(labels) else Path(f).stem - ax[i].plot(x, y, marker='.', label=label, linewidth=1, markersize=6) - ax[i].set_title(s[i]) - # if i in [5, 6, 7]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - fig.tight_layout() - ax[1].legend() - fig.savefig(Path(save_dir) / 'results.png', dpi=200) diff --git a/spaces/Fcjs/stablediffusionapi-edge-of-realism/README.md b/spaces/Fcjs/stablediffusionapi-edge-of-realism/README.md deleted file mode 100644 index 8c1d9e117ba6f8f30226e8787e10a7e516c7b5c0..0000000000000000000000000000000000000000 --- a/spaces/Fcjs/stablediffusionapi-edge-of-realism/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stablediffusionapi Edge Of Realism -emoji: 😻 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Felladrin/MiniSearch/src/modules/loadBar.ts b/spaces/Felladrin/MiniSearch/src/modules/loadBar.ts deleted file mode 100644 index 5f3f4bab5808710393654cab9b8b50d5e3066745..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/MiniSearch/src/modules/loadBar.ts +++ /dev/null @@ -1,7 +0,0 @@ -import LoadBar from "loadbar"; - -export const loadBar = new LoadBar({ - height: "4px", - backgroundColor: "var(--focus)", - startPoint: 1, -}); diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/helpers/theb.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/helpers/theb.py deleted file mode 100644 index 71cfd23ff34768092e4dbe3ff6b719a946dceebb..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/helpers/theb.py +++ /dev/null @@ -1,48 +0,0 @@ -import json -import sys -from re import findall -from curl_cffi import requests - -config = json.loads(sys.argv[1]) -prompt = config['messages'][-1]['content'] - -headers = { - 'authority': 'chatbot.theb.ai', - 'accept': 'application/json, text/plain, */*', - 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'content-type': 'application/json', - 'origin': 'https://chatbot.theb.ai', - 'referer': 'https://chatbot.theb.ai/', - 'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36', -} - -json_data = { - 'prompt': prompt, - 'options': {} -} - -def format(chunk): - try: - completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0] - print(completion_chunk, flush=True, end='') - - except Exception as e: - print(f'[ERROR] an error occured, retrying... | [[{chunk.decode()}]]', flush=True) - return - -while True: - try: - response = requests.post('https://chatbot.theb.ai/api/chat-process', - headers=headers, json=json_data, content_callback=format, impersonate='chrome110') - - exit(0) - - except Exception as e: - print('[ERROR] an error occured, retrying... |', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/transforms.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/eval.py b/spaces/Gen-Sim/Gen-Sim/cliport/eval.py deleted file mode 100644 index 6c01c06f2e8ede349d10a2d2c659e64b8383fef0..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/eval.py +++ /dev/null @@ -1,231 +0,0 @@ -"""Ravens main training script.""" - -import os -import pickle -import json - -import numpy as np -import hydra -from cliport import agents -from cliport import dataset -from cliport import tasks -from cliport.utils import utils -from cliport.environments.environment import Environment -from torch.utils.data import DataLoader - - -@hydra.main(config_path='./cfg', config_name='eval', version_base="1.2") -def main(vcfg): - # Load train cfg - tcfg = utils.load_hydra_config(vcfg['train_config']) - - # Initialize environment and task. - env = Environment( - vcfg['assets_root'], - disp=vcfg['disp'], - shared_memory=vcfg['shared_memory'], - hz=480, - record_cfg=vcfg['record'] - ) - - # Choose eval mode and task. - mode = vcfg['mode'] - eval_task = vcfg['eval_task'] - print("eval_task!!!", eval_task) - - if mode not in {'train', 'val', 'test'}: - raise Exception("Invalid mode. Valid options: train, val, test") - - # Load eval dataset. - dataset_type = vcfg['type'] - if 'multi' in dataset_type: - ds = dataset.RavensMultiTaskDataset(vcfg['data_dir'], - tcfg, - group=eval_task, - mode=mode, - n_demos=vcfg['n_demos'], - augment=False) - else: - ds = dataset.RavensDataset(os.path.join(vcfg['data_dir'], f"{eval_task}-{mode}"), - tcfg, - n_demos=vcfg['n_demos'], - augment=False) - - all_results = {} - name = '{}-{}-n{}'.format(eval_task, vcfg['agent'], vcfg['n_demos']) - - # Save path for results. - json_name = f"multi-results-{mode}.json" if 'multi' in vcfg['model_path'] else f"results-{mode}.json" - save_path = vcfg['save_path'] - print(f"Save path for results: {save_path}") - if not os.path.exists(save_path): - os.makedirs(save_path) - save_json = os.path.join(save_path, f'{name}-{json_name}') - - # Load existing results. - existing_results = {} - if os.path.exists(save_json): - with open(save_json, 'r') as f: - existing_results = json.load(f) - - # Make a list of checkpoints to eval. - ckpts_to_eval = list_ckpts_to_eval(vcfg, existing_results) - data_loader = DataLoader(ds, shuffle=False, - pin_memory=False, - num_workers=1 ) - - # Evaluation loop - print(f"Evaluating: {str(ckpts_to_eval)}") - for ckpt in ckpts_to_eval: - model_file = os.path.join(vcfg['model_path'], ckpt) - - if not os.path.exists(model_file) or not os.path.isfile(model_file): - print(f"Checkpoint not found: {model_file}") - continue - elif not vcfg['update_results'] and ckpt in existing_results: - print(f"Skipping because of existing results for {model_file}.") - continue - - results = [] - mean_reward = 0.0 - - # Run testing for each training run. - for train_run in range(vcfg['n_repeats']): - - # Initialize agent. - utils.set_seed(train_run, torch=True) - agent = agents.names[vcfg['agent']](name, tcfg, data_loader, data_loader) - - # Load checkpoint - agent.load(model_file) - print(f"Loaded: {model_file}") - - record = vcfg['record']['save_video'] - n_demos = vcfg['n_demos'] - - # Run testing and save total rewards with last transition info. - for i in range(0, n_demos): - print(f'Test: {i + 1}/{n_demos}') - try: - episode, seed = ds.load(i) - except: - print(f"skip bad example {i}") - continue - goal = episode[-1] - total_reward = 0 - np.random.seed(seed) - - # set task - if 'multi' in dataset_type: - task_name = ds.get_curr_task() - task = tasks.names[task_name]() - print(f'Evaluating on {task_name}') - else: - task_name = vcfg['eval_task'] - task = tasks.names[task_name]() - - task.mode = mode - env.seed(seed) - env.set_task(task) - obs = env.reset() - info = env.info - reward = 0 - - # Start recording video (NOTE: super slow) - if record: - video_name = f'{task_name}-{i+1:06d}' - if 'multi' in vcfg['model_task']: - video_name = f"{vcfg['model_task']}-{video_name}" - env.start_rec(video_name) - - for _ in range(task.max_steps): - act = agent.act(obs, info, goal) - lang_goal = info['lang_goal'] - - # print(f'Lang Goal: {lang_goal}') - obs, reward, done, info = env.step(act) - total_reward += reward - # print(f'Total Reward: {total_reward:.3f} | Done: {done}\n') - if done: - break - - results.append((total_reward, info)) - mean_reward = np.mean([r for r, i in results]) - print(f'Mean: {mean_reward} | Task: {task_name} | Ckpt: {ckpt}') - - # End recording video - if record: - env.end_rec() - - all_results[ckpt] = { - 'episodes': results, - 'mean_reward': mean_reward, - } - - # Save results in a json file. - if vcfg['save_results']: - print("save results to:", save_json) - # Load existing results - if os.path.exists(save_json): - with open(save_json, 'r') as f: - existing_results = json.load(f) - existing_results.update(all_results) - all_results = existing_results - - with open(save_json, 'w') as f: - json.dump(all_results, f, indent=4) - - -def list_ckpts_to_eval(vcfg, existing_results): - ckpts_to_eval = [] - - # Just the last.ckpt - if vcfg['checkpoint_type'] == 'last': - last_ckpt = 'last.ckpt' - ckpts_to_eval.append(last_ckpt) - - # Validation checkpoints that haven't been already evaluated. - elif vcfg['checkpoint_type'] == 'val_missing': - checkpoints = sorted([c for c in os.listdir(vcfg['model_path']) if "steps=" in c]) - ckpts_to_eval = [c for c in checkpoints if c not in existing_results] - - # Find the best checkpoint from validation and run eval on the test set. - elif vcfg['checkpoint_type'] == 'test_best': - result_jsons = [c for c in os.listdir(vcfg['results_path']) if "results-val" in c] - if 'multi' in vcfg['model_task']: - result_jsons = [r for r in result_jsons if "multi" in r] - else: - result_jsons = [r for r in result_jsons if "multi" not in r] - - if len(result_jsons) > 0: - result_json = result_jsons[0] - with open(os.path.join(vcfg['results_path'], result_json), 'r') as f: - eval_res = json.load(f) - best_checkpoint = 'last.ckpt' - best_success = -1.0 - for ckpt, res in eval_res.items(): - if res['mean_reward'] > best_success: - best_checkpoint = ckpt - best_success = res['mean_reward'] - print(best_checkpoint) - ckpt = best_checkpoint - ckpts_to_eval.append(ckpt) - else: - print("No best val ckpt found. Using last.ckpt") - ckpt = 'last.ckpt' - ckpts_to_eval.append(ckpt) - - # Load a specific checkpoint with a substring e.g: 'steps=10000' - else: - print(f"Looking for: {vcfg['checkpoint_type']}") - checkpoints = [c for c in os.listdir(vcfg['model_path']) if vcfg['checkpoint_type'] in c] - checkpoint = checkpoints[0] if len(checkpoints) > 0 else "" - ckpt = checkpoint - ckpts_to_eval.append(ckpt) - - print("ckpts_to_eval:", ckpts_to_eval) - return ckpts_to_eval - - -if __name__ == '__main__': - main() diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/place_blue_on_line_ends.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/place_blue_on_line_ends.py deleted file mode 100644 index 27315d5da9c10ad40d55d4ae740c332b0a57d8cf..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/place_blue_on_line_ends.py +++ /dev/null @@ -1,47 +0,0 @@ -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class PlaceBlueOnLineEnds(Task): - """Pick up each blue box and accurately place it at the end of a green line.""" - - def __init__(self): - super().__init__() - self.max_steps = 10 - self.lang_template = "place the blue box at the end of the green line" - self.task_completed_desc = "done placing blue boxes on line ends." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add lines. - line_size = (0.3, 0.01, 0.01) - line_template = 'line/line-template.urdf' - replace = {'DIM': line_size} - line_urdf = self.fill_template(line_template, replace) - - line_colors = ['green'] - line_poses = [] - - line_pose = self.get_random_pose(env, line_size) - color = utils.COLORS[line_colors[0]] - env.add_object(line_urdf, line_pose, 'fixed', color=color) - line_poses.append(utils.apply(line_pose, (-0.15,0,0))) - line_poses.append(utils.apply(line_pose, (0.15,0,0))) - - # Add blue boxes. - box_size = (0.04, 0.04, 0.04) - box_urdf = 'box/box-template.urdf' - box_color = utils.COLORS['blue'] - boxes = [] - for _ in range(2): - box_pose = self.get_random_pose(env, box_size) - box_id = env.add_object(box_urdf, box_pose, color=box_color) - boxes.append(box_id) - - # Goal: each blue box is at the end of a different colored line. - for i in range(2): - language_goal = self.lang_template.format(line_colors[0]) - self.add_goal(objs=[boxes[i]], matches=np.ones((1, 1)), targ_poses=[line_poses[i]], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 2, language_goal=language_goal) diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/rainbow_stack.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/rainbow_stack.py deleted file mode 100644 index 7d1fb25ba504bd0c2b8f2c28b7d172eb16e1d18b..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/rainbow_stack.py +++ /dev/null @@ -1,39 +0,0 @@ -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class RainbowStack(Task): - """Pick up blocks of seven different colors and stack them on the stand in the order of the rainbow (red, orange, yellow, green, blue, indigo, violet) from bottom to top.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "stack the blocks on the stand in the order of the rainbow from bottom to top" - self.task_completed_desc = "done stacking." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add stand. - # x, y, z dimensions for the asset size - stand_size = (0.12, 0.12, 0.02) - stand_pose = self.get_random_pose(env, stand_size) - stand_urdf = 'stacking/stand.urdf' - env.add_object(stand_urdf, stand_pose, 'fixed') - - # Add blocks. - # x, y, z dimensions for the asset size - block_size = (0.04, 0.04, 0.04) - block_urdf = 'stacking/block.urdf' - colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'] - blocks = [] - for color in colors: - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=color) - blocks.append(block_id) - - # Goal: stack the blocks on the stand in the order of the rainbow from bottom to top. - for i in range(len(blocks)): - self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[stand_pose], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / len(blocks), language_goal=self.lang_template) \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/core/transport.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/core/transport.py deleted file mode 100644 index bc81e33d800f9b9e1ce504c8c9352672037a32e4..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/models/core/transport.py +++ /dev/null @@ -1,109 +0,0 @@ -import numpy as np -import cliport.models as models -from cliport.utils import utils - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class Transport(nn.Module): - - def __init__(self, stream_fcn, in_shape, n_rotations, crop_size, preprocess, cfg, device): - """Transport (a.k.a Place) module.""" - super().__init__() - - self.iters = 0 - self.stream_fcn = stream_fcn - self.n_rotations = n_rotations - self.crop_size = crop_size # crop size must be N*16 (e.g. 96) - self.preprocess = preprocess - self.cfg = cfg - self.device = device - self.batchnorm = self.cfg['train']['batchnorm'] - - self.pad_size = int(self.crop_size / 2) - self.padding = np.zeros((3, 2), dtype=int) - self.padding[:2, :] = self.pad_size - - in_shape = np.array(in_shape) - in_shape = tuple(in_shape) - self.in_shape = in_shape - - # Crop before network (default from Transporters CoRL 2020). - self.kernel_shape = (self.crop_size, self.crop_size, self.in_shape[2]) - - if not hasattr(self, 'output_dim'): - self.output_dim = 3 - if not hasattr(self, 'kernel_dim'): - self.kernel_dim = 3 - - self.rotator = utils.ImageRotator(self.n_rotations) - - self._build_nets() - - def _build_nets(self): - stream_one_fcn, _ = self.stream_fcn - model = models.names[stream_one_fcn] - self.key_resnet = model(self.in_shape, self.output_dim, self.cfg, self.device) - self.query_resnet = model(self.kernel_shape, self.kernel_dim, self.cfg, self.device) - print(f"Transport FCN: {stream_one_fcn}") - - def correlate(self, in0, in1, softmax): - """Correlate two input tensors.""" - output = F.conv2d(in0, in1, padding=(self.pad_size, self.pad_size)) - output = F.interpolate(output, size=(in0.shape[-2], in0.shape[-1]), mode='bilinear') - output = output[:,:,self.pad_size:-self.pad_size, self.pad_size:-self.pad_size] - output_shape = output.shape - - # a hack around the batch size 1. The shape needs to tile back. - channel_num = in1.shape[0] // in0.shape[0] - output = torch.stack([output[i,i*channel_num:(i+1)*channel_num] for i in range(len(output))], dim=0) - if softmax: - output = output.reshape((len(output), -1)) - output = F.softmax(output, dim=-1) - output = output.reshape(len(output),channel_num,output_shape[2],output_shape[3]) - - return output - - def transport(self, in_tensor, crop): - logits = self.key_resnet(in_tensor) - kernel = self.query_resnet(crop) - return logits, kernel - - def forward(self, inp_img, p, softmax=True): - """Forward pass.""" - img_unprocessed = np.pad(inp_img, self.padding, mode='constant') - input_data = img_unprocessed - in_shape = input_data.shape - if len(inp_shape) == 3: - inp_shape = (1,) + inp_shape - input_data = input_data.reshape(in_shape) # [B W H D] - in_tensor = torch.from_numpy(input_data).to(dtype=torch.float, device=self.device) - - # Rotation pivot. - pv = p + self.pad_size # np.array([p[0], p[1]]) - - # Crop before network (default from Transporters CoRL 2020). - hcrop = self.pad_size - in_tensor = in_tensor.permute(0, 3, 1, 2) # [B D W H] - - crop = in_tensor.repeat(self.n_rotations, 1, 1, 1) - crop = self.rotator(crop, pivot=pv) - crop = torch.cat(crop, dim=0) - crop = crop[:, :, pv[0]-hcrop:pv[0]+hcrop, pv[1]-hcrop:pv[1]+hcrop] - - logits, kernel = self.transport(in_tensor, crop) - - # TODO(Mohit): Crop after network. Broken for now. - # in_tensor = in_tensor.permute(0, 3, 1, 2) - # logits, crop = self.transport(in_tensor) - # crop = crop.repeat(self.n_rotations, 1, 1, 1) - # crop = self.rotator(crop, pivot=pv) - # crop = torch.cat(crop, dim=0) - - # kernel = crop[:, :, pv[0]-hcrop:pv[0]+hcrop, pv[1]-hcrop:pv[1]+hcrop] - # kernel = crop[:, :, p[0]:(p[0] + self.crop_size), p[1]:(p[1] + self.crop_size)] - - return self.correlate(logits, kernel, softmax) - diff --git a/spaces/Gen-Sim/Gen-Sim/transfer.sh b/spaces/Gen-Sim/Gen-Sim/transfer.sh deleted file mode 100644 index 10f58b7a96dd7daaee77895ef4d2208c31214d7c..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/transfer.sh +++ /dev/null @@ -1,2 +0,0 @@ -cp -r cliport gensim BLOG.md setup.py prompts media .gitignore ../Hf_GenSim/ -cp -r cliport gensim BLOG.md setup.py prompts media README.md requirements.txt .gitignore ../GenSim/ \ No newline at end of file diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/utils/modelutils.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/utils/modelutils.py deleted file mode 100644 index 6acaa984e0c7876f9149fc1ff99001b7761dc80b..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/utils/modelutils.py +++ /dev/null @@ -1,17 +0,0 @@ -from pathlib import Path - -def check_model_paths(encoder_path: Path, synthesizer_path: Path, vocoder_path: Path): - # This function tests the model paths and makes sure at least one is valid. - if encoder_path.is_file() or encoder_path.is_dir(): - return - if synthesizer_path.is_file() or synthesizer_path.is_dir(): - return - if vocoder_path.is_file() or vocoder_path.is_dir(): - return - - # If none of the paths exist, remind the user to download models if needed - print("********************************************************************************") - print("Error: Model files not found. Follow these instructions to get and install the models:") - print("https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models") - print("********************************************************************************\n") - quit(-1) diff --git a/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/model.py b/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/model.py deleted file mode 100644 index ec541cc3bb5dd326802f4408db4f77cbc80776f8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/model.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch -import numpy as np - - -def create_fashion_inference(): - device = 'cuda' if torch.cuda.is_available() else 'cpu' - use_gpu = True if torch.cuda.is_available() else False - fashion = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'DCGAN', pretrained=True, useGPU=use_gpu) - fashion_noise, _ = fashion.buildNoiseData(1) - @torch.inference_mode() - def fashion_generator(latents): - latents = [torch.from_numpy(latent).float().to(device) for latent in latents] - latents = torch.stack(latents) - out = fashion.test(latents) - outs = [] - for out_i in out: - out_i = ((out_i.permute(1,2,0) + 1) * 127.5).clamp(0,255).cpu().numpy() - out_i = np.uint8(out_i) - outs.append(out_i) - return outs - - return { - 'name': 'Fashion', - 'generator': fashion_generator, - 'latent_dim': fashion_noise.shape[1], - 'fps': 15, - 'batch_size': 8, - 'strength': 0.6, - 'max_duration': 30, - 'use_peak': True - } diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py deleted file mode 100644 index 9bbc86ead7003ab75264f8cf0cd18edb735fe9fd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py +++ /dev/null @@ -1,17 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py' -# model settings -conv_cfg = dict(type='ConvWS') -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://jhu/resnext50_32x4d_gn_ws', - backbone=dict( - type='ResNeXt', - depth=50, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 6644a58dea86fd38e208abbedffe4f836e677078..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './fcn_r50-d8_769x769_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet18_v1c', - backbone=dict(depth=18), - decode_head=dict( - in_channels=512, - channels=128, - ), - auxiliary_head=dict(in_channels=256, channels=64)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_512x512_40k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_512x512_40k_voc12aug.py deleted file mode 100644 index 0669b741b9b3e3e1a309147b920d3d2a1952ab75..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_512x512_40k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './upernet_r50_512x512_40k_voc12aug.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/export.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/export.py deleted file mode 100644 index 28b214017d9ac23934b67e8254a96131cefa6501..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/export.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf -import torch - -from audiocraft import __version__ - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_file: tp.Union[Path, str]): - """Export only the best state from the given EnCodec checkpoint. This - should be used if you trained your own EnCodec model. - """ - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - 'version': __version__, - 'exported': True, - } - Path(out_file).parent.mkdir(exist_ok=True, parents=True) - torch.save(new_pkg, out_file) - return out_file - - -def export_pretrained_compression_model(pretrained_encodec: str, out_file: tp.Union[Path, str]): - """Export a compression model (potentially EnCodec) from a pretrained model. - This is required for packaging the audio tokenizer along a MusicGen or AudioGen model. - Do not include the //pretrained/ prefix. For instance if you trained a model - with `facebook/encodec_32khz`, just put that as a name. Same for `dac_44khz`. - - In that case, this will not actually include a copy of the model, simply the reference - to the model used. - """ - if Path(pretrained_encodec).exists(): - pkg = torch.load(pretrained_encodec) - assert 'best_state' in pkg - assert 'xp.cfg' in pkg - assert 'version' in pkg - assert 'exported' in pkg - else: - pkg = { - 'pretrained': pretrained_encodec, - 'exported': True, - 'version': __version__, - } - Path(out_file).parent.mkdir(exist_ok=True, parents=True) - torch.save(pkg, out_file) - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_file: tp.Union[Path, str]): - """Export only the best state from the given MusicGen or AudioGen checkpoint. - """ - pkg = torch.load(checkpoint_path, 'cpu') - if pkg['fsdp_best_state']: - best_state = pkg['fsdp_best_state']['model'] - else: - assert pkg['best_state'] - best_state = pkg['best_state']['model'] - new_pkg = { - 'best_state': best_state, - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - 'version': __version__, - 'exported': True, - } - - Path(out_file).parent.mkdir(exist_ok=True, parents=True) - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/GroveStreet/GTA_SOVITS/modules/modules.py b/spaces/GroveStreet/GTA_SOVITS/modules/modules.py deleted file mode 100644 index 54290fd207b25e93831bd21005990ea137e6b50e..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/modules/modules.py +++ /dev/null @@ -1,342 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import modules.commons as commons -from modules.commons import init_weights, get_padding - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/app_utils.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/app_utils.py deleted file mode 100644 index 4cdd417241eaa945788bfeecfdeac94a7d76819c..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/app_utils.py +++ /dev/null @@ -1,94 +0,0 @@ -import os -import random -import time -from typing import Tuple, Union - -import cv2 -import numpy as np -import streamlit as st -from PIL import Image -from torch import nn - -num_format = "{:,}".format - -def count_parameters(model: nn.Module) -> str: - '''Count the number of parameters of a model''' - return num_format(sum(p.numel() for p in model.parameters() if p.requires_grad)) - -def hex_to_rgb(hex: str) -> np.ndarray: - """Convert hex color to rgb color - - Args: - hex (str): "#00f900" - - Returns: - np.ndarray: numpy array of rgb color - """ - hex = hex[1:] - rgb = [] - for i in (0, 2, 4): - decimal = int(hex[i:i+2], 16) - rgb.append(decimal) - - return np.array(rgb) / 255.0 - -class FrameRate: - def __init__(self) -> None: - self.c: int = 0 - self.start_time: float = None - self.NO_FRAMES = 100 - self.fps: float = -1 - - def reset(self) -> None: - self.start_time = time.time() - self.c = 0 - self.fps = -1 - - def count(self) -> None: - self.c += 1 - if self.c % self.NO_FRAMES == 0: - self.c = 0 - end_time = time.time() - self.fps = self.NO_FRAMES / (end_time - self.start_time) - self.start_time = end_time - - def show_fps(self, image: np.ndarray) -> np.ndarray: - if self.fps != -1: - return cv2.putText( - image, - f'FPS {self.fps:.0f}', - (50, 50), - cv2.FONT_HERSHEY_SIMPLEX, - fontScale=1, - color=(255, 0, 0), - thickness=2 - ) - else: - return image - -class ImgContainer: - img: np.ndarray = None # raw image - frame_rate: FrameRate = FrameRate() - -def load_video(video_path: str) -> bytes: - if not os.path.isfile(video_path): - return - with st.spinner(f'Loading video {video_path} ...'): - video_bytes = open(video_path, 'rb').read() - st.video(video_bytes, format='video/mp4') - -def normalize(data: np.ndarray) -> np.ndarray: - return (data - data.min()) / (data.max() - data.min() + 1e-8) - -def get_size(image: Union[Image.Image, np.ndarray]) -> Tuple[int, int]: - """Get resolution (w, h) of an image - An input image can be Pillow Image or CV2 Image - """ - if type(image) == np.ndarray: - return (image.shape[1], image.shape[0]) - else: - return image.size - -def random_choice(p: float) -> bool: - '''Return True if random float <= p ''' - return random.random() <= p diff --git a/spaces/HUIYI/huiyili/app.py b/spaces/HUIYI/huiyili/app.py deleted file mode 100644 index dd21e0e07a23c993e42a5451cfcdd2a403c32e8d..0000000000000000000000000000000000000000 --- a/spaces/HUIYI/huiyili/app.py +++ /dev/null @@ -1,11 +0,0 @@ -#libraries -import gradio as gr -from gradio.mix import Parallel - -#variables, functions and parameters -model1 = gr.Interface.load("huggingface/gpt2") -model2 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -model3 = gr.Interface.load("huggingface/distilgpt2") - -#functions, parameters and variables -gr.Parallel(model1, model2, model3).launch() \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/configuration_zen1.py b/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/configuration_zen1.py deleted file mode 100644 index c7cbeb5657ea07b2a4e8429199a6091be39864c8..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/configuration_zen1.py +++ /dev/null @@ -1,80 +0,0 @@ -# coding=utf-8 -# Copyright 2022 IDEA-CCNL and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" TransfoXLDenoise model configuration """ - -from transformers.configuration_utils import PretrainedConfig - - -class ZenConfig(PretrainedConfig): - - """Configuration class to store the configuration of a `ZenModel`. - """ - - def __init__(self, - # vocab_size_or_config_json_file, - # word_vocab_size, - hidden_size=768, - num_hidden_layers=12, - num_attention_heads=12, - intermediate_size=3072, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - max_position_embeddings=512, - type_vocab_size=2, - initializer_range=0.02, - layer_norm_eps=1e-12, - num_hidden_word_layers=6, - **kwargs): - """Constructs ZenConfig. - - Args: - vocab_size_or_config_json_file: Vocabulary size of `inputs_ids` in `BertModel`. - hidden_size: Size of the encoder layers and the pooler layer. - num_hidden_layers: Number of hidden layers in the Transformer encoder. - num_attention_heads: Number of attention heads for each attention layer in - the Transformer encoder. - intermediate_size: The size of the "intermediate" (i.e., feed-forward) - layer in the Transformer encoder. - hidden_act: The non-linear activation function (function or string) in the - encoder and pooler. If string, "gelu", "relu" and "swish" are supported. - hidden_dropout_prob: The dropout probabilitiy for all fully connected - layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob: The dropout ratio for the attention - probabilities. - max_position_embeddings: The maximum sequence length that this model might - ever be used with. Typically set this to something large just in case - (e.g., 512 or 1024 or 2048). - type_vocab_size: The vocabulary size of the `token_type_ids` passed into - `BertModel`. - initializer_range: The sttdev of the truncated_normal_initializer for - initializing all weight matrices. - layer_norm_eps: The epsilon used by LayerNorm. - """ - # self.vocab_size = vocab_size_or_config_json_file - # self.word_size = word_vocab_size - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.hidden_act = hidden_act - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.type_vocab_size = type_vocab_size - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - self.num_hidden_word_layers = num_hidden_word_layers - super().__init__(**kwargs) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py deleted file mode 100644 index 5a04851a74624e9c8ebc259805b7aed6c638b0de..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys - - -def _normalize_spaces(line): - return " ".join(line.split()) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--input_file", required=True, type=str) - parser.add_argument("-n", "--repeat_times", required=True, type=int) - parser.add_argument("-o", "--output_file", required=False, type=str) - args = parser.parse_args() - stream = open(args.output_file, "w") if args.output_file else sys.stdout - - for line in open(args.input_file): - for _ in range(args.repeat_times): - stream.write(_normalize_spaces(line) + "\n") - - -if __name__ == "__main__": - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/models/ofa/unify_transformer_layer.py b/spaces/HarryLee/eCommerceImageCaptioning/models/ofa/unify_transformer_layer.py deleted file mode 100644 index bffe870a509722679d98a0a45ced8b6123094c8f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/models/ofa/unify_transformer_layer.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. - -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor - -from .unify_multihead_attention import MultiheadAttention - - -def drop_path(x, drop_prob: float = 0.0, training: bool = False): - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, - however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the - layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the - argument. - """ - if drop_prob == 0.0 or not training: - return x - keep_prob = 1 - drop_prob - shape = (1, x.shape[1], 1) - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob=None): - super().__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return "p={}".format(self.drop_prob) - - -class TransformerEncoderLayer(nn.Module): - """Encoder layer block. - - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.encoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, args, drop_path_rate=0.0): - super().__init__() - self.args = args - self.embed_dim = args.encoder_embed_dim - self.quant_noise = getattr(args, 'quant_noise_pq', 0) - self.quant_noise_block_size = getattr(args, 'quant_noise_pq_block_size', 8) or 8 - self.self_attn = self.build_self_attention(self.embed_dim, args) - self.self_attn_layer_norm = LayerNorm(self.embed_dim) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, 'activation_fn', 'relu') or "relu" - ) - activation_dropout_p = getattr(args, "activation_dropout", 0) or 0 - if activation_dropout_p == 0: - # for backwards compatibility with models that use args.relu_dropout - activation_dropout_p = getattr(args, "relu_dropout", 0) or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = self.build_fc1( - self.embed_dim, - args.encoder_ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - args.encoder_ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None - self.nh = self.self_attn.num_heads - self.head_dim = self.self_attn.head_dim - - self.ffn_layernorm = LayerNorm(args.encoder_ffn_embed_dim) if getattr(args, 'scale_fc', False) else None - self.w_resid = nn.Parameter(torch.ones(self.embed_dim, ), requires_grad=True) if getattr(args, 'scale_resids', False) else None - - self.final_layer_norm = LayerNorm(self.embed_dim) - - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_self_attention(self, embed_dim, args): - return MultiheadAttention( - embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - scale_factor=args.attn_scale_factor, - scale_heads=getattr(args, 'scale_heads', False) - ) - - def residual_connection(self, x, residual): - return residual + self.drop_path(x) - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - if "{}.{}.{}".format(name, new, m) not in state_dict and "{}.{}".format(new, m) in self.state_dict(): - state_dict[ - "{}.{}.{}".format(name, new, m) - ] = self.state_dict()["{}.{}".format(new, m)] - - prefix = name + "." if name != "" else "" - for param_name, param_tensor in self.state_dict().items(): - if (prefix + param_name) not in state_dict: - state_dict[prefix + param_name] = self.state_dict()[param_name] - - def forward( - self, - x, - encoder_padding_mask: Optional[Tensor], - attn_mask: Optional[Tensor] = None, - self_attn_bias: Optional[Tensor] = None - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, seq_len)` where padding elements are indicated by ``1``. - attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`, - where `tgt_len` is the length of output and `src_len` is the - length of input, though here both are equal to `seq_len`. - `attn_mask[tgt_i, src_j] = 1` means that when calculating the - embedding for `tgt_i`, we exclude (mask out) `src_j`. This is - useful for strided self-attention. - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - # anything in original attn_mask = 1, becomes -1e8 - # anything in original attn_mask = 0, becomes 0 - # Note that we cannot use -inf here, because at some edge cases, - # the attention weight (before softmax) for some padded element in query - # will become -inf, which results in NaN in model parameters - if attn_mask is not None: - attn_mask = attn_mask.masked_fill( - attn_mask.to(torch.bool), - -1e8 if x.dtype == torch.float32 else -1e4 - ) - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - attn_mask=attn_mask, - attn_bias=self_attn_bias - ) - if self.attn_ln is not None: - x = self.attn_ln(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - if self.ffn_layernorm is not None: - x = self.ffn_layernorm(x) - x = self.fc2(x) - x = self.dropout_module(x) - if self.w_resid is not None: - residual = torch.mul(self.w_resid, residual) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - return x - - -class TransformerDecoderLayer(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.decoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False, drop_path_rate=0.0 - ): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.quant_noise = getattr(args, "quant_noise_pq", 0) - self.quant_noise_block_size = getattr(args, "quant_noise_pq_block_size", 8) - - self.cross_self_attention = getattr(args, "cross_self_attention", False) - - self.self_attn = self.build_self_attention( - self.embed_dim, - args, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.self_attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None - self.cross_attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None - self.nh = self.self_attn.num_heads - self.head_dim = self.self_attn.head_dim - - self.activation_fn = utils.get_activation_fn( - activation=str(args.activation_fn) - if getattr(args, "activation_fn", None) is not None - else "relu" - ) - activation_dropout_p = getattr(args, "activation_dropout", 0) or 0 - if activation_dropout_p == 0: - # for backwards compatibility with models that use args.relu_dropout - activation_dropout_p = getattr(args, "relu_dropout", 0) or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - # use layerNorm rather than FusedLayerNorm for exporting. - # char_inputs can be used to determint this. - # TODO remove this once we update apex with the fix - export = getattr(args, "char_inputs", False) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, args) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - self.ffn_layernorm = LayerNorm(args.decoder_ffn_embed_dim) if getattr(args, 'scale_fc', False) else None - self.w_resid = nn.Parameter(torch.ones(self.embed_dim, ), requires_grad=True) if getattr(args, 'scale_resids', False) else None - - self.fc1 = self.build_fc1( - self.embed_dim, - args.decoder_ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - args.decoder_ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=export) - self.need_attn = True - - self.onnx_trace = False - - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, embed_dim, args, add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttention( - embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=not getattr(args, "cross_self_attention", False), - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - scale_factor=args.attn_scale_factor, - scale_heads=getattr(args, 'scale_heads', False) - ) - - def build_encoder_attention(self, embed_dim, args): - return MultiheadAttention( - embed_dim, - args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - scale_factor=args.attn_scale_factor, - scale_heads=getattr(args, 'scale_heads', False) - ) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def residual_connection(self, x, residual): - return residual + self.drop_path(x) - - def forward( - self, - x, - encoder_out: Optional[torch.Tensor] = None, - encoder_padding_mask: Optional[torch.Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[torch.Tensor]] = None, - prev_attn_state: Optional[List[torch.Tensor]] = None, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - self_attn_bias: Optional[Tensor] = None, - cross_attn_bias: Optional[Tensor] = None - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - attn_bias=self_attn_bias - ) - if self.self_attn_ln is not None: - x = self.self_attn_ln(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - attn_bias=cross_attn_bias - ) - if self.cross_attn_ln is not None: - x = self.cross_attn_ln(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - if self.ffn_layernorm is not None: - x = self.ffn_layernorm(x) - x = self.fc2(x) - x = self.dropout_module(x) - if self.w_resid is not None: - residual = torch.mul(self.w_resid, residual) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None - - def make_generation_fast_(self, need_attn: bool = False, **kwargs): - self.need_attn = need_attn - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - # update layer norms - layer_norm_map = { - "0": "self_attn_layer_norm", - "1": "encoder_attn_layer_norm", - "2": "final_layer_norm", - } - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict[ - "{}.{}.{}".format(name, new, m) - ] = state_dict[k] - del state_dict[k] - if "{}.{}.{}".format(name, new, m) not in state_dict and "{}.{}".format(new, m) in self.state_dict(): - state_dict[ - "{}.{}.{}".format(name, new, m) - ] = self.state_dict()["{}.{}".format(new, m)] - - prefix = name + "." if name != "" else "" - for param_name, param_tensor in self.state_dict().items(): - if (prefix + param_name) not in state_dict: - state_dict[prefix + param_name] = self.state_dict()[param_name] diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/generate_mels.sh b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/generate_mels.sh deleted file mode 100644 index 26e82aa439ea10de30cbb04a7cedba7127a4dbb6..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/generate_mels.sh +++ /dev/null @@ -1,4 +0,0 @@ -melsdir='' -modeldir='' - -python ../src/glow_tts/generate_mels.py -s $melsdir -m $modeldir diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/inference_e2e.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/inference_e2e.py deleted file mode 100644 index 062aecd4280925336ab1d36420d2cd47febf661c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/inference_e2e.py +++ /dev/null @@ -1,91 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals - -import glob -import os -import numpy as np -import argparse -import json -import torch -from scipy.io.wavfile import write -from env import AttrDict -from meldataset import MAX_WAV_VALUE -from models import Generator - -h = None -device = None - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + "*") - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return "" - return sorted(cp_list)[-1] - - -def inference(a): - generator = Generator(h).to(device) - - state_dict_g = load_checkpoint(a.checkpoint_file, device) - generator.load_state_dict(state_dict_g["generator"]) - - filelist = os.listdir(a.input_mels_dir) - - os.makedirs(a.output_dir, exist_ok=True) - - generator.eval() - generator.remove_weight_norm() - with torch.no_grad(): - for i, filname in enumerate(filelist): - x = np.load(os.path.join(a.input_mels_dir, filname)) - x = torch.FloatTensor(x).to(device) - y_g_hat = generator(x) - audio = y_g_hat.squeeze() - audio = audio * MAX_WAV_VALUE - audio = audio.cpu().numpy().astype("int16") - - output_file = os.path.join( - a.output_dir, os.path.splitext(filname)[0] + "_generated_e2e.wav" - ) - write(output_file, h.sampling_rate, audio) - print(output_file) - - -def main(): - print("Initializing Inference Process..") - - parser = argparse.ArgumentParser() - parser.add_argument("--input_mels_dir", default="test_mel_files") - parser.add_argument("--output_dir", default="generated_files_from_mel") - parser.add_argument("--checkpoint_file", required=True) - a = parser.parse_args() - - config_file = os.path.join(os.path.split(a.checkpoint_file)[0], "config.json") - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - torch.manual_seed(h.seed) - global device - if torch.cuda.is_available(): - torch.cuda.manual_seed(h.seed) - device = torch.device("cuda") - else: - device = torch.device("cpu") - - inference(a) - - -if __name__ == "__main__": - main() diff --git a/spaces/Harveenchadha/en_to_indic_translation/learn_bpe.sh b/spaces/Harveenchadha/en_to_indic_translation/learn_bpe.sh deleted file mode 100644 index 3219ac8d5615643344237eaa0279af3fe7ced254..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/learn_bpe.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash - -expdir=$1 # EXPDIR -num_operations=${2:-32000} - -#`dirname $0`/env.sh -SUBWORD_NMT_DIR="subword-nmt" -data_dir="$expdir/data" -train_file=$data_dir/train -# num_operations=32000 - -echo Input file: $train_file - -mkdir -p $expdir/vocab - -echo "learning joint BPE" -cat $train_file.SRC $train_file.TGT > $train_file.ALL -python $SUBWORD_NMT_DIR/subword_nmt/learn_bpe.py \ - --input $train_file.ALL \ - -s $num_operations \ - -o $expdir/vocab/bpe_codes.32k.SRC_TGT \ - --num-workers -1 - -echo "computing SRC vocab" -python $SUBWORD_NMT_DIR/subword_nmt/apply_bpe.py \ - -c $expdir/vocab/bpe_codes.32k.SRC_TGT \ - --num-workers -1 \ - -i $train_file.SRC | \ -python $SUBWORD_NMT_DIR/subword_nmt/get_vocab.py \ - > $expdir/vocab/vocab.tmp.SRC -python scripts/clean_vocab.py $expdir/vocab/vocab.tmp.SRC $expdir/vocab/vocab.SRC -#rm $expdir/vocab/vocab.tmp.SRC - -echo "computing TGT vocab" -python $SUBWORD_NMT_DIR/subword_nmt/apply_bpe.py \ - -c $expdir/vocab/bpe_codes.32k.SRC_TGT \ - --num-workers -1 \ - -i $train_file.TGT | \ -python $SUBWORD_NMT_DIR/subword_nmt/get_vocab.py \ - > $expdir/vocab/vocab.tmp.TGT -python scripts/clean_vocab.py $expdir/vocab/vocab.tmp.TGT $expdir/vocab/vocab.TGT -#rm $expdir/vocab/vocab.tmp.TGT - -rm $train_file.ALL diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py deleted file mode 100644 index 66a426d2223ce75ffae6cee2131770556c5949bc..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py +++ /dev/null @@ -1,167 +0,0 @@ -import collections -import io -import json -import librosa -import numpy as np -import soundfile as sf -import time -import torch -from scipy.io.wavfile import read -from .text import SOS_TOK, EOS_TOK - - -def get_mask_from_lengths(lengths): - max_len = torch.max(lengths).item() - ids = torch.arange(0, max_len, out=torch.cuda.LongTensor(max_len)) - mask = (ids < lengths.unsqueeze(1)) - return mask - - -def load_wav_to_torch(full_path, sr=None): - data, sr = librosa.load(full_path, sr=sr) - data = np.clip(data, -1, 1) # potentially out of [-1, 1] due to resampling - data = data * 32768.0 # match values loaded by scipy - return torch.FloatTensor(data.astype(np.float32)), sr - - -def read_binary_audio(bin_data, tar_sr=None): - """ - read binary audio (`bytes` or `uint8` `numpy.ndarray`) to `float32` - `numpy.ndarray` - - RETURNS: - data (np.ndarray) : audio of shape (n,) or (2, n) - tar_sr (int) : sample rate - """ - data, ori_sr = sf.read(io.BytesIO(bin_data), dtype='float32') - data = data.T - if (tar_sr is not None) and (ori_sr != tar_sr): - data = librosa.resample(data, ori_sr, tar_sr) - else: - tar_sr = ori_sr - data = np.clip(data, -1, 1) - data = data * 32768.0 - return torch.FloatTensor(data.astype(np.float32)), tar_sr - - -def load_filepaths_and_text(filename): - with open(filename, encoding='utf-8') as f: - data = [json.loads(line.rstrip()) for line in f] - return data - - -def to_gpu(x): - x = x.contiguous() - - if torch.cuda.is_available(): - x = x.cuda(non_blocking=True) - return torch.autograd.Variable(x) - - -def load_code_dict(path, add_sos=False, add_eos=False): - if not path: - return {} - - with open(path, 'r') as f: - codes = ['_'] + [line.rstrip() for line in f] # '_' for pad - code_dict = {c: i for i, c in enumerate(codes)} - - if add_sos: - code_dict[SOS_TOK] = len(code_dict) - if add_eos: - code_dict[EOS_TOK] = len(code_dict) - assert(set(code_dict.values()) == set(range(len(code_dict)))) - - return code_dict - - -def load_obs_label_dict(path): - if not path: - return {} - with open(path, 'r') as f: - obs_labels = [line.rstrip() for line in f] - return {c: i for i, c in enumerate(obs_labels)} - - -# A simple timer class inspired from `tnt.TimeMeter` -class CudaTimer: - def __init__(self, keys): - self.keys = keys - self.reset() - - def start(self, key): - s = torch.cuda.Event(enable_timing=True) - s.record() - self.start_events[key].append(s) - return self - - def stop(self, key): - e = torch.cuda.Event(enable_timing=True) - e.record() - self.end_events[key].append(e) - return self - - def reset(self): - self.start_events = collections.defaultdict(list) - self.end_events = collections.defaultdict(list) - self.running_times = collections.defaultdict(float) - self.n = collections.defaultdict(int) - return self - - def value(self): - self._synchronize() - return {k: self.running_times[k] / self.n[k] for k in self.keys} - - def _synchronize(self): - torch.cuda.synchronize() - for k in self.keys: - starts = self.start_events[k] - ends = self.end_events[k] - if len(starts) == 0: - raise ValueError("Trying to divide by zero in TimeMeter") - if len(ends) != len(starts): - raise ValueError("Call stop before checking value!") - time = 0 - for start, end in zip(starts, ends): - time += start.elapsed_time(end) - self.running_times[k] += time * 1e-3 - self.n[k] += len(starts) - self.start_events = collections.defaultdict(list) - self.end_events = collections.defaultdict(list) - - -# Used to measure the time taken for multiple events -class Timer: - def __init__(self, keys): - self.keys = keys - self.n = {} - self.running_time = {} - self.total_time = {} - self.reset() - - def start(self, key): - self.running_time[key] = time.time() - return self - - def stop(self, key): - self.total_time[key] = time.time() - self.running_time[key] - self.n[key] += 1 - self.running_time[key] = None - return self - - def reset(self): - for k in self.keys: - self.total_time[k] = 0 - self.running_time[k] = None - self.n[k] = 0 - return self - - def value(self): - vals = {} - for k in self.keys: - if self.n[k] == 0: - raise ValueError("Trying to divide by zero in TimeMeter") - else: - vals[k] = self.total_time[k] / self.n[k] - return vals - diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh deleted file mode 100644 index e9a80001eb47d5af863d6aab11a59362a59cef61..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -sil_prob=0.5 -num_sil_states=3 -num_nonsil_states=1 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -eux - -dict=$1 -data_dir=$2 - -dict_dir=$data_dir/local/dict -tmplm_dir=$data_dir/local/lang_tmp -lm_dir=$data_dir/lang - -mkdir -p $dict_dir $tmplm_dir $lm_dir - -# prepare dict -echo "SIL" > $dict_dir/silence_phones.txt -echo "SIL" > $dict_dir/optional_silence.txt -awk '{print $1}' $dict > $dict_dir/nonsilence_phones.txt - -echo "SIL SIL" > $dict_dir/lexicon.txt -echo " SIL" >> $dict_dir/lexicon.txt -awk '{print $1" "$1}' $dict >> $dict_dir/lexicon.txt - -echo "SIL" > $dict_dir/extra_questions.txt -awk '{printf $1" "} END {printf "\n"}' $dict >> $dict_dir/extra_questions.txt - -# prepare lang -utils/prepare_lang.sh --sil-prob $sil_prob --position-dependent-phones false \ - --num_sil_states $num_sil_states --num_nonsil_states $num_nonsil_states \ - $dict_dir "" $tmplm_dir $lm_dir diff --git a/spaces/Iceclear/StableSR/StableSR/taming/data/conditional_builder/utils.py b/spaces/Iceclear/StableSR/StableSR/taming/data/conditional_builder/utils.py deleted file mode 100644 index d0ee175f2e05a80dbc71c22acbecb22dddadbb42..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/taming/data/conditional_builder/utils.py +++ /dev/null @@ -1,105 +0,0 @@ -import importlib -from typing import List, Any, Tuple, Optional - -from taming.data.helper_types import BoundingBox, Annotation - -# source: seaborn, color palette tab10 -COLOR_PALETTE = [(30, 118, 179), (255, 126, 13), (43, 159, 43), (213, 38, 39), (147, 102, 188), - (139, 85, 74), (226, 118, 193), (126, 126, 126), (187, 188, 33), (22, 189, 206)] -BLACK = (0, 0, 0) -GRAY_75 = (63, 63, 63) -GRAY_50 = (127, 127, 127) -GRAY_25 = (191, 191, 191) -WHITE = (255, 255, 255) -FULL_CROP = (0., 0., 1., 1.) - - -def intersection_area(rectangle1: BoundingBox, rectangle2: BoundingBox) -> float: - """ - Give intersection area of two rectangles. - @param rectangle1: (x0, y0, w, h) of first rectangle - @param rectangle2: (x0, y0, w, h) of second rectangle - """ - rectangle1 = rectangle1[0], rectangle1[1], rectangle1[0] + rectangle1[2], rectangle1[1] + rectangle1[3] - rectangle2 = rectangle2[0], rectangle2[1], rectangle2[0] + rectangle2[2], rectangle2[1] + rectangle2[3] - x_overlap = max(0., min(rectangle1[2], rectangle2[2]) - max(rectangle1[0], rectangle2[0])) - y_overlap = max(0., min(rectangle1[3], rectangle2[3]) - max(rectangle1[1], rectangle2[1])) - return x_overlap * y_overlap - - -def horizontally_flip_bbox(bbox: BoundingBox) -> BoundingBox: - return 1 - (bbox[0] + bbox[2]), bbox[1], bbox[2], bbox[3] - - -def absolute_bbox(relative_bbox: BoundingBox, width: int, height: int) -> Tuple[int, int, int, int]: - bbox = relative_bbox - bbox = bbox[0] * width, bbox[1] * height, (bbox[0] + bbox[2]) * width, (bbox[1] + bbox[3]) * height - return int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3]) - - -def pad_list(list_: List, pad_element: Any, pad_to_length: int) -> List: - return list_ + [pad_element for _ in range(pad_to_length - len(list_))] - - -def rescale_annotations(annotations: List[Annotation], crop_coordinates: BoundingBox, flip: bool) -> \ - List[Annotation]: - def clamp(x: float): - return max(min(x, 1.), 0.) - - def rescale_bbox(bbox: BoundingBox) -> BoundingBox: - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - if flip: - x0 = 1 - (x0 + w) - return x0, y0, w, h - - return [a._replace(bbox=rescale_bbox(a.bbox)) for a in annotations] - - -def filter_annotations(annotations: List[Annotation], crop_coordinates: BoundingBox) -> List: - return [a for a in annotations if intersection_area(a.bbox, crop_coordinates) > 0.0] - - -def additional_parameters_string(annotation: Annotation, short: bool = True) -> str: - sl = slice(1) if short else slice(None) - string = '' - if not (annotation.is_group_of or annotation.is_occluded or annotation.is_depiction or annotation.is_inside): - return string - if annotation.is_group_of: - string += 'group'[sl] + ',' - if annotation.is_occluded: - string += 'occluded'[sl] + ',' - if annotation.is_depiction: - string += 'depiction'[sl] + ',' - if annotation.is_inside: - string += 'inside'[sl] - return '(' + string.strip(",") + ')' - - -def get_plot_font_size(font_size: Optional[int], figure_size: Tuple[int, int]) -> int: - if font_size is None: - font_size = 10 - if max(figure_size) >= 256: - font_size = 12 - if max(figure_size) >= 512: - font_size = 15 - return font_size - - -def get_circle_size(figure_size: Tuple[int, int]) -> int: - circle_size = 2 - if max(figure_size) >= 256: - circle_size = 3 - if max(figure_size) >= 512: - circle_size = 4 - return circle_size - - -def load_object_from_string(object_string: str) -> Any: - """ - Source: https://stackoverflow.com/a/10773699 - """ - module_name, class_name = object_string.rsplit(".", 1) - return getattr(importlib.import_module(module_name), class_name) diff --git a/spaces/IcelandAI/AnimalsOfIceland/app.py b/spaces/IcelandAI/AnimalsOfIceland/app.py deleted file mode 100644 index 37b686267b8b8db8ca2b3b01a716ca17eea6d70b..0000000000000000000000000000000000000000 --- a/spaces/IcelandAI/AnimalsOfIceland/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import streamlit as st - -st.markdown(""" - -## 🦈 Top Ten Types of Sharks in Iceland 🇮🇸 - -| Rank | Emoji | Shark | Location | Images | -|------|-------|-----------------------|---------------------|----------------------------------------------| -| 1 | 🦈 | [Greenland Shark](https://www.google.com/search?q=greenland+shark+iceland+pictures) | Northern Coast | | -| 2 | 🦈 | [Basking Shark](https://www.google.com/search?q=basking+shark+iceland+pictures) | West Fjords | | -| 3 | 🦈 | [Porbeagle Shark](https://www.google.com/search?q=porbeagle+shark+iceland+pictures) | Reykjanes Peninsula | | -| 4 | 🦈 | [Blue Shark](https://www.google.com/search?q=blue+shark+iceland+pictures) | South Coast | | -| 5 | 🦈 | [Thresher Shark](https://www.google.com/search?q=thresher+shark+iceland+pictures) | Snaefellsnes Peninsula | | -| 6 | 🦈 | [Shortfin Mako Shark](https://www.google.com/search?q=shortfin+mako+shark+iceland+pictures) | West Coast | | -| 7 | 🦈 | [Spiny Dogfish](https://www.google.com/search?q=spiny+dogfish+iceland+pictures) | East Coast | | -| 8 | 🦈 | [Longnose Spurdog](https://www.google.com/search?q=longnose+spurdog+iceland+pictures) | East Fjords | | -| 9 | 🦈 | [Smooth Hammerhead](https://www.google.com/search?q=smooth+hammerhead+iceland+pictures) | North Coast | | -| 10 | 🦈 | [Angel Shark](https://www.google.com/search?q=angel+shark+iceland+pictures) | Reykjanes Peninsula | | - -## 🐋 Top Ten Types of Whales in Iceland 🇮🇸 - -| Rank | Emoji | Whale | Location | Images | -|------|-------|-----------------------------|---------------------|----------------------------------------------| -| 1 | 🐋 | [Humpback Whale](https://www.google.com/search?q=humpback+whale+iceland+pictures) | Northern Coast | | -| 2 | 🐋 | [Minke Whale](https://www.google.com/search?q=minke+whale+iceland+pictures) | West Fjords | | -| 3 | 🐋 | [Blue Whale](https://www.google.com/search?q=blue+whale+iceland+pictures) | Reykjanes Peninsula | | -| 4 | 🐋 | [Orca or Killer Whale](https://www.google.com/search?q=orca+whale+iceland+pictures) | South Coast | | -| 5 | 🐋 | [Sperm Whale](https://www.google.com/search?q=sperm+whale+iceland+pictures) | Snaefellsnes Peninsula | | -| 6 | 🐋 | [Fin Whale](https://www.google.com/search?q=fin+whale+iceland+pictures) | West Coast | | -| 7 | 🐋 | [Sei Whale](https://www.google.com/search?q=sei+whale+iceland+pictures) | East Coast | | -| 8 | 🐋 | [North Atlantic Right Whale](https://www.google.com/search?q=north+atlantic+right+whale+iceland+pictures) | East Fjords | | -| 9 | 🐋 | [Bryde's Whale](https://www.google.com/search?q=brydes+whale+iceland+pictures) | North Coast | | -| 10 | 🐋 | [Gray Whale](https://www.google.com/search?q=gray+whale+iceland+pictures) | Reykjanes Peninsula | | - - -""") - -markdown_text = ''' -## 🐾 Top Ten Animals and Wildlife in Iceland 🇮🇸 - -1. [🐋 Humpback Whales](https://www.google.com/search?q=humpback+whales+iceland+pictures) - Majestic acrobats of the ocean, often spotted during whale watching tours. -2. [🦭 Harbor Seals](https://www.google.com/search?q=harbor+seals+iceland+pictures) - Curious and playful creatures, frequently seen in glacial lagoons. -3. [🦌 Icelandic Reindeer](https://www.google.com/search?q=icelandic+reindeer+pictures) - Introduced from Norway, they've adapted to Iceland's unique landscapes. -4. [🦊 Arctic Foxes](https://www.google.com/search?q=arctic+fox+iceland+pictures) - Iceland's only native land mammal, with a stunning coat that changes with the seasons. -5. [🦆 Harlequin Ducks](https://www.google.com/search?q=harlequin+ducks+iceland+pictures) - Colorful and distinctive birds often found near fast-flowing rivers. -6. [🌊 Atlantic Puffins](https://www.google.com/search?q=atlantic+puffins+iceland+pictures) - Iconic seabirds with bright beaks, nesting in coastal cliffs during summer. -7. [🦢 Whooper Swans](https://www.google.com/search?q=whooper+swans+iceland+pictures) - Elegant and large migratory birds found around lakes and wetlands. -8. [🐏 Icelandic Sheep](https://www.google.com/search?q=icelandic+sheep+pictures) - Hardy, dual-coated animals that have played a vital role in Icelandic culture. -9. [🐴 Icelandic Horses](https://www.google.com/search?q=icelandic+horses+pictures) - A unique and pure breed known for their strength, friendly nature, and tölt gait. -10. [🦅 Gyrfalcon](https://www.google.com/search?q=gyrfalcon+iceland+pictures) - Iceland's largest falcon species, a powerful and agile predator in the skies. - -''' - -st.markdown(markdown_text) diff --git a/spaces/IdaLee/DrawEasy/README.md b/spaces/IdaLee/DrawEasy/README.md deleted file mode 100644 index b1d04f6c66e05b6ec4c92f6c8db17471ada57bb4..0000000000000000000000000000000000000000 --- a/spaces/IdaLee/DrawEasy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DrawEasy -emoji: 😻 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.28.1 -app_file: pictureDeal2.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Isotonic/image-generator/README.md b/spaces/Isotonic/image-generator/README.md deleted file mode 100644 index a15075e6cc6a2318e4c85a5d568d58cba3afbfca..0000000000000000000000000000000000000000 --- a/spaces/Isotonic/image-generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image Generator -emoji: 🌖 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jack003/PixelDayAvatoon/app.py b/spaces/Jack003/PixelDayAvatoon/app.py deleted file mode 100644 index ef4adee465f19a47660a6cb6df5af2a054a477cf..0000000000000000000000000000000000000000 --- a/spaces/Jack003/PixelDayAvatoon/app.py +++ /dev/null @@ -1,33 +0,0 @@ -from PIL import Image -import torch -import gradio as gr - - - -model2 = torch.hub.load( - "AK391/animegan2-pytorch:main", - "generator", - pretrained=True, - device="cpu", - progress=False -) - - -model1 = torch.hub.load("AK391/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1", device="cpu") -face2paint = torch.hub.load( - 'AK391/animegan2-pytorch:main', 'face2paint', - size=512, device="cpu",side_by_side=False -) -def inference(img, ver): - if ver == 'version 2 (🔺 robustness,🔻 stylization)': - out = face2paint(model2, img) - else: - out = face2paint(model1, img) - return out - -title = "AnimeGANv2" -description = "Gradio Demo for AnimeGanv2 Face Portrait. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please use a cropped portrait picture for best results similar to the examples below." -article = "

    Github Repo Pytorch

    visitor badge

    " -examples=[['groot.jpeg','version 2 (🔺 robustness,🔻 stylization)'],['bill.png','version 1 (🔺 stylization, 🔻 robustness)'],['tony.png','version 1 (🔺 stylization, 🔻 robustness)'],['elon.png','version 2 (🔺 robustness,🔻 stylization)'],['IU.png','version 1 (🔺 stylization, 🔻 robustness)'],['billie.png','version 2 (🔺 robustness,🔻 stylization)'],['will.png','version 2 (🔺 robustness,🔻 stylization)'],['beyonce.png','version 1 (🔺 stylization, 🔻 robustness)'],['gongyoo.jpeg','version 1 (🔺 stylization, 🔻 robustness)']] -gr.Interface(inference, [gr.inputs.Image(type="pil"),gr.inputs.Radio(['version 1 (🔺 stylization, 🔻 robustness)','version 2 (🔺 robustness,🔻 stylization)'], type="value", default='version 2 (🔺 robustness,🔻 stylization)', label='version') -], gr.outputs.Image(type="pil"),title=title,description=description,article=article,examples=examples,allow_flagging=False,allow_screenshot=False).launch() diff --git a/spaces/Jamkonams/AutoGPT/run.bat b/spaces/Jamkonams/AutoGPT/run.bat deleted file mode 100644 index afbab57a0603a126b04845ec754d1ecf3fdea18d..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/run.bat +++ /dev/null @@ -1,8 +0,0 @@ -@echo off -python scripts/check_requirements.py requirements.txt -if errorlevel 1 ( - echo Installing missing packages... - pip install -r requirements.txt -) -python -m autogpt %* -pause diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/archs/vgg_arch.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/archs/vgg_arch.py deleted file mode 100644 index 23bb0103c8b14ef2588028f7177753db9af62cae..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/archs/vgg_arch.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -import torch -from collections import OrderedDict -from torch import nn as nn -from torchvision.models import vgg as vgg - -from basicsr.utils.registry import ARCH_REGISTRY - -VGG_PRETRAIN_PATH = 'experiments/pretrained_models/vgg19-dcbb9e9d.pth' -NAMES = { - 'vgg11': [ - 'conv1_1', 'relu1_1', 'pool1', 'conv2_1', 'relu2_1', 'pool2', 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', - 'pool3', 'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'pool4', 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', - 'pool5' - ], - 'vgg13': [ - 'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2', - 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'pool3', 'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'pool4', - 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'pool5' - ], - 'vgg16': [ - 'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2', - 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'pool3', 'conv4_1', 'relu4_1', 'conv4_2', - 'relu4_2', 'conv4_3', 'relu4_3', 'pool4', 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3', - 'pool5' - ], - 'vgg19': [ - 'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2', - 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'conv3_4', 'relu3_4', 'pool3', 'conv4_1', - 'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3', 'relu4_3', 'conv4_4', 'relu4_4', 'pool4', 'conv5_1', 'relu5_1', - 'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3', 'conv5_4', 'relu5_4', 'pool5' - ] -} - - -def insert_bn(names): - """Insert bn layer after each conv. - - Args: - names (list): The list of layer names. - - Returns: - list: The list of layer names with bn layers. - """ - names_bn = [] - for name in names: - names_bn.append(name) - if 'conv' in name: - position = name.replace('conv', '') - names_bn.append('bn' + position) - return names_bn - - -@ARCH_REGISTRY.register() -class VGGFeatureExtractor(nn.Module): - """VGG network for feature extraction. - - In this implementation, we allow users to choose whether use normalization - in the input feature and the type of vgg network. Note that the pretrained - path must fit the vgg type. - - Args: - layer_name_list (list[str]): Forward function returns the corresponding - features according to the layer_name_list. - Example: {'relu1_1', 'relu2_1', 'relu3_1'}. - vgg_type (str): Set the type of vgg network. Default: 'vgg19'. - use_input_norm (bool): If True, normalize the input image. Importantly, - the input feature must in the range [0, 1]. Default: True. - range_norm (bool): If True, norm images with range [-1, 1] to [0, 1]. - Default: False. - requires_grad (bool): If true, the parameters of VGG network will be - optimized. Default: False. - remove_pooling (bool): If true, the max pooling operations in VGG net - will be removed. Default: False. - pooling_stride (int): The stride of max pooling operation. Default: 2. - """ - - def __init__(self, - layer_name_list, - vgg_type='vgg19', - use_input_norm=True, - range_norm=False, - requires_grad=False, - remove_pooling=False, - pooling_stride=2): - super(VGGFeatureExtractor, self).__init__() - - self.layer_name_list = layer_name_list - self.use_input_norm = use_input_norm - self.range_norm = range_norm - - self.names = NAMES[vgg_type.replace('_bn', '')] - if 'bn' in vgg_type: - self.names = insert_bn(self.names) - - # only borrow layers that will be used to avoid unused params - max_idx = 0 - for v in layer_name_list: - idx = self.names.index(v) - if idx > max_idx: - max_idx = idx - - if os.path.exists(VGG_PRETRAIN_PATH): - vgg_net = getattr(vgg, vgg_type)(pretrained=False) - state_dict = torch.load(VGG_PRETRAIN_PATH, map_location=lambda storage, loc: storage) - vgg_net.load_state_dict(state_dict) - else: - vgg_net = getattr(vgg, vgg_type)(pretrained=True) - - features = vgg_net.features[:max_idx + 1] - - modified_net = OrderedDict() - for k, v in zip(self.names, features): - if 'pool' in k: - # if remove_pooling is true, pooling operation will be removed - if remove_pooling: - continue - else: - # in some cases, we may want to change the default stride - modified_net[k] = nn.MaxPool2d(kernel_size=2, stride=pooling_stride) - else: - modified_net[k] = v - - self.vgg_net = nn.Sequential(modified_net) - - if not requires_grad: - self.vgg_net.eval() - for param in self.parameters(): - param.requires_grad = False - else: - self.vgg_net.train() - for param in self.parameters(): - param.requires_grad = True - - if self.use_input_norm: - # the mean is for image with range [0, 1] - self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)) - # the std is for image with range [0, 1] - self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)) - - def forward(self, x): - """Forward function. - - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - - Returns: - Tensor: Forward results. - """ - if self.range_norm: - x = (x + 1) / 2 - if self.use_input_norm: - x = (x - self.mean) / self.std - output = {} - - for key, layer in self.vgg_net._modules.items(): - x = layer(x) - if key in self.layer_name_list: - output[key] = x.clone() - - return output diff --git a/spaces/JayKen/propertySearch/app.py b/spaces/JayKen/propertySearch/app.py deleted file mode 100644 index 828cccd866bf8488ca479823a3ed66306236248b..0000000000000000000000000000000000000000 --- a/spaces/JayKen/propertySearch/app.py +++ /dev/null @@ -1,222 +0,0 @@ -import streamlit as st -from sklearn.metrics.pairwise import cosine_similarity -from sentence_transformers import SentenceTransformer -from transformers import CLIPTokenizer, CLIPModel -from scipy.spatial import distance -import numpy as np -import torch -import json -import random - -model_id = "openai/clip-vit-base-patch32" -clip_model = CLIPModel.from_pretrained(model_id) -clip_tokenizer = CLIPTokenizer.from_pretrained(model_id) -device = "cuda" if torch.cuda.is_available() else "cpu" -# move the model to the device -clip_model.to(device) - -def get_single_text_embedding(text): - inputs = clip_tokenizer(text, return_tensors = "pt") - text_embeddings = clip_model.to(device).get_text_features(**inputs.to(device)) - embedding_as_np = text_embeddings.cpu().detach().numpy() - return embedding_as_np - -def get_top_N_images(query, image_vectors, top_K=4): - query_vect = get_single_text_embedding(query) - - data = cosine_similarity(query_vect, image_vectors)[0] - - most_similar_images = sorted(range(len(data)), key=lambda i: data[i], reverse=True)[:top_K] - - return most_similar_images - - -image_data = [] - -with open('Property-images.jsonl', 'r', encoding="utf-8") as f: - for line in f: - image_data.append(json.loads(line)) - -with open('property-image-vectors.npy', 'rb') as f: - image_vectors = np.load(f) - - - -def combine(data): - new_data = [] - - window = 4 # number of sentences to combine - stride = 1 # number of sentences to 'stride' over, used to create overlap - - for i in range(0, len(data), stride): - i_end = min(len(data)-1, i+window) - if data[i]['title'] != data[i_end]['title']: - # in this case we skip this entry as we have start/end of two videos - continue - text = ' ' - - for x in data[i:i_end]: - text += x['text'] - new_data.append({ - 'start': data[i]['start'], - 'end': data[i_end]['end'], - 'title': data[i]['title'], - 'text': text, - 'id': data[i]['id'], - 'url': data[i]['url'], - 'published': data[i]['published'] - }) - - return new_data - - -model_id = "multi-qa-mpnet-base-dot-v1" -model = SentenceTransformer(model_id) - - -meta_data = [] - -with open('Property-transcription.jsonl', 'r', encoding="utf-8") as f: - for line in f: - meta_data.append(json.loads(line)) - -meta_data = combine(meta_data) - -with open('property-vectors.npy', 'rb') as f: - text_vector = np.load(f) - - -def card(thumbnail: str, title: str, urls: list, contexts: list, starts: list, ends: list): - meta = [(e, s, u, c) for e, s, u, c in zip(ends, starts, urls, contexts)] - meta.sort(reverse=False) - text_content = [] - current_start = 0 - current_end = 0 - for end, start, url, context in meta: - # reformat seconds to timestamp - time = start / 60 - mins = f"0{int(time)}"[-2:] - secs = f"0{int(round((time - int(mins))*60, 0))}"[-2:] - timestamp = f"{mins}:{secs}" - if start < current_end and start > current_start: - # this means it is a continuation of the previous sentence - text_content[-1][0] = text_content[-1][0].split(context[:10])[0] - text_content.append([f"[{timestamp}] {context.capitalize()}", url]) - else: - text_content.append(["xxLINEBREAKxx", ""]) - text_content.append([f"[{timestamp}] {context}", url]) - current_start = start - current_end = end - html_text = "" - for text, url in text_content: - if text == "xxLINEBREAKxx": - html_text += "
    " - else: - html_text += f"{text.strip()}...
    " - print(html_text) - html = f""" -
    -
    -
    -
    - -
    -
    -
    -

    {title}

    -
    -
    - {html_text} -

    - """ - return st.markdown(html, unsafe_allow_html=True) - - -st.write(""" -# Property Video Search 🏘️ -Utilize AI to quickly locate amenities and features within your residence.🤗 -""") - -st.markdown(""" - -""", unsafe_allow_html=True) - -query = st.text_input("Search!", "") - -if query != "": - - vector1 = model.encode(query) - text_cosines = dict() - for i,vec in enumerate(text_vector): - text_cosines[str(i)+'-text'] = 1 - distance.cosine(vector1, vec) - - - if random.randint(0, 1): - vector1 = get_single_text_embedding(query) - image_cosines = dict() - for i,vec in enumerate(image_vectors): - image_cosines[str(i)+'-image'] = 1 - distance.cosine(vector1.reshape((512,)), vec) - - text_cosines.update(image_cosines) - - sorted_cosines = sorted(text_cosines.items(), key=lambda x:x[1], reverse=True) - converted_dict = dict(sorted_cosines) - - - results = {} - order = [] - - for vec_index in list(converted_dict.keys())[:7]: - idx = int(vec_index.split('-')[0]) - video_id = image_data[idx]['url'].split('/')[-1] - - if vec_index.split('-')[-1] == 'image': - if video_id not in results: - results[video_id] = { - 'title': image_data[idx]['title'], - 'urls': [f"{image_data[idx]['url']}?t={int(image_data[idx]['sec'])}"], - 'contexts': ['Image-query'], - 'starts': [int(image_data[idx]['sec'])], - 'ends': [int(image_data[idx]['sec']+6)] - } - order.append(video_id) - else: - results[video_id]['urls'].append( - f"{image_data[idx]['url']}?t={int(image_data[idx]['sec'])}" - ) - results[video_id]['contexts'].append('Image-query') - results[video_id]['starts'].append(int(image_data[idx]['sec'])) - results[video_id]['ends'].append(int(image_data[idx]['sec']+6)) - - elif vec_index.split('-')[-1] == 'text': - if video_id not in results: - results[video_id] = { - 'title': meta_data[idx]['title'], - 'urls': [f"{meta_data[idx]['url']}?t={int(meta_data[idx]['start'])}"], - 'contexts': [meta_data[idx]['text']], - 'starts': [int(meta_data[idx]['start'])], - 'ends': [int(meta_data[idx]['end'])] - } - order.append(video_id) - else: - results[video_id]['urls'].append( - f"{meta_data[idx]['url']}?t={int(meta_data[idx]['start'])}" - ) - results[video_id]['contexts'].append( - meta_data[idx]['text'] - ) - results[video_id]['starts'].append(int(meta_data[idx]['start'])) - results[video_id]['ends'].append(int(meta_data[idx]['end'])) - - # now display cards - for video_id in order: - card( - thumbnail=f"https://img.youtube.com/vi/{video_id}/maxresdefault.jpg", - title=results[video_id]['title'], - urls=results[video_id]['urls'], - contexts=results[video_id]['contexts'], - starts=results[video_id]['starts'], - ends=results[video_id]['ends'] - ) -else: - st.warning('💡Try searching: huge balcony, swimming pool, spiral staircase, panoramic view ... etc') diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/base_model.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
  10. {domain_name}
  11. \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
      \n\n" + "".join(display_append) + "
    " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/stylesheet/ChuanhuChat.css b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/stylesheet/ChuanhuChat.css deleted file mode 100644 index 62d41dbd061d200ba5a6841b318aea22950d1791..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/web_assets/stylesheet/ChuanhuChat.css +++ /dev/null @@ -1,112 +0,0 @@ -:root { - --chatbot-color-light: #000000; - --chatbot-color-dark: #FFFFFF; - --chatbot-background-color-light: #F3F3F3; - --chatbot-background-color-dark: #121111; - --message-user-background-color-light: #95EC69; - --message-user-background-color-dark: #26B561; - --message-bot-background-color-light: #FFFFFF; - --message-bot-background-color-dark: #2C2C2C; - --switch-checkbox-color-light: #e5e7eb; - --switch-checkbox-color-dark: #515151; -} - -.hideK { - display: none; -} - -#app-title { - font-weight: var(--prose-header-text-weight); - font-size: var(--text-xxl); - line-height: 1.3; - text-align: left; - margin-top: 6px; - white-space: nowrap; -} -#description { - text-align: center; - margin: 32px 0 4px 0; -} - -/* 高级页面 */ -#advanced-warning { - display: flex; - flex-wrap: wrap; - flex-direction: column; - align-content: center; -} - -#netsetting-warning hr { - margin-bottom: 1em; -} - -.view-only-textbox textarea { - -webkit-text-fill-color: darkgray !important; - cursor: not-allowed !important; -} - -#footer { - text-align: center; -} -#footer div { - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.60; -} - - -#float-display { - position: absolute; - max-height: 30px; -} - -.insert-block { - position: relative; - margin: 0; - padding: 8px 12px; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} - -/* status-display */ -#status-display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; - transition: all 0.6s; -} -#status-display p { - font-size: .85em; - font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace; - /* Windows下中文的monospace会fallback为新宋体,实在太丑,这里折中使用微软雅黑 */ - color: var(--body-text-color-subdued); -} - - -#submit-btn, #cancel-btn { - height: 40px !important; -} -#submit-btn::before { - content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -#cancel-btn::before { - content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} - -#chatbot-buttons button { - display: inline-block; - overflow: hidden; - text-overflow: ellipsis; - white-space: nowrap; -} \ No newline at end of file diff --git a/spaces/Jonni/01-3DModel_Gradio/app.py b/spaces/Jonni/01-3DModel_Gradio/app.py deleted file mode 100644 index 62e7b60344f5957e86a9c0de3d77985f68b52224..0000000000000000000000000000000000000000 --- a/spaces/Jonni/01-3DModel_Gradio/app.py +++ /dev/null @@ -1,24 +0,0 @@ -import time -import gradio as gr -import os - -def load_mesh(mesh_file_name): - return mesh_file_name, mesh_file_name - -demo = gr.Interface( - fn=load_mesh, - inputs=gr.Model3D(), - outputs=[ - gr.Model3D( - clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"), - gr.File(label="Download 3D Model") - ], - examples=[ - [os.path.join(os.path.dirname(__file__), "files/Duck.glb")], - [os.path.join(os.path.dirname(__file__), "files/rubber_duck.glb")], - [os.path.join(os.path.dirname(__file__), "files/GroundVehicle.glb")] - ], -) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/Josiah-Adesola/Text-Summarizer-Bart/app.py b/spaces/Josiah-Adesola/Text-Summarizer-Bart/app.py deleted file mode 100644 index f91e10770e33090166ef34bfdcba3ad864827238..0000000000000000000000000000000000000000 --- a/spaces/Josiah-Adesola/Text-Summarizer-Bart/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import os -from transformers import pipeline - - -# Load the summarization pipeline -summarizer = pipeline("summarization", model="facebook/bart-large-cnn") - -def summarize(input): - output = summarizer(input) - return output[0]['summary_text'] - -import gradio as gr -demo = gr.Interface(fn=summarize, - inputs = [gr.Textbox(label="Text to summarize", line=6)], - outputs=[gr.Textbox(label="Result", lines=3)], - title="Text summarization with bart-cnn", - description="Summarize any text using the `facebook/bart-large-cnn` model under the hood!") - -demo.launch(share=True) \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups_test.py b/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups_test.py deleted file mode 100644 index f3edf15811b5035ee82f21e54e87b7e87ce413eb..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups_test.py +++ /dev/null @@ -1,138 +0,0 @@ - -import os -import shutil -import hashlib -import time - -LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' -WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' -GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' - -def import_google_drive_backup(): - print("Importing Google Drive backup...") - GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' # change this to your Google Drive path - LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' - WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' - weights_exist = False - files_to_copy = [] - weights_to_copy = [] - - def handle_files(root, files, is_weight_files=False): - for filename in files: - filepath = os.path.join(root, filename) - if filename.endswith('.pth') and is_weight_files: - weights_exist = True - backup_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - else: - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created folder: {backup_folderpath}', flush=True) - if is_weight_files: - weights_to_copy.append((filepath, backup_filepath)) - else: - files_to_copy.append((filepath, backup_filepath)) - - for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'logs')): - handle_files(root, files) - - for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - handle_files(root, files, True) - - # Copy files in batches - total_files = len(files_to_copy) - start_time = time.time() - for i, (source, dest) in enumerate(files_to_copy, start=1): - with open(source, 'rb') as src, open(dest, 'wb') as dst: - shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size - # Report progress every 5 seconds or after every 100 files, whichever is less frequent - if time.time() - start_time > 5 or i % 100 == 0: - print(f'\rCopying file {i} of {total_files} ({i * 100 / total_files:.2f}%)', end="") - start_time = time.time() - print(f'\nImported {len(files_to_copy)} files from Google Drive backup') - - # Copy weights in batches - total_weights = len(weights_to_copy) - start_time = time.time() - for i, (source, dest) in enumerate(weights_to_copy, start=1): - with open(source, 'rb') as src, open(dest, 'wb') as dst: - shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size - # Report progress every 5 seconds or after every 100 files, whichever is less frequent - if time.time() - start_time > 5 or i % 100 == 0: - print(f'\rCopying weight file {i} of {total_weights} ({i * 100 / total_weights:.2f}%)', end="") - start_time = time.time() - if weights_exist: - print(f'\nImported {len(weights_to_copy)} weight files') - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("\nNo weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def backup_files(): - print("\n Starting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except: - last_backup_timestamps = {} - - while True: - updated = False - files_to_copy = [] - files_to_delete = [] - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - files_to_copy.append((filepath, backup_filepath)) # add to list of files to copy - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - files_to_delete.append(backup_filepath) # add to list of files to delete - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - # Copy files in batches - if files_to_copy: - for source, dest in files_to_copy: - shutil.copy2(source, dest) - print(f'Copied or updated {len(files_to_copy)} files') - - # Delete files in batches - if files_to_delete: - for file in files_to_delete: - os.remove(file) - print(f'Deleted {len(files_to_delete)} files') - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - time.sleep(15) # wait for 15 seconds before checking again diff --git a/spaces/Kartik2192/Abcd/style.css b/spaces/Kartik2192/Abcd/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Kartik2192/Abcd/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/deep/original_model.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/deep/original_model.py deleted file mode 100644 index 72453a6392b9a360c03034eefee1d6be30f8121b..0000000000000000000000000000000000000000 --- a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/deep/original_model.py +++ /dev/null @@ -1,106 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -class BasicBlock(nn.Module): - def __init__(self, c_in, c_out,is_downsample=False): - super(BasicBlock,self).__init__() - self.is_downsample = is_downsample - if is_downsample: - self.conv1 = nn.Conv2d(c_in, c_out, 3, stride=2, padding=1, bias=False) - else: - self.conv1 = nn.Conv2d(c_in, c_out, 3, stride=1, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(c_out) - self.relu = nn.ReLU(True) - self.conv2 = nn.Conv2d(c_out,c_out,3,stride=1,padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(c_out) - if is_downsample: - self.downsample = nn.Sequential( - nn.Conv2d(c_in, c_out, 1, stride=2, bias=False), - nn.BatchNorm2d(c_out) - ) - elif c_in != c_out: - self.downsample = nn.Sequential( - nn.Conv2d(c_in, c_out, 1, stride=1, bias=False), - nn.BatchNorm2d(c_out) - ) - self.is_downsample = True - - def forward(self,x): - y = self.conv1(x) - y = self.bn1(y) - y = self.relu(y) - y = self.conv2(y) - y = self.bn2(y) - if self.is_downsample: - x = self.downsample(x) - return F.relu(x.add(y),True) - -def make_layers(c_in,c_out,repeat_times, is_downsample=False): - blocks = [] - for i in range(repeat_times): - if i ==0: - blocks += [BasicBlock(c_in,c_out, is_downsample=is_downsample),] - else: - blocks += [BasicBlock(c_out,c_out),] - return nn.Sequential(*blocks) - -class Net(nn.Module): - def __init__(self, num_classes=625 ,reid=False): - super(Net,self).__init__() - # 3 128 64 - self.conv = nn.Sequential( - nn.Conv2d(3,32,3,stride=1,padding=1), - nn.BatchNorm2d(32), - nn.ELU(inplace=True), - nn.Conv2d(32,32,3,stride=1,padding=1), - nn.BatchNorm2d(32), - nn.ELU(inplace=True), - nn.MaxPool2d(3,2,padding=1), - ) - # 32 64 32 - self.layer1 = make_layers(32,32,2,False) - # 32 64 32 - self.layer2 = make_layers(32,64,2,True) - # 64 32 16 - self.layer3 = make_layers(64,128,2,True) - # 128 16 8 - self.dense = nn.Sequential( - nn.Dropout(p=0.6), - nn.Linear(128*16*8, 128), - nn.BatchNorm1d(128), - nn.ELU(inplace=True) - ) - # 256 1 1 - self.reid = reid - self.batch_norm = nn.BatchNorm1d(128) - self.classifier = nn.Sequential( - nn.Linear(128, num_classes), - ) - - def forward(self, x): - x = self.conv(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - - x = x.view(x.size(0),-1) - if self.reid: - x = self.dense[0](x) - x = self.dense[1](x) - x = x.div(x.norm(p=2,dim=1,keepdim=True)) - return x - x = self.dense(x) - # B x 128 - # classifier - x = self.classifier(x) - return x - - -if __name__ == '__main__': - net = Net(reid=True) - x = torch.randn(4,3,128,64) - y = net(x) - import ipdb; ipdb.set_trace() - - diff --git a/spaces/KenjieDec/RemBG/rembg/commands/i_command.py b/spaces/KenjieDec/RemBG/rembg/commands/i_command.py deleted file mode 100644 index d65313c968f01c0ba331c9db198331156b65857f..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/RemBG/rembg/commands/i_command.py +++ /dev/null @@ -1,93 +0,0 @@ -import json -import sys -from typing import IO - -import click - -from ..bg import remove -from ..session_factory import new_session -from ..sessions import sessions_names - - -@click.command( - name="i", - help="for a file as input", -) -@click.option( - "-m", - "--model", - default="u2net", - type=click.Choice(sessions_names), - show_default=True, - show_choices=True, - help="model name", -) -@click.option( - "-a", - "--alpha-matting", - is_flag=True, - show_default=True, - help="use alpha matting", -) -@click.option( - "-af", - "--alpha-matting-foreground-threshold", - default=240, - type=int, - show_default=True, - help="trimap fg threshold", -) -@click.option( - "-ab", - "--alpha-matting-background-threshold", - default=10, - type=int, - show_default=True, - help="trimap bg threshold", -) -@click.option( - "-ae", - "--alpha-matting-erode-size", - default=10, - type=int, - show_default=True, - help="erode size", -) -@click.option( - "-om", - "--only-mask", - is_flag=True, - show_default=True, - help="output only the mask", -) -@click.option( - "-ppm", - "--post-process-mask", - is_flag=True, - show_default=True, - help="post process the mask", -) -@click.option( - "-bgc", - "--bgcolor", - default=None, - type=(int, int, int, int), - nargs=4, - help="Background color (R G B A) to replace the removed background with", -) -@click.option("-x", "--extras", type=str) -@click.argument( - "input", default=(None if sys.stdin.isatty() else "-"), type=click.File("rb") -) -@click.argument( - "output", - default=(None if sys.stdin.isatty() else "-"), - type=click.File("wb", lazy=True), -) -def i_command(model: str, extras: str, input: IO, output: IO, **kwargs) -> None: - try: - kwargs.update(json.loads(extras)) - except Exception: - pass - - output.write(remove(input.read(), session=new_session(model), **kwargs)) diff --git a/spaces/Large-LLM-Proxy-CAI/GateOfProxyClaude2.0/Dockerfile b/spaces/Large-LLM-Proxy-CAI/GateOfProxyClaude2.0/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/Large-LLM-Proxy-CAI/GateOfProxyClaude2.0/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Lianjd/stock_dashboard/backtrader/feeds/vcdata.py b/spaces/Lianjd/stock_dashboard/backtrader/feeds/vcdata.py deleted file mode 100644 index 6449bcd6ed56491800d06fafc99312fc485df42f..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/feeds/vcdata.py +++ /dev/null @@ -1,595 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -from datetime import datetime, timedelta, tzinfo - -import backtrader as bt -from backtrader import TimeFrame, date2num, num2date -from backtrader.feed import DataBase -from backtrader.metabase import MetaParams -from backtrader.utils.py3 import (integer_types, queue, string_types, - with_metaclass) - -from backtrader.stores import vcstore - - -class MetaVCData(DataBase.__class__): - def __init__(cls, name, bases, dct): - '''Class has already been created ... register''' - # Initialize the class - super(MetaVCData, cls).__init__(name, bases, dct) - - # Register with the store - vcstore.VCStore.DataCls = cls - - -class VCData(with_metaclass(MetaVCData, DataBase)): - '''VisualChart Data Feed. - - Params: - - - ``qcheck`` (default: ``0.5``) - Default timeout for waking up to let a resampler/replayer that the - current bar can be check for due delivery - - The value is only used if a resampling/replaying filter has been - inserted in the data - - - ``historical`` (default: ``False``) - If no ``todate`` parameter is supplied (defined in the base class), - this will force a historical only download if set to ``True`` - - If ``todate`` is supplied the same effect is achieved - - - ``milliseconds`` (default: ``True``) - The bars constructed by *Visual Chart* have this aspect: - HH:MM:59.999000 - - If this parameter is ``True`` a millisecond will be added to this time - to make it look like: HH::MM + 1:00.000000 - - - ``tradename`` (default: ``None``) - Continous futures cannot be traded but are ideal for data tracking. If - this parameter is supplied it will be the name of the current future - which will be the trading asset. Example: - - - 001ES -> ES-Mini continuous supplied as ``dataname`` - - - ESU16 -> ES-Mini 2016-09. If this is supplied in ``tradename`` it - will be the trading asset. - - - ``usetimezones`` (default: ``True``) - For most markets the time offset information provided by *Visual Chart* - allows for datetime to be converted to market time (*backtrader* choice - for representation) - - Some markets are special (``096``) and need special internal coverage - and timezone support to display in the user expected market time. - - If this parameter is set to ``True`` importing ``pytz`` will be - attempted to use timezones (default) - - Disabling it will remove timezone usage (may help if the load is - excesive) - ''' - params = ( - ('qcheck', 0.5), # timeout in seconds (float) to check for events - ('historical', False), # usual industry value - ('millisecond', True), # fix missing millisecond in time - ('tradename', None), # name of the real asset to trade on - ('usetimezones', True), # use pytz timezones if found - ) - - # Holds the calculated offset to the timestamps of the VC Server - _TOFFSET = timedelta() - - # States for the Finite State Machine in _load - _ST_START, _ST_FEEDING, _ST_NOTFOUND = range(3) - - # Base NULL Date for VB/Excel date compatibility - NULLDATE = datetime(1899, 12, 30, 0, 0, 0) - - # To correct HH:MM:59.999 times - MILLISECOND = timedelta(microseconds=1000) - - # Large ping timeout - PING_TIMEOUT = 25.0 - - # Timezones for the different exchanges - _TZS = { - 'Europe/London': ('011', '024', '027', '036', '049', '092', '114', - # These are the global markets - '033', '034', '035', '043', '054', '096', '300',), - - 'Europe/Berlin': ('005', '006', '008', '012', '013', '014', '015', - '017', '019', '025', '029', '030', '037', '038', - '052', '053', '060', '061', '072', '073', '074', - '075', '080', '093', '094', '097', '111', '112', - '113',), - - 'Asia/Tokyo': ('031',), - 'Australia/Melbourne': ('032',), - 'America/Argentina/Buenos_Aires': ('044',), - 'America/Sao_Paulo': ('045',), - 'America/Mexico_City': ('046',), - 'America/Santiago': ('047',), - - 'US/Eastern': ('003', '004', '009', '010', '028', '040', '041', '055', - '090', '095', '099',), - 'US/Central': ('001', '002', '020', '021', '022', '023', '056',), - } - - # The global assets may have a different output timezoe - _TZOUT = { - '096.FTSE': 'Europe/London', - '096.FTEU3': 'Europe/London', - '096.MIB30': 'Europe/Berlin', - '096.SSMI': 'Europe/Berlin', - '096.HSI': 'Asia/Hong_Kong', - '096.BVSP': 'America/Sao_Paulo', - '096.MERVAL': 'America/Argentina/Buenos_Aires', - '096.DJI': 'US/Eastern', - '096.IXIC': 'US/Eastern', - '096.NDX': 'US/Eastern', - } - - # These global markets deliver data in local time dst adjuste unlike those - # from above and need a readjustment - _EXTRA_TIMEOFFSET = ('096',) - - _TIMEFRAME_BACKFILL = { - TimeFrame.Ticks: timedelta(days=1), - TimeFrame.MicroSeconds: timedelta(days=1), - TimeFrame.Seconds: timedelta(days=1), - TimeFrame.Minutes: timedelta(days=2), - TimeFrame.Days: timedelta(days=365), - TimeFrame.Weeks: timedelta(days=365*2), - TimeFrame.Months: timedelta(days=365*5), - TimeFrame.Years: timedelta(days=365*20), - } - - def _timeoffset(self): - '''Returns the calculated time offset local equipment -> data server''' - return self._TOFFSET - - def _gettzinput(self): - '''Returns the timezone to consider for the input data''' - return self._gettz(tzin=True) - - def _gettz(self, tzin=False): - '''Returns the default output timezone for the data - - This defaults to be the timezone in which the market is traded - ''' - # If no object has been provided by the user and a timezone can be - # found via contractdtails, then try to get it from pytz, which may or - # may not be available. - - # The timezone specifications returned by TWS seem to be abbreviations - # understood by pytz, but the full list which TWS may return is not - # documented and one of the abbreviations may fail - ptz = self.p.tz - tzstr = isinstance(ptz, string_types) - if ptz is not None and not tzstr: - return bt.utils.date.Localizer(ptz) - - if self._state == self._ST_NOTFOUND: - return None # nothing else can be done - - if not self.p.usetimezones: - return None - - try: - import pytz # keep the import very local - except ImportError: - return None # nothing can be done - - # dataname 010ABCXXXXX -> ABC (3, 4 and 5) is market code - if tzstr: - tzs = ptz - else: - tzs = None - - if not tzin: - if self.p.dataname in self._TZOUT: - tzs = self._TZOUT[self.p.dataname] - - if tzs is None: - for mktz, mktcodes in self._TZS.items(): - if self._mktcode in mktcodes: - tzs = mktz - break - - if tzs is None: - return None - - if isinstance(tzs, tzinfo): - return bt.utils.date.Localizer(tzs) - - if tzs: - try: - tz = pytz.timezone(tzs) - except pytz.UnknownTimeZoneError: - return None # nothing can be done - else: - return None - - # contractdetails there, import ok, timezone found, return it - return tz - - def islive(self): - '''Returns ``True`` to notify ``Cerebro`` that preloading and runonce - should be deactivated''' - return True - - def __init__(self, **kwargs): - self.store = vcstore.VCStore(**kwargs) - - # Correct a copy past directly from VisualChart - dataname = self.p.dataname - if dataname[3].isspace(): - dataname = dataname[0:2] + dataname[4:] - self.p.dataname = dataname - - self._dataname = '010' + self.p.dataname - self._mktcode = self.p.dataname[0:3] - - self._tradename = tradename = self.p.tradename or self._dataname - # Correct a copy past directly from VisualChart - if tradename[3].isspace(): - tradename = tradename[0:2] + tradename[4:] - self._tradename = tradename - - def setenvironment(self, env): - '''Receives an environment (cerebro) and passes it over to the store it - belongs to''' - super(VCData, self).setenvironment(env) - env.addstore(self.store) - - def start(self): - '''Starts the VC connecction and gets the real contract and - contractdetails if it exists''' - super(VCData, self).start() - - self._state = self._ST_START # mini finite state machine - - self._newticks = True # control processing of initial ticks - - self._pingtmout = self.PING_TIMEOUT # Initial timeout for ping - - self.idx = 1 # counter for the dataserie (vb is based at 1) - self.q = None # where bars are received - - # market time offsets - self._mktoffset = None - self._mktoff1 = None - self._mktoffdiff = None - - if not self.store.connected(): - # Not connected -> go away - self.put_notification(self.DISCONNECTED) - self._state = self._ST_NOTFOUND - return - - self.put_notification(self.CONNECTED) - # get real contract details with real conId (contractId) - self.qrt = queue.Queue() # to await a ping - self.store._rtdata(self, self._dataname) - symfound = self.qrt.get() - if not symfound: - # Kill any further action and signal it - self.put_notification(self.NOTSUBSCRIBED) - self.put_notification(self.DISCONNECTED) - self._state = self._ST_NOTFOUND - return - - if self.replaying: - # In this case don't request the final - # timeframe from vc, but the original that has to be replayed - self._tf, self._comp = self.p.timeframe, self.p.compression - else: - # Else (even if resampling) pass the final timeframe which may - # been modified by a resampling filter - self._tf, self._comp = self._timeframe, self._compression, - - self._ticking = self.store._ticking(self._tf) - self._syminfo = syminfo = self.store._symboldata(self._dataname) - - # For most markets: - # mktoffset == mktoff1 and substracting this value from reported times - # is enough to report the "market time". Visual Chart changes this from - # a value X to 0 if the appropriate setting in the GUI is changed to - # change display of time from local <-> market - # - # But some markets (at least 096XXX) that theoretically live in - # Europe/London seem to be displaced 1 hour to the west and an extra - # hour is needed. - # These markets do also need "usetimezoned" True to actually display - # the market time, because this is done internally using the - # definitions in TZOUTS - - # Record and calculate market offsets - self._mktoffset = timedelta(seconds=syminfo.TimeOffset) - # Add millisecond to pusth HH:MM:59.999 -> 00.000 unless ticks - if self.p.millisecond and not self._ticking: - self._mktoffset -= self.MILLISECOND - - self._mktoff1 = self._mktoffset - if self._mktcode in self._EXTRA_TIMEOFFSET: - # These codes live theoretically in - # (UTC+00:00) Dublin, Edinburgh, Lisbon, London which is - # 'Europe/London' - # But all experiments show the times to be displaced 1 hour to - # the west and hence the extra 3600 seconds - self._mktoffset -= timedelta(seconds=3600) - - self._mktoffdiff = self._mktoffset - self._mktoff1 - - if self._state == self._ST_START: - self.put_notification(self.DELAYED) - - # Now request the data and get a comms queue for it - self.q = self.store._directdata( - self, - self._dataname, - self._tf, self._comp, - self.p.fromdate, self.p.todate, - self.p.historical) - - self._state = self._ST_FEEDING - - def stop(self): - '''Stops and tells the store to stop''' - super(VCData, self).stop() - if self.q: - self.store._canceldirectdata(self.q) - - def _setserie(self, serie): - # Accepts a serie (COM Object) to use in ping events - self._serie = serie - - def haslivedata(self): - return self._laststatus == self.LIVE and self.q - - def _load(self): - if self._state == self._ST_NOTFOUND: - return False # nothing can be done - - while True: - try: - # tmout <> 0 only if resampling/replaying, else no waking up - tmout = self._qcheck * bool(self.resampling) - msg = self.q.get(timeout=tmout) - except queue.Empty: - return None - - if msg is None: - return False # end of stream - - if msg == self.store._RT_SHUTDOWN: - self.put_notification(self.DISCONNECTED) - return False # VC has exited - - if msg == self.store._RT_DISCONNECTED: - self.put_notification(self.CONNBROKEN) - continue - - if msg == self.store._RT_CONNECTED: - self.put_notification(self.CONNECTED) - self.put_notification(self.DELAYED) - continue - - if msg == self.store._RT_LIVE: - if self._laststatus != self.LIVE: - self.put_notification(self.LIVE) - continue - - if msg == self.store._RT_DELAYED: - if self._laststatus != self.DELAYED: - self.put_notification(self.DELAYED) - continue - - if isinstance(msg, integer_types): - self.put_notification(self.UNKNOWN, msg) - continue - - # it must be a bar - bar = msg - - # Put the tick into the bar - self.lines.open[0] = bar.Open - self.lines.high[0] = bar.High - self.lines.low[0] = bar.Low - self.lines.close[0] = bar.Close - self.lines.volume[0] = bar.Volume - self.lines.openinterest[0] = bar.OpenInterest - - # Convert time to "market" time (096 exception) - dt = self.NULLDATE + timedelta(days=bar.Date) - self._mktoffset - self.lines.datetime[0] = date2num(dt) - - return True - - # - # DS Events - # - def _getpingtmout(self): - '''Returns the actual ping timeout for PumpEvents to wake up and call - ping, which will check if the not yet delivered bar can be - delivered. The bar may be stalled because vc awaits a new tick and - during low negotiation hour this can take several seconds after the - actual expected delivery time''' - if self._ticking: - return -1 # no timeout - - return self._pingtmout - - def OnNewDataSerieBar(self, DataSerie, forcepush=False): - # Processes the COM Event (also called directly when 1st creating the - # data serie - ssize = DataSerie.Size - - if ssize - self.idx > 1: - # More than 1 bar on-board -> delay in place - if self._laststatus != self.DELAYED: - self.q.put(self.store._RT_DELAYED) - - # return everything if original tf is ticks or force pushing - ssize += forcepush or self._ticking - for idx in range(self.idx, ssize): - bar = DataSerie.GetBarValues(idx) - self.q.put(bar) - - if not forcepush and not self._ticking and ssize: - # A bar has been left in place - dtnow = datetime.now() - self._TOFFSET # adjust local time - - bar = DataSerie.GetBarValues(ssize) - dt = self.NULLDATE + timedelta(days=bar.Date) - self._mktoffdiff - if dtnow < dt: - # A bar is there, not deliverable yet - LIVE - if self._laststatus != self.LIVE: - self.q.put(self.store._RT_LIVE) - - # Adjust ping timeout to the bar boundary (plus mini leeway) - self._pingtmout = (dt - dtnow).total_seconds() + 0.5 - - else: - self._pingtmout = self.PING_TIMEOUT # no bar left, long pause - self.q.put(bar) # push bar and update index - ssize += 1 # pushed last one out - - # Write down the last processed bar - self.idx = max(1, ssize) - - def ping(self): - ssize = self._serie.Size - - if self.idx > ssize: - return # no bar available - - if self._laststatus == self.CONNBROKEN: - self._pingtmout = self.PING_TIMEOUT - return # do not push during disconnection - - dtnow = datetime.now() - self._TOFFSET - # CHECK: there should be a maximum of 1 bar when pinging - # In any case the algorithm doesn't hurt - for idx in range(self.idx, ssize + 1): # reach ssize - bar = self._serie.GetBarValues(self.idx) - # dt = (self.NULLDATE + timedelta(days=bar.Date) + self._mktoff1) - dt = self.NULLDATE + timedelta(days=bar.Date) - self._mktoffdiff - if dtnow < dt: - self._pingtmout = (dt - dtnow).total_seconds() + 0.5 - break # cannot deliver anything - - # Adjust ping timeout to the bar boundary (plus mini leeway) - self._pingtmout = self.PING_TIMEOUT # no bar, nothing to check - self.q.put(bar) # push bar and update index - self.idx += 1 - - # - # RTEvents - # - # Can be used on a per data basis to check the connection status - if False: - def OnInternalEvent(self, p1, p2, p3): - if p1 != 1: # Apparently "Connection Event" - return - - if p2 == self.lastconn: - return # do not notify twice - - self.lastconn = p2 # keep new notification code - - # p2 should be 0 (disconn), 1 (conn) - self.store._vcrt_connection(self.store._RT_BASEMSG - p2) - - def OnNewTicks(self, ArrayTicks): - # Process the COM Event for New Ticks. This is only used temporarily - # for 2 purposes - # - # 1. If tick.Field == Field_Description is returned, it can be checked - # if the requested symbol has been found or not (tick.Date == 0 -> not - # found). tick.Text has 'Not Found', but this is more likely to change - # Once Field_Description has been seen, the 2nd stage takes place - # - # 2. When a tick.Field == Field_Time is seen and tick.TickIndex == 0, - # the 1st tick of a second is seen and the tick.Date value can be used - # to calculate a time offset to the feed server. This is later used to - # check if a bar is due delivery or not - # - # After this the reception of ticks is cancelled - - aticks = ArrayTicks[0] - # self.debug_ticks(aticks) - ticks = dict() - for tick in aticks: - ticks[tick.Field] = tick - - if self.store.vcrtmod.Field_Description in ticks: - if self._newticks: - self._newticks = False - hasdate = bool(ticks.get(self.store.vcrtmod.Field_Date, False)) - self.qrt.put(hasdate) - return - - else: - try: - tick = ticks[self.store.vcrtmod.Field_Time] - except KeyError: - return - - if tick.TickIndex == 0 and self._mktoff1 is not None: - # Adjust the tick time using the mktoffset (with the 096 excep) - dttick = (self.NULLDATE + timedelta(days=tick.Date) + - self._mktoff1) - - self._TOFFSET = datetime.now() - dttick - if self._mktcode in self._EXTRA_TIMEOFFSET: - # These codes live theoretically in (UTC+00:00) Dublin, - # Edinburgh, Lisbon, London which is 'Europe/London' - # But all experiments show the times to be displaced 1 - # hour to the west and hence the extra 3600 seconds - self._TOFFSET -= timedelta(seconds=3600) - - # Cancel ticks - self._vcrt.CancelSymbolFeed(self._dataname, False) - - def debug_ticks(self, ticks): - print('*' * 50, 'DEBUG OnNewTicks') - for tick in ticks: - print('-' * 40) - print('tick.SymbolCode', tick.SymbolCode.encode('ascii', 'ignore')) - fname = self.store.vcrtfields.get(tick.Field, tick.Field) - print(' tick.Field : {} ({})'.format(fname, tick.Field)) - print(' tick.FieldEx :', tick.FieldEx) - tdate = tick.Date - if tdate: - tdate = self.NULLDATE + timedelta(days=tick.Date) - print(' tick.Date :', tdate) - - print(' tick.Index :', tick.TickIndex) - print(' tick.Value :', tick.Value) - print(' tick.Text :', tick.Text.encode('ascii', 'ignore')) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py deleted file mode 100644 index f0cf9779b270e1aead32845006f8b881fcba37ad..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py +++ /dev/null @@ -1,273 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from torch import Tensor, nn -from torchvision.ops.boxes import nms -from transformers import BertConfig, BertModel, BertPreTrainedModel -from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions - - -class BertModelWarper(nn.Module): - def __init__(self, bert_model): - super().__init__() - # self.bert = bert_modelc - - self.config = bert_model.config - self.embeddings = bert_model.embeddings - self.encoder = bert_model.encoder - self.pooler = bert_model.pooler - - self.get_extended_attention_mask = bert_model.get_extended_attention_mask - self.invert_attention_mask = bert_model.invert_attention_mask - self.get_head_mask = bert_model.get_head_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). - """ - output_attentions = ( - output_attentions if output_attentions is not None else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - batch_size, seq_length = input_shape - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = ( - past_key_values[0][0].shape[2] if past_key_values is not None else 0 - ) - - if attention_mask is None: - attention_mask = torch.ones( - ((batch_size, seq_length + past_key_values_length)), device=device - ) - if token_type_ids is None: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = self.get_extended_attention_mask( - attention_mask, input_shape, device - ) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -class TextEncoderShell(nn.Module): - def __init__(self, text_encoder): - super().__init__() - self.text_encoder = text_encoder - self.config = self.text_encoder.config - - def forward(self, **kw): - # feed into text encoder - return self.text_encoder(**kw) - - -def generate_masks_with_special_tokens(tokenized, special_tokens_list, tokenizer): - """Generate attention mask between each pair of special tokens - Args: - input_ids (torch.Tensor): input ids. Shape: [bs, num_token] - special_tokens_mask (list): special tokens mask. - Returns: - torch.Tensor: attention mask between each special tokens. - """ - input_ids = tokenized["input_ids"] - bs, num_token = input_ids.shape - # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens - special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool() - for special_token in special_tokens_list: - special_tokens_mask |= input_ids == special_token - - # idxs: each row is a list of indices of special tokens - idxs = torch.nonzero(special_tokens_mask) - - # generate attention mask and positional ids - attention_mask = ( - torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1) - ) - position_ids = torch.zeros((bs, num_token), device=input_ids.device) - previous_col = 0 - for i in range(idxs.shape[0]): - row, col = idxs[i] - if (col == 0) or (col == num_token - 1): - attention_mask[row, col, col] = True - position_ids[row, col] = 0 - else: - attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True - position_ids[row, previous_col + 1 : col + 1] = torch.arange( - 0, col - previous_col, device=input_ids.device - ) - - previous_col = col - - # # padding mask - # padding_mask = tokenized['attention_mask'] - # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool() - - return attention_mask, position_ids.to(torch.long) - - -def generate_masks_with_special_tokens_and_transfer_map(tokenized, special_tokens_list, tokenizer): - """Generate attention mask between each pair of special tokens - Args: - input_ids (torch.Tensor): input ids. Shape: [bs, num_token] - special_tokens_mask (list): special tokens mask. - Returns: - torch.Tensor: attention mask between each special tokens. - """ - input_ids = tokenized["input_ids"] - bs, num_token = input_ids.shape - # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens - special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool() - for special_token in special_tokens_list: - special_tokens_mask |= input_ids == special_token - - # idxs: each row is a list of indices of special tokens - idxs = torch.nonzero(special_tokens_mask) - - # generate attention mask and positional ids - attention_mask = ( - torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1) - ) - position_ids = torch.zeros((bs, num_token), device=input_ids.device) - cate_to_token_mask_list = [[] for _ in range(bs)] - previous_col = 0 - for i in range(idxs.shape[0]): - row, col = idxs[i] - if (col == 0) or (col == num_token - 1): - attention_mask[row, col, col] = True - position_ids[row, col] = 0 - else: - attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True - position_ids[row, previous_col + 1 : col + 1] = torch.arange( - 0, col - previous_col, device=input_ids.device - ) - c2t_maski = torch.zeros((num_token), device=input_ids.device).bool() - c2t_maski[previous_col + 1 : col] = True - cate_to_token_mask_list[row].append(c2t_maski) - previous_col = col - - cate_to_token_mask_list = [ - torch.stack(cate_to_token_mask_listi, dim=0) - for cate_to_token_mask_listi in cate_to_token_mask_list - ] - - # # padding mask - # padding_mask = tokenized['attention_mask'] - # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool() - - return attention_mask, position_ids.to(torch.long), cate_to_token_mask_list diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/grounded_sam_inference.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/grounded_sam_inference.py deleted file mode 100644 index 4dfb31126a3cc39f9a0c80d7c3d161ff2cefa9ac..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/grounded_sam_inference.py +++ /dev/null @@ -1,256 +0,0 @@ -import sys -import os - -project_dir = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(project_dir) -import argparse -import os -import copy - -import numpy as np -import json -import torch -from PIL import Image, ImageDraw, ImageFont - -# Grounding DINO -import GroundedSAM.GroundingDINO.groundingdino.datasets.transforms as T -from GroundedSAM.GroundingDINO.groundingdino.models import build_model -from GroundedSAM.GroundingDINO.groundingdino.util import box_ops -from GroundedSAM.GroundingDINO.groundingdino.util.slconfig import SLConfig -from GroundedSAM.GroundingDINO.groundingdino.util.utils import clean_state_dict, get_phrases_from_posmap - -# segment anything -from GroundedSAM.segment_anything.segment_anything import build_sam, SamPredictor -import cv2 -import numpy as np -import matplotlib.pyplot as plt -from glob import glob -import ipdb -import imageio -from tqdm import tqdm - - -''' -processing multiple images with grounded sam -only one text one time -''' - -def load_image(image_path): - # load image - image_pil = Image.open(image_path).convert("RGB") # load image - - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image, _ = transform(image_pil, None) # 3, h, w - return image_pil, image - - -def load_model(model_config_path, model_checkpoint_path, device): - args = SLConfig.fromfile(model_config_path) - args.device = device - model = build_model(args) - checkpoint = torch.load(model_checkpoint_path, map_location="cpu") - load_res = model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False) - print(load_res) - _ = model.eval() - return model - - -def get_grounding_output(model, image, caption, box_threshold, text_threshold, with_logits=True, device="cpu"): - caption = caption.lower() - caption = caption.strip() - if not caption.endswith("."): - caption = caption + "." - model = model.to(device) - image = image.to(device) - with torch.no_grad(): - outputs = model(image[None], captions=[caption]) - logits = outputs["pred_logits"].cpu().sigmoid()[0] # (nq, 256) - boxes = outputs["pred_boxes"].cpu()[0] # (nq, 4) - logits.shape[0] - - # filter output - logits_filt = logits.clone() - boxes_filt = boxes.clone() - filt_mask = logits_filt.max(dim=1)[0] > box_threshold - logits_filt = logits_filt[filt_mask] # num_filt, 256 - boxes_filt = boxes_filt[filt_mask] # num_filt, 4 - logits_filt.shape[0] - - # get phrase - tokenlizer = model.tokenizer - tokenized = tokenlizer(caption) - # build pred - pred_phrases = [] - for logit, box in zip(logits_filt, boxes_filt): - pred_phrase = get_phrases_from_posmap(logit > text_threshold, tokenized, tokenlizer) - if with_logits: - pred_phrases.append(pred_phrase + f"({str(logit.max().item())[:4]})") - else: - pred_phrases.append(pred_phrase) - - return boxes_filt, pred_phrases, logits_filt - -def show_mask(mask, ax, random_color=False): - if random_color: - color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0) - else: - color = np.array([30/255, 144/255, 255/255, 0.6]) - h, w = mask.shape[-2:] - mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1) - ax.imshow(mask_image) - - -def show_box(box, ax, label): - x0, y0 = box[0], box[1] - w, h = box[2] - box[0], box[3] - box[1] - ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor='green', facecolor=(0,0,0,0), lw=2)) - ax.text(x0, y0, label) - - -def save_mask_data(output_dir, mask_list, box_list, label_list): - value = 0 # 0 for background - - mask_img = torch.zeros(mask_list.shape[-2:]) - for idx, mask in enumerate(mask_list): - mask_img[mask.cpu().numpy()[0] == True] = value + idx + 1 - plt.figure(figsize=(10, 10)) - plt.imshow(mask_img.numpy()) - plt.axis('off') - plt.savefig(os.path.join(output_dir, 'mask.jpg'), bbox_inches="tight", dpi=300, pad_inches=0.0) - - json_data = [{ - 'value': value, - 'label': 'background' - }] - for label, box in zip(label_list, box_list): - value += 1 - name, logit = label.split('(') - logit = logit[:-1] # the last is ')' - json_data.append({ - 'value': value, - 'label': name, - 'logit': float(logit), - 'box': box.numpy().tolist(), - }) - with open(os.path.join(output_dir, 'mask.json'), 'w') as f: - json.dump(json_data, f) - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser("Grounded-Segment-Anything Demo", add_help=True) - parser.add_argument("-d", "--data", type=str, required=True, help="path to image file") - parser.add_argument("-t", "--text_prompt", type=str, required=True, help="text prompt") - parser.add_argument( - "--output_dir", "-o", type=str, default="outputs", required=False, help="output directory" - ) - - parser.add_argument("--config", type=str, - default="experts/GroundedSAM/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py", - help="path to config file") - parser.add_argument( - "--grounded_checkpoint", type=str, default="checkpoints/groundingdino_swinb_cogcoor.pth", help="path to checkpoint file" - ) - parser.add_argument( - "--sam_checkpoint", type=str, default="checkpoints/sam_vit_h_4b8939.pth", help="path to checkpoint file" - ) - - parser.add_argument("--box_threshold", type=float, default=0.3, help="box threshold") - parser.add_argument("--text_threshold", type=float, default=0.25, help="text threshold") - - parser.add_argument("--device", type=str, default="cpu", help="running on cpu only!, default=False") - - parser.add_argument("--masked_out", action='store_true', help="save the masked image") - args = parser.parse_args() - - # cfg - config_file = args.config # change the path of the model config file - grounded_checkpoint = args.grounded_checkpoint # change the path of the model - sam_checkpoint = args.sam_checkpoint - # image_path = args.data - text_prompt = args.text_prompt - output_dir = os.path.dirname(os.path.dirname(args.data)) - box_threshold = args.box_threshold - text_threshold = args.text_threshold - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - # make dir - text_prompt_dir = "-".join(text_prompt.split(" ")) - - # text_prompt_dir - os.makedirs(output_dir, exist_ok=True) - # os.makedirs(os.path.join(output_dir, "raw"), exist_ok=True) - os.makedirs(os.path.join(output_dir, "{}.viz".format(text_prompt_dir)), exist_ok=True) - os.makedirs(os.path.join(output_dir, "{}.mask".format(text_prompt_dir)), exist_ok=True) - - # load model - model = load_model(config_file, grounded_checkpoint, device=device) - # initialize SAM - predictor = SamPredictor(build_sam(checkpoint=sam_checkpoint).to(device)) - - if os.path.isdir(args.data): - images = sorted(glob(os.path.join(args.data, "*.jpg"))) + sorted(glob(os.path.join(args.data, "*.png"))) - else: - images = [args.data] - - for image_path in tqdm(images): - fname = os.path.basename(image_path).split('.')[0] - # load image - image_pil, image = load_image(image_path) - - # run grounding dino model - boxes_filt, pred_phrases, logits_filt = get_grounding_output( - model, image, text_prompt, box_threshold, text_threshold, device=device - ) - - image = cv2.imread(image_path) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - predictor.set_image(image) - - size = image_pil.size - H, W = size[1], size[0] - for i in range(boxes_filt.size(0)): - boxes_filt[i] = boxes_filt[i] * torch.Tensor([W, H, W, H]) - boxes_filt[i][:2] -= boxes_filt[i][2:] / 2 - boxes_filt[i][2:] += boxes_filt[i][:2] - - boxes_filt = boxes_filt.cpu() - transformed_boxes = predictor.transform.apply_boxes_torch(boxes_filt, image.shape[:2]).to(device) - - masks, _, _ = predictor.predict_torch( - point_coords = None, - point_labels = None, - boxes = transformed_boxes.to(device), - multimask_output = False, - ) - - # draw output image - plt.figure(figsize=(10, 10)) - plt.imshow(image) - for mask in masks: - show_mask(mask.cpu().numpy(), plt.gca(), random_color=True) - for box, label in zip(boxes_filt, pred_phrases): - show_box(box.numpy(), plt.gca(), label) - - plt.axis('off') - plt.savefig( - os.path.join(output_dir, "{}.viz".format(text_prompt_dir), fname + ".jpg"), - bbox_inches="tight", dpi=300, pad_inches=0.0 - ) - - # ipdb.set_trace() - max_logit_index = logits_filt.max(-1)[0].argmax().item() - _mask = masks[max_logit_index,0].cpu().numpy().astype(np.uint8) * 255 - imageio.imwrite(os.path.join(output_dir, "{}.mask".format(text_prompt_dir), fname + ".png"), _mask) - - if args.masked_out: - masked_image = np.asarray(image_pil).astype(np.float32) * _mask[:,:,None].astype(np.float32) / 255 - imageio.imwrite(os.path.join(output_dir, "masked_" + fname + ".png"), masked_image.astype(np.uint8)) - # save_mask_data(output_dir, masks, boxes_filt, pred_phrases) - diff --git a/spaces/MaximilianChen/Casper/README.md b/spaces/MaximilianChen/Casper/README.md deleted file mode 100644 index f9b1eb81f7233a212b396d161f7653fef4938ba9..0000000000000000000000000000000000000000 --- a/spaces/MaximilianChen/Casper/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MaximilianChen Casper -emoji: 📚 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MedicalAILabo/Xp-age/lib/__init__.py b/spaces/MedicalAILabo/Xp-age/lib/__init__.py deleted file mode 100644 index 768b90dccf5d730fe0340bd35b1dd2e43deea256..0000000000000000000000000000000000000000 --- a/spaces/MedicalAILabo/Xp-age/lib/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from .options import ( - ParamSet, - set_options, - save_parameter, - print_parameter - ) -from .dataloader import create_dataloader -from .framework import create_model -from .metrics import set_eval -from .logger import BaseLogger - -__all__ = [ - 'ParamSet', - 'set_options', - 'print_parameter', - 'save_parameter', - 'create_dataloader', - 'create_model', - 'set_eval', - 'BaseLogger' - ] diff --git a/spaces/MichaelWelsch/FreeVC/speaker_encoder/data_objects/utterance.py b/spaces/MichaelWelsch/FreeVC/speaker_encoder/data_objects/utterance.py deleted file mode 100644 index 0768c3420f422a7464f305b4c1fb6752c57ceda7..0000000000000000000000000000000000000000 --- a/spaces/MichaelWelsch/FreeVC/speaker_encoder/data_objects/utterance.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy as np - - -class Utterance: - def __init__(self, frames_fpath, wave_fpath): - self.frames_fpath = frames_fpath - self.wave_fpath = wave_fpath - - def get_frames(self): - return np.load(self.frames_fpath) - - def random_partial(self, n_frames): - """ - Crops the frames into a partial utterance of n_frames - - :param n_frames: The number of frames of the partial utterance - :return: the partial utterance frames and a tuple indicating the start and end of the - partial utterance in the complete utterance. - """ - frames = self.get_frames() - if frames.shape[0] == n_frames: - start = 0 - else: - start = np.random.randint(0, frames.shape[0] - n_frames) - end = start + n_frames - return frames[start:end], (start, end) \ No newline at end of file diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/groundingdino.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/groundingdino.py deleted file mode 100644 index 052df6220595a1b39b7e2aea37ca4872d113dfd2..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/groundingdino.py +++ /dev/null @@ -1,395 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR model and criterion classes. -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ -# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR) -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# ------------------------------------------------------------------------ -import copy -from typing import List - -import torch -import torch.nn.functional as F -from torch import nn -from torchvision.ops.boxes import nms -from transformers import AutoTokenizer, BertModel, BertTokenizer, RobertaModel, RobertaTokenizerFast - -from groundingdino.util import box_ops, get_tokenlizer -from groundingdino.util.misc import ( - NestedTensor, - accuracy, - get_world_size, - interpolate, - inverse_sigmoid, - is_dist_avail_and_initialized, - nested_tensor_from_tensor_list, -) -from groundingdino.util.utils import get_phrases_from_posmap -from groundingdino.util.visualizer import COCOVisualizer -from groundingdino.util.vl_utils import create_positive_map_from_span - -from ..registry import MODULE_BUILD_FUNCS -from .backbone import build_backbone -from .bertwarper import ( - BertModelWarper, - generate_masks_with_special_tokens, - generate_masks_with_special_tokens_and_transfer_map, -) -from .transformer import build_transformer -from .utils import MLP, ContrastiveEmbed, sigmoid_focal_loss - - -class GroundingDINO(nn.Module): - """This is the Cross-Attention Detector module that performs object detection""" - - def __init__( - self, - backbone, - transformer, - num_queries, - aux_loss=False, - iter_update=False, - query_dim=2, - num_feature_levels=1, - nheads=8, - # two stage - two_stage_type="no", # ['no', 'standard'] - dec_pred_bbox_embed_share=True, - two_stage_class_embed_share=True, - two_stage_bbox_embed_share=True, - num_patterns=0, - dn_number=100, - dn_box_noise_scale=0.4, - dn_label_noise_ratio=0.5, - dn_labelbook_size=100, - text_encoder_type="bert-base-uncased", - sub_sentence_present=True, - max_text_len=256, - ): - """Initializes the model. - Parameters: - backbone: torch module of the backbone to be used. See backbone.py - transformer: torch module of the transformer architecture. See transformer.py - num_queries: number of object queries, ie detection slot. This is the maximal number of objects - Conditional DETR can detect in a single image. For COCO, we recommend 100 queries. - aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used. - """ - super().__init__() - self.num_queries = num_queries - self.transformer = transformer - self.hidden_dim = hidden_dim = transformer.d_model - self.num_feature_levels = num_feature_levels - self.nheads = nheads - self.max_text_len = 256 - self.sub_sentence_present = sub_sentence_present - - # setting query dim - self.query_dim = query_dim - assert query_dim == 4 - - # for dn training - self.num_patterns = num_patterns - self.dn_number = dn_number - self.dn_box_noise_scale = dn_box_noise_scale - self.dn_label_noise_ratio = dn_label_noise_ratio - self.dn_labelbook_size = dn_labelbook_size - - # bert - self.tokenizer = get_tokenlizer.get_tokenlizer(text_encoder_type) - self.bert = get_tokenlizer.get_pretrained_language_model(text_encoder_type) - self.bert.pooler.dense.weight.requires_grad_(False) - self.bert.pooler.dense.bias.requires_grad_(False) - self.bert = BertModelWarper(bert_model=self.bert) - - self.feat_map = nn.Linear(self.bert.config.hidden_size, self.hidden_dim, bias=True) - nn.init.constant_(self.feat_map.bias.data, 0) - nn.init.xavier_uniform_(self.feat_map.weight.data) - # freeze - - # special tokens - self.specical_tokens = self.tokenizer.convert_tokens_to_ids(["[CLS]", "[SEP]", ".", "?"]) - - # prepare input projection layers - if num_feature_levels > 1: - num_backbone_outs = len(backbone.num_channels) - input_proj_list = [] - for _ in range(num_backbone_outs): - in_channels = backbone.num_channels[_] - input_proj_list.append( - nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - ) - ) - for _ in range(num_feature_levels - num_backbone_outs): - input_proj_list.append( - nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(32, hidden_dim), - ) - ) - in_channels = hidden_dim - self.input_proj = nn.ModuleList(input_proj_list) - else: - assert two_stage_type == "no", "two_stage_type should be no if num_feature_levels=1 !!!" - self.input_proj = nn.ModuleList( - [ - nn.Sequential( - nn.Conv2d(backbone.num_channels[-1], hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - ) - ] - ) - - self.backbone = backbone - self.aux_loss = aux_loss - self.box_pred_damping = box_pred_damping = None - - self.iter_update = iter_update - assert iter_update, "Why not iter_update?" - - # prepare pred layers - self.dec_pred_bbox_embed_share = dec_pred_bbox_embed_share - # prepare class & box embed - _class_embed = ContrastiveEmbed() - - _bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3) - nn.init.constant_(_bbox_embed.layers[-1].weight.data, 0) - nn.init.constant_(_bbox_embed.layers[-1].bias.data, 0) - - if dec_pred_bbox_embed_share: - box_embed_layerlist = [_bbox_embed for i in range(transformer.num_decoder_layers)] - else: - box_embed_layerlist = [ - copy.deepcopy(_bbox_embed) for i in range(transformer.num_decoder_layers) - ] - class_embed_layerlist = [_class_embed for i in range(transformer.num_decoder_layers)] - self.bbox_embed = nn.ModuleList(box_embed_layerlist) - self.class_embed = nn.ModuleList(class_embed_layerlist) - self.transformer.decoder.bbox_embed = self.bbox_embed - self.transformer.decoder.class_embed = self.class_embed - - # two stage - self.two_stage_type = two_stage_type - assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format( - two_stage_type - ) - if two_stage_type != "no": - if two_stage_bbox_embed_share: - assert dec_pred_bbox_embed_share - self.transformer.enc_out_bbox_embed = _bbox_embed - else: - self.transformer.enc_out_bbox_embed = copy.deepcopy(_bbox_embed) - - if two_stage_class_embed_share: - assert dec_pred_bbox_embed_share - self.transformer.enc_out_class_embed = _class_embed - else: - self.transformer.enc_out_class_embed = copy.deepcopy(_class_embed) - - self.refpoint_embed = None - - self._reset_parameters() - - def _reset_parameters(self): - # init input_proj - for proj in self.input_proj: - nn.init.xavier_uniform_(proj[0].weight, gain=1) - nn.init.constant_(proj[0].bias, 0) - - def init_ref_points(self, use_num_queries): - self.refpoint_embed = nn.Embedding(use_num_queries, self.query_dim) - - def forward(self, samples: NestedTensor, targets: List = None, **kw): - """The forward expects a NestedTensor, which consists of: - - samples.tensor: batched images, of shape [batch_size x 3 x H x W] - - samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels - - It returns a dict with the following elements: - - "pred_logits": the classification logits (including no-object) for all queries. - Shape= [batch_size x num_queries x num_classes] - - "pred_boxes": The normalized boxes coordinates for all queries, represented as - (center_x, center_y, width, height). These values are normalized in [0, 1], - relative to the size of each individual image (disregarding possible padding). - See PostProcess for information on how to retrieve the unnormalized bounding box. - - "aux_outputs": Optional, only returned when auxilary losses are activated. It is a list of - dictionnaries containing the two above keys for each decoder layer. - """ - if targets is None: - captions = kw["captions"] - else: - captions = [t["caption"] for t in targets] - len(captions) - - # encoder texts - tokenized = self.tokenizer(captions, padding="longest", return_tensors="pt").to( - samples.device - ) - ( - text_self_attention_masks, - position_ids, - cate_to_token_mask_list, - ) = generate_masks_with_special_tokens_and_transfer_map( - tokenized, self.specical_tokens, self.tokenizer - ) - - if text_self_attention_masks.shape[1] > self.max_text_len: - text_self_attention_masks = text_self_attention_masks[ - :, : self.max_text_len, : self.max_text_len - ] - position_ids = position_ids[:, : self.max_text_len] - tokenized["input_ids"] = tokenized["input_ids"][:, : self.max_text_len] - tokenized["attention_mask"] = tokenized["attention_mask"][:, : self.max_text_len] - tokenized["token_type_ids"] = tokenized["token_type_ids"][:, : self.max_text_len] - - # extract text embeddings - if self.sub_sentence_present: - tokenized_for_encoder = {k: v for k, v in tokenized.items() if k != "attention_mask"} - tokenized_for_encoder["attention_mask"] = text_self_attention_masks - tokenized_for_encoder["position_ids"] = position_ids - else: - # import ipdb; ipdb.set_trace() - tokenized_for_encoder = tokenized - - bert_output = self.bert(**tokenized_for_encoder) # bs, 195, 768 - - encoded_text = self.feat_map(bert_output["last_hidden_state"]) # bs, 195, d_model - text_token_mask = tokenized.attention_mask.bool() # bs, 195 - # text_token_mask: True for nomask, False for mask - # text_self_attention_masks: True for nomask, False for mask - - if encoded_text.shape[1] > self.max_text_len: - encoded_text = encoded_text[:, : self.max_text_len, :] - text_token_mask = text_token_mask[:, : self.max_text_len] - position_ids = position_ids[:, : self.max_text_len] - text_self_attention_masks = text_self_attention_masks[ - :, : self.max_text_len, : self.max_text_len - ] - - text_dict = { - "encoded_text": encoded_text, # bs, 195, d_model - "text_token_mask": text_token_mask, # bs, 195 - "position_ids": position_ids, # bs, 195 - "text_self_attention_masks": text_self_attention_masks, # bs, 195,195 - } - - # import ipdb; ipdb.set_trace() - - if isinstance(samples, (list, torch.Tensor)): - samples = nested_tensor_from_tensor_list(samples) - features, poss = self.backbone(samples) - - srcs = [] - masks = [] - for l, feat in enumerate(features): - src, mask = feat.decompose() - srcs.append(self.input_proj[l](src)) - masks.append(mask) - assert mask is not None - if self.num_feature_levels > len(srcs): - _len_srcs = len(srcs) - for l in range(_len_srcs, self.num_feature_levels): - if l == _len_srcs: - src = self.input_proj[l](features[-1].tensors) - else: - src = self.input_proj[l](srcs[-1]) - m = samples.mask - mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0] - pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype) - srcs.append(src) - masks.append(mask) - poss.append(pos_l) - - input_query_bbox = input_query_label = attn_mask = dn_meta = None - hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer( - srcs, masks, input_query_bbox, poss, input_query_label, attn_mask, text_dict - ) - - # deformable-detr-like anchor update - outputs_coord_list = [] - for dec_lid, (layer_ref_sig, layer_bbox_embed, layer_hs) in enumerate( - zip(reference[:-1], self.bbox_embed, hs) - ): - layer_delta_unsig = layer_bbox_embed(layer_hs) - layer_outputs_unsig = layer_delta_unsig + inverse_sigmoid(layer_ref_sig) - layer_outputs_unsig = layer_outputs_unsig.sigmoid() - outputs_coord_list.append(layer_outputs_unsig) - outputs_coord_list = torch.stack(outputs_coord_list) - - # output - outputs_class = torch.stack( - [ - layer_cls_embed(layer_hs, text_dict) - for layer_cls_embed, layer_hs in zip(self.class_embed, hs) - ] - ) - out = {"pred_logits": outputs_class[-1], "pred_boxes": outputs_coord_list[-1]} - - # # for intermediate outputs - # if self.aux_loss: - # out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord_list) - - # # for encoder output - # if hs_enc is not None: - # # prepare intermediate outputs - # interm_coord = ref_enc[-1] - # interm_class = self.transformer.enc_out_class_embed(hs_enc[-1], text_dict) - # out['interm_outputs'] = {'pred_logits': interm_class, 'pred_boxes': interm_coord} - # out['interm_outputs_for_matching_pre'] = {'pred_logits': interm_class, 'pred_boxes': init_box_proposal} - - return out - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_coord): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - return [ - {"pred_logits": a, "pred_boxes": b} - for a, b in zip(outputs_class[:-1], outputs_coord[:-1]) - ] - - -@MODULE_BUILD_FUNCS.registe_with_name(module_name="groundingdino") -def build_groundingdino(args): - - backbone = build_backbone(args) - transformer = build_transformer(args) - - dn_labelbook_size = args.dn_labelbook_size - dec_pred_bbox_embed_share = args.dec_pred_bbox_embed_share - sub_sentence_present = args.sub_sentence_present - - model = GroundingDINO( - backbone, - transformer, - num_queries=args.num_queries, - aux_loss=True, - iter_update=True, - query_dim=4, - num_feature_levels=args.num_feature_levels, - nheads=args.nheads, - dec_pred_bbox_embed_share=dec_pred_bbox_embed_share, - two_stage_type=args.two_stage_type, - two_stage_bbox_embed_share=args.two_stage_bbox_embed_share, - two_stage_class_embed_share=args.two_stage_class_embed_share, - num_patterns=args.num_patterns, - dn_number=0, - dn_box_noise_scale=args.dn_box_noise_scale, - dn_label_noise_ratio=args.dn_label_noise_ratio, - dn_labelbook_size=dn_labelbook_size, - text_encoder_type=args.text_encoder_type, - sub_sentence_present=sub_sentence_present, - max_text_len=args.max_text_len, - ) - - return model diff --git a/spaces/MirageML/sjc/sd1/ldm/lr_scheduler.py b/spaces/MirageML/sjc/sd1/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/sd1/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. - """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. - self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/MrVicente/RA-BART/custom_bart/bart_mask_attention.py b/spaces/MrVicente/RA-BART/custom_bart/bart_mask_attention.py deleted file mode 100644 index 83b117663960b3a6a78c3e4885fd3b3b39cfffea..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/custom_bart/bart_mask_attention.py +++ /dev/null @@ -1,238 +0,0 @@ -############################# -# Imports -############################# - -# Python modules -from typing import Optional, Tuple - -# Remote modules -import torch -from torch import nn - -# Local modules -from .attention_utils import update_weights_regarding_relations_on_specific_head - - -class BartCustomMaskAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__( - self, - embed_dim: int, - num_heads: int, - dropout: float = 0.0, - is_decoder: bool = False, - bias: bool = True, - num_relation_kinds: int = 0, - heads_mask: Optional[torch.Tensor] = None, - ): - super().__init__() - self.embed_dim = embed_dim - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - - if (self.head_dim * num_heads) != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}" - f" and `num_heads`: {num_heads})." - ) - if heads_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {heads_mask.size()}" - ) - self.heads_mask = heads_mask - - self.scaling = self.head_dim**-0.5 - self.is_decoder = is_decoder - - self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - - self.num_relation_kinds = num_relation_kinds - - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - key_value_states: Optional[torch.Tensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - output_attentions: bool = False, - relation_inputs: Optional[torch.Tensor] = None, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel""" - - # if key_value_states are provided this layer is used as a cross-attention layer - # for the decoder - is_cross_attention = key_value_states is not None - - bsz, tgt_len, embed_dim = hidden_states.size() - - #print(relation_inputs.shape, 'VS ', (bsz, tgt_len, tgt_len)) - if relation_inputs is None: - # TODO - relation_inputs = torch.zeros((bsz, tgt_len, tgt_len)).to('cuda').long() - assert relation_inputs.shape == (bsz, tgt_len, tgt_len) - - # get query proj - query_states = self.q_proj(hidden_states) * self.scaling - # get key, value proj - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - key_states = past_key_value[0] - value_states = past_key_value[1] - elif is_cross_attention: - # cross_attentions - key_states = self._shape(self.k_proj(key_value_states), -1, bsz) - value_states = self._shape(self.v_proj(key_value_states), -1, bsz) - elif past_key_value is not None: - # reuse k, v, self_attention - key_states = self._shape(self.k_proj(hidden_states), -1, bsz) - value_states = self._shape(self.v_proj(hidden_states), -1, bsz) - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - else: - # self_attention - key_states = self._shape(self.k_proj(hidden_states), -1, bsz) - value_states = self._shape(self.v_proj(hidden_states), -1, bsz) - - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_states, value_states) - - proj_shape = (bsz * self.num_heads, -1, self.head_dim) - query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) - key_states = key_states.view(*proj_shape) - value_states = value_states.view(*proj_shape) - - src_len = key_states.size(1) - attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) - - if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, tgt_len, src_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - if self.heads_mask is not None:# and layer_head_mask is not None: - if self.heads_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - ) - h_mask = layer_head_mask - #print('h_mask: ', h_mask) - if layer_head_mask is None: - h_mask = self.heads_mask - #h_mask.to(attn_weights.device) - attn_weights = update_weights_regarding_relations_on_specific_head(h_mask, attn_weights, - relation_inputs, bsz, self.num_heads, tgt_len, - src_len, verbose=False) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - elif layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - - if output_attentions: - # this operation is a bit awkward, but it's required to - # make sure that attn_weights keeps its gradient. - # In order to do so, attn_weights have to be reshaped - # twice and have to be reused in the following - attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len) - else: - attn_weights_reshaped = None - - attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - - attn_output = torch.bmm(attn_probs, value_states) - - if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is {attn_output.size()}" - ) - - attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) - attn_output = attn_output.transpose(1, 2) - - # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be - # partitioned aross GPUs when using tensor-parallelism. - attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) - - attn_output = self.out_proj(attn_output) - - return attn_output, attn_weights_reshaped, past_key_value - - def find_head_to_mask(self, heads_mask) -> int: - head_idx = torch.argmax(heads_mask) - head_idx_simple = head_idx.item() - return head_idx_simple - - def create_commonsense_mask(self, bsz, n_tokens, commonsense_matrix, num_heads=16, specific_head=0): - commonsense_mask = torch.zeros( - ((bsz, num_heads, n_tokens, n_tokens)) - ) - if commonsense_matrix is None: - commonsense_matrix = torch.zeros( - ((bsz, n_tokens, n_tokens)) - ) - commonsense_mask = commonsense_mask.reshape((num_heads, bsz, n_tokens, n_tokens)) - commonsense_mask[specific_head] = commonsense_matrix - commonsense_mask = commonsense_mask.reshape((bsz, num_heads, n_tokens, n_tokens)) - return commonsense_mask - - def commonsense_attention_mask_update(self, bsz, n_tokens, commonsense_matrix, attn_weights, - specific_head=0): - num_heads = self.num_heads - commonsense_mask = torch.zeros( - ((bsz, num_heads, n_tokens, n_tokens)) - ) - attn_weights_helper = attn_weights.reshape((num_heads, bsz, n_tokens, n_tokens)) - zeros = torch.zeros( - ((bsz, n_tokens, n_tokens)) - ) - head_previous_attention_weights = attn_weights_helper[specific_head] - attn_weights_helper[specific_head] = zeros - attn_weights_helper = attn_weights_helper.reshape((bsz, num_heads, n_tokens, n_tokens)) - if commonsense_matrix is None: - # ignore is not passed (ones -> neutral since multiplication is used) - commonsense_matrix = torch.ones( - ((bsz, n_tokens, n_tokens)) - ) - commonsense_mask = commonsense_mask.reshape((num_heads, bsz, n_tokens, n_tokens)) - commonsense_mask[specific_head] = head_previous_attention_weights * commonsense_matrix - # TODO Stupid conversion - commonsense_mask = commonsense_mask.reshape((bsz, num_heads, n_tokens, n_tokens)).to('cuda') - return attn_weights_helper + commonsense_mask - - def convert_relations_to_binary_mask(self, input_relations): - relations_binary_mask = input_relations.clone() - relations_binary_mask[relations_binary_mask > 1] = 1 - return relations_binary_mask diff --git a/spaces/NATSpeech/DiffSpeech/tasks/tts/fs2_orig.py b/spaces/NATSpeech/DiffSpeech/tasks/tts/fs2_orig.py deleted file mode 100644 index a234df565d3a1679bf8bc5f3c7821256152ed456..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/tasks/tts/fs2_orig.py +++ /dev/null @@ -1,138 +0,0 @@ -import torch -import torch.nn.functional as F -from modules.tts.fs2_orig import FastSpeech2Orig -from tasks.tts.dataset_utils import FastSpeechDataset -from tasks.tts.fs import FastSpeechTask -from utils.commons.dataset_utils import collate_1d, collate_2d -from utils.commons.hparams import hparams -from utils.plot.plot import spec_to_figure -import numpy as np - - -class FastSpeech2OrigDataset(FastSpeechDataset): - def __init__(self, prefix, shuffle=False, items=None, data_dir=None): - super().__init__(prefix, shuffle, items, data_dir) - self.pitch_type = hparams.get('pitch_type') - - def __getitem__(self, index): - sample = super().__getitem__(index) - item = self._get_item(index) - hparams = self.hparams - mel = sample['mel'] - T = mel.shape[0] - sample['energy'] = (mel.exp() ** 2).sum(-1).sqrt() - if hparams['use_pitch_embed'] and self.pitch_type == 'cwt': - cwt_spec = torch.Tensor(item['cwt_spec'])[:T] - f0_mean = item.get('f0_mean', item.get('cwt_mean')) - f0_std = item.get('f0_std', item.get('cwt_std')) - sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std}) - return sample - - def collater(self, samples): - if len(samples) == 0: - return {} - batch = super().collater(samples) - if hparams['use_pitch_embed']: - energy = collate_1d([s['energy'] for s in samples], 0.0) - else: - energy = None - batch.update({'energy': energy}) - if self.pitch_type == 'cwt': - cwt_spec = collate_2d([s['cwt_spec'] for s in samples]) - f0_mean = torch.Tensor([s['f0_mean'] for s in samples]) - f0_std = torch.Tensor([s['f0_std'] for s in samples]) - batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std}) - return batch - - -class FastSpeech2OrigTask(FastSpeechTask): - def __init__(self): - super(FastSpeech2OrigTask, self).__init__() - self.dataset_cls = FastSpeech2OrigDataset - - def build_tts_model(self): - dict_size = len(self.token_encoder) - self.model = FastSpeech2Orig(dict_size, hparams) - - def run_model(self, sample, infer=False, *args, **kwargs): - txt_tokens = sample['txt_tokens'] # [B, T_t] - spk_embed = sample.get('spk_embed') - spk_id = sample.get('spk_ids') - if not infer: - target = sample['mels'] # [B, T_s, 80] - mel2ph = sample['mel2ph'] # [B, T_s] - f0 = sample.get('f0') - uv = sample.get('uv') - energy = sample.get('energy') - output = self.model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, spk_id=spk_id, - f0=f0, uv=uv, energy=energy, infer=False) - losses = {} - self.add_mel_loss(output['mel_out'], target, losses) - self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses) - if hparams['use_pitch_embed']: - self.add_pitch_loss(output, sample, losses) - if hparams['use_energy_embed']: - self.add_energy_loss(output, sample, losses) - return losses, output - else: - mel2ph, uv, f0, energy = None, None, None, None - use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur']) - use_gt_f0 = kwargs.get('infer_use_gt_f0', hparams['use_gt_f0']) - use_gt_energy = kwargs.get('infer_use_gt_energy', hparams['use_gt_energy']) - if use_gt_dur: - mel2ph = sample['mel2ph'] - if use_gt_f0: - f0 = sample['f0'] - uv = sample['uv'] - if use_gt_energy: - energy = sample['energy'] - output = self.model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, spk_id=spk_id, - f0=f0, uv=uv, energy=energy, infer=True) - return output - - def save_valid_result(self, sample, batch_idx, model_out): - super(FastSpeech2OrigTask, self).save_valid_result(sample, batch_idx, model_out) - self.plot_cwt(batch_idx, model_out['cwt'], sample['cwt_spec']) - - def plot_cwt(self, batch_idx, cwt_out, cwt_gt=None): - if len(cwt_out.shape) == 3: - cwt_out = cwt_out[0] - if isinstance(cwt_out, torch.Tensor): - cwt_out = cwt_out.cpu().numpy() - if cwt_gt is not None: - if len(cwt_gt.shape) == 3: - cwt_gt = cwt_gt[0] - if isinstance(cwt_gt, torch.Tensor): - cwt_gt = cwt_gt.cpu().numpy() - cwt_out = np.concatenate([cwt_out, cwt_gt], -1) - name = f'cwt_val_{batch_idx}' - self.logger.add_figure(name, spec_to_figure(cwt_out), self.global_step) - - def add_pitch_loss(self, output, sample, losses): - if hparams['pitch_type'] == 'cwt': - cwt_spec = sample[f'cwt_spec'] - f0_mean = sample['f0_mean'] - uv = sample['uv'] - mel2ph = sample['mel2ph'] - f0_std = sample['f0_std'] - cwt_pred = output['cwt'][:, :, :10] - f0_mean_pred = output['f0_mean'] - f0_std_pred = output['f0_std'] - nonpadding = (mel2ph != 0).float() - losses['C'] = F.l1_loss(cwt_pred, cwt_spec) * hparams['lambda_f0'] - if hparams['use_uv']: - assert output['cwt'].shape[-1] == 11 - uv_pred = output['cwt'][:, :, -1] - losses['uv'] = (F.binary_cross_entropy_with_logits(uv_pred, uv, reduction='none') - * nonpadding).sum() / nonpadding.sum() * hparams['lambda_uv'] - losses['f0_mean'] = F.l1_loss(f0_mean_pred, f0_mean) * hparams['lambda_f0'] - losses['f0_std'] = F.l1_loss(f0_std_pred, f0_std) * hparams['lambda_f0'] - else: - super(FastSpeech2OrigTask, self).add_pitch_loss(output, sample, losses) - - def add_energy_loss(self, output, sample, losses): - energy_pred, energy = output['energy_pred'], sample['energy'] - nonpadding = (energy != 0).float() - loss = (F.mse_loss(energy_pred, energy, reduction='none') * nonpadding).sum() / nonpadding.sum() - loss = loss * hparams['lambda_energy'] - losses['e'] = loss diff --git a/spaces/NMEX/rvc-hoyo-game/README.md b/spaces/NMEX/rvc-hoyo-game/README.md deleted file mode 100644 index 7a0fa940c0e2f72a7b560bb70f50dcae86dde9c2..0000000000000000000000000000000000000000 --- a/spaces/NMEX/rvc-hoyo-game/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: RVC Hoyo Games -emoji: 🎤 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ArkanDash/rvc-models-new ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh deleted file mode 100644 index 4655936149cab212b3cfa14f306d71153729f9d7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - -if [ -z $SPM_PATH ] ; -then - echo "Please install sentence piecence from https://github.com/google/sentencepiece and set SPM_PATH pointing to the installed spm_encode.py. Exitting..." - exit -fi - -ML50=${WORKDIR_ROOT}/ML50 - -mkdir -p $ML50/dedup -mkdir -p $ML50/cleaned_dedup - -python ./dedup_all.py --from-folder $ML50/raw --to-folder $ML50/dedup -python ./remove_valid_test_in_train.py --from-folder $ML50/dedup --to-folder $ML50/clean -python ./binarize.py --raw-folder $ML50/clean \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/README.md deleted file mode 100644 index 0a63e2f0d844ce157f9502c82738aac2a0de3f0c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/README.md +++ /dev/null @@ -1,10 +0,0 @@ -# GSLM Metrics - -## ASR Metrics -The suite of metrics here uses an ASR model to transcribe the synthesized speech into text, and then uses text-based metrics. We also use word error rate from ASR transcription itself as one of the metrics. [More details](asr_metrics) - -## ABX Metrics -We use [ABX](https://www.semanticscholar.org/paper/ABX-Discriminability-Measures-and-Applications-Schatz/13d3537228f728c1063cc83743cb118bba3367a0) to evaluate how well-separated phonetic categories are with quantized representations. [More details](abx_metrics) - -## sWUGGY and sBLIMP -We refer to [ZeroSpeech challenge](https://www.zerospeech.com/2021/track_s.html#scoring-based-metrics) for details on the sWUGGY and sBLIMP metrics. diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/model_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/model_utils.py deleted file mode 100644 index 732d66b1d5f695151c26d29eb7f6b53179c269f1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/model_utils.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import List, Optional - -import torch -from torch import Tensor - - -@torch.jit.script -def script_skip_tensor_list(x: List[Tensor], mask): - res = [xi[mask] if xi.size(0) == mask.size(0) else xi[:, mask] for xi in x] - outputs = [] - for i, t in enumerate(res): - if t.numel() != 0: - outputs.append(t) - else: - outputs.append(x[i]) - return outputs - - -@torch.jit.script -def script_skip_tensor(x: Tensor, mask): - # None case - if x.size(0) == 0: - return x - res = x[mask] if x.size(0) == mask.size(0) else x[:, mask] - if res.numel() == 0: - return x - else: - return res - - -@torch.jit.script -def expand_2d_or_3d_tensor(x, trg_dim: int, padding_idx: int): - """ - Expand 2D/3D tensor on dim=1 - """ - if x is None: - return None - - assert x.dim() == 2 or x.dim() == 3 - assert trg_dim >= x.size(1), (trg_dim, x.size()) - if trg_dim == x.size(1): - return x - - dims = [x.size(0), trg_dim - x.size(1)] - if x.dim() == 3: - dims.append(x.size(2)) - x = torch.cat([x, torch.zeros(dims).to(x).fill_(padding_idx)], 1) - - return x - - -@torch.jit.script -def coalesce(x: Optional[Tensor], y: Tensor) -> Tensor: - return x if x is not None else y - - -@torch.jit.script -def fill_tensors( - x: Optional[Tensor], mask, y: Optional[Tensor], padding_idx: int -) -> Optional[Tensor]: - """ - Filling tensor x with y at masked positions (dim=0). - """ - if x is None or x.size()[0] == 0 or y is None: - return x - assert x.dim() == y.dim() and mask.size(0) == x.size(0) - assert x.dim() == 2 or (x.dim() == 3 and x.size(2) == y.size(2)) - - n_selected = mask.sum() - if n_selected == 0: - return x - assert n_selected == y.size(0) - if n_selected == x.size(0): - return y - - if x.size(1) < y.size(1): - x = expand_2d_or_3d_tensor(x, y.size(1), padding_idx) - x[mask] = y - elif x.size(1) > y.size(1): - x[mask] = torch.tensor(padding_idx).type_as(x) - if x.dim() == 2: - x[mask, : y.size(1)] = y - else: - x[mask, : y.size(1), :] = y - else: - x[mask] = y - return x diff --git a/spaces/OFA-Sys/OFA-Image_Caption/data/mm_data/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/data/mm_data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamic_convolution.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamic_convolution.py deleted file mode 100644 index 0121d453b9e026f5128dd41fce691aa1b4486448..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamic_convolution.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout - -from .unfold import unfold1d - - -def DynamicConv( - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - renorm_padding=False, - bias=False, - conv_bias=False, - query_size=None, - in_proj=False, -): - if torch.cuda.is_available(): - try: - from fairseq.modules.dynamicconv_layer import DynamicconvLayer - - return DynamicconvLayer( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - renorm_padding=renorm_padding, - bias=bias, - conv_bias=conv_bias, - query_size=query_size, - ) - except ImportError as e: - print(e) - return DynamicConv1dTBC( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - renorm_padding=renorm_padding, - bias=bias, - conv_bias=conv_bias, - query_size=query_size, - ) - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@with_incremental_state -class DynamicConv1dTBC(nn.Module): - """Dynamic lightweight convolution taking T x B x C inputs - Args: - input_size: # of channels of the input - kernel_size: convolution channels - padding_l: padding to the left when using "same" padding - num_heads: number of heads used. The weight is of shape (num_heads, 1, kernel_size) - weight_dropout: the drop rate of the DropConnect to drop the weight - weight_softmax: normalize the weight with softmax before the convolution - renorm_padding: re-normalize the filters to ignore the padded part (only the non-padding parts sum up to 1) - bias: use bias - conv_bias: bias of the convolution - query_size: specified when feeding a different input as the query - in_proj: project the input and generate the filter together - - Shape: - Input: TxBxC, i.e. (timesteps, batch_size, input_size) - Output: TxBxC, i.e. (timesteps, batch_size, input_size) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - renorm_padding=False, - bias=False, - conv_bias=False, - query_size=None, - in_proj=False, - ): - super().__init__() - self.input_size = input_size - self.query_size = input_size if query_size is None else query_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.weight_softmax = weight_softmax - self.renorm_padding = renorm_padding - - if in_proj: - self.weight_linear = Linear( - self.input_size, self.input_size + num_heads * kernel_size * 1 - ) - else: - self.weight_linear = Linear( - self.query_size, num_heads * kernel_size * 1, bias=bias - ) - if conv_bias: - self.conv_bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.conv_bias = None - self.reset_parameters() - - @property - def in_proj(self): - return ( - self.weight_linear.out_features - == self.input_size + self.num_heads * self.kernel_size - ) - - def reset_parameters(self): - self.weight_linear.reset_parameters() - if self.conv_bias is not None: - nn.init.constant_(self.conv_bias, 0.0) - - def forward(self, x, incremental_state=None, query=None, unfold=None): - """Assuming the input, x, of the shape T x B x C and producing an output in the shape T x B x C - args: - x: Input of shape T x B x C, i.e. (timesteps, batch_size, input_size) - incremental_state: A dict to keep the state - unfold: unfold the input or not. If not, we use the matrix trick instead - query: use the specified query to predict the conv filters - """ - unfold = ( - x.size(0) > 512 if unfold is None else unfold - ) # use unfold mode as default for long sequence to save memory - unfold = unfold or (incremental_state is not None) - assert query is None or not self.in_proj - - if query is None: - query = x - if unfold: - output = self._forward_unfolded(x, incremental_state, query) - else: - output = self._forward_expanded(x, incremental_state, query) - - if self.conv_bias is not None: - output = output + self.conv_bias.view(1, 1, -1) - return output - - def _forward_unfolded(self, x, incremental_state, query): - """The conventional implementation of convolutions. - Unfolding the input by having a window shifting to the right.""" - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - if self.in_proj: - proj = self.weight_linear(x) - x = proj.narrow(2, 0, self.input_size).contiguous() - weight = ( - proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1) - ) - else: - weight = self.weight_linear(query).view(T * B * H, -1) - - # renorm_padding is only implemented in _forward_expanded - assert not self.renorm_padding or incremental_state is not None - - if incremental_state is not None: - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - else: - padding_l = self.padding_l - if K > T and padding_l == K - 1: - weight = weight.narrow(1, K - T, T) - K, padding_l = T, T - 1 - # unfold the input: T x B x C --> T' x B x C x K - x_unfold = unfold1d(x, K, padding_l, 0) - x_unfold = x_unfold.view(T * B * H, R, K) - - if self.weight_softmax and not self.renorm_padding: - weight = F.softmax(weight, dim=1) - weight = weight.narrow(1, 0, K) - - if incremental_state is not None: - weight = weight[:, -x_unfold.size(2) :] - K = weight.size(1) - - if self.weight_softmax and self.renorm_padding: - weight = F.softmax(weight, dim=1) - - weight = self.weight_dropout_module(weight, inplace=False) - - output = torch.bmm(x_unfold, weight.unsqueeze(2)) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - def _forward_expanded(self, x, incremental_stat, query): - """Turn the convolution filters into band matrices and do matrix multiplication. - This is faster when the sequence is short, but less memory efficient. - This is not used in the decoder during inference. - """ - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - if self.in_proj: - proj = self.weight_linear(x) - x = proj.narrow(2, 0, self.input_size).contiguous() - weight = ( - proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1) - ) - else: - weight = self.weight_linear(query).view(T * B * H, -1) - - if not self.renorm_padding: - if self.weight_softmax: - weight = F.softmax(weight, dim=1) - weight = self.weight_dropout_module(weight, inplace=False) - weight = weight.narrow(1, 0, K).contiguous() - weight = weight.view(T, B * H, K).transpose(0, 1) - - x = x.view(T, B * H, R).transpose(0, 1) - if self.weight_softmax and self.renorm_padding: - # turn the convolution filters into band matrices - weight_expanded = weight.new(B * H, T, T + K - 1).fill_(float("-inf")) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, self.padding_l, T) - # normalize the weight over valid positions like self-attention - weight_expanded = F.softmax(weight_expanded, dim=2) - weight_expanded = self.weight_dropout_module(weight_expanded, inplace=False) - else: - P = self.padding_l - # For efficiency, we cut the kernel size and reduce the padding when the kernel is larger than the length - if K > T and P == K - 1: - weight = weight.narrow(2, K - T, T) - K, P = T, T - 1 - # turn the convolution filters into band matrices - weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, P, T) # B*H x T x T - output = torch.bmm(weight_expanded, x) - output = output.transpose(0, 1).contiguous().view(T, B, C) - return output - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def extra_repr(self): - s = "{}, kernel_size={}, padding_l={}, num_heads={}, weight_softmax={}, conv_bias={}, renorm_padding={}, in_proj={}".format( - self.input_size, - self.kernel_size, - self.padding_l, - self.num_heads, - self.weight_softmax, - self.conv_bias is not None, - self.renorm_padding, - self.in_proj, - ) - - if self.query_size != self.input_size: - s += ", query_size={}".format(self.query_size) - if self.weight_dropout_module.p > 0.0: - s += ", weight_dropout={}".format(self.weight_dropout_module.p) - return s diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/denoising.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/denoising.py deleted file mode 100644 index d1dff26c36d51e394e1c955c6683fa4a20c52395..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/denoising.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -from fairseq import utils -from fairseq.data import ( - AppendTokenDataset, - DenoisingDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - PadDataset, - PrependTokenDataset, - StripTokenDataset, - TokenBlockDataset, - data_utils, -) -from fairseq.data.encoders.utils import get_whole_word_mask -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import LegacyFairseqTask, register_task -import numpy as np - - -logger = logging.getLogger(__name__) - - -@register_task("denoising") -class DenoisingTask(LegacyFairseqTask): - """ - Denoising task for applying sequence to sequence denoising. (ie. BART) - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", help="path to data directory") - parser.add_argument( - "--tokens-per-sample", - default=512, - type=int, - help="max number of total tokens over all segments" - " per sample for dataset", - ) - parser.add_argument( - "--sample-break-mode", - default="complete_doc", - type=str, - help="mode for breaking sentence", - ) - parser.add_argument( - "--mask", - default=0.0, - type=float, - help="fraction of words/subwords that will be masked", - ) - parser.add_argument( - "--mask-random", - default=0.0, - type=float, - help="instead of using [MASK], use random token this often", - ) - parser.add_argument( - "--insert", - default=0.0, - type=float, - help="insert this percentage of additional random tokens", - ) - parser.add_argument( - "--permute", - default=0.0, - type=float, - help="take this proportion of subwords and permute them", - ) - parser.add_argument( - "--rotate", - default=0.5, - type=float, - help="rotate this proportion of inputs", - ) - parser.add_argument( - "--poisson-lambda", - default=3.0, - type=float, - help="randomly shuffle sentences for this proportion of inputs", - ) - parser.add_argument( - "--permute-sentences", - default=0.0, - type=float, - help="shuffle this proportion of sentences in all inputs", - ) - parser.add_argument( - "--mask-length", - default="subword", - type=str, - choices=["subword", "word", "span-poisson"], - help="mask length to choose", - ) - parser.add_argument( - "--replace-length", - default=-1, - type=int, - help="when masking N tokens, replace with 0, 1, or N tokens (use -1 for N)", - ) - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - - parser.add_argument( - "--shorten-method", - default="none", - choices=["none", "truncate", "random_crop"], - help="if not none, shorten sequences that exceed --tokens-per-sample", - ) - parser.add_argument( - "--shorten-data-split-list", - default="", - help="comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)', - ) - - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - - # add mask token - self.mask_idx = self.dictionary.add_symbol("") - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - paths = utils.split_paths(args.data) - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - if not hasattr(args, "shuffle_instance"): - args.shuffle_instance = False - return cls(args, dictionary) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.dictionary, - self.args.dataset_impl, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - dataset = StripTokenDataset(dataset, self.dictionary.eos()) - - dataset = maybe_shorten_dataset( - dataset, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.tokens_per_sample, - self.args.seed, - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample - 2, # one less for and one for - pad=self.dictionary.pad(), - eos=self.dictionary.eos(), - break_mode=self.args.sample_break_mode, - document_sep_len=0, - ) - logger.info("loaded {} blocks from: {}".format(len(dataset), split_path)) - - # prepend beginning-of-sentence token (, equiv. to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - dataset = AppendTokenDataset(dataset, self.source_dictionary.eos()) - - mask_whole_words = ( - get_whole_word_mask(self.args, self.source_dictionary) - if self.args.mask_length != "subword" - else None - ) - - self.datasets[split] = DenoisingDataset( - dataset, - dataset.sizes, - self.dictionary, - self.mask_idx, - mask_whole_words, - shuffle=self.args.shuffle_instance, - seed=self.seed, - args=self.args, - ) - logger.info( - "Split: {0}, Loaded {1} samples of denoising_dataset".format( - split, - len(self.datasets[split]), - ) - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs): - """ - Generate batches for inference. We assume that the input begins with a - bos symbol (``) and ends with an eos symbol (``). - """ - pad = self.source_dictionary.pad() - eos = self.source_dictionary.eos() - src_dataset = TokenBlockDataset( - src_tokens, - src_lengths, - block_size=self.args.tokens_per_sample - 2, # for and - pad=pad, - eos=eos, - break_mode=self.args.sample_break_mode, - document_sep_len=0, - ) - prev_output_tokens = PrependTokenDataset( - StripTokenDataset(src_dataset, eos), eos - ) - src_dataset = PadDataset(src_dataset, pad_idx=pad, left_pad=False) - return NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": src_dataset, - "src_lengths": NumelDataset(src_dataset, reduce=False), - "prev_output_tokens": PadDataset( - prev_output_tokens, pad_idx=pad, left_pad=False - ), - }, - "target": src_dataset, - }, - sizes=[np.array(src_lengths)], - ) - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - return (self.args.max_source_positions, self.args.max_target_positions) - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary`.""" - return self.dictionary - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary`.""" - return self.dictionary diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/token_generation_constraints.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/token_generation_constraints.py deleted file mode 100644 index e708dc51bcb0ffb7b411496239c74d5e6f3c2448..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/token_generation_constraints.py +++ /dev/null @@ -1,506 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -"""Implements tracking of constraints for a beam item. - -A list of constraints is given as a list of one or more token -sequences, each of length at least one token. For example, for an input sentence - -> Die maschinelle Übersetzung ist schwer zu kontrollieren. - -We could have the constraints: -* to influence -* hard - -There are two implementations: -* OrderedConstraintState: Tracks progress through an ordered list of multitoken constraints. -* UnorderedConstraintState: Tracks progress through an unordered list of multitoken constraints. - -The difference is that in the first, the constraints are assumed to be -in order; the algorithm will permit zero or more tokens between them. -In the second, the constraints are not ordered, so many orderings will -be explored. - -The same sequence can be present any number of times, and will appear -that many times in the output. -""" - -from collections import Counter -from typing import List, Optional, Set, Tuple - -import torch - - -class ConstraintState: - def __init__(self): - pass - - -def pack_constraints(batch_constraints: List[List[torch.Tensor]]) -> torch.Tensor: - """Takes a list of list of constraints in tensor form (a list of - tensor constraints for each sentence) and transforms it into a - packed Tensor. For example, here is a batch of size 3 with 3, 0, - and 1 constraints: - - [ [ [3 1 2], [3], [4 5 6 7], ] - [], - [ [1 8 9 10 1 4 11 12], ] - ] - - Its corresponding packed structure is: - - [ [ 3 3 1 2 0 3 0 4 5 6 7 0], - [ 0 0 0 0 0 0 0 0 0 0 0 0], - [ 1 1 8 9 10 1 4 11 12 0 0 0] ] - - The packed tensor has shape (batch size, maxlen), where - maxlen is defined below. Each row contains concatenated - constraint tokens for that sentence, with 0 appended after - each constraint. The first item in each row is the number - of constraints for that sentence. So maxlen is the maximum - of - - (number of constraints) + (sum length of constraints) + 1. - - across all sentences in the batch. - """ - # The maximum word length of concatenated constraints for any sentence - max_constraints_len = 1 - for sentence_constraints in batch_constraints: - if len(sentence_constraints): - # number of constraints, plus sum of constrain lens, plus a zero after each - constraints_len = ( - 1 - + sum([c.size(0) for c in sentence_constraints]) - + len(sentence_constraints) - ) - max_constraints_len = max(max_constraints_len, constraints_len) - - batch_size = len(batch_constraints) - constraints_tensor = torch.zeros((batch_size, max_constraints_len)).long() - for i, sentence_constraints in enumerate(batch_constraints): - constraints_tensor[i, 0] = len(sentence_constraints) - offset = 1 - for j, constraint in enumerate(sentence_constraints): - this_len = constraint.size(0) - constraints_tensor[i, offset : offset + this_len] = constraint - offset += this_len + 1 - - return constraints_tensor.long() - - -def unpack_constraints(constraint_tensor: torch.Tensor) -> List[torch.Tensor]: - """ - Transforms *one row* of a packed constraint tensor (e.g., for one - sentence in the batch) into a list of constraint tensors. - """ - constraint_list = [] - num_constraints = constraint_tensor[0] - constraints = constraint_tensor.tolist() - offset = 1 - for i in range(num_constraints): - where = constraints.index(0, offset) - constraint_list.append(constraint_tensor[offset:where]) - offset = where + 1 - - return constraint_list - - -class ConstraintNode: - """ - Represents a node in a trie managing unordered constraints. - """ - - def __init__(self, token: int = None, parent=None): - # The token associate with this node (None for the root) - self.token = int(token) if token is not None else None - # The parent (None at the root) - self.parent = parent - # Whether this node is a completed constraint - self.terminal = 0 - # List of child nodes - self.children = {} - - # The cumulative number of constraints from this point in the - # trie forward - self.num_constraints = 0 - - @property - def id(self): - return self.token - - def __str__(self): - term = self.terminal != 0 - return f"[{self.token}].{term}#{self.num_constraints}" - - def __getitem__(self, key: int): - return self.children.get(key, None) - - def next_tokens(self) -> Set[int]: - """The set of child labels.""" - return set(self.children.keys()) - - @staticmethod - def create(constraints: List[List[int]]): - root = ConstraintNode() - for sequence in constraints: - root.add_sequence(sequence) - - return root - - @staticmethod - def print_graph(node: "ConstraintNode"): - if len(node.children) == 0: - return str(node) - else: - s = f"({node}" - for child in node.children.values(): - s += " " + ConstraintNode.print_graph(child) - s += ")" - return s - - def token_counts(self) -> Counter: - """Returns a counter of the number of times each token is used - in a constraint. - """ - token_counts = Counter() - kids = list(self.children.values()) - while len(kids) > 0: - kid = kids.pop() - token_counts[kid.id] += kid.num_constraints - kids += list(kid.children.values()) - - return token_counts - - def tokens(self) -> Set[int]: - """Returns the set of tokens in constraints.""" - return set(self.token_counts().keys()) - - def add_sequence(self, sequence: List[int]): - """Adds a constraint, represented as a list of integers, to - the trie.""" - assert len(sequence) > 0 - - token = int(sequence[0]) - if token not in self.children: - self.children[token] = ConstraintNode(token, parent=self) - - node = self.children[token] - if len(sequence) == 1: - node.terminal += 1 - node.num_constraints += 1 - parent = node.parent - while parent is not None: - parent.num_constraints += 1 - parent = parent.parent - else: - node.add_sequence(sequence[1:]) - - -class UnorderedConstraintState(ConstraintState): - """ - Records progress through the set of constraints for each item in the beam - using a trie. - """ - - def __init__(self, node: ConstraintNode, copy_from: "ConstraintState" = None): - self.node = node - - if copy_from is None: - # The root node - self.root = node - # The set of states in the graph that have been completed - self.completed = Counter() - # The... - self.generated = Counter() - # The list of tokens we need to generate - self.needed_tokens = self.root.tokens() - else: - self.completed = Counter(copy_from.completed) - self.generated = Counter(copy_from.generated) - self.root = copy_from.root - - # Mark the node as generated - if self.node != self.root: - self.generated[node] += 1 - - @staticmethod - def create(constraint_tensor: torch.Tensor): - constraint_list = unpack_constraints(constraint_tensor) - constraint_trie_root = ConstraintNode.create(constraint_list) - return UnorderedConstraintState(constraint_trie_root) - - def __str__(self): - gen_str = ",".join([str(node) for node in self.generated]) - return f"{self.name}/{self.bank}({gen_str})x{self.num_completed}" - - def __copy__(self): - copied_state = UnorderedConstraintState(self.node, copy_from=self) - return copied_state - - def copy(self): - return self.__copy__() - - @property - def name(self): - if self.node.id is None: - return "ROOT" - else: - return str(self.node.id) - - @property - def is_root(self): - return self.node == self.root - - @property - def bank(self): - return sum(self.generated.values()) - - @property - def num_completed(self): - """The number of constraints (not constraint tokens) that are completed. - In addition to the already-completed states, we need to account for the - current state, which might get marked as completed when another token - is generated. - """ - in_final = self.node.terminal and self.completed[self.node] < self.node.terminal - return sum(self.completed.values()) + in_final - - @property - def finished(self): - return self.root.num_constraints - self.num_completed == 0 - - @property - def token_counts(self): - return self.root.token_counts() - - @property - def tokens(self): - return self.root.tokens() - - @property - def num_constraint_tokens(self): - return sum(self.token_counts.values()) - - def next_tokens(self) -> Set[int]: - """Returns the list of tokens that could come next. - These are (a) all tokens extending the root state and, for - non-root states, additionally all tokens extending the current - state.""" - - if self.node != self.root: - return self.root.next_tokens().union(self.node.next_tokens()) - else: - return self.root.next_tokens() - - def advance(self, token: int): - """Reads in a token and advances the state. Here's how it works. - - We can advance to the next state if: - - there is a matching child - - its path isn't blocked - - A path is blocked when all constraints that are descendants of - that node have already been generated, in the current state. - - If we are not able to advance from the current state, we "fall - off the graph" and return to the root state. There, we again - try to advance, checking the same criteria. - - In any case, when falling off the graph, we need to do some - bookkeeping. We: - - check whether any constraints were met (all prefixes of - current state) - - if one is found, mark it as completed - - adjust visited nodes accordingly - """ - token = int(token) - - next_state = None - child = self.node[token] - if child is not None and self.generated[child] < child.num_constraints: - next_state = UnorderedConstraintState(child, copy_from=self) - - def rewind(): - """If we're mid-trie and an "illegal" token is chosen next, we need - to reset our state to the root state. However, along the way, we need - to check whether a prefix of the current trie state represents a state - we could mark as completed. - """ - node = self.node - while node != self.root: - if node.terminal and self.completed[node] < node.terminal: - next_state.completed[node] += 1 - return - - next_state.generated[node] -= 1 - node = node.parent - - # Fall off the graph, check the root - if next_state is None and token in self.root.next_tokens(): - child = self.root[token] - # We can only traverse this edge if it's not saturated - if self.generated[child] < child.num_constraints: - next_state = UnorderedConstraintState(child, copy_from=self) - else: - next_state = UnorderedConstraintState(self.root, copy_from=self) - - # Rewind - rewind() - - elif next_state is None: - next_state = UnorderedConstraintState(self.root, copy_from=self) - # Rewind - rewind() - - return next_state - - -class ConstraintSequence: - def __init__(self, sequences: List[List[int]]): - """Represents a set of possibly multitoken constraints by - concatenating them and internally recording the end points. - """ - self.sequences = [] - self.endpoints = [] - self.num_tokens = 0 - self.tokens = set() - for sequence in sequences: - for token in sequence: - self.tokens.add(token) - self.num_tokens += len(sequence) - self.endpoints += [False for x in range(len(sequence) - 1)] + [True] - self.sequences += sequence - - def __getitem__(self, key: int): - return self.sequences[key] - - def __len__(self): - return len(self.sequences) - - def __str__(self): - return str(self.sequences) - - -class OrderedConstraintState(ConstraintState): - """ - Records progress through the set of linear nonbranching constraints with gaps. - """ - - def __init__(self, sequence: ConstraintSequence, state: int = -1): - self.sequence = sequence - self.state = state - - @staticmethod - def create(constraint_tensor: torch.Tensor): - constraint_list = unpack_constraints(constraint_tensor) - return OrderedConstraintState(ConstraintSequence(constraint_list), -1) - - def __str__(self): - return f"{self.state}/{self.bank}x{self.num_completed}" - - def __copy__(self): - return OrderedConstraintState(self.sequence, self.state) - - def copy(self): - return self.__copy__() - - @property - def num_completed(self): - if self.state == -1: - return 0 - count = len( - list(filter(lambda x: x, self.sequence.endpoints[0 : self.state + 1])) - ) - return count - - @property - def is_root(self): - return self.state == -1 - - @property - def name(self): - if self.state == -1: - return "ROOT" - else: - return str(self.sequence[self.state]) - - @property - def bank(self) -> int: - return self.state + 1 - - @property - def finished(self): - return self.state + 1 == len(self.sequence) - - @property - def token_counts(self): - return self.sequence.token_counts() - - @property - def tokens(self): - return self.sequence.tokens - - @property - def num_constraint_tokens(self): - return sum(self.token_counts.values()) - - def next_tokens(self) -> Set[int]: - """Returns the list of tokens that could come next. - These are (a) all tokens extending the root state and, for - non-root states, additionally all tokens extending the current - state.""" - - tokens = set() - if self.state > 0: - tokens.add(self.sequence[0]) - if not self.finished: - tokens.add(self.sequence[self.state + 1]) - return tokens - - def advance(self, token: int): - """Reads in a token and advances the state. Here's how it works. - - We can advance to the next state if: - - there is a matching child - - its path isn't blocked - - A path is blocked when all constraints that are descendants of - that node have already been generated, in the current state. - - If we are not able to advance from the current state, we "fall - off the graph" and return to the root state. There, we again - try to advance, checking the same criteria. - - In any case, when falling off the graph, we need to do some - bookkeeping. We: - - check whether any constraints were met (all prefixes of - current state) - - if one is found, mark it as completed - - adjust visited nodes accordingly - """ - token = int(token) - # print(f"{self} ADVANCE({token}) {self.sequence} -> ", end="") - - if self.finished: - # Accept anything - next_state = self.copy() - - elif self.sequence[self.state + 1] == token: - # Advance to the next token - next_state = OrderedConstraintState(self.sequence, self.state + 1) - - elif self.sequence.endpoints[self.state]: - # Accept anything between constraints (*) - next_state = self.copy() - - elif token == self.sequence[0]: - # Start over having generated the first token - next_state = OrderedConstraintState(self.sequence, 0) - else: - # Start over from the root - next_state = OrderedConstraintState(self.sequence, -1) - - return next_state diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/laser/laser_src/__init__.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/laser/laser_src/__init__.py deleted file mode 100644 index 9ffbd656d8786e421008fb4cb0d1d8911dc8330c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/laser/laser_src/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .laser_task import * # noqa -from .laser_lstm import * # noqa -from .laser_transformer import * # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py deleted file mode 100644 index 0e3e4c5cd7aef15dae0b41b0ec7b33e17f66597f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import argparse -from collections import defaultdict -from itertools import chain -from pathlib import Path - -import numpy as np -import torchaudio -import torchaudio.sox_effects as ta_sox -import yaml -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_tsv_to_dicts -from examples.speech_synthesis.preprocessing.speaker_embedder import SpkrEmbedder - - -def extract_embedding(audio_path, embedder): - wav, sr = torchaudio.load(audio_path) # 2D - if sr != embedder.RATE: - wav, sr = ta_sox.apply_effects_tensor( - wav, sr, [["rate", str(embedder.RATE)]] - ) - try: - emb = embedder([wav[0].cuda().float()]).cpu().numpy() - except RuntimeError: - emb = None - return emb - - -def process(args): - print("Fetching data...") - raw_manifest_root = Path(args.raw_manifest_root).absolute() - samples = [load_tsv_to_dicts(raw_manifest_root / (s + ".tsv")) - for s in args.splits] - samples = list(chain(*samples)) - with open(args.config, "r") as f: - config = yaml.load(f, Loader=yaml.FullLoader) - with open(f"{config['audio_root']}/{config['speaker_set_filename']}") as f: - speaker_to_id = {r.strip(): i for i, r in enumerate(f)} - - embedder = SpkrEmbedder(args.ckpt).cuda() - speaker_to_cnt = defaultdict(float) - speaker_to_emb = defaultdict(float) - for sample in tqdm(samples, desc="extract emb"): - emb = extract_embedding(sample["audio"], embedder) - if emb is not None: - speaker_to_cnt[sample["speaker"]] += 1 - speaker_to_emb[sample["speaker"]] += emb - if len(speaker_to_emb) != len(speaker_to_id): - missed = set(speaker_to_id) - set(speaker_to_emb.keys()) - print( - f"WARNING: missing embeddings for {len(missed)} speaker:\n{missed}" - ) - speaker_emb_mat = np.zeros((len(speaker_to_id), len(emb)), float) - for speaker in speaker_to_emb: - idx = speaker_to_id[speaker] - emb = speaker_to_emb[speaker] - cnt = speaker_to_cnt[speaker] - speaker_emb_mat[idx, :] = emb / cnt - speaker_emb_name = "speaker_emb.npy" - speaker_emb_path = f"{config['audio_root']}/{speaker_emb_name}" - np.save(speaker_emb_path, speaker_emb_mat) - config["speaker_emb_filename"] = speaker_emb_name - - with open(args.new_config, "w") as f: - yaml.dump(config, f) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--raw-manifest-root", "-m", required=True, type=str) - parser.add_argument("--splits", "-s", type=str, nargs="+", - default=["train"]) - parser.add_argument("--config", "-c", required=True, type=str) - parser.add_argument("--new-config", "-n", required=True, type=str) - parser.add_argument("--ckpt", required=True, type=str, - help="speaker embedder checkpoint") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/Omnibus/MusicGen/setup.py b/spaces/Omnibus/MusicGen/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/MusicGen/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. - -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/cloc.sh b/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/cloc.sh deleted file mode 100644 index 2dc336fc8aa81350fbe9a03c543927734ff00c2b..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/cloc.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/bash - -# This scripts counts the lines of code and comments in all source files -# and prints the results to the command line. It uses the commandline tool -# "cloc". You can either pass --loc, --comments or --percentage to show the -# respective values only. -# Some parts below need to be adapted to your project! - -# Get the location of this script. -SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" - -# Run cloc - this counts code lines, blank lines and comment lines -# for the specified languages. You will need to change this accordingly. -# For C++, you could use "C++,C/C++ Header" for example. -# We are only interested in the summary, therefore the tail -1 -SUMMARY="$(cloc "${SCRIPT_DIR}" --include-lang="Python" --md | tail -1)" - -# The $SUMMARY is one line of a markdown table and looks like this: -# SUM:|101|3123|2238|10783 -# We use the following command to split it into an array. -IFS='|' read -r -a TOKENS <<<"$SUMMARY" - -# Store the individual tokens for better readability. -NUMBER_OF_FILES=${TOKENS[1]} -COMMENT_LINES=${TOKENS[3]} -LINES_OF_CODE=${TOKENS[4]} - -# To make the estimate of commented lines more accurate, we have to -# subtract any copyright header which is included in each file. -# For Fly-Pie, this header has the length of five lines. -# All dumb comments like those /////////// or those // ------------ -# are also subtracted. As cloc does not count inline comments, -# the overall estimate should be rather conservative. -# Change the lines below according to your project. -# DUMB_COMMENTS="$(grep -r -E '//////|// -----' "${SCRIPT_DIR}" | wc -l)" -# COMMENT_LINES=$(($COMMENT_LINES - 5 * $NUMBER_OF_FILES - $DUMB_COMMENTS)) - -# Print all results if no arguments are given. -if [[ $# -eq 0 ]]; then - awk -v a=$LINES_OF_CODE \ - 'BEGIN {printf "Lines of source code: %6.1fk\n", a/1000}' - awk -v a=$COMMENT_LINES \ - 'BEGIN {printf "Lines of comments: %6.1fk\n", a/1000}' - awk -v a=$COMMENT_LINES -v b=$LINES_OF_CODE \ - 'BEGIN {printf "Comment Percentage: %6.1f%\n", 100*a/b}' - exit 0 -fi - -# Show lines of code if --loc is given. -if [[ $* == *--loc* ]]; then - awk -v a=$LINES_OF_CODE \ - 'BEGIN {printf "%.1fk\n", a/1000}' -fi - -# Show lines of comments if --comments is given. -if [[ $* == *--comments* ]]; then - awk -v a=$COMMENT_LINES \ - 'BEGIN {printf "%.1fk\n", a/1000}' -fi - -# Show precentage of comments if --percentage is given. -if [[ $* == *--percentage* ]]; then - awk -v a=$COMMENT_LINES -v b=$LINES_OF_CODE \ - 'BEGIN {printf "%.1f\n", 100*a/b}' -fi diff --git a/spaces/OpenGVLab/InternGPT/iGPT/chatbot/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/chatbot/__init__.py deleted file mode 100644 index 3b9b92f088214e5de1573f1610f941b787495adb..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/chatbot/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .chatbot import ConversationBot \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/predictor.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/predictor.py deleted file mode 100644 index 6c95bfa4168af4376be065db8e22e2f70e937896..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Jialian Wu from https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/visualizer.py -import torch - -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.visualizer import ColorMode, Visualizer - - -class BatchDefaultPredictor(DefaultPredictor): - def __call__(self, original_images): - """ - Args: - original_image (np.ndarray): an image of shape (H, W, C) (in BGR order). - - Returns: - predictions (dict): - the output of the model for one image only. - See :doc:`/tutorials/models` for details about the format. - """ - with torch.no_grad(): # https://github.com/sphinx-doc/sphinx/issues/4258 - # Apply pre-processing to image. - height, width = original_images.shape[1:3] - batch_inputs = [] - for original_image in original_images: - image = self.aug.get_transform(original_image).apply_image(original_image) - image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) - - inputs = {"image": image, "height": height, "width": width} - batch_inputs.append(inputs) - predictions = self.model(batch_inputs)[0] - return predictions - -class Visualizer_GRiT(Visualizer): - def __init__(self, image, instance_mode=None): - super().__init__(image, instance_mode=instance_mode) - - def draw_instance_predictions(self, predictions): - boxes = predictions.pred_boxes if predictions.has("pred_boxes") else None - scores = predictions.scores if predictions.has("scores") else None - classes = predictions.pred_classes.tolist() if predictions.has("pred_classes") else None - object_description = predictions.pred_object_descriptions.data - # uncomment to output scores in visualized images - # object_description = [c + '|' + str(round(s.item(), 1)) for c, s in zip(object_description, scores)] - - if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"): - colors = [ - self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in classes - ] - alpha = 0.8 - else: - colors = None - alpha = 0.5 - - if self._instance_mode == ColorMode.IMAGE_BW: - self.output.reset_image( - self._create_grayscale_image( - (predictions.pred_masks.any(dim=0) > 0).numpy() - if predictions.has("pred_masks") - else None - ) - ) - alpha = 0.3 - - self.overlay_instances( - masks=None, - boxes=boxes, - labels=object_description, - keypoints=None, - assigned_colors=colors, - alpha=alpha, - ) - return self.output - - -class VisualizationDemo(object): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE): - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.predictor = DefaultPredictor(cfg) - - def run_on_image(self, image): - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = image[:, :, ::-1] - visualizer = Visualizer_GRiT(image, instance_mode=self.instance_mode) - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output \ No newline at end of file diff --git a/spaces/OpenGVLab/VideoChatGPT/README.md b/spaces/OpenGVLab/VideoChatGPT/README.md deleted file mode 100644 index 749a458739316e2caf347bc41f3535e620010c58..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/VideoChatGPT/README.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -title: 'VideoChat: Chat-Centric Video Understanding' -emoji: 👀 -colorFrom: green -colorTo: blue -sdk: gradio -python_version: 3.8.16 -app_file: app.py -pinned: false -license: mit ---- - -# 🦜 VideoChat [[paper](https://arxiv.org/abs/2305.06355)] - -![images](assert/framework.png) -In this study, we initiate an exploration into video understanding by introducing VideoChat, an **end-to-end chat-centric video understanding system**. It integrates video foundation models and large language models via a learnable neural interface, excelling in **spatiotemporal reasoning, event localization, and causal relationship inference**. To instructively tune this system, we propose a **video-centric instruction dataset**, composed of thousands of videos matched with detailed descriptions and conversations. This dataset emphasizes **spatiotemporal reasoning and causal relationships**, providing a valuable asset for training chat-centric video understanding systems. Preliminary qualitative experiments reveal our system’s potential across a broad spectrum of video applications and set the standard for future research. - - -# :fire: Updates -- **2023/05/11**: Release the 🦜**VideoChat V1**, which can **handle both image and video understanding!** - - [Model](https://drive.google.com/file/d/1BqmWHWCZBPkhTNWDAq0IfGpbkKLz9C0V/view?usp=share_link) and [Data](https://github.com/OpenGVLab/InternVideo/blob/main/Data/instruction_data.md). - - 🧑‍💻 *Online demo is Preparing*. - - 🧑‍🔧 *Tuning scripts are cleaning*. - -# :hourglass_flowing_sand: Schedule - -- [x] Small-scale video instuction data and tuning -- [x] Instruction tuning on BLIP+UniFormerV2+Vicuna -- [ ] Large-scale and complex video instuction data -- [ ] Instruction tuning on strong video foundation model -- [ ] User-friendly interactions with longer videos -- [ ] ... - -# :speech_balloon: Example - -
    - - Comparison with ChatGPT, MiniGPT-4, LLaVA and mPLUG-Owl. -
    - Our VideoChat can handle both image and video understanding well! -
    -
    -
    - -
    - -
    - - [Video] Why the video is funny? - -
    -
    - -
    - -
    - - [Video] Spatial perception - -
    -
    - -
    - -
    - - [Video] Temporal perception - -
    -
    - -
    - -
    - - [Video] Multi-turn conversation - -
    -
    - -
    - -
    - - Image understanding - -
    -
    - -
    - -# :running: Usage - -- Prepare the envirment. - ```shell - pip install -r requirements.txt - ``` - -- Download [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) model: - - ViT: `wget https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/eva_vit_g.pth` - - QFormer: `wget https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/blip2_pretrained_flant5xxl.pth` - - Change the `vit_model_path` and `q_former_model_path` in [config.json](./configs/config.json). - -- Download [StabelVicuna](https://huggingface.co/CarperAI/stable-vicuna-13b-delta) model: - - LLAMA: Download it from the [original repo](https://github.com/facebookresearch/llama) or [hugging face](https://huggingface.co/decapoda-research/llama-13b-hf). - - If you download LLAMA from the original repo, please process it via the following command: - ```shell - # convert_llama_weights_to_hf is copied from transformers - python src/transformers/models/llama/convert_llama_weights_to_hf.py \ - --input_dir /path/to/downloaded/llama/weights \ - --model_size 7B --output_dir /output/path - ``` - - Download [StableVicuna-13b-deelta](https://huggingface.co/CarperAI/stable-vicuna-13b-delta) and process it: - ```shell - # fastchat v0.1.10 - python3 apply_delta.py \ - --base /path/to/model_weights/llama-13b \ - --target stable-vicuna-13b \ - --delta CarperAI/stable-vicuna-13b-delta - ``` - - Change the `llama_model_path` in [config.json](./configs/config.json). - -- Download [VideoChat](https://drive.google.com/file/d/1BqmWHWCZBPkhTNWDAq0IfGpbkKLz9C0V/view?usp=share_link) model: - - - Change the `videochat_model_path` in [config.json](./configs/config.json). - -- Running demo with Gradio: - ```shell - python demo.py - ``` - -- Another demo on Jupyter Notebook can found in [demo.ipynb](demo.ipynb) - - -# :page_facing_up: Citation - -If you find this project useful in your research, please consider cite: -```BibTeX -@article{2023videochat, - title={VideoChat: Chat-Centric Video Understanding}, - author={KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao}, - journal={arXiv preprint arXiv:2305.06355}, - year={2023} -} -``` - -# :thumbsup: Acknowledgement - -Thanks to the open source of the following projects: - -[InternVideo](https://github.com/OpenGVLab/InternVideo), [UniFormerV2](https://github.com/OpenGVLab/UniFormerV2), [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4), [LLaVA](https://github.com/haotian-liu/LLaVA), [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2), [StableLM](https://github.com/Stability-AI/StableLM). \ No newline at end of file diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_m.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_m.py deleted file mode 100644 index 241cabcfbaa0778922e052a4cd66721215a9d051..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_m.py +++ /dev/null @@ -1,156 +0,0 @@ -import os -import rich -import random -import pickle -import codecs as cs -import numpy as np -from torch.utils import data -from rich.progress import track -from os.path import join as pjoin - - -class MotionDataset(data.Dataset): - def __init__( - self, - data_root, - split, - mean, - std, - max_motion_length=196, - min_motion_length=20, - unit_length=4, - fps=20, - tmpFile=True, - tiny=False, - debug=False, - **kwargs, - ): - - # restrian the length of motion and text - self.max_motion_length = max_motion_length - self.min_motion_length = min_motion_length - self.unit_length = unit_length - - # Data mean and std - self.mean = mean - self.std = std - - # Data path - split_file = pjoin(data_root, split + '.txt') - motion_dir = pjoin(data_root, 'new_joint_vecs') - text_dir = pjoin(data_root, 'texts') - - # Data id list - self.id_list = [] - with cs.open(split_file, "r") as f: - for line in f.readlines(): - self.id_list.append(line.strip()) - - # Debug mode - if tiny or debug: - enumerator = enumerate( - track( - self.id_list, - f"Loading HumanML3D {split}", - )) - maxdata = 100 - subset = '_tiny' - else: - enumerator = enumerate(self.id_list) - maxdata = 1e10 - subset = '' - - new_name_list = [] - motion_dict = {} - - # Fast loading - if os.path.exists(pjoin(data_root, f'tmp/{split}{subset}_motion.pkl')): - with rich.progress.open(pjoin(data_root, f'tmp/{split}{subset}_motion.pkl'), - 'rb', description=f"Loading HumanML3D {split}") as file: - motion_dict = pickle.load(file) - with open(pjoin(data_root, f'tmp/{split}{subset}_index.pkl'), 'rb') as file: - new_name_list = pickle.load(file) - else: - for idx, name in enumerator: - if len(new_name_list) > maxdata: - break - try: - motion = [np.load(pjoin(motion_dir, name + ".npy"))] - - # Read text - with cs.open(pjoin(text_dir, name + '.txt')) as f: - text_data = [] - flag = False - lines = f.readlines() - - for line in lines: - try: - line_split = line.strip().split('#') - f_tag = float(line_split[2]) - to_tag = float(line_split[3]) - f_tag = 0.0 if np.isnan(f_tag) else f_tag - to_tag = 0.0 if np.isnan(to_tag) else to_tag - - if f_tag == 0.0 and to_tag == 0.0: - flag = True - else: - motion_new = [tokens[int(f_tag*fps/unit_length) : int(to_tag*fps/unit_length)] for tokens in motion if int(f_tag*fps/unit_length) < int(to_tag*fps/unit_length)] - - if len(motion_new) == 0: - continue - new_name = '%s_%f_%f'%(name, f_tag, to_tag) - - motion_dict[new_name] = { - 'motion': motion_new, - "length": [len(m[0]) for m in motion_new]} - new_name_list.append(new_name) - except: - pass - - if flag: - motion_dict[name] = { - 'motion': motion, - "length": [len(m[0]) for m in motion]} - new_name_list.append(name) - except: - pass - - if tmpFile: - os.makedirs(pjoin(data_root, 'tmp'), exist_ok=True) - - with open(pjoin(data_root, f'tmp/{split}{subset}_motion.pkl'),'wb') as file: - pickle.dump(motion_dict, file) - with open(pjoin(data_root, f'tmp/{split}{subset}_index.pkl'), 'wb') as file: - pickle.dump(new_name_list, file) - - self.motion_dict = motion_dict - self.name_list = new_name_list - self.nfeats = motion_dict[new_name_list[0]]['motion'][0].shape[1] - - def __len__(self): - return len(self.name_list) - - def __getitem__(self, item): - data = self.motion_dict[self.name_list[item]] - motion_list, m_length = data["motion"], data["length"] - - # Randomly select a motion - motion = random.choice(motion_list) - - # Crop the motions in to times of 4, and introduce small variations - if self.unit_length < 10: - coin2 = np.random.choice(["single", "single", "double"]) - else: - coin2 = "single" - - if coin2 == "double": - m_length = (m_length // self.unit_length - 1) * self.unit_length - elif coin2 == "single": - m_length = (m_length // self.unit_length) * self.unit_length - idx = random.randint(0, len(motion) - m_length) - motion = motion[idx:idx + m_length] - - # Z Normalization - motion = (motion - self.mean) / self.std - - return None, motion, m_length, None, None, None, None, diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/sampling.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/sampling.py deleted file mode 100644 index e1714ee6a21633eefaacdcdcb25cd6991063abe6..0000000000000000000000000000000000000000 --- a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/sampling.py +++ /dev/null @@ -1,70 +0,0 @@ -from typing import * -import numpy as np -import random - -def get_random_sample(search_space: dict, p: Union[List[float], None] = None): - """Recuperate a random sample - - Args: - search_space (dict): A dictionary defining the search space - - Raises: - ValueError: 'min' and 'max' can only be numbers - KeyError: Only the following keys can be provided {'min', 'max'}, {'value'}, {'values'} or {'values', 'p'} - - Returns: - Union[int, float, str]: The random sample - """ - - keys = set(search_space) - - if keys == set(['min', 'max']): - - assert search_space['min'] < search_space['max'] - - if isinstance(search_space['min'], int) and isinstance(search_space['max'], int): - - return random.randint(search_space['min'], search_space['max']) - - elif isinstance(search_space['min'], float) or isinstance(search_space, float): - - return random.uniform(search_space['min'], search_space['max']) - - else: - - raise ValueError("You can only provide int or float values with min max!") - - elif keys == set(['value']): - - return search_space['value'] - - elif keys.issubset(set(['values'])): - - p = None - - if 'p' in keys: p = search_space['p'] - - return np.random.choice(search_space['values'], size = (1), p = p)[0] - - else: - - raise KeyError("You didn't provide right keys! Try between: {'min', 'max'}, {'value'}, {'values'} or {'values', 'p'}") - - -def get_random_samples(search_spaces: dict): - """Recuperate random samples from a dictionary of search spaces - - Args: - search_spaces (dict): A dictionary where the keys are the hyperparameter names and the values are the search spaces - - Returns: - dict: A dictionary where the keys are the hyperparameter names and the values are the sampled values from the search spaces - """ - - samples = {} - - for search_space in search_spaces: - - samples[search_space] = get_random_sample(search_spaces[search_space]) - - return samples diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip_retrieval.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip_retrieval.py deleted file mode 100644 index 1debe7e2e664f8dd603f8d4c537e3599c68638d7..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip_retrieval.py +++ /dev/null @@ -1,319 +0,0 @@ -from models.med import BertConfig, BertModel -from transformers import BertTokenizer - -import torch -from torch import nn -import torch.nn.functional as F - -from models.blip import create_vit, init_tokenizer, load_checkpoint - -class BLIP_Retrieval(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 384, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - embed_dim = 256, - queue_size = 57600, - momentum = 0.995, - negative_all_rank = False, - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer) - self.tokenizer = init_tokenizer() - med_config = BertConfig.from_json_file(med_config) - med_config.encoder_width = vision_width - self.text_encoder = BertModel(config=med_config, add_pooling_layer=False) - - text_width = self.text_encoder.config.hidden_size - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.itm_head = nn.Linear(text_width, 2) - - # create momentum encoders - self.visual_encoder_m, vision_width = create_vit(vit,image_size) - self.vision_proj_m = nn.Linear(vision_width, embed_dim) - self.text_encoder_m = BertModel(config=med_config, add_pooling_layer=False) - self.text_proj_m = nn.Linear(text_width, embed_dim) - - self.model_pairs = [[self.visual_encoder,self.visual_encoder_m], - [self.vision_proj,self.vision_proj_m], - [self.text_encoder,self.text_encoder_m], - [self.text_proj,self.text_proj_m], - ] - self.copy_params() - - # create the queue - self.register_buffer("image_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("text_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("idx_queue", torch.full((1,queue_size),-100)) - self.register_buffer("ptr_queue", torch.zeros(1, dtype=torch.long)) - - self.image_queue = nn.functional.normalize(self.image_queue, dim=0) - self.text_queue = nn.functional.normalize(self.text_queue, dim=0) - - self.queue_size = queue_size - self.momentum = momentum - self.temp = nn.Parameter(0.07*torch.ones([])) - - self.negative_all_rank = negative_all_rank - - - def forward(self, image, caption, alpha, idx): - with torch.no_grad(): - self.temp.clamp_(0.001,0.5) - - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - image_feat = F.normalize(self.vision_proj(image_embeds[:,0,:]),dim=-1) - - text = self.tokenizer(caption, padding='max_length', truncation=True, max_length=35, - return_tensors="pt").to(image.device) - - text_output = self.text_encoder(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - text_feat = F.normalize(self.text_proj(text_output.last_hidden_state[:,0,:]),dim=-1) - - ###============== Image-text Contrastive Learning ===================### - idx = idx.view(-1,1) - idx_all = torch.cat([idx.t(), self.idx_queue.clone().detach()],dim=1) - pos_idx = torch.eq(idx, idx_all).float() - sim_targets = pos_idx / pos_idx.sum(1,keepdim=True) - - # get momentum features - with torch.no_grad(): - self._momentum_update() - image_embeds_m = self.visual_encoder_m(image) - image_feat_m = F.normalize(self.vision_proj_m(image_embeds_m[:,0,:]),dim=-1) - image_feat_m_all = torch.cat([image_feat_m.t(),self.image_queue.clone().detach()],dim=1) - - text_output_m = self.text_encoder_m(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - text_feat_m = F.normalize(self.text_proj_m(text_output_m.last_hidden_state[:,0,:]),dim=-1) - text_feat_m_all = torch.cat([text_feat_m.t(),self.text_queue.clone().detach()],dim=1) - - sim_i2t_m = image_feat_m @ text_feat_m_all / self.temp - sim_t2i_m = text_feat_m @ image_feat_m_all / self.temp - - sim_i2t_targets = alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets - sim_t2i_targets = alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets - - sim_i2t = image_feat @ text_feat_m_all / self.temp - sim_t2i = text_feat @ image_feat_m_all / self.temp - - loss_i2t = -torch.sum(F.log_softmax(sim_i2t, dim=1)*sim_i2t_targets,dim=1).mean() - loss_t2i = -torch.sum(F.log_softmax(sim_t2i, dim=1)*sim_t2i_targets,dim=1).mean() - - loss_ita = (loss_i2t+loss_t2i)/2 - - idxs = concat_all_gather(idx) - self._dequeue_and_enqueue(image_feat_m, text_feat_m, idxs) - - ###============== Image-text Matching ===================### - encoder_input_ids = text.input_ids.clone() - encoder_input_ids[:,0] = self.tokenizer.enc_token_id - - # forward the positve image-text pair - bs = image.size(0) - output_pos = self.text_encoder(encoder_input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True, - ) - - - if self.negative_all_rank: - # compute sample similarity - with torch.no_grad(): - mask = torch.eq(idx, idxs.t()) - - image_feat_world = concat_all_gather(image_feat) - text_feat_world = concat_all_gather(text_feat) - - sim_i2t = image_feat @ text_feat_world.t() / self.temp - sim_t2i = text_feat @ image_feat_world.t() / self.temp - - weights_i2t = F.softmax(sim_i2t,dim=1) - weights_i2t.masked_fill_(mask, 0) - - weights_t2i = F.softmax(sim_t2i,dim=1) - weights_t2i.masked_fill_(mask, 0) - - image_embeds_world = all_gather_with_grad(image_embeds) - - # select a negative image (from all ranks) for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds_world[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg,dim=0) - - # select a negative text (from all ranks) for each image - input_ids_world = concat_all_gather(encoder_input_ids) - att_mask_world = concat_all_gather(text.attention_mask) - - text_ids_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_ids_neg.append(input_ids_world[neg_idx]) - text_atts_neg.append(att_mask_world[neg_idx]) - - else: - with torch.no_grad(): - mask = torch.eq(idx, idx.t()) - - sim_i2t = image_feat @ text_feat.t() / self.temp - sim_t2i = text_feat @ image_feat.t() / self.temp - - weights_i2t = F.softmax(sim_i2t,dim=1) - weights_i2t.masked_fill_(mask, 0) - - weights_t2i = F.softmax(sim_t2i,dim=1) - weights_t2i.masked_fill_(mask, 0) - - # select a negative image (from same rank) for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg,dim=0) - - # select a negative text (from same rank) for each image - text_ids_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_ids_neg.append(encoder_input_ids[neg_idx]) - text_atts_neg.append(text.attention_mask[neg_idx]) - - text_ids_neg = torch.stack(text_ids_neg,dim=0) - text_atts_neg = torch.stack(text_atts_neg,dim=0) - - text_ids_all = torch.cat([encoder_input_ids, text_ids_neg],dim=0) - text_atts_all = torch.cat([text.attention_mask, text_atts_neg],dim=0) - - image_embeds_all = torch.cat([image_embeds_neg,image_embeds],dim=0) - image_atts_all = torch.cat([image_atts,image_atts],dim=0) - - output_neg = self.text_encoder(text_ids_all, - attention_mask = text_atts_all, - encoder_hidden_states = image_embeds_all, - encoder_attention_mask = image_atts_all, - return_dict = True, - ) - - - vl_embeddings = torch.cat([output_pos.last_hidden_state[:,0,:], output_neg.last_hidden_state[:,0,:]],dim=0) - vl_output = self.itm_head(vl_embeddings) - - itm_labels = torch.cat([torch.ones(bs,dtype=torch.long),torch.zeros(2*bs,dtype=torch.long)], - dim=0).to(image.device) - loss_itm = F.cross_entropy(vl_output, itm_labels) - - return loss_ita, loss_itm - - - @torch.no_grad() - def copy_params(self): - for model_pair in self.model_pairs: - for param, param_m in zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data.copy_(param.data) # initialize - param_m.requires_grad = False # not update by gradient - - - @torch.no_grad() - def _momentum_update(self): - for model_pair in self.model_pairs: - for param, param_m in zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data = param_m.data * self.momentum + param.data * (1. - self.momentum) - - - @torch.no_grad() - def _dequeue_and_enqueue(self, image_feat, text_feat, idxs): - # gather keys before updating queue - image_feats = concat_all_gather(image_feat) - text_feats = concat_all_gather(text_feat) - - - batch_size = image_feats.shape[0] - - ptr = int(self.ptr_queue) - assert self.queue_size % batch_size == 0 # for simplicity - - # replace the keys at ptr (dequeue and enqueue) - self.image_queue[:, ptr:ptr + batch_size] = image_feats.T - self.text_queue[:, ptr:ptr + batch_size] = text_feats.T - self.idx_queue[:, ptr:ptr + batch_size] = idxs.T - ptr = (ptr + batch_size) % self.queue_size # move pointer - - self.ptr_queue[0] = ptr - - -def blip_retrieval(pretrained='',**kwargs): - model = BLIP_Retrieval(**kwargs) - if pretrained: - model,msg = load_checkpoint(model,pretrained) - print("missing keys:") - print(msg.missing_keys) - return model - - -@torch.no_grad() -def concat_all_gather(tensor): - """ - Performs all_gather operation on the provided tensors. - *** Warning ***: torch.distributed.all_gather has no gradient. - """ - tensors_gather = [torch.ones_like(tensor) - for _ in range(torch.distributed.get_world_size())] - torch.distributed.all_gather(tensors_gather, tensor, async_op=False) - - output = torch.cat(tensors_gather, dim=0) - return output - - -class GatherLayer(torch.autograd.Function): - """ - Gather tensors from all workers with support for backward propagation: - This implementation does not cut the gradients as torch.distributed.all_gather does. - """ - - @staticmethod - def forward(ctx, x): - output = [torch.zeros_like(x) for _ in range(torch.distributed.get_world_size())] - torch.distributed.all_gather(output, x) - return tuple(output) - - @staticmethod - def backward(ctx, *grads): - all_gradients = torch.stack(grads) - torch.distributed.all_reduce(all_gradients) - return all_gradients[torch.distributed.get_rank()] - - -def all_gather_with_grad(tensors): - """ - Performs all_gather operation on the provided tensors. - Graph remains connected for backward grad computation. - """ - # Queue the gathered tensors - world_size = torch.distributed.get_world_size() - # There is no need for reduction in the single-proc case - if world_size == 1: - return tensors - - tensor_all = GatherLayer.apply(tensors) - - return torch.cat(tensor_all, dim=0) diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/models/autoencoder.py b/spaces/Purple11/Grounded-Diffusion/ldm/models/autoencoder.py deleted file mode 100644 index 6a9c4f45498561953b8085981609b2a3298a5473..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/ldm/models/autoencoder.py +++ /dev/null @@ -1,443 +0,0 @@ -import torch -import pytorch_lightning as pl -import torch.nn.functional as F -from contextlib import contextmanager - -from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer - -from ldm.modules.diffusionmodules.model import Encoder, Decoder -from ldm.modules.distributions.distributions import DiagonalGaussianDistribution - -from ldm.util import instantiate_from_config - - -class VQModel(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - batch_resize_range=None, - scheduler_config=None, - lr_g_factor=1.0, - remap=None, - sane_index_shape=False, # tell vector quantizer to return indices as bhw - use_ema=False - ): - super().__init__() - self.embed_dim = embed_dim - self.n_embed = n_embed - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25, - remap=remap, - sane_index_shape=sane_index_shape) - self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - self.batch_resize_range = batch_resize_range - if self.batch_resize_range is not None: - print(f"{self.__class__.__name__}: Using per-batch resizing in range {batch_resize_range}.") - - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - self.scheduler_config = scheduler_config - self.lr_g_factor = lr_g_factor - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.parameters()) - self.model_ema.copy_to(self) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - print(f"Unexpected Keys: {unexpected}") - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self) - - def encode(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - quant, emb_loss, info = self.quantize(h) - return quant, emb_loss, info - - def encode_to_prequant(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - return h - - def decode(self, quant): - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - - def decode_code(self, code_b): - quant_b = self.quantize.embed_code(code_b) - dec = self.decode(quant_b) - return dec - - def forward(self, input, return_pred_indices=False): - quant, diff, (_,_,ind) = self.encode(input) - dec = self.decode(quant) - if return_pred_indices: - return dec, diff, ind - return dec, diff - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - if self.batch_resize_range is not None: - lower_size = self.batch_resize_range[0] - upper_size = self.batch_resize_range[1] - if self.global_step <= 4: - # do the first few batches with max size to avoid later oom - new_resize = upper_size - else: - new_resize = np.random.choice(np.arange(lower_size, upper_size+16, 16)) - if new_resize != x.shape[2]: - x = F.interpolate(x, size=new_resize, mode="bicubic") - x = x.detach() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - # https://github.com/pytorch/pytorch/issues/37142 - # try not to fool the heuristics - x = self.get_input(batch, self.image_key) - xrec, qloss, ind = self(x, return_pred_indices=True) - - if optimizer_idx == 0: - # autoencode - aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train", - predicted_indices=ind) - - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True) - return aeloss - - if optimizer_idx == 1: - # discriminator - discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True) - return discloss - - def validation_step(self, batch, batch_idx): - log_dict = self._validation_step(batch, batch_idx) - with self.ema_scope(): - log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema") - return log_dict - - def _validation_step(self, batch, batch_idx, suffix=""): - x = self.get_input(batch, self.image_key) - xrec, qloss, ind = self(x, return_pred_indices=True) - aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0, - self.global_step, - last_layer=self.get_last_layer(), - split="val"+suffix, - predicted_indices=ind - ) - - discloss, log_dict_disc = self.loss(qloss, x, xrec, 1, - self.global_step, - last_layer=self.get_last_layer(), - split="val"+suffix, - predicted_indices=ind - ) - rec_loss = log_dict_ae[f"val{suffix}/rec_loss"] - self.log(f"val{suffix}/rec_loss", rec_loss, - prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) - self.log(f"val{suffix}/aeloss", aeloss, - prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) - if version.parse(pl.__version__) >= version.parse('1.4.0'): - del log_dict_ae[f"val{suffix}/rec_loss"] - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr_d = self.learning_rate - lr_g = self.lr_g_factor*self.learning_rate - print("lr_d", lr_d) - print("lr_g", lr_g) - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quantize.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr_g, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr_d, betas=(0.5, 0.9)) - - if self.scheduler_config is not None: - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt_ae, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }, - { - 'scheduler': LambdaLR(opt_disc, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }, - ] - return [opt_ae, opt_disc], scheduler - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if only_inputs: - log["inputs"] = x - return log - xrec, _ = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["inputs"] = x - log["reconstructions"] = xrec - if plot_ema: - with self.ema_scope(): - xrec_ema, _ = self(x) - if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema) - log["reconstructions_ema"] = xrec_ema - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x - - -class VQModelInterface(VQModel): - def __init__(self, embed_dim, *args, **kwargs): - super().__init__(embed_dim=embed_dim, *args, **kwargs) - self.embed_dim = embed_dim - - def encode(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - return h - - def decode(self, h, force_not_quantize=False): - # also go through quantization layer - if not force_not_quantize: - quant, emb_loss, info = self.quantize(h) - else: - quant = h - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - - -class AutoencoderKL(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - ): - super().__init__() - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - def encode(self, x): - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior - - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - return dec - - def forward(self, input, sample_posterior=True): - posterior = self.encode(input) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val") - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val") - - self.log("val/rec_loss", log_dict_ae["val/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - log["inputs"] = x - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x - - -class IdentityFirstStage(torch.nn.Module): - def __init__(self, *args, vq_interface=False, **kwargs): - self.vq_interface = vq_interface # TODO: Should be true by default but check to not break older stuff - super().__init__() - - def encode(self, x, *args, **kwargs): - return x - - def decode(self, x, *args, **kwargs): - return x - - def quantize(self, x, *args, **kwargs): - if self.vq_interface: - return x, None, [None, None, None] - return x - - def forward(self, x, *args, **kwargs): - return x diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py deleted file mode 100644 index 787681b03e9ec2fd4490de10cdc95e58c893c8b5..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py +++ /dev/null @@ -1,482 +0,0 @@ -import collections -import operator - -from .providers import AbstractResolver -from .structs import DirectedGraph, IteratorMapping, build_iter_view - -RequirementInformation = collections.namedtuple( - "RequirementInformation", ["requirement", "parent"] -) - - -class ResolverException(Exception): - """A base class for all exceptions raised by this module. - - Exceptions derived by this class should all be handled in this module. Any - bubbling pass the resolver should be treated as a bug. - """ - - -class RequirementsConflicted(ResolverException): - def __init__(self, criterion): - super(RequirementsConflicted, self).__init__(criterion) - self.criterion = criterion - - def __str__(self): - return "Requirements conflict: {}".format( - ", ".join(repr(r) for r in self.criterion.iter_requirement()), - ) - - -class InconsistentCandidate(ResolverException): - def __init__(self, candidate, criterion): - super(InconsistentCandidate, self).__init__(candidate, criterion) - self.candidate = candidate - self.criterion = criterion - - def __str__(self): - return "Provided candidate {!r} does not satisfy {}".format( - self.candidate, - ", ".join(repr(r) for r in self.criterion.iter_requirement()), - ) - - -class Criterion(object): - """Representation of possible resolution results of a package. - - This holds three attributes: - - * `information` is a collection of `RequirementInformation` pairs. - Each pair is a requirement contributing to this criterion, and the - candidate that provides the requirement. - * `incompatibilities` is a collection of all known not-to-work candidates - to exclude from consideration. - * `candidates` is a collection containing all possible candidates deducted - from the union of contributing requirements and known incompatibilities. - It should never be empty, except when the criterion is an attribute of a - raised `RequirementsConflicted` (in which case it is always empty). - - .. note:: - This class is intended to be externally immutable. **Do not** mutate - any of its attribute containers. - """ - - def __init__(self, candidates, information, incompatibilities): - self.candidates = candidates - self.information = information - self.incompatibilities = incompatibilities - - def __repr__(self): - requirements = ", ".join( - "({!r}, via={!r})".format(req, parent) - for req, parent in self.information - ) - return "Criterion({})".format(requirements) - - def iter_requirement(self): - return (i.requirement for i in self.information) - - def iter_parent(self): - return (i.parent for i in self.information) - - -class ResolutionError(ResolverException): - pass - - -class ResolutionImpossible(ResolutionError): - def __init__(self, causes): - super(ResolutionImpossible, self).__init__(causes) - # causes is a list of RequirementInformation objects - self.causes = causes - - -class ResolutionTooDeep(ResolutionError): - def __init__(self, round_count): - super(ResolutionTooDeep, self).__init__(round_count) - self.round_count = round_count - - -# Resolution state in a round. -State = collections.namedtuple("State", "mapping criteria backtrack_causes") - - -class Resolution(object): - """Stateful resolution object. - - This is designed as a one-off object that holds information to kick start - the resolution process, and holds the results afterwards. - """ - - def __init__(self, provider, reporter): - self._p = provider - self._r = reporter - self._states = [] - - @property - def state(self): - try: - return self._states[-1] - except IndexError: - raise AttributeError("state") - - def _push_new_state(self): - """Push a new state into history. - - This new state will be used to hold resolution results of the next - coming round. - """ - base = self._states[-1] - state = State( - mapping=base.mapping.copy(), - criteria=base.criteria.copy(), - backtrack_causes=base.backtrack_causes[:], - ) - self._states.append(state) - - def _add_to_criteria(self, criteria, requirement, parent): - self._r.adding_requirement(requirement=requirement, parent=parent) - - identifier = self._p.identify(requirement_or_candidate=requirement) - criterion = criteria.get(identifier) - if criterion: - incompatibilities = list(criterion.incompatibilities) - else: - incompatibilities = [] - - matches = self._p.find_matches( - identifier=identifier, - requirements=IteratorMapping( - criteria, - operator.methodcaller("iter_requirement"), - {identifier: [requirement]}, - ), - incompatibilities=IteratorMapping( - criteria, - operator.attrgetter("incompatibilities"), - {identifier: incompatibilities}, - ), - ) - - if criterion: - information = list(criterion.information) - information.append(RequirementInformation(requirement, parent)) - else: - information = [RequirementInformation(requirement, parent)] - - criterion = Criterion( - candidates=build_iter_view(matches), - information=information, - incompatibilities=incompatibilities, - ) - if not criterion.candidates: - raise RequirementsConflicted(criterion) - criteria[identifier] = criterion - - def _get_preference(self, name): - return self._p.get_preference( - identifier=name, - resolutions=self.state.mapping, - candidates=IteratorMapping( - self.state.criteria, - operator.attrgetter("candidates"), - ), - information=IteratorMapping( - self.state.criteria, - operator.attrgetter("information"), - ), - backtrack_causes=self.state.backtrack_causes, - ) - - def _is_current_pin_satisfying(self, name, criterion): - try: - current_pin = self.state.mapping[name] - except KeyError: - return False - return all( - self._p.is_satisfied_by(requirement=r, candidate=current_pin) - for r in criterion.iter_requirement() - ) - - def _get_updated_criteria(self, candidate): - criteria = self.state.criteria.copy() - for requirement in self._p.get_dependencies(candidate=candidate): - self._add_to_criteria(criteria, requirement, parent=candidate) - return criteria - - def _attempt_to_pin_criterion(self, name): - criterion = self.state.criteria[name] - - causes = [] - for candidate in criterion.candidates: - try: - criteria = self._get_updated_criteria(candidate) - except RequirementsConflicted as e: - causes.append(e.criterion) - continue - - # Check the newly-pinned candidate actually works. This should - # always pass under normal circumstances, but in the case of a - # faulty provider, we will raise an error to notify the implementer - # to fix find_matches() and/or is_satisfied_by(). - satisfied = all( - self._p.is_satisfied_by(requirement=r, candidate=candidate) - for r in criterion.iter_requirement() - ) - if not satisfied: - raise InconsistentCandidate(candidate, criterion) - - self._r.pinning(candidate=candidate) - self.state.criteria.update(criteria) - - # Put newly-pinned candidate at the end. This is essential because - # backtracking looks at this mapping to get the last pin. - self.state.mapping.pop(name, None) - self.state.mapping[name] = candidate - - return [] - - # All candidates tried, nothing works. This criterion is a dead - # end, signal for backtracking. - return causes - - def _backtrack(self): - """Perform backtracking. - - When we enter here, the stack is like this:: - - [ state Z ] - [ state Y ] - [ state X ] - .... earlier states are irrelevant. - - 1. No pins worked for Z, so it does not have a pin. - 2. We want to reset state Y to unpinned, and pin another candidate. - 3. State X holds what state Y was before the pin, but does not - have the incompatibility information gathered in state Y. - - Each iteration of the loop will: - - 1. Discard Z. - 2. Discard Y but remember its incompatibility information gathered - previously, and the failure we're dealing with right now. - 3. Push a new state Y' based on X, and apply the incompatibility - information from Y to Y'. - 4a. If this causes Y' to conflict, we need to backtrack again. Make Y' - the new Z and go back to step 2. - 4b. If the incompatibilities apply cleanly, end backtracking. - """ - while len(self._states) >= 3: - # Remove the state that triggered backtracking. - del self._states[-1] - - # Retrieve the last candidate pin and known incompatibilities. - broken_state = self._states.pop() - name, candidate = broken_state.mapping.popitem() - incompatibilities_from_broken = [ - (k, list(v.incompatibilities)) - for k, v in broken_state.criteria.items() - ] - - # Also mark the newly known incompatibility. - incompatibilities_from_broken.append((name, [candidate])) - - self._r.backtracking(candidate=candidate) - - # Create a new state from the last known-to-work one, and apply - # the previously gathered incompatibility information. - def _patch_criteria(): - for k, incompatibilities in incompatibilities_from_broken: - if not incompatibilities: - continue - try: - criterion = self.state.criteria[k] - except KeyError: - continue - matches = self._p.find_matches( - identifier=k, - requirements=IteratorMapping( - self.state.criteria, - operator.methodcaller("iter_requirement"), - ), - incompatibilities=IteratorMapping( - self.state.criteria, - operator.attrgetter("incompatibilities"), - {k: incompatibilities}, - ), - ) - candidates = build_iter_view(matches) - if not candidates: - return False - incompatibilities.extend(criterion.incompatibilities) - self.state.criteria[k] = Criterion( - candidates=candidates, - information=list(criterion.information), - incompatibilities=incompatibilities, - ) - return True - - self._push_new_state() - success = _patch_criteria() - - # It works! Let's work on this new state. - if success: - return True - - # State does not work after applying known incompatibilities. - # Try the still previous state. - - # No way to backtrack anymore. - return False - - def resolve(self, requirements, max_rounds): - if self._states: - raise RuntimeError("already resolved") - - self._r.starting() - - # Initialize the root state. - self._states = [ - State( - mapping=collections.OrderedDict(), - criteria={}, - backtrack_causes=[], - ) - ] - for r in requirements: - try: - self._add_to_criteria(self.state.criteria, r, parent=None) - except RequirementsConflicted as e: - raise ResolutionImpossible(e.criterion.information) - - # The root state is saved as a sentinel so the first ever pin can have - # something to backtrack to if it fails. The root state is basically - # pinning the virtual "root" package in the graph. - self._push_new_state() - - for round_index in range(max_rounds): - self._r.starting_round(index=round_index) - - unsatisfied_names = [ - key - for key, criterion in self.state.criteria.items() - if not self._is_current_pin_satisfying(key, criterion) - ] - - # All criteria are accounted for. Nothing more to pin, we are done! - if not unsatisfied_names: - self._r.ending(state=self.state) - return self.state - - # Choose the most preferred unpinned criterion to try. - name = min(unsatisfied_names, key=self._get_preference) - failure_causes = self._attempt_to_pin_criterion(name) - - if failure_causes: - causes = [i for c in failure_causes for i in c.information] - # Backtrack if pinning fails. The backtrack process puts us in - # an unpinned state, so we can work on it in the next round. - self._r.resolving_conflicts(causes=causes) - success = self._backtrack() - self.state.backtrack_causes[:] = causes - - # Dead ends everywhere. Give up. - if not success: - raise ResolutionImpossible(self.state.backtrack_causes) - else: - # Pinning was successful. Push a new state to do another pin. - self._push_new_state() - - self._r.ending_round(index=round_index, state=self.state) - - raise ResolutionTooDeep(max_rounds) - - -def _has_route_to_root(criteria, key, all_keys, connected): - if key in connected: - return True - if key not in criteria: - return False - for p in criteria[key].iter_parent(): - try: - pkey = all_keys[id(p)] - except KeyError: - continue - if pkey in connected: - connected.add(key) - return True - if _has_route_to_root(criteria, pkey, all_keys, connected): - connected.add(key) - return True - return False - - -Result = collections.namedtuple("Result", "mapping graph criteria") - - -def _build_result(state): - mapping = state.mapping - all_keys = {id(v): k for k, v in mapping.items()} - all_keys[id(None)] = None - - graph = DirectedGraph() - graph.add(None) # Sentinel as root dependencies' parent. - - connected = {None} - for key, criterion in state.criteria.items(): - if not _has_route_to_root(state.criteria, key, all_keys, connected): - continue - if key not in graph: - graph.add(key) - for p in criterion.iter_parent(): - try: - pkey = all_keys[id(p)] - except KeyError: - continue - if pkey not in graph: - graph.add(pkey) - graph.connect(pkey, key) - - return Result( - mapping={k: v for k, v in mapping.items() if k in connected}, - graph=graph, - criteria=state.criteria, - ) - - -class Resolver(AbstractResolver): - """The thing that performs the actual resolution work.""" - - base_exception = ResolverException - - def resolve(self, requirements, max_rounds=100): - """Take a collection of constraints, spit out the resolution result. - - The return value is a representation to the final resolution result. It - is a tuple subclass with three public members: - - * `mapping`: A dict of resolved candidates. Each key is an identifier - of a requirement (as returned by the provider's `identify` method), - and the value is the resolved candidate. - * `graph`: A `DirectedGraph` instance representing the dependency tree. - The vertices are keys of `mapping`, and each edge represents *why* - a particular package is included. A special vertex `None` is - included to represent parents of user-supplied requirements. - * `criteria`: A dict of "criteria" that hold detailed information on - how edges in the graph are derived. Each key is an identifier of a - requirement, and the value is a `Criterion` instance. - - The following exceptions may be raised if a resolution cannot be found: - - * `ResolutionImpossible`: A resolution cannot be found for the given - combination of requirements. The `causes` attribute of the - exception is a list of (requirement, parent), giving the - requirements that could not be satisfied. - * `ResolutionTooDeep`: The dependency tree is too deeply nested and - the resolver gave up. This is usually caused by a circular - dependency, but you can try to resolve this by increasing the - `max_rounds` argument. - """ - resolution = Resolution(self.provider, self.reporter) - state = resolution.resolve(requirements, max_rounds=max_rounds) - return _build_result(state) diff --git a/spaces/Rayzggz/illi-Bert-VITS2/server.py b/spaces/Rayzggz/illi-Bert-VITS2/server.py deleted file mode 100644 index 2ecd50307fdae5c5e26d8cc9453de296532b95ff..0000000000000000000000000000000000000000 --- a/spaces/Rayzggz/illi-Bert-VITS2/server.py +++ /dev/null @@ -1,170 +0,0 @@ -from flask import Flask, request, Response -from io import BytesIO -import torch -from av import open as avopen - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -from scipy.io import wavfile - -# Flask Init -app = Flask(__name__) -app.config["JSON_AS_ASCII"] = False - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - assert bert.shape[-1] == len(phone), phone - - if language_str == "ZH": - bert = bert - ja_bert = torch.zeros(768, len(phone)) - elif language_str == "JA": - ja_bert = bert - bert = torch.zeros(1024, len(phone)) - else: - bert = torch.zeros(1024, len(phone)) - ja_bert = torch.zeros(768, len(phone)) - assert bert.shape[-1] == len( - phone - ), f"Bert seq len {bert.shape[-1]} != {len(phone)}" - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, ja_bert, phone, tone, language - - -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid, language): - bert, ja_bert, phones, tones, lang_ids = get_text(text, language, hps) - with torch.no_grad(): - x_tst = phones.to(dev).unsqueeze(0) - tones = tones.to(dev).unsqueeze(0) - lang_ids = lang_ids.to(dev).unsqueeze(0) - bert = bert.to(dev).unsqueeze(0) - ja_bert = ja_bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(dev) - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(dev) - audio = ( - net_g.infer( - x_tst, - x_tst_lengths, - speakers, - tones, - lang_ids, - bert, - ja_bert, - sdp_ratio=sdp_ratio, - noise_scale=noise_scale, - noise_scale_w=noise_scale_w, - length_scale=length_scale, - )[0][0, 0] - .data.cpu() - .float() - .numpy() - ) - return audio - - -def replace_punctuation(text, i=2): - punctuation = ",。?!" - for char in punctuation: - text = text.replace(char, char * i) - return text - - -def wav2(i, o, format): - inp = avopen(i, "rb") - out = avopen(o, "wb", format=format) - if format == "ogg": - format = "libvorbis" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): - out.mux(p) - - for p in ostream.encode(None): - out.mux(p) - - out.close() - inp.close() - - -# Load Generator -hps = utils.get_hparams_from_file("./configs/config.json") - -dev = "cuda" -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model, -).to(dev) -_ = net_g.eval() - -_ = utils.load_checkpoint("logs/G_649000.pth", net_g, None, skip_optimizer=True) - - -@app.route("/") -def main(): - try: - speaker = request.args.get("speaker") - text = request.args.get("text").replace("/n", "") - sdp_ratio = float(request.args.get("sdp_ratio", 0.2)) - noise = float(request.args.get("noise", 0.5)) - noisew = float(request.args.get("noisew", 0.6)) - length = float(request.args.get("length", 1.2)) - language = request.args.get("language") - if length >= 2: - return "Too big length" - if len(text) >= 250: - return "Too long text" - fmt = request.args.get("format", "wav") - if None in (speaker, text): - return "Missing Parameter" - if fmt not in ("mp3", "wav", "ogg"): - return "Invalid Format" - if language not in ("JA", "ZH"): - return "Invalid language" - except: - return "Invalid Parameter" - - with torch.no_grad(): - audio = infer( - text, - sdp_ratio=sdp_ratio, - noise_scale=noise, - noise_scale_w=noisew, - length_scale=length, - sid=speaker, - language=language, - ) - - with BytesIO() as wav: - wavfile.write(wav, hps.data.sampling_rate, audio) - torch.cuda.empty_cache() - if fmt == "wav": - return Response(wav.getvalue(), mimetype="audio/wav") - wav.seek(0, 0) - with BytesIO() as ofp: - wav2(wav, ofp, fmt) - return Response( - ofp.getvalue(), mimetype="audio/mpeg" if fmt == "mp3" else "audio/ogg" - ) diff --git a/spaces/Realcat/image-matching-webui/hloc/matchers/dkm.py b/spaces/Realcat/image-matching-webui/hloc/matchers/dkm.py deleted file mode 100644 index 5de526bc7c3ab1f65527c5614ea616be76f0dd43..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/matchers/dkm.py +++ /dev/null @@ -1,61 +0,0 @@ -import sys -from pathlib import Path -import torch -from PIL import Image -import subprocess -from ..utils.base_model import BaseModel -from .. import logger - -sys.path.append(str(Path(__file__).parent / "../../third_party")) -from DKM.dkm import DKMv3_outdoor - -dkm_path = Path(__file__).parent / "../../third_party/DKM" -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -class DKMv3(BaseModel): - default_conf = { - "model_name": "DKMv3_outdoor.pth", - "match_threshold": 0.2, - "checkpoint_dir": dkm_path / "pretrained", - } - required_inputs = [ - "image0", - "image1", - ] - # Models exported using - dkm_models = { - "DKMv3_outdoor.pth": "https://github.com/Parskatt/storage/releases/download/dkmv3/DKMv3_outdoor.pth", - "DKMv3_indoor.pth": "https://github.com/Parskatt/storage/releases/download/dkmv3/DKMv3_indoor.pth", - } - - def _init(self, conf): - model_path = dkm_path / "pretrained" / conf["model_name"] - - # Download the model. - if not model_path.exists(): - model_path.parent.mkdir(exist_ok=True) - link = self.dkm_models[conf["model_name"]] - cmd = ["wget", link, "-O", str(model_path)] - logger.info(f"Downloading the DKMv3 model with `{cmd}`.") - subprocess.run(cmd, check=True) - logger.info(f"Loading DKMv3 model...") - self.net = DKMv3_outdoor(path_to_weights=str(model_path), device=device) - - def _forward(self, data): - img0 = data["image0"].cpu().numpy().squeeze() * 255 - img1 = data["image1"].cpu().numpy().squeeze() * 255 - img0 = img0.transpose(1, 2, 0) - img1 = img1.transpose(1, 2, 0) - img0 = Image.fromarray(img0.astype("uint8")) - img1 = Image.fromarray(img1.astype("uint8")) - W_A, H_A = img0.size - W_B, H_B = img1.size - - warp, certainty = self.net.match(img0, img1, device=device) - matches, certainty = self.net.sample(warp, certainty) - kpts1, kpts2 = self.net.to_pixel_coordinates( - matches, H_A, W_A, H_B, W_B - ) - pred = {} - pred["keypoints0"], pred["keypoints1"] = kpts1, kpts2 - return pred diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/demo/demo_scoremap.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/demo/demo_scoremap.py deleted file mode 100644 index 1a0a2b2470783c69753960725aee1b689b0cb2cc..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/demo/demo_scoremap.py +++ /dev/null @@ -1,24 +0,0 @@ -import torch -from PIL import Image -import numpy as np - -from DeDoDe import dedode_detector_L -from DeDoDe.utils import tensor_to_pil - -detector = dedode_detector_L(weights=torch.load("dedode_detector_l.pth")) -H, W = 768, 768 -im_path = "assets/im_A.jpg" - -out = detector.detect_from_path(im_path, dense=True, H=H, W=W) - -logit_map = out["dense_keypoint_logits"].clone() -min = logit_map.max() - 3 -logit_map[logit_map < min] = min -logit_map = (logit_map - min) / (logit_map.max() - min) -logit_map = logit_map.cpu()[0].expand(3, H, W) -im_A = torch.tensor(np.array(Image.open(im_path).resize((W, H))) / 255.0).permute( - 2, 0, 1 -) -tensor_to_pil(logit_map * logit_map + 0.15 * (1 - logit_map) * im_A).save( - "demo/dense_logits.png" -) diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/setup.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/setup.py deleted file mode 100644 index 94d1fd8ed2e5ac769222afce4f084ac19029a2a4..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/setup.py +++ /dev/null @@ -1,10 +0,0 @@ -from setuptools import setup, find_packages - - -setup( - name="DeDoDe", - packages=find_packages(include=["DeDoDe*"]), - install_requires=open("requirements.txt", "r").read().split("\n"), - version="0.0.1", - author="Johan Edstedt", -) diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/profiler.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/profiler.py deleted file mode 100644 index 0275ea34e3eb9cceb4ed809bebeda209749f5bc5..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/profiler.py +++ /dev/null @@ -1,40 +0,0 @@ -import torch -from pytorch_lightning.profiler import SimpleProfiler, PassThroughProfiler -from contextlib import contextmanager -from pytorch_lightning.utilities import rank_zero_only - - -class InferenceProfiler(SimpleProfiler): - """ - This profiler records duration of actions with cuda.synchronize() - Use this in test time. - """ - - def __init__(self): - super().__init__() - self.start = rank_zero_only(self.start) - self.stop = rank_zero_only(self.stop) - self.summary = rank_zero_only(self.summary) - - @contextmanager - def profile(self, action_name: str) -> None: - try: - torch.cuda.synchronize() - self.start(action_name) - yield action_name - finally: - torch.cuda.synchronize() - self.stop(action_name) - - -def build_profiler(name): - if name == "inference": - return InferenceProfiler() - elif name == "pytorch": - from pytorch_lightning.profiler import PyTorchProfiler - - return PyTorchProfiler(use_cuda=True, profile_memory=True, row_limit=100) - elif name is None: - return PassThroughProfiler() - else: - raise ValueError(f"Invalid profiler: {name}") diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/fused_act.py b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/fused_act.py deleted file mode 100644 index 74815adafbf7a37d5d4def41ac60dbdeefdbff30..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/fused_act.py +++ /dev/null @@ -1,34 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, bias=True, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - if bias: - self.bias = nn.Parameter(torch.zeros(channel)) - - else: - self.bias = None - - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, inputs): - return fused_leaky_relu(inputs, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(inputs, bias=None, negative_slope=0.2, scale=2 ** 0.5): - if bias is not None: - rest_dim = [1] * (inputs.ndim - bias.ndim - 1) - return ( - F.leaky_relu( - inputs + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope - ) - * scale - ) - - else: - return F.leaky_relu(inputs, negative_slope=negative_slope) * scale \ No newline at end of file diff --git a/spaces/Reha2704/VToonify/vtoonify/train_vtoonify_t.py b/spaces/Reha2704/VToonify/vtoonify/train_vtoonify_t.py deleted file mode 100644 index 147d5f38a5b25822ab05f089173cd96c6aa22c12..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/train_vtoonify_t.py +++ /dev/null @@ -1,432 +0,0 @@ -import os -#os.environ['CUDA_VISIBLE_DEVICES'] = "0" -import argparse -import math -import random - -import numpy as np -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils import data -import torch.distributed as dist -from torchvision import transforms, utils -from tqdm import tqdm -from PIL import Image -from util import * -from model.stylegan import lpips -from model.stylegan.model import Generator, Downsample -from model.vtoonify import VToonify, ConditionalDiscriminator -from model.bisenet.model import BiSeNet -from model.simple_augment import random_apply_affine -from model.stylegan.distributed import ( - get_rank, - synchronize, - reduce_loss_dict, - reduce_sum, - get_world_size, -) - -# In the paper, --weight for each style is set as follows, -# cartoon: default -# caricature: default -# pixar: 1 1 1 1 1 1 1 1 1 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 -# comic: 0.5 0.5 0.5 0.5 0.5 0.5 0.5 1 1 1 1 1 1 1 1 1 1 1 -# arcane: 0.5 0.5 0.5 0.5 0.5 0.5 0.5 1 1 1 1 1 1 1 1 1 1 1 - -class TrainOptions(): - def __init__(self): - - self.parser = argparse.ArgumentParser(description="Train VToonify-T") - self.parser.add_argument("--iter", type=int, default=2000, help="total training iterations") - self.parser.add_argument("--batch", type=int, default=8, help="batch sizes for each gpus") - self.parser.add_argument("--lr", type=float, default=0.0001, help="learning rate") - self.parser.add_argument("--local_rank", type=int, default=0, help="local rank for distributed training") - self.parser.add_argument("--start_iter", type=int, default=0, help="start iteration") - self.parser.add_argument("--save_every", type=int, default=30000, help="interval of saving a checkpoint") - self.parser.add_argument("--save_begin", type=int, default=30000, help="when to start saving a checkpoint") - self.parser.add_argument("--log_every", type=int, default=200, help="interval of saving an intermediate image result") - - self.parser.add_argument("--adv_loss", type=float, default=0.01, help="the weight of adv loss") - self.parser.add_argument("--grec_loss", type=float, default=0.1, help="the weight of mse recontruction loss") - self.parser.add_argument("--perc_loss", type=float, default=0.01, help="the weight of perceptual loss") - self.parser.add_argument("--tmp_loss", type=float, default=1.0, help="the weight of temporal consistency loss") - - self.parser.add_argument("--encoder_path", type=str, default=None, help="path to the pretrained encoder model") - self.parser.add_argument("--direction_path", type=str, default='./checkpoint/directions.npy', help="path to the editing direction latents") - self.parser.add_argument("--stylegan_path", type=str, default='./checkpoint/stylegan2-ffhq-config-f.pt', help="path to the stylegan model") - self.parser.add_argument("--finetunegan_path", type=str, default='./checkpoint/cartoon/finetune-000600.pt', help="path to the finetuned stylegan model") - self.parser.add_argument("--weight", type=float, nargs=18, default=[1]*9+[0]*9, help="the weight for blending two models") - self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model") - self.parser.add_argument("--style_encoder_path", type=str, default='./checkpoint/encoder.pt', help="path of the style encoder") - - self.parser.add_argument("--name", type=str, default='vtoonify_t_cartoon', help="saved model name") - self.parser.add_argument("--pretrain", action="store_true", help="if true, only pretrain the encoder") - - def parse(self): - self.opt = self.parser.parse_args() - if self.opt.encoder_path is None: - self.opt.encoder_path = os.path.join('./checkpoint/', self.opt.name, 'pretrain.pt') - args = vars(self.opt) - if self.opt.local_rank == 0: - print('Load options') - for name, value in sorted(args.items()): - print('%s: %s' % (str(name), str(value))) - return self.opt - - -# pretrain E of vtoonify. -# We train E so that its the last-layer feature matches the original 8-th-layer input feature of G1 -# See Model initialization in Sec. 4.1.2 for the detail -def pretrain(args, generator, g_optim, g_ema, parsingpredictor, down, directions, basemodel, device): - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01) - - recon_loss = torch.tensor(0.0, device=device) - loss_dict = {} - - if args.distributed: - g_module = generator.module - else: - g_module = generator - - accum = 0.5 ** (32 / (10 * 1000)) - - requires_grad(g_module.encoder, True) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - break - - with torch.no_grad(): - # during pretraining, no geometric transformations are applied. - noise_sample = torch.randn(args.batch, 512).cuda() - ws_ = basemodel.style(noise_sample).unsqueeze(1).repeat(1,18,1) # random w - ws_[:, 3:7] += directions[torch.randint(0, directions.shape[0], (args.batch,)), 3:7] # w''=w'=w+n - img_gen, _ = basemodel([ws_], input_is_latent=True, truncation=0.5, truncation_latent=0) # image part of x' - img_gen = torch.clamp(img_gen, -1, 1).detach() - img_gen512 = down(img_gen.detach()) - img_gen256 = down(img_gen512.detach()) # image part of x'_down - mask512 = parsingpredictor(2*torch.clamp(img_gen512, -1, 1))[0] - real_input = torch.cat((img_gen256, down(mask512)/16.0), dim=1).detach() # x'_down - # f_G1^(8)(w'') - real_feat, real_skip = g_ema.generator([ws_], input_is_latent=True, return_feature_ind = 6, truncation=0.5, truncation_latent=0) - real_feat = real_feat.detach() - real_skip = real_skip.detach() - - # f_E^(last)(x'_down) - fake_feat, fake_skip = generator(real_input, style=None, return_feat=True) - - # L_E in Eq.(1) - recon_loss = F.mse_loss(fake_feat, real_feat) + F.mse_loss(fake_skip, real_skip) - - loss_dict["emse"] = recon_loss - - generator.zero_grad() - recon_loss.backward() - g_optim.step() - - accumulate(g_ema.encoder, g_module.encoder, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - emse_loss_val = loss_reduced["emse"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"iter: {i:d}; emse: {emse_loss_val:.3f}" - ) - ) - - if ((i+1) >= args.save_begin and (i+1) % args.save_every == 0) or (i+1) == args.iter: - if (i+1) == args.iter: - savename = f"checkpoint/%s/pretrain.pt"%(args.name) - else: - savename = f"checkpoint/%s/pretrain-%05d.pt"%(args.name, i+1) - torch.save( - { - #"g": g_module.encoder.state_dict(), - "g_ema": g_ema.encoder.state_dict(), - }, - savename, - ) - - -# generate paired data and train vtoonify, see Sec. 4.1.2 for the detail -def train(args, generator, discriminator, g_optim, d_optim, g_ema, percept, parsingpredictor, down, pspencoder, directions, basemodel, device): - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, smoothing=0.01, ncols=120, dynamic_ncols=False) - - d_loss = torch.tensor(0.0, device=device) - g_loss = torch.tensor(0.0, device=device) - grec_loss = torch.tensor(0.0, device=device) - gfeat_loss = torch.tensor(0.0, device=device) - temporal_loss = torch.tensor(0.0, device=device) - loss_dict = {} - - if args.distributed: - g_module = generator.module - d_module = discriminator.module - - else: - g_module = generator - d_module = discriminator - - accum = 0.5 ** (32 / (10 * 1000)) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - break - - ###### This part is for data generation. Generate pair (x, y, w'') as in Fig. 5 of the paper - with torch.no_grad(): - noise_sample = torch.randn(args.batch, 512).cuda() - wc = basemodel.style(noise_sample).unsqueeze(1).repeat(1,18,1) # random w - wc[:, 3:7] += directions[torch.randint(0, directions.shape[0], (args.batch,)), 3:7] # w'=w+n - wc = wc.detach() - xc, _ = basemodel([wc], input_is_latent=True, truncation=0.5, truncation_latent=0) - xc = torch.clamp(xc, -1, 1).detach() # x' - xl = pspencoder(F.adaptive_avg_pool2d(xc, 256)) - xl = basemodel.style(xl.reshape(xl.shape[0]*xl.shape[1], xl.shape[2])).reshape(xl.shape) # E_s(x'_down) - xl = torch.cat((wc[:,0:7]*0.5, xl[:,7:18]), dim=1).detach() # w'' = concatenate w' and E_s(x'_down) - xs, _ = g_ema.generator([xl], input_is_latent=True) - xs = torch.clamp(xs, -1, 1).detach() # y' - # during training, random geometric transformations are applied. - imgs, _ = random_apply_affine(torch.cat((xc.detach(),xs), dim=1), 0.2, None) - real_input1024 = imgs[:,0:3].detach() # image part of x - real_input512 = down(real_input1024).detach() - real_input256 = down(real_input512).detach() - mask512 = parsingpredictor(2*real_input512)[0] - mask256 = down(mask512).detach() - mask = F.adaptive_avg_pool2d(mask512, 1024).detach() # parsing part of x - real_output = imgs[:,3:].detach() # y - real_input = torch.cat((real_input256, mask256/16.0), dim=1) # x_down - # for log, sample a fixed input-output pair (x_down, y, w'') - if idx == 0 or i == 0: - samplein = real_input.clone().detach() - sampleout = real_output.clone().detach() - samplexl = xl.clone().detach() - - ###### This part is for training discriminator - - requires_grad(g_module.encoder, False) - requires_grad(g_module.fusion_out, False) - requires_grad(g_module.fusion_skip, False) - requires_grad(discriminator, True) - - fake_output = generator(real_input, xl) - fake_pred = discriminator(F.adaptive_avg_pool2d(fake_output, 256)) - real_pred = discriminator(F.adaptive_avg_pool2d(real_output, 256)) - - # L_adv in Eq.(3) - d_loss = d_logistic_loss(real_pred, fake_pred) * args.adv_loss - loss_dict["d"] = d_loss - - discriminator.zero_grad() - d_loss.backward() - d_optim.step() - - ###### This part is for training generator (encoder and fusion modules) - - requires_grad(g_module.encoder, True) - requires_grad(g_module.fusion_out, True) - requires_grad(g_module.fusion_skip, True) - requires_grad(discriminator, False) - - fake_output = generator(real_input, xl) - fake_pred = discriminator(F.adaptive_avg_pool2d(fake_output, 256)) - # L_adv in Eq.(3) - g_loss = g_nonsaturating_loss(fake_pred) * args.adv_loss - # L_rec in Eq.(2) - grec_loss = F.mse_loss(fake_output, real_output) * args.grec_loss - gfeat_loss = percept(F.adaptive_avg_pool2d(fake_output, 512), # 1024 will out of memory - F.adaptive_avg_pool2d(real_output, 512)).sum() * args.perc_loss # 256 will get blurry output - - loss_dict["g"] = g_loss - loss_dict["gr"] = grec_loss - loss_dict["gf"] = gfeat_loss - - w = random.randint(0,1024-896) - h = random.randint(0,1024-896) - crop_input = torch.cat((real_input1024[:,:,w:w+896,h:h+896], mask[:,:,w:w+896,h:h+896]/16.0), dim=1).detach() - crop_input = down(down(crop_input)) - crop_fake_output = fake_output[:,:,w:w+896,h:h+896] - fake_crop_output = generator(crop_input, xl) - # L_tmp in Eq.(4), gradually increase the weight of L_tmp - temporal_loss = ((fake_crop_output-crop_fake_output)**2).mean() * max(idx/(args.iter/2.0)-1, 0) * args.tmp_loss - loss_dict["tp"] = temporal_loss - - generator.zero_grad() - (g_loss + grec_loss + gfeat_loss + temporal_loss).backward() - g_optim.step() - - accumulate(g_ema.encoder, g_module.encoder, accum) - accumulate(g_ema.fusion_out, g_module.fusion_out, accum) - accumulate(g_ema.fusion_skip, g_module.fusion_skip, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - d_loss_val = loss_reduced["d"].mean().item() - g_loss_val = loss_reduced["g"].mean().item() - gr_loss_val = loss_reduced["gr"].mean().item() - gf_loss_val = loss_reduced["gf"].mean().item() - tmp_loss_val = loss_reduced["tp"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"iter: {i:d}; advd: {d_loss_val:.3f}; advg: {g_loss_val:.3f}; mse: {gr_loss_val:.3f}; " - f"perc: {gf_loss_val:.3f}; tmp: {tmp_loss_val:.3f}" - ) - ) - - if i % args.log_every == 0 or (i+1) == args.iter: - with torch.no_grad(): - g_ema.eval() - sample = g_ema(samplein, samplexl) - sample = F.interpolate(torch.cat((sampleout, sample), dim=0), 256) - utils.save_image( - sample, - f"log/%s/%05d.jpg"%(args.name, i), - nrow=int(args.batch), - normalize=True, - range=(-1, 1), - ) - - if ((i+1) >= args.save_begin and (i+1) % args.save_every == 0) or (i+1) == args.iter: - if (i+1) == args.iter: - savename = f"checkpoint/%s/vtoonify.pt"%(args.name) - else: - savename = f"checkpoint/%s/vtoonify_%05d.pt"%(args.name, i+1) - torch.save( - { - #"g": g_module.state_dict(), - #"d": d_module.state_dict(), - "g_ema": g_ema.state_dict(), - }, - savename, - ) - - - -if __name__ == "__main__": - - device = "cuda" - parser = TrainOptions() - args = parser.parse() - if args.local_rank == 0: - print('*'*98) - if not os.path.exists("log/%s/"%(args.name)): - os.makedirs("log/%s/"%(args.name)) - if not os.path.exists("checkpoint/%s/"%(args.name)): - os.makedirs("checkpoint/%s/"%(args.name)) - - n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1 - args.distributed = n_gpu > 1 - - if args.distributed: - torch.cuda.set_device(args.local_rank) - torch.distributed.init_process_group(backend="nccl", init_method="env://") - synchronize() - - generator = VToonify(backbone = 'toonify').to(device) - generator.apply(weights_init) - g_ema = VToonify(backbone = 'toonify').to(device) - g_ema.eval() - - basemodel = Generator(1024, 512, 8, 2).to(device) # G0 - finetunemodel = Generator(1024, 512, 8, 2).to(device) - basemodel.load_state_dict(torch.load(args.stylegan_path, map_location=lambda storage, loc: storage)['g_ema']) - finetunemodel.load_state_dict(torch.load(args.finetunegan_path, map_location=lambda storage, loc: storage)['g_ema']) - fused_state_dict = blend_models(finetunemodel, basemodel, args.weight) # G1 - generator.generator.load_state_dict(fused_state_dict) # load G1 - g_ema.generator.load_state_dict(fused_state_dict) - requires_grad(basemodel, False) - requires_grad(generator.generator, False) - requires_grad(g_ema.generator, False) - - if not args.pretrain: - generator.encoder.load_state_dict(torch.load(args.encoder_path, map_location=lambda storage, loc: storage)["g_ema"]) - # we initialize the fusion modules to map f_G \otimes f_E to f_G. - for k in generator.fusion_out: - k.weight.data *= 0.01 - k.weight[:,0:k.weight.shape[0],1,1].data += torch.eye(k.weight.shape[0]).cuda() - for k in generator.fusion_skip: - k.weight.data *= 0.01 - k.weight[:,0:k.weight.shape[0],1,1].data += torch.eye(k.weight.shape[0]).cuda() - - accumulate(g_ema.encoder, generator.encoder, 0) - accumulate(g_ema.fusion_out, generator.fusion_out, 0) - accumulate(g_ema.fusion_skip, generator.fusion_skip, 0) - - g_parameters = list(generator.encoder.parameters()) - if not args.pretrain: - g_parameters = g_parameters + list(generator.fusion_out.parameters()) + list(generator.fusion_skip.parameters()) - - g_optim = optim.Adam( - g_parameters, - lr=args.lr, - betas=(0.9, 0.99), - ) - - if args.distributed: - generator = nn.parallel.DistributedDataParallel( - generator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - find_unused_parameters=True, - ) - - parsingpredictor = BiSeNet(n_classes=19) - parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage)) - parsingpredictor.to(device).eval() - requires_grad(parsingpredictor, False) - - # we apply gaussian blur to the images to avoid flickers caused during downsampling - down = Downsample(kernel=[1, 3, 3, 1], factor=2).to(device) - requires_grad(down, False) - - directions = torch.tensor(np.load(args.direction_path)).to(device) - - if not args.pretrain: - discriminator = ConditionalDiscriminator(256).to(device) - - d_optim = optim.Adam( - discriminator.parameters(), - lr=args.lr, - betas=(0.9, 0.99), - ) - - if args.distributed: - discriminator = nn.parallel.DistributedDataParallel( - discriminator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - find_unused_parameters=True, - ) - - percept = lpips.PerceptualLoss(model="net-lin", net="vgg", use_gpu=device.startswith("cuda"), gpu_ids=[args.local_rank]) - requires_grad(percept.model.net, False) - - pspencoder = load_psp_standalone(args.style_encoder_path, device) - - if args.local_rank == 0: - print('Load models and data successfully loaded!') - - if args.pretrain: - pretrain(args, generator, g_optim, g_ema, parsingpredictor, down, directions, basemodel, device) - else: - train(args, generator, discriminator, g_optim, d_optim, g_ema, percept, parsingpredictor, down, pspencoder, directions, basemodel, device) diff --git a/spaces/RichardMB1217/blip/transform/randaugment.py b/spaces/RichardMB1217/blip/transform/randaugment.py deleted file mode 100644 index 094d9f4cacc93146d2bab7311d9dc04feb07032c..0000000000000000000000000000000000000000 --- a/spaces/RichardMB1217/blip/transform/randaugment.py +++ /dev/null @@ -1,340 +0,0 @@ -import cv2 -import numpy as np - - -## aug functions -def identity_func(img): - return img - - -def autocontrast_func(img, cutoff=0): - ''' - same output as PIL.ImageOps.autocontrast - ''' - n_bins = 256 - - def tune_channel(ch): - n = ch.size - cut = cutoff * n // 100 - if cut == 0: - high, low = ch.max(), ch.min() - else: - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - low = np.argwhere(np.cumsum(hist) > cut) - low = 0 if low.shape[0] == 0 else low[0] - high = np.argwhere(np.cumsum(hist[::-1]) > cut) - high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0] - if high <= low: - table = np.arange(n_bins) - else: - scale = (n_bins - 1) / (high - low) - offset = -low * scale - table = np.arange(n_bins) * scale + offset - table[table < 0] = 0 - table[table > n_bins - 1] = n_bins - 1 - table = table.clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def equalize_func(img): - ''' - same output as PIL.ImageOps.equalize - PIL's implementation is different from cv2.equalize - ''' - n_bins = 256 - - def tune_channel(ch): - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - non_zero_hist = hist[hist != 0].reshape(-1) - step = np.sum(non_zero_hist[:-1]) // (n_bins - 1) - if step == 0: return ch - n = np.empty_like(hist) - n[0] = step // 2 - n[1:] = hist[:-1] - table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def rotate_func(img, degree, fill=(0, 0, 0)): - ''' - like PIL, rotate by degree, not radians - ''' - H, W = img.shape[0], img.shape[1] - center = W / 2, H / 2 - M = cv2.getRotationMatrix2D(center, degree, 1) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill) - return out - - -def solarize_func(img, thresh=128): - ''' - same output as PIL.ImageOps.posterize - ''' - table = np.array([el if el < thresh else 255 - el for el in range(256)]) - table = table.clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def color_func(img, factor): - ''' - same output as PIL.ImageEnhance.Color - ''' - ## implementation according to PIL definition, quite slow - # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, :, np.newaxis] - # out = blend(degenerate, img, factor) - # M = ( - # np.eye(3) * factor - # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. - factor) - # )[np.newaxis, np.newaxis, :] - M = ( - np.float32([ - [0.886, -0.114, -0.114], - [-0.587, 0.413, -0.587], - [-0.299, -0.299, 0.701]]) * factor - + np.float32([[0.114], [0.587], [0.299]]) - ) - out = np.matmul(img, M).clip(0, 255).astype(np.uint8) - return out - - -def contrast_func(img, factor): - """ - same output as PIL.ImageEnhance.Contrast - """ - mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299])) - table = np.array([( - el - mean) * factor + mean - for el in range(256) - ]).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def brightness_func(img, factor): - ''' - same output as PIL.ImageEnhance.Contrast - ''' - table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def sharpness_func(img, factor): - ''' - The differences the this result and PIL are all on the 4 boundaries, the center - areas are same - ''' - kernel = np.ones((3, 3), dtype=np.float32) - kernel[1][1] = 5 - kernel /= 13 - degenerate = cv2.filter2D(img, -1, kernel) - if factor == 0.0: - out = degenerate - elif factor == 1.0: - out = img - else: - out = img.astype(np.float32) - degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :] - out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate) - out = out.astype(np.uint8) - return out - - -def shear_x_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, factor, 0], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_x_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, -offset], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_y_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [0, 1, -offset]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def posterize_func(img, bits): - ''' - same output as PIL.ImageOps.posterize - ''' - out = np.bitwise_and(img, np.uint8(255 << (8 - bits))) - return out - - -def shear_y_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [factor, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def cutout_func(img, pad_size, replace=(0, 0, 0)): - replace = np.array(replace, dtype=np.uint8) - H, W = img.shape[0], img.shape[1] - rh, rw = np.random.random(2) - pad_size = pad_size // 2 - ch, cw = int(rh * H), int(rw * W) - x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H) - y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W) - out = img.copy() - out[x1:x2, y1:y2, :] = replace - return out - - -### level to args -def enhance_level_to_args(MAX_LEVEL): - def level_to_args(level): - return ((level / MAX_LEVEL) * 1.8 + 0.1,) - return level_to_args - - -def shear_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 0.3 - if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def translate_level_to_args(translate_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * float(translate_const) - if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = int((level / MAX_LEVEL) * cutout_const) - return (level, replace_value) - - return level_to_args - - -def solarize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 256) - return (level, ) - return level_to_args - - -def none_level_to_args(level): - return () - - -def posterize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 4) - return (level, ) - return level_to_args - - -def rotate_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 30 - if np.random.random() < 0.5: - level = -level - return (level, replace_value) - - return level_to_args - - -func_dict = { - 'Identity': identity_func, - 'AutoContrast': autocontrast_func, - 'Equalize': equalize_func, - 'Rotate': rotate_func, - 'Solarize': solarize_func, - 'Color': color_func, - 'Contrast': contrast_func, - 'Brightness': brightness_func, - 'Sharpness': sharpness_func, - 'ShearX': shear_x_func, - 'TranslateX': translate_x_func, - 'TranslateY': translate_y_func, - 'Posterize': posterize_func, - 'ShearY': shear_y_func, -} - -translate_const = 10 -MAX_LEVEL = 10 -replace_value = (128, 128, 128) -arg_dict = { - 'Identity': none_level_to_args, - 'AutoContrast': none_level_to_args, - 'Equalize': none_level_to_args, - 'Rotate': rotate_level_to_args(MAX_LEVEL, replace_value), - 'Solarize': solarize_level_to_args(MAX_LEVEL), - 'Color': enhance_level_to_args(MAX_LEVEL), - 'Contrast': enhance_level_to_args(MAX_LEVEL), - 'Brightness': enhance_level_to_args(MAX_LEVEL), - 'Sharpness': enhance_level_to_args(MAX_LEVEL), - 'ShearX': shear_level_to_args(MAX_LEVEL, replace_value), - 'TranslateX': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'TranslateY': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'Posterize': posterize_level_to_args(MAX_LEVEL), - 'ShearY': shear_level_to_args(MAX_LEVEL, replace_value), -} - - -class RandomAugment(object): - - def __init__(self, N=2, M=10, isPIL=False, augs=[]): - self.N = N - self.M = M - self.isPIL = isPIL - if augs: - self.augs = augs - else: - self.augs = list(arg_dict.keys()) - - def get_random_ops(self): - sampled_ops = np.random.choice(self.augs, self.N) - return [(op, 0.5, self.M) for op in sampled_ops] - - def __call__(self, img): - if self.isPIL: - img = np.array(img) - ops = self.get_random_ops() - for name, prob, level in ops: - if np.random.random() > prob: - continue - args = arg_dict[name](level) - img = func_dict[name](img, *args) - return img - - -if __name__ == '__main__': - a = RandomAugment() - img = np.random.randn(32, 32, 3) - a(img) \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/pisa_retinanet_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/pisa_retinanet_head.py deleted file mode 100644 index bd87b9aeb07e05ff94b444ac8999eca3f616711a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/pisa_retinanet_head.py +++ /dev/null @@ -1,154 +0,0 @@ -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import images_to_levels -from ..builder import HEADS -from ..losses import carl_loss, isr_p -from .retina_head import RetinaHead - - -@HEADS.register_module() -class PISARetinaHead(RetinaHead): - """PISA Retinanet Head. - - The head owns the same structure with Retinanet Head, but differs in two - aspects: - 1. Importance-based Sample Reweighting Positive (ISR-P) is applied to - change the positive loss weights. - 2. Classification-aware regression loss is adopted as a third loss. - """ - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes of each image - with shape (num_obj, 4). - gt_labels (list[Tensor]): Ground truth labels of each image - with shape (num_obj, 4). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): Ignored gt bboxes of each image. - Default: None. - - Returns: - dict: Loss dict, comprise classification loss, regression loss and - carl loss. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results_list) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - num_imgs = len(img_metas) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, label_channels) - for cls_score in cls_scores - ] - flatten_cls_scores = torch.cat( - flatten_cls_scores, dim=1).reshape(-1, - flatten_cls_scores[0].size(-1)) - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_bbox_preds = torch.cat( - flatten_bbox_preds, dim=1).view(-1, flatten_bbox_preds[0].size(-1)) - flatten_labels = torch.cat(labels_list, dim=1).reshape(-1) - flatten_label_weights = torch.cat( - label_weights_list, dim=1).reshape(-1) - flatten_anchors = torch.cat(all_anchor_list, dim=1).reshape(-1, 4) - flatten_bbox_targets = torch.cat( - bbox_targets_list, dim=1).reshape(-1, 4) - flatten_bbox_weights = torch.cat( - bbox_weights_list, dim=1).reshape(-1, 4) - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - all_targets = (flatten_labels, flatten_label_weights, - flatten_bbox_targets, flatten_bbox_weights) - with torch.no_grad(): - all_targets = isr_p( - flatten_cls_scores, - flatten_bbox_preds, - all_targets, - flatten_anchors, - sampling_results_list, - bbox_coder=self.bbox_coder, - loss_cls=self.loss_cls, - num_class=self.num_classes, - **self.train_cfg.isr) - (flatten_labels, flatten_label_weights, flatten_bbox_targets, - flatten_bbox_weights) = all_targets - - # For convenience we compute loss once instead separating by fpn level, - # so that we don't need to separate the weights by level again. - # The result should be the same - losses_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels, - flatten_label_weights, - avg_factor=num_total_samples) - losses_bbox = self.loss_bbox( - flatten_bbox_preds, - flatten_bbox_targets, - flatten_bbox_weights, - avg_factor=num_total_samples) - loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - # CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - flatten_cls_scores, - flatten_labels, - flatten_bbox_preds, - flatten_bbox_targets, - self.loss_bbox, - **self.train_cfg.carl, - avg_factor=num_total_pos, - sigmoid=True, - num_class=self.num_classes) - loss_dict.update(loss_carl) - - return loss_dict diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/utils.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/utils.py deleted file mode 100644 index 0f5712cb42c38a2e8563bf563efb6681383cab9b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .registry import MODULE_WRAPPERS - - -def is_module_wrapper(module): - """Check if a module is a module wrapper. - - The following 3 modules in MMCV (and their subclasses) are regarded as - module wrappers: DataParallel, DistributedDataParallel, - MMDistributedDataParallel (the deprecated version). You may add you own - module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: True if the input module is a module wrapper. - """ - module_wrappers = tuple(MODULE_WRAPPERS.module_dict.values()) - return isinstance(module, module_wrappers) diff --git a/spaces/Robinn/WordSent/README.md b/spaces/Robinn/WordSent/README.md deleted file mode 100644 index 968a18b24575aef69201e15bb7a73e1c919f1e25..0000000000000000000000000000000000000000 --- a/spaces/Robinn/WordSent/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WordSent -emoji: 📈 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Rongjiehuang/ProDiff/modules/FastDiff/task/FastDiff.py b/spaces/Rongjiehuang/ProDiff/modules/FastDiff/task/FastDiff.py deleted file mode 100644 index c8902b4309ff45b4c1b88707e45c43238f52b795..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/modules/FastDiff/task/FastDiff.py +++ /dev/null @@ -1,133 +0,0 @@ -import os - -import torch -import utils -from modules.FastDiff.module.FastDiff_model import FastDiff -from tasks.vocoder.vocoder_base import VocoderBaseTask -from utils import audio -from utils.hparams import hparams -from modules.FastDiff.module.util import theta_timestep_loss, compute_hyperparams_given_schedule, sampling_given_noise_schedule - - -class FastDiffTask(VocoderBaseTask): - def __init__(self): - super(FastDiffTask, self).__init__() - - def build_model(self): - self.model = FastDiff(audio_channels=hparams['audio_channels'], - inner_channels=hparams['inner_channels'], - cond_channels=hparams['cond_channels'], - upsample_ratios=hparams['upsample_ratios'], - lvc_layers_each_block=hparams['lvc_layers_each_block'], - lvc_kernel_size=hparams['lvc_kernel_size'], - kpnet_hidden_channels=hparams['kpnet_hidden_channels'], - kpnet_conv_size=hparams['kpnet_conv_size'], - dropout=hparams['dropout'], - diffusion_step_embed_dim_in=hparams['diffusion_step_embed_dim_in'], - diffusion_step_embed_dim_mid=hparams['diffusion_step_embed_dim_mid'], - diffusion_step_embed_dim_out=hparams['diffusion_step_embed_dim_out'], - use_weight_norm=hparams['use_weight_norm']) - utils.print_arch(self.model) - - # Init hyperparameters by linear schedule - noise_schedule = torch.linspace(float(hparams["beta_0"]), float(hparams["beta_T"]), int(hparams["T"])).cuda() - diffusion_hyperparams = compute_hyperparams_given_schedule(noise_schedule) - - # map diffusion hyperparameters to gpu - for key in diffusion_hyperparams: - if key in ["beta", "alpha", "sigma"]: - diffusion_hyperparams[key] = diffusion_hyperparams[key].cuda() - self.diffusion_hyperparams = diffusion_hyperparams - - return self.model - - def _training_step(self, sample, batch_idx, optimizer_idx): - mels = sample['mels'] - y = sample['wavs'] - X = (mels, y) - loss = theta_timestep_loss(self.model, X, self.diffusion_hyperparams) - return loss, {'loss': loss} - - - def validation_step(self, sample, batch_idx): - mels = sample['mels'] - y = sample['wavs'] - X = (mels, y) - loss = theta_timestep_loss(self.model, X, self.diffusion_hyperparams) - return loss, {'loss': loss} - - - def test_step(self, sample, batch_idx): - mels = sample['mels'] - y = sample['wavs'] - loss_output = {} - - if hparams['noise_schedule'] != '': - noise_schedule = hparams['noise_schedule'] - if isinstance(noise_schedule, list): - noise_schedule = torch.FloatTensor(noise_schedule).cuda() - else: - # Select Schedule - try: - reverse_step = int(hparams.get('N')) - except: - print('Please specify $N (the number of revere iterations) in config file. Now denoise with 4 iterations.') - reverse_step = 4 - if reverse_step == 1000: - noise_schedule = torch.linspace(0.000001, 0.01, 1000).cuda() - elif reverse_step == 200: - noise_schedule = torch.linspace(0.0001, 0.02, 200).cuda() - - # Below are schedules derived by Noise Predictor. - # We will release codes of noise predictor training process & noise scheduling process soon. Please Stay Tuned! - elif reverse_step == 8: - noise_schedule = [6.689325005027058e-07, 1.0033881153503899e-05, 0.00015496854030061513, - 0.002387222135439515, 0.035597629845142365, 0.3681158423423767, 0.4735414385795593, 0.5] - elif reverse_step == 6: - noise_schedule = [1.7838445955931093e-06, 2.7984189728158526e-05, 0.00043231004383414984, - 0.006634317338466644, 0.09357017278671265, 0.6000000238418579] - elif reverse_step == 4: - noise_schedule = [3.2176e-04, 2.5743e-03, 2.5376e-02, 7.0414e-01] - elif reverse_step == 3: - noise_schedule = [9.0000e-05, 9.0000e-03, 6.0000e-01] - else: - raise NotImplementedError - - if isinstance(noise_schedule, list): - noise_schedule = torch.FloatTensor(noise_schedule).cuda() - - audio_length = mels.shape[-1] * hparams["hop_size"] - # generate using DDPM reverse process - - y_ = sampling_given_noise_schedule( - self.model, (1, 1, audio_length), self.diffusion_hyperparams, noise_schedule, - condition=mels, ddim=False, return_sequence=False) - gen_dir = os.path.join(hparams['work_dir'], f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}') - os.makedirs(gen_dir, exist_ok=True) - - if len(y) == 0: - # Inference from mel - for idx, (wav_pred, item_name) in enumerate(zip(y_, sample["item_name"])): - wav_pred = wav_pred / wav_pred.abs().max() - audio.save_wav(wav_pred.view(-1).cpu().float().numpy(), f'{gen_dir}/{item_name}_pred.wav', - hparams['audio_sample_rate']) - else: - for idx, (wav_pred, wav_gt, item_name) in enumerate(zip(y_, y, sample["item_name"])): - wav_gt = wav_gt / wav_gt.abs().max() - wav_pred = wav_pred / wav_pred.abs().max() - audio.save_wav(wav_gt.view(-1).cpu().float().numpy(), f'{gen_dir}/{item_name}_gt.wav', hparams['audio_sample_rate']) - audio.save_wav(wav_pred.view(-1).cpu().float().numpy(), f'{gen_dir}/{item_name}_pred.wav', hparams['audio_sample_rate']) - return loss_output - - def build_optimizer(self, model): - self.optimizer = optimizer = torch.optim.AdamW( - self.model.parameters(), - lr=float(hparams['lr']), weight_decay=float(hparams['weight_decay'])) - return optimizer - - def compute_rtf(self, sample, generation_time, sample_rate=22050): - """ - Computes RTF for a given sample. - """ - total_length = sample.shape[-1] - return float(generation_time * sample_rate / total_length) \ No newline at end of file diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/prompts.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/prompts.py deleted file mode 100644 index f6015dc218057f4cdbfa78cc2bb2d5243b9a3c91..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/prompts.py +++ /dev/null @@ -1,91 +0,0 @@ -import langchain.prompts as prompts -from langchain.prompts.chat import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate -from datetime import datetime - -summary_template = """Summarize and provide direct quotes from the text below to help answer a question. -Do not directly answer the question, instead provide a summary and quotes with the context of the user's question. -Do not use outside sources. -Reply with "Not applicable" if the text is unrelated to the question. -Use 75 or less words. -Remember, if the user does not specify a language, reply in the language of the user's question. - -{context_str} - -User's question: {question} -Relevant Information Summary:""" -summary_prompt = prompts.PromptTemplate( - input_variables=["question", "context_str"], - template=summary_template, -) - -qa_template = """Write an answer for the user's question below solely based on the provided context. -If the user does not specify how many words the answer should be, the length of the answer should be {length}. -If the context is irrelevant, reply "Your question falls outside the scope of University of Sydney policy, so I cannot answer". -For each sentence in your answer, indicate which sources most support it via valid citation markers at the end of sentences, like (Example2012). -Answer in an unbiased and professional tone. -Make clear what is your opinion. -Use Markdown for formatting code or text, and try to use direct quotes to support arguments. -Remember, if the user does not specify a language, answer in the language of the user's question. - -Context: -{context_str} - - -User's question: {question} -Answer: -""" -qa_prompt = prompts.PromptTemplate( - input_variables=["question", "context_str", "length"], - template=qa_template, -) - -# usado por GPCL -qa_prompt_GPCL = prompts.PromptTemplate( - input_variables=["question", "context_str"], - template="You are an AI assistant providing helpful advice about University of Sydney policy. You are given the following extracted parts of a long document and a question. Provide a conversational answer based on the context provided." - "You should only provide hyperlinks that reference the context below. Do NOT make up hyperlinks." - 'If you can not find the answer in the context below, just say "Hmm, I am not sure. Could you please rephrase your question?" Do not try to make up an answer.' - "If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.\n\n" - "Question: {question}\n" - "=========\n" - "{context_str}\n" - "=========\n" - "Answer in Markdown:", -) - -search_prompt = prompts.PromptTemplate( - input_variables=["question"], - template="We want to answer the following question: {question} \n" - "Provide three different targeted keyword searches (one search per line) " - "that will find papers that help answer the question. Do not use boolean operators. " - "Recent years are 2021, 2022, 2023.\n\n" - "1.", -) - - -def _get_datetime(): - now = datetime.now() - return now.strftime("%m/%d/%Y") - - -citation_prompt = prompts.PromptTemplate( - input_variables=["text"], - template="Provide a possible citation for the following text in MLA Format. Today's date is {date}\n" - "{text}\n\n" - "Citation:", - partial_variables={"date": _get_datetime}, -) - -system_template = """You are an AI chatbot with knowledge of the University of Sydney's legal policies that answers in an unbiased, professional tone. -You sometimes refuse to answer if there is insufficient information. -If the user does not specify a language, answer in the language of the user's question. """ -system_message_prompt = SystemMessagePromptTemplate.from_template(system_template) - -human_summary_message_prompt = HumanMessagePromptTemplate.from_template(summary_template) -chat_summary_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_summary_message_prompt]) - -human_qa_message_prompt = HumanMessagePromptTemplate.from_template(qa_template) -# chat_qa_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_qa_message_prompt]) # TODO: borrar - -# human_condense_message_prompt = HumanMessagePromptTemplate.from_template(condense_template) -# chat_condense_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_condense_message_prompt]) diff --git a/spaces/Sapphire-356/Video2MC/app.py b/spaces/Sapphire-356/Video2MC/app.py deleted file mode 100644 index 712097e1bae6e0c5303a721b030f4bebe32dfbd9..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/app.py +++ /dev/null @@ -1,234 +0,0 @@ -import gradio as gr -from videopose_PSTMO import gr_video2mc -import os - -# ffmpeg -i input_videos/kun_1280x720_30fps_0-14_0-32.mp4 -vf trim=0:5,setpts=PTS-STARTPTS input_videos/kun_test_5sec.mp4 -# ffmpeg -i input.mp4 -vf scale=320:-1 output.mp4 - -Count = 0 - -def Video2MC(video, progress=gr.Progress(track_tqdm=True)): - - progress(1.0, desc="Step 0: Starting") - output_path, output_video = gr_video2mc(video, progress) - - global Count - Count += 1 - print(f"Count: {Count}") - - return output_path, output_path, output_video - -with gr.Blocks() as iface: - - text1 = gr.Markdown( - f""" -
    - - ![](file/Video2MC.png) - -
    - """ - ) - - with gr.Tab("English"): - - text2 = gr.Markdown( - """ -

    Video2MC: 3D-HPE based Mine-imator animation generation

    - -
    - - **有问题或者改进建议请在B站视频评论区发表评论,感谢支持!** - -

    请将本仓库复制到你的个人账户使用!复制方法请参考视频最后一段“复制仓库”。

    - -

    服务器算力有限,目前使用人数过多,已过载。

    - -
    - - ## Introduction - - Using computer vision algorithms, I have achieved cost-effective "motion capture," and I am now officially releasing the Video2MC algorithm for automatic generation of Mine-imator animations! - - Before using, it is **highly recommended** to watch my [introductory video](https://www.bilibili.com/video/BV1SP411W7pw), as it will help you quickly understand what this project is about. - - Enjoy it! - - """ - ) - - with gr.Accordion("Related Links", open=False): - - text_req = gr.Markdown( - """ - ## Related Links - - Github: https://github.com/Balloon-356/Video2MC - - My Bilibili (Contact): https://space.bilibili.com/244384103 - - **Introductory video:** https://www.bilibili.com/video/BV1SP411W7pw - - Implementation details: https://www.bilibili.com/read/cv25704198 - - """ - ) - - with gr.Accordion("How to Use", open=False): - - text_req = gr.Markdown( - """ - ## How to Use - - 1. Upload a video by dragging it into the box on the bottom left. The video must meet the **requirements**. - - 2. Click "Submit", and the algorithm will start running. Please wait patiently, and you can see the current progress in the box on the right. (A 5s video takes about 5min.) - - 3. Algorithm finished. You can download the .miframes file and preview the video rendered by the 3D-HPE algorithm (for previewing motion capture results). - - 4. Import the .miframes file into the Mine-imator to create a Minecraft animation (you can learn how to use it on the Mine-imator forums). - - 5. Fine-tune the motions of the skeleton model in Mine-imator. - - """ - ) - - with gr.Accordion("Video Requirements", open=False): - - text_req = gr.Markdown( - """ - ## Video Requirements - - 1. Please upload short videos, preferably not exceeding 10 seconds. (Otherwise, the algorithm will run for several tens of mins and it still works.) - - 2. The video should only contain one person, positioned at the center of the frame, fully visible from head to toe, facing the camera. - - 3. Just as shown in the "example" below. - - """ - ) - - with gr.Row(): - - with gr.Column(): - input_video = gr.Video() - with gr.Row(): - btn_c = gr.ClearButton(input_video) - btn_s = gr.Button("Submit", variant='primary') - gr.Examples([os.path.join(os.path.dirname(__file__), - "input_videos/kun_test_5sec.mp4")], input_video) - - with gr.Column(): - output_miframes = gr.File() - output_path = gr.Text() - output_video = gr.Video() - - - btn_s.click(Video2MC, inputs=[input_video], outputs=[output_miframes, output_path, output_video]) - - - with gr.Tab("中文"): - - text2 = gr.Markdown( - """ -

    Video2MC:基于3D-HPE的MC动画自动生成

    - -
    - - **有问题或者改进建议请在B站视频评论区发表评论,感谢支持!** - -

    请将本仓库复制到你的个人账户使用!复制方法请参考视频最后一段“复制仓库”。

    - -

    服务器算力有限,目前使用人数过多,已过载。

    - -
    - - ## 简单介绍 - - 利用计算机视觉算法,我实现了低成本“动作捕捉”,在此正式发布MC动画自动生成算法Video2MC! - - 使用前,强烈建议先观看我的[B站视频](https://www.bilibili.com/video/BV1SP411W7pw),快速了解该项目的用法。 - - 目前该项目还在不断优化改进,请关注我的B站帐号,获取最新消息。 - - 初次使用务必阅读下面的“使用说明”和“视频要求”! ↓↓↓ - - """ - ) - - - with gr.Accordion("相关链接", open=False): - - text_req = gr.Markdown( - """ - ## 相关链接 - - Github项目:https://github.com/Balloon-356/Video2MC - - B站帐号(私信联系):https://space.bilibili.com/244384103 - - **介绍视频:** https://www.bilibili.com/video/BV1SP411W7pw - - 实现原理:https://www.bilibili.com/read/cv25704198 - - """ - ) - - with gr.Accordion("使用说明", open=False): - - text_req = gr.Markdown( - """ - ## 使用说明 - - 1. 上传一段视频(拖入下方左侧的框中)。视频需要满足“视频要求”。 - - 2. 点击“Submit”提交视频,此时算法开始运行。请耐心等待,右侧的框中将显示算法运行的进度。(5s的视频大约需要5分钟) - - 3. 运行结束。可以在右侧的框中下载.miframes文件,并且可以通过算法渲染得到的骨架动作视频预览效果。 - - 4. 将.miframes文件导入到Mine-imator软件中,生成一段动画。(导入方法可在互联网上查询) - - 5. 微调人物动作,导出动画。 - - 注:目前使用的是CPU,代码运行较慢。GPU算力过于昂贵,若有需求请私信联系。 - - """ - ) - - with gr.Accordion("视频要求", open=False): - - text_req = gr.Markdown( - """ - ## 视频要求 - - 1. 请尽量上传时长较短的视频(10s内最好),否则算法将运行很长时间。 - - 2. 视频中应该只包含一个人,且人位于视频中心、全身完整地出现在视频中,面向相机。 - - 3. 如"example"中展示的视频一样。 - - """ - ) - - - - with gr.Row(): - - with gr.Column(): - input_video = gr.Video() - with gr.Row(): - btn_c = gr.ClearButton(input_video) - btn_s = gr.Button("Submit", variant='primary') - gr.Examples([os.path.join(os.path.dirname(__file__), - "input_videos/kun_test_5sec.mp4")], input_video) - - with gr.Column(): - output_miframes = gr.File() - output_path = gr.Text() - output_video = gr.Video() - - - btn_s.click(Video2MC, inputs=[input_video], outputs=[output_miframes, output_path, output_video]) - - -iface.queue(concurrency_count=10).launch() \ No newline at end of file diff --git a/spaces/Saturdays/HUMANDS/app.py b/spaces/Saturdays/HUMANDS/app.py deleted file mode 100644 index eedf26809b3cc7fa50b623148d9c60b3f64b7ea9..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/HUMANDS/app.py +++ /dev/null @@ -1,128 +0,0 @@ -import gradio as gr -import pandas as pd -from joblib import load - - -def humands(Sex,Age,Married,Monthlyincome,TotalWorkingYears,DistanceFromHome,Overtime,YearsAtCompany,NumCompaniesWorked): - model = load('modelo_entrenado.pkl') - df = pd.DataFrame.from_dict( - { - "MonthlyIncome" : [Monthlyincome], - "Age" : [Age], - "TotalWorkingYears" : [TotalWorkingYears], - "DailyRate" : [Monthlyincome*2/30], - "HourlyRate" : [Monthlyincome*2/1640], - "DistanceFromHome" : [DistanceFromHome], - "OverTime_Yes" : [1 if Overtime else 0], - "OverTime_No" : [1 if not Overtime else 0], - "YearsAtCompany" : [YearsAtCompany], - "MonthlyRate" : [Monthlyincome*2], - "NumCompaniesWorked" : [NumCompaniesWorked], - "PercentSalaryHike" : [15], - "YearsInCurrentRole" : [YearsAtCompany-1], - "YearsWithCurrManager" : [YearsAtCompany-1], - "StockOptionLevel" : [1], - "YearsSinceLastPromotion" : [YearsAtCompany-1], - "JobSatisfaction" : [2], - "JobLevel" : [3], - "TrainingTimesLastYear" : [0], - "EnvironmentSatisfaction" : [2], - "WorkLifeBalance" : [2], - "MaritalStatus_Single" : [1 if Married==0 else 0], - "JobInvolvement" : [2], - "RelationshipSatisfaction" : [Married+1], - "Education" : [2], - "BusinessTravel_Travel_Frequently" : [1 if Overtime else 0], - "JobRole_Sales Representative" : [0], - "EducationField_Medical" : [0], - "Department_Sales" : [0], - "JobRole_Laboratory Technician" : [0], - "Department_Research & Development" : [1], - "Gender_Female" : [1 if Sex==0 else 0], - "MaritalStatus_Married" : [1 if Married==1 else 0], - "JobRole_Sales Executive" : [0], - "EducationField_Technical Degree" : [1], - "Gender_Male" : [1 if Sex==1 else 0], - "EducationField_Life Sciences" : [0], - "BusinessTravel_Travel_Rarely" : [0], - "MaritalStatus_Divorced" : [1 if Married==2 else 0], - "JobRole_Research Scientist" : [1], - "EducationField_Marketing" : [0], - "PerformanceRating" : [3], - "EducationField_Other" : [0], - "JobRole_Human Resources" : [0], - "BusinessTravel_Non-Travel" : [1 if not Overtime else 0], - "Department_Human Resources" : [0], - "JobRole_Manufacturing Director" : [0], - "JobRole_Healthcare Representative" : [0], - "EducationField_Human Resources" : [0], - "JobRole_Manager" : [0], - "JobRole_Research Director" : [0], - } - ) - - columnas = ['Age', 'DailyRate', 'DistanceFromHome', 'Education', - 'EnvironmentSatisfaction', 'HourlyRate', 'JobInvolvement', 'JobLevel', - 'JobSatisfaction', 'MonthlyIncome', 'MonthlyRate', 'NumCompaniesWorked', - 'PercentSalaryHike', 'PerformanceRating', 'RelationshipSatisfaction', - 'StockOptionLevel', 'TotalWorkingYears', 'TrainingTimesLastYear', - 'WorkLifeBalance', 'YearsAtCompany', 'YearsInCurrentRole', - 'YearsSinceLastPromotion', 'YearsWithCurrManager', - 'BusinessTravel_Non-Travel', 'BusinessTravel_Travel_Frequently', - 'BusinessTravel_Travel_Rarely', 'Department_Human Resources', - 'Department_Research & Development', 'Department_Sales', - 'EducationField_Human Resources', 'EducationField_Life Sciences', - 'EducationField_Marketing', 'EducationField_Medical', - 'EducationField_Other', 'EducationField_Technical Degree', - 'Gender_Female', 'Gender_Male', 'JobRole_Healthcare Representative', - 'JobRole_Human Resources', 'JobRole_Laboratory Technician', - 'JobRole_Manager', 'JobRole_Manufacturing Director', - 'JobRole_Research Director', 'JobRole_Research Scientist', - 'JobRole_Sales Executive', 'JobRole_Sales Representative', - 'MaritalStatus_Divorced', 'MaritalStatus_Married', - 'MaritalStatus_Single', 'OverTime_No', 'OverTime_Yes'] - - df = df.reindex(columns=columnas) - - pred = model.predict(df)[0] - - if pred == "Yes": - predicted1="Estamos ante un trabajador con alto nivel de desgaste del trabajo. Habría que plantearse alguna acción." - predicted2="stressed_image.jpg" - else: - predicted1="Estamos ante un trabajador con un nivel bajo de desgaste del trabajo. Se ha de seguir así." - predicted2="ok_image2.jpg" - return [predicted1,predicted2] - - -iface = gr.Interface( - humands, - [ - gr.Radio(["Mujer","Hombre"],type = "index",label="Sexo"), - gr.inputs.Slider(18,70,1,label="Edad del trabajador"), - gr.Radio(["Soltero","Casado","Divorciado"],type = "index",label="Esstado civil:"), - gr.inputs.Slider(1000,20000,1,label="Ingresos mensuales del trabajador"), - gr.inputs.Slider(0,40,1,label="Total de años trabajados del trabajador"), - gr.inputs.Slider(0,100,1,label="Distancia del trabajo al domicilio en Km"), - gr.Checkbox(label="¿Realiza horas extras habitualmente?"), - gr.inputs.Slider(0,40,1,label="Años del trabajador en la empresa"), - gr.inputs.Slider(0,40,1,label="Numero de empresas en las que ha estado el trabajador"), - - ], - - ["text",gr.Image(type='filepath')], - examples=[ - ["Mujer",33,"Soltero",2917,9,1,False,9,1], - ["Hombre",42,"Casado",3111,16,5,False,7,3], - ["Hombre",50,"Divorciado",1732,20,50,True,3,3], - ["Mujer",25,"Soltero",2556,6,58,True,2,4], - ], - interpretation="default", - title = 'HUMANDS: Inteligencia artificial para empleados', - description = 'Uno de los motivos por los que las organizaciones pierden a sus empleados es la insatisfacción laboral, por ello, nuestro objetivo es predecir el verdadero nivel de desgaste de los empleados dentro de una organización mediante Inteligencia Artificial. Para saber más: https://saturdays.ai/2021/12/31/inteligencia-artificial-empleados/', - theme = 'peach' -) - - - -iface.launch() \ No newline at end of file diff --git a/spaces/Sky5408er/vits-uma-genshin-honkai/Docker/Dockerfile b/spaces/Sky5408er/vits-uma-genshin-honkai/Docker/Dockerfile deleted file mode 100644 index 4d39cdf02a2ec151686cc1d61234bf723068fed8..0000000000000000000000000000000000000000 --- a/spaces/Sky5408er/vits-uma-genshin-honkai/Docker/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM python:3.9-bullseye -VOLUME ["/app"] -WORKDIR /app -# Set apt to Chinese mirror -RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list -RUN apt-get update && apt-get -y install cmake git -RUN git clone https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai -WORKDIR /app/vits-uma-genshin-honkai -RUN sed -i "s/\.launch()/\.launch(server_name=\"0.0.0.0\")/" /app/vits-uma-genshin-honkai/app.py -ADD vits.sh /app/vits.sh -EXPOSE 7860 -ENTRYPOINT [ "/app/vits.sh" ] \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/optim/fsdp.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/optim/fsdp.py deleted file mode 100644 index b3c1a55b6bf1a33092a021c5cefbbb2ae848918a..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/optim/fsdp.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Wrapper around FSDP for more convenient use in the training loops. -""" - -from contextlib import contextmanager -import typing as tp -import dora -import torch - -from torch.distributed.fsdp import FullyShardedDataParallel as FSDP -from torch.distributed.fsdp import ( - MixedPrecision, ShardingStrategy, FullStateDictConfig, StateDictType) -from torch.distributed._shard.sharded_tensor.api import ShardedTensor - - -def is_fsdp_used() -> bool: - """Return whether we are using FSDP.""" - # A bit of a hack but should work from anywhere. - if dora.is_xp(): - cfg = dora.get_xp().cfg - if hasattr(cfg, 'fsdp'): - return cfg.fsdp.use - return False - - -def is_sharded_tensor(x: tp.Any) -> bool: - return isinstance(x, ShardedTensor) - - -@contextmanager -def switch_to_full_state_dict(models: tp.List[FSDP]): - # Another bug in FSDP makes it that we cannot use the `state_dict_type` API, - # so let's do thing manually. - for model in models: - FSDP.set_state_dict_type( # type: ignore - model, StateDictType.FULL_STATE_DICT, - FullStateDictConfig(offload_to_cpu=True, rank0_only=True)) - try: - yield - finally: - for model in models: - FSDP.set_state_dict_type(model, StateDictType.LOCAL_STATE_DICT) # type: ignore - - -def wrap_with_fsdp(cfg, model: torch.nn.Module, - block_classes: tp.Optional[tp.Set[tp.Type]] = None) -> FSDP: - """Wraps a model with FSDP.""" - # Some of the typing is disabled until this gets integrated - # into the stable version of PyTorch. - from torch.distributed.fsdp.wrap import ModuleWrapPolicy # type: ignore - - # we import this here to prevent circular import. - from ..modules.transformer import StreamingTransformerLayer - from ..modules.conditioners import ConditioningProvider - - _fix_post_backward_hook() - - assert cfg.use - sharding_strategy_dict = { - "no_shard": ShardingStrategy.NO_SHARD, - "shard_grad_op": ShardingStrategy.SHARD_GRAD_OP, - "full_shard": ShardingStrategy.FULL_SHARD, - } - - dtype_dict = { - "float32": torch.float32, - "float16": torch.float16, - "bfloat16": torch.bfloat16, - } - - mixed_precision_config = MixedPrecision( - param_dtype=dtype_dict[cfg.param_dtype], - reduce_dtype=dtype_dict[cfg.reduce_dtype], - buffer_dtype=dtype_dict[cfg.buffer_dtype], - ) - - sharding_strategy_config = sharding_strategy_dict[cfg.sharding_strategy] - # The following is going to require being a bit smart - # when doing LM, because this would flush the weights for every time step - # during generation. One possiblity is to use hybrid sharding: - # See: https://pytorch.org/docs/master/fsdp.html#torch.distributed.fsdp.ShardingStrategy - assert sharding_strategy_config != ShardingStrategy.FULL_SHARD, \ - "Not supported at the moment, requires a bit more work." - - local_rank = dora.distrib.get_distrib_spec().local_rank - assert local_rank < torch.cuda.device_count(), "Please upgrade Dora!" - - auto_wrap_policy = None - if block_classes is None: - block_classes = {StreamingTransformerLayer, ConditioningProvider} - if cfg.per_block: - auto_wrap_policy = ModuleWrapPolicy(block_classes) - wrapped = _FSDPFixStateDict( - model, - sharding_strategy=sharding_strategy_config, - mixed_precision=mixed_precision_config, - device_id=local_rank, - sync_module_states=True, - use_orig_params=True, - auto_wrap_policy=auto_wrap_policy, - ) # type: ignore - FSDP.set_state_dict_type(wrapped, StateDictType.LOCAL_STATE_DICT) # type: ignore - - # Let the wrapped model know about the wrapping! - # We use __dict__ to avoid it going into the state dict. - # This is a bit dirty, but needed during generation, as otherwise - # the wrapped model would call itself and bypass FSDP. - for module in FSDP.fsdp_modules(wrapped): - original = module._fsdp_wrapped_module - original.__dict__['_fsdp'] = module - return wrapped - - -def purge_fsdp(model: FSDP): - """Purge the FSDP cached shard inside the model. This should - allow setting the best state or switching to the EMA. - """ - from torch.distributed.fsdp._runtime_utils import _reshard # type: ignore - for module in FSDP.fsdp_modules(model): - handles = module._handles - if not handles: - continue - handle = handles[0] - unsharded_flat_param = handle._get_padded_unsharded_flat_param() - storage_size: int = unsharded_flat_param._typed_storage()._size() # type: ignore - if storage_size == 0: - continue - true_list = [True for h in handles] - _reshard(module, handles, true_list) - - -class _FSDPFixStateDict(FSDP): - @staticmethod - def _name_without_fsdp_prefix(name: str) -> str: - from torch.distributed.fsdp._common_utils import FSDP_WRAPPED_MODULE # type: ignore - parts = name.split('.') - new_parts = [part for part in parts if part != FSDP_WRAPPED_MODULE] - return '.'.join(new_parts) - - def state_dict(self) -> tp.Dict[str, tp.Any]: # type: ignore - state = dict(super().state_dict()) - for key, value in list(state.items()): - if is_sharded_tensor(value): - del state[key] - return state - - def load_state_dict(self, state: tp.Dict[str, tp.Any]): # type: ignore - if self._state_dict_type is StateDictType.FULL_STATE_DICT: - super().load_state_dict(state) - purge_fsdp(self) - return - # Fix FSDP load state dict in all situation. - # Use this only with LOCAL_STATE_DICT !!! - current_state = dict(super().state_dict()) - for key, value in state.items(): - key = _FSDPFixStateDict._name_without_fsdp_prefix(key) - if key not in current_state: - # Emulate strict loading manually. - raise RuntimeError(f"Unknown state key {key}") - current_state[key].copy_(value) - - # Purging cached weights from previous forward. - purge_fsdp(self) - - -_hook_fixed = False - - -def _fix_post_backward_hook(): - global _hook_fixed - if _hook_fixed: - return - _hook_fixed = True - - from torch.distributed.fsdp import _runtime_utils - from torch.distributed.fsdp._common_utils import TrainingState, HandleTrainingState - old_hook = _runtime_utils._post_backward_hook - - def _post_backward_hook(state, handle, *args, **kwargs): - checkpointed = getattr(state._fsdp_wrapped_module, '_audiocraft_checkpointed', False) - if checkpointed: - # there will be one more forward in the backward with checkpointing and that will - # massively confuse FSDP, so we have to make it think everything - # is going according to the plan. - state.training_state = TrainingState.FORWARD_BACKWARD - handle._training_state = HandleTrainingState.BACKWARD_PRE - old_hook(state, handle, *args, **kwargs) - - _runtime_utils._post_backward_hook = _post_backward_hook diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/enum_util.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/enum_util.py deleted file mode 100644 index 914d5d831802b330b1d627043cb20737cdeb764a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/enum_util.py +++ /dev/null @@ -1,48 +0,0 @@ -from __future__ import annotations - -from contourpy._contourpy import FillType, LineType, ZInterp - - -def as_fill_type(fill_type: FillType | str) -> FillType: - """Coerce a FillType or string value to a FillType. - - Args: - fill_type (FillType or str): Value to convert. - - Return: - FillType: Converted value. - """ - if isinstance(fill_type, str): - return FillType.__members__[fill_type] - else: - return fill_type - - -def as_line_type(line_type: LineType | str) -> LineType: - """Coerce a LineType or string value to a LineType. - - Args: - line_type (LineType or str): Value to convert. - - Return: - LineType: Converted value. - """ - if isinstance(line_type, str): - return LineType.__members__[line_type] - else: - return line_type - - -def as_z_interp(z_interp: ZInterp | str) -> ZInterp: - """Coerce a ZInterp or string value to a ZInterp. - - Args: - z_interp (ZInterp or str): Value to convert. - - Return: - ZInterp: Converted value. - """ - if isinstance(z_interp, str): - return ZInterp.__members__[z_interp] - else: - return z_interp diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/serialize.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/serialize.py deleted file mode 100644 index ed45065184f0512ef65c8f38d398de553ce576ca..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/serialize.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# import cloudpickle - - -class PicklableWrapper(object): - """ - Wrap an object to make it more picklable, note that it uses - heavy weight serialization libraries that are slower than pickle. - It's best to use it only on closures (which are usually not picklable). - - This is a simplified version of - https://github.com/joblib/joblib/blob/master/joblib/externals/loky/cloudpickle_wrapper.py - """ - - def __init__(self, obj): - while isinstance(obj, PicklableWrapper): - # Wrapping an object twice is no-op - obj = obj._obj - self._obj = obj - - # def __reduce__(self): - # s = cloudpickle.dumps(self._obj) - # return cloudpickle.loads, (s,) - - def __call__(self, *args, **kwargs): - return self._obj(*args, **kwargs) - - def __getattr__(self, attr): - # Ensure that the wrapped object can be used seamlessly as the previous object. - if attr not in ["_obj"]: - return getattr(self._obj, attr) - return getattr(self, attr) diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/dpt_depth.py b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/dpt_depth.py deleted file mode 100644 index 3129d09cb43a7c79b23916236991fabbedb78f55..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/dpt_depth.py +++ /dev/null @@ -1,166 +0,0 @@ -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import ( - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_beit, - forward_swin, - forward_levit, - forward_vit, -) -from .backbones.levit import stem_b4_transpose -from timm.models.layers import get_act_layer - - -def _make_fusion_block(features, use_bn, size = None): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - size=size, - ) - - -class DPT(BaseModel): - def __init__( - self, - head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - **kwargs - ): - - super(DPT, self).__init__() - - self.channels_last = channels_last - - # For the Swin, Swin 2, LeViT and Next-ViT Transformers, the hierarchical architectures prevent setting the - # hooks freely. Instead, the hooks have to be chosen according to the ranges specified in the comments. - hooks = { - "beitl16_512": [5, 11, 17, 23], - "beitl16_384": [5, 11, 17, 23], - "beitb16_384": [2, 5, 8, 11], - "swin2l24_384": [1, 1, 17, 1], # Allowed ranges: [0, 1], [0, 1], [ 0, 17], [ 0, 1] - "swin2b24_384": [1, 1, 17, 1], # [0, 1], [0, 1], [ 0, 17], [ 0, 1] - "swin2t16_256": [1, 1, 5, 1], # [0, 1], [0, 1], [ 0, 5], [ 0, 1] - "swinl12_384": [1, 1, 17, 1], # [0, 1], [0, 1], [ 0, 17], [ 0, 1] - "next_vit_large_6m": [2, 6, 36, 39], # [0, 2], [3, 6], [ 7, 36], [37, 39] - "levit_384": [3, 11, 21], # [0, 3], [6, 11], [14, 21] - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - }[backbone] - - if "next_vit" in backbone: - in_features = { - "next_vit_large_6m": [96, 256, 512, 1024], - }[backbone] - else: - in_features = None - - # Instantiate backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to true of you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks, - use_readout=readout, - in_features=in_features, - ) - - self.number_layers = len(hooks) if hooks is not None else 4 - size_refinenet3 = None - self.scratch.stem_transpose = None - - if "beit" in backbone: - self.forward_transformer = forward_beit - elif "swin" in backbone: - self.forward_transformer = forward_swin - elif "next_vit" in backbone: - from .backbones.next_vit import forward_next_vit - self.forward_transformer = forward_next_vit - elif "levit" in backbone: - self.forward_transformer = forward_levit - size_refinenet3 = 7 - self.scratch.stem_transpose = stem_b4_transpose(256, 128, get_act_layer("hard_swish")) - else: - self.forward_transformer = forward_vit - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn, size_refinenet3) - if self.number_layers >= 4: - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - - def forward(self, x): - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layers = self.forward_transformer(self.pretrained, x) - if self.number_layers == 3: - layer_1, layer_2, layer_3 = layers - else: - layer_1, layer_2, layer_3, layer_4 = layers - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - if self.number_layers >= 4: - layer_4_rn = self.scratch.layer4_rn(layer_4) - - if self.number_layers == 3: - path_3 = self.scratch.refinenet3(layer_3_rn, size=layer_2_rn.shape[2:]) - else: - path_4 = self.scratch.refinenet4(layer_4_rn, size=layer_3_rn.shape[2:]) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn, size=layer_2_rn.shape[2:]) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn, size=layer_1_rn.shape[2:]) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - if self.scratch.stem_transpose is not None: - path_1 = self.scratch.stem_transpose(path_1) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__(self, path=None, non_negative=True, **kwargs): - features = kwargs["features"] if "features" in kwargs else 256 - head_features_1 = kwargs["head_features_1"] if "head_features_1" in kwargs else features - head_features_2 = kwargs["head_features_2"] if "head_features_2" in kwargs else 32 - kwargs.pop("head_features_1", None) - kwargs.pop("head_features_2", None) - - head = nn.Sequential( - nn.Conv2d(head_features_1, head_features_1 // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(head_features_1 // 2, head_features_2, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(head_features_2, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - return super().forward(x).squeeze(dim=1) diff --git a/spaces/TEnngal/bingo/src/components/ui/icons.tsx b/spaces/TEnngal/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/TH5314/newbing/README.md b/spaces/TH5314/newbing/README.md deleted file mode 100644 index 90fab5f716b39d7cb21063693c1f53dd3f9ad781..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/README.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
    - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
    - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
    - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
    - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -> 由于目前微软封杀比较严重,推荐优先使用 [部署 Huggingface](#部署到-huggingface) 。 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
    -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
    - -
    -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
    - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/TH5314/newbing/src/components/ui/button.tsx b/spaces/TH5314/newbing/src/components/ui/button.tsx deleted file mode 100644 index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/ui/button.tsx +++ /dev/null @@ -1,57 +0,0 @@ -import * as React from 'react' -import { Slot } from '@radix-ui/react-slot' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const buttonVariants = cva( - 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50', - { - variants: { - variant: { - default: - 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90', - destructive: - 'bg-destructive text-destructive-foreground hover:bg-destructive/90', - outline: - 'border border-input hover:bg-accent hover:text-accent-foreground', - secondary: - 'bg-secondary text-secondary-foreground hover:bg-secondary/80', - ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground', - link: 'text-primary underline-offset-4 shadow-none hover:underline' - }, - size: { - default: 'h-8 px-4 py-2', - sm: 'h-8 rounded-md px-3', - lg: 'h-11 rounded-md px-8', - icon: 'h-8 w-8 p-0' - } - }, - defaultVariants: { - variant: 'default', - size: 'default' - } - } -) - -export interface ButtonProps - extends React.ButtonHTMLAttributes, - VariantProps { - asChild?: boolean -} - -const Button = React.forwardRef( - ({ className, variant, size, asChild = false, ...props }, ref) => { - const Comp = asChild ? Slot : 'button' - return ( - - ) - } -) -Button.displayName = 'Button' - -export { Button, buttonVariants } diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/locations/_distutils.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/locations/_distutils.py deleted file mode 100644 index 92bd93179c5cd3cb377c8b9f1e9d22d13fd7d003..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/locations/_distutils.py +++ /dev/null @@ -1,173 +0,0 @@ -"""Locations where we look for configs, install stuff, etc""" - -# The following comment should be removed at some point in the future. -# mypy: strict-optional=False - -# If pip's going to use distutils, it should not be using the copy that setuptools -# might have injected into the environment. This is done by removing the injected -# shim, if it's injected. -# -# See https://github.com/pypa/pip/issues/8761 for the original discussion and -# rationale for why this is done within pip. -try: - __import__("_distutils_hack").remove_shim() -except (ImportError, AttributeError): - pass - -import logging -import os -import sys -from distutils.cmd import Command as DistutilsCommand -from distutils.command.install import SCHEME_KEYS -from distutils.command.install import install as distutils_install_command -from distutils.sysconfig import get_python_lib -from typing import Dict, List, Optional, Union, cast - -from pip._internal.models.scheme import Scheme -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.virtualenv import running_under_virtualenv - -from .base import get_major_minor_version - -logger = logging.getLogger(__name__) - - -def distutils_scheme( - dist_name: str, - user: bool = False, - home: Optional[str] = None, - root: Optional[str] = None, - isolated: bool = False, - prefix: Optional[str] = None, - *, - ignore_config_files: bool = False, -) -> Dict[str, str]: - """ - Return a distutils install scheme - """ - from distutils.dist import Distribution - - dist_args: Dict[str, Union[str, List[str]]] = {"name": dist_name} - if isolated: - dist_args["script_args"] = ["--no-user-cfg"] - - d = Distribution(dist_args) - if not ignore_config_files: - try: - d.parse_config_files() - except UnicodeDecodeError: - # Typeshed does not include find_config_files() for some reason. - paths = d.find_config_files() # type: ignore - logger.warning( - "Ignore distutils configs in %s due to encoding errors.", - ", ".join(os.path.basename(p) for p in paths), - ) - obj: Optional[DistutilsCommand] = None - obj = d.get_command_obj("install", create=True) - assert obj is not None - i = cast(distutils_install_command, obj) - # NOTE: setting user or home has the side-effect of creating the home dir - # or user base for installations during finalize_options() - # ideally, we'd prefer a scheme class that has no side-effects. - assert not (user and prefix), f"user={user} prefix={prefix}" - assert not (home and prefix), f"home={home} prefix={prefix}" - i.user = user or i.user - if user or home: - i.prefix = "" - i.prefix = prefix or i.prefix - i.home = home or i.home - i.root = root or i.root - i.finalize_options() - - scheme = {} - for key in SCHEME_KEYS: - scheme[key] = getattr(i, "install_" + key) - - # install_lib specified in setup.cfg should install *everything* - # into there (i.e. it takes precedence over both purelib and - # platlib). Note, i.install_lib is *always* set after - # finalize_options(); we only want to override here if the user - # has explicitly requested it hence going back to the config - if "install_lib" in d.get_option_dict("install"): - scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib)) - - if running_under_virtualenv(): - if home: - prefix = home - elif user: - prefix = i.install_userbase - else: - prefix = i.prefix - scheme["headers"] = os.path.join( - prefix, - "include", - "site", - f"python{get_major_minor_version()}", - dist_name, - ) - - if root is not None: - path_no_drive = os.path.splitdrive(os.path.abspath(scheme["headers"]))[1] - scheme["headers"] = os.path.join(root, path_no_drive[1:]) - - return scheme - - -def get_scheme( - dist_name: str, - user: bool = False, - home: Optional[str] = None, - root: Optional[str] = None, - isolated: bool = False, - prefix: Optional[str] = None, -) -> Scheme: - """ - Get the "scheme" corresponding to the input parameters. The distutils - documentation provides the context for the available schemes: - https://docs.python.org/3/install/index.html#alternate-installation - - :param dist_name: the name of the package to retrieve the scheme for, used - in the headers scheme path - :param user: indicates to use the "user" scheme - :param home: indicates to use the "home" scheme and provides the base - directory for the same - :param root: root under which other directories are re-based - :param isolated: equivalent to --no-user-cfg, i.e. do not consider - ~/.pydistutils.cfg (posix) or ~/pydistutils.cfg (non-posix) for - scheme paths - :param prefix: indicates to use the "prefix" scheme and provides the - base directory for the same - """ - scheme = distutils_scheme(dist_name, user, home, root, isolated, prefix) - return Scheme( - platlib=scheme["platlib"], - purelib=scheme["purelib"], - headers=scheme["headers"], - scripts=scheme["scripts"], - data=scheme["data"], - ) - - -def get_bin_prefix() -> str: - # XXX: In old virtualenv versions, sys.prefix can contain '..' components, - # so we need to call normpath to eliminate them. - prefix = os.path.normpath(sys.prefix) - if WINDOWS: - bin_py = os.path.join(prefix, "Scripts") - # buildout uses 'bin' on Windows too? - if not os.path.exists(bin_py): - bin_py = os.path.join(prefix, "bin") - return bin_py - # Forcing to use /usr/local/bin for standard macOS framework installs - # Also log to ~/Library/Logs/ for use with the Console.app log viewer - if sys.platform[:6] == "darwin" and prefix[:16] == "/System/Library/": - return "/usr/local/bin" - return os.path.join(prefix, "bin") - - -def get_purelib() -> str: - return get_python_lib(plat_specific=False) - - -def get_platlib() -> str: - return get_python_lib(plat_specific=True) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/compat.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/compat.py deleted file mode 100644 index 9ab2bb48656520a95ec9ac87d090f2e741f0e544..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/compat.py +++ /dev/null @@ -1,67 +0,0 @@ -""" -requests.compat -~~~~~~~~~~~~~~~ - -This module previously handled import compatibility issues -between Python 2 and Python 3. It remains for backwards -compatibility until the next major version. -""" - -from pip._vendor import chardet - -import sys - -# ------- -# Pythons -# ------- - -# Syntax sugar. -_ver = sys.version_info - -#: Python 2.x? -is_py2 = _ver[0] == 2 - -#: Python 3.x? -is_py3 = _ver[0] == 3 - -# Note: We've patched out simplejson support in pip because it prevents -# upgrading simplejson on Windows. -import json -from json import JSONDecodeError - -# Keep OrderedDict for backwards compatibility. -from collections import OrderedDict -from collections.abc import Callable, Mapping, MutableMapping -from http import cookiejar as cookielib -from http.cookies import Morsel -from io import StringIO - -# -------------- -# Legacy Imports -# -------------- -from urllib.parse import ( - quote, - quote_plus, - unquote, - unquote_plus, - urldefrag, - urlencode, - urljoin, - urlparse, - urlsplit, - urlunparse, -) -from urllib.request import ( - getproxies, - getproxies_environment, - parse_http_list, - proxy_bypass, - proxy_bypass_environment, -) - -builtin_str = str -str = str -bytes = bytes -basestring = (str, bytes) -numeric_types = (int, float) -integer_types = (int,) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/more.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/more.py deleted file mode 100644 index e6fca4d47f661ff16fdc8c2bb7ae5b86c7f347b2..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/more.py +++ /dev/null @@ -1,3824 +0,0 @@ -import warnings - -from collections import Counter, defaultdict, deque, abc -from collections.abc import Sequence -from functools import partial, reduce, wraps -from heapq import merge, heapify, heapreplace, heappop -from itertools import ( - chain, - compress, - count, - cycle, - dropwhile, - groupby, - islice, - repeat, - starmap, - takewhile, - tee, - zip_longest, -) -from math import exp, factorial, floor, log -from queue import Empty, Queue -from random import random, randrange, uniform -from operator import itemgetter, mul, sub, gt, lt -from sys import hexversion, maxsize -from time import monotonic - -from .recipes import ( - consume, - flatten, - pairwise, - powerset, - take, - unique_everseen, -) - -__all__ = [ - 'AbortThread', - 'adjacent', - 'always_iterable', - 'always_reversible', - 'bucket', - 'callback_iter', - 'chunked', - 'circular_shifts', - 'collapse', - 'collate', - 'consecutive_groups', - 'consumer', - 'countable', - 'count_cycle', - 'mark_ends', - 'difference', - 'distinct_combinations', - 'distinct_permutations', - 'distribute', - 'divide', - 'exactly_n', - 'filter_except', - 'first', - 'groupby_transform', - 'ilen', - 'interleave_longest', - 'interleave', - 'intersperse', - 'islice_extended', - 'iterate', - 'ichunked', - 'is_sorted', - 'last', - 'locate', - 'lstrip', - 'make_decorator', - 'map_except', - 'map_reduce', - 'nth_or_last', - 'nth_permutation', - 'nth_product', - 'numeric_range', - 'one', - 'only', - 'padded', - 'partitions', - 'set_partitions', - 'peekable', - 'repeat_last', - 'replace', - 'rlocate', - 'rstrip', - 'run_length', - 'sample', - 'seekable', - 'SequenceView', - 'side_effect', - 'sliced', - 'sort_together', - 'split_at', - 'split_after', - 'split_before', - 'split_when', - 'split_into', - 'spy', - 'stagger', - 'strip', - 'substrings', - 'substrings_indexes', - 'time_limited', - 'unique_to_each', - 'unzip', - 'windowed', - 'with_iter', - 'UnequalIterablesError', - 'zip_equal', - 'zip_offset', - 'windowed_complete', - 'all_unique', - 'value_chain', - 'product_index', - 'combination_index', - 'permutation_index', -] - -_marker = object() - - -def chunked(iterable, n, strict=False): - """Break *iterable* into lists of length *n*: - - >>> list(chunked([1, 2, 3, 4, 5, 6], 3)) - [[1, 2, 3], [4, 5, 6]] - - By the default, the last yielded list will have fewer than *n* elements - if the length of *iterable* is not divisible by *n*: - - >>> list(chunked([1, 2, 3, 4, 5, 6, 7, 8], 3)) - [[1, 2, 3], [4, 5, 6], [7, 8]] - - To use a fill-in value instead, see the :func:`grouper` recipe. - - If the length of *iterable* is not divisible by *n* and *strict* is - ``True``, then ``ValueError`` will be raised before the last - list is yielded. - - """ - iterator = iter(partial(take, n, iter(iterable)), []) - if strict: - - def ret(): - for chunk in iterator: - if len(chunk) != n: - raise ValueError('iterable is not divisible by n.') - yield chunk - - return iter(ret()) - else: - return iterator - - -def first(iterable, default=_marker): - """Return the first item of *iterable*, or *default* if *iterable* is - empty. - - >>> first([0, 1, 2, 3]) - 0 - >>> first([], 'some default') - 'some default' - - If *default* is not provided and there are no items in the iterable, - raise ``ValueError``. - - :func:`first` is useful when you have a generator of expensive-to-retrieve - values and want any arbitrary one. It is marginally shorter than - ``next(iter(iterable), default)``. - - """ - try: - return next(iter(iterable)) - except StopIteration as e: - if default is _marker: - raise ValueError( - 'first() was called on an empty iterable, and no ' - 'default value was provided.' - ) from e - return default - - -def last(iterable, default=_marker): - """Return the last item of *iterable*, or *default* if *iterable* is - empty. - - >>> last([0, 1, 2, 3]) - 3 - >>> last([], 'some default') - 'some default' - - If *default* is not provided and there are no items in the iterable, - raise ``ValueError``. - """ - try: - if isinstance(iterable, Sequence): - return iterable[-1] - # Work around https://bugs.python.org/issue38525 - elif hasattr(iterable, '__reversed__') and (hexversion != 0x030800F0): - return next(reversed(iterable)) - else: - return deque(iterable, maxlen=1)[-1] - except (IndexError, TypeError, StopIteration): - if default is _marker: - raise ValueError( - 'last() was called on an empty iterable, and no default was ' - 'provided.' - ) - return default - - -def nth_or_last(iterable, n, default=_marker): - """Return the nth or the last item of *iterable*, - or *default* if *iterable* is empty. - - >>> nth_or_last([0, 1, 2, 3], 2) - 2 - >>> nth_or_last([0, 1], 2) - 1 - >>> nth_or_last([], 0, 'some default') - 'some default' - - If *default* is not provided and there are no items in the iterable, - raise ``ValueError``. - """ - return last(islice(iterable, n + 1), default=default) - - -class peekable: - """Wrap an iterator to allow lookahead and prepending elements. - - Call :meth:`peek` on the result to get the value that will be returned - by :func:`next`. This won't advance the iterator: - - >>> p = peekable(['a', 'b']) - >>> p.peek() - 'a' - >>> next(p) - 'a' - - Pass :meth:`peek` a default value to return that instead of raising - ``StopIteration`` when the iterator is exhausted. - - >>> p = peekable([]) - >>> p.peek('hi') - 'hi' - - peekables also offer a :meth:`prepend` method, which "inserts" items - at the head of the iterable: - - >>> p = peekable([1, 2, 3]) - >>> p.prepend(10, 11, 12) - >>> next(p) - 10 - >>> p.peek() - 11 - >>> list(p) - [11, 12, 1, 2, 3] - - peekables can be indexed. Index 0 is the item that will be returned by - :func:`next`, index 1 is the item after that, and so on: - The values up to the given index will be cached. - - >>> p = peekable(['a', 'b', 'c', 'd']) - >>> p[0] - 'a' - >>> p[1] - 'b' - >>> next(p) - 'a' - - Negative indexes are supported, but be aware that they will cache the - remaining items in the source iterator, which may require significant - storage. - - To check whether a peekable is exhausted, check its truth value: - - >>> p = peekable(['a', 'b']) - >>> if p: # peekable has items - ... list(p) - ['a', 'b'] - >>> if not p: # peekable is exhausted - ... list(p) - [] - - """ - - def __init__(self, iterable): - self._it = iter(iterable) - self._cache = deque() - - def __iter__(self): - return self - - def __bool__(self): - try: - self.peek() - except StopIteration: - return False - return True - - def peek(self, default=_marker): - """Return the item that will be next returned from ``next()``. - - Return ``default`` if there are no items left. If ``default`` is not - provided, raise ``StopIteration``. - - """ - if not self._cache: - try: - self._cache.append(next(self._it)) - except StopIteration: - if default is _marker: - raise - return default - return self._cache[0] - - def prepend(self, *items): - """Stack up items to be the next ones returned from ``next()`` or - ``self.peek()``. The items will be returned in - first in, first out order:: - - >>> p = peekable([1, 2, 3]) - >>> p.prepend(10, 11, 12) - >>> next(p) - 10 - >>> list(p) - [11, 12, 1, 2, 3] - - It is possible, by prepending items, to "resurrect" a peekable that - previously raised ``StopIteration``. - - >>> p = peekable([]) - >>> next(p) - Traceback (most recent call last): - ... - StopIteration - >>> p.prepend(1) - >>> next(p) - 1 - >>> next(p) - Traceback (most recent call last): - ... - StopIteration - - """ - self._cache.extendleft(reversed(items)) - - def __next__(self): - if self._cache: - return self._cache.popleft() - - return next(self._it) - - def _get_slice(self, index): - # Normalize the slice's arguments - step = 1 if (index.step is None) else index.step - if step > 0: - start = 0 if (index.start is None) else index.start - stop = maxsize if (index.stop is None) else index.stop - elif step < 0: - start = -1 if (index.start is None) else index.start - stop = (-maxsize - 1) if (index.stop is None) else index.stop - else: - raise ValueError('slice step cannot be zero') - - # If either the start or stop index is negative, we'll need to cache - # the rest of the iterable in order to slice from the right side. - if (start < 0) or (stop < 0): - self._cache.extend(self._it) - # Otherwise we'll need to find the rightmost index and cache to that - # point. - else: - n = min(max(start, stop) + 1, maxsize) - cache_len = len(self._cache) - if n >= cache_len: - self._cache.extend(islice(self._it, n - cache_len)) - - return list(self._cache)[index] - - def __getitem__(self, index): - if isinstance(index, slice): - return self._get_slice(index) - - cache_len = len(self._cache) - if index < 0: - self._cache.extend(self._it) - elif index >= cache_len: - self._cache.extend(islice(self._it, index + 1 - cache_len)) - - return self._cache[index] - - -def collate(*iterables, **kwargs): - """Return a sorted merge of the items from each of several already-sorted - *iterables*. - - >>> list(collate('ACDZ', 'AZ', 'JKL')) - ['A', 'A', 'C', 'D', 'J', 'K', 'L', 'Z', 'Z'] - - Works lazily, keeping only the next value from each iterable in memory. Use - :func:`collate` to, for example, perform a n-way mergesort of items that - don't fit in memory. - - If a *key* function is specified, the iterables will be sorted according - to its result: - - >>> key = lambda s: int(s) # Sort by numeric value, not by string - >>> list(collate(['1', '10'], ['2', '11'], key=key)) - ['1', '2', '10', '11'] - - - If the *iterables* are sorted in descending order, set *reverse* to - ``True``: - - >>> list(collate([5, 3, 1], [4, 2, 0], reverse=True)) - [5, 4, 3, 2, 1, 0] - - If the elements of the passed-in iterables are out of order, you might get - unexpected results. - - On Python 3.5+, this function is an alias for :func:`heapq.merge`. - - """ - warnings.warn( - "collate is no longer part of more_itertools, use heapq.merge", - DeprecationWarning, - ) - return merge(*iterables, **kwargs) - - -def consumer(func): - """Decorator that automatically advances a PEP-342-style "reverse iterator" - to its first yield point so you don't have to call ``next()`` on it - manually. - - >>> @consumer - ... def tally(): - ... i = 0 - ... while True: - ... print('Thing number %s is %s.' % (i, (yield))) - ... i += 1 - ... - >>> t = tally() - >>> t.send('red') - Thing number 0 is red. - >>> t.send('fish') - Thing number 1 is fish. - - Without the decorator, you would have to call ``next(t)`` before - ``t.send()`` could be used. - - """ - - @wraps(func) - def wrapper(*args, **kwargs): - gen = func(*args, **kwargs) - next(gen) - return gen - - return wrapper - - -def ilen(iterable): - """Return the number of items in *iterable*. - - >>> ilen(x for x in range(1000000) if x % 3 == 0) - 333334 - - This consumes the iterable, so handle with care. - - """ - # This approach was selected because benchmarks showed it's likely the - # fastest of the known implementations at the time of writing. - # See GitHub tracker: #236, #230. - counter = count() - deque(zip(iterable, counter), maxlen=0) - return next(counter) - - -def iterate(func, start): - """Return ``start``, ``func(start)``, ``func(func(start))``, ... - - >>> from itertools import islice - >>> list(islice(iterate(lambda x: 2*x, 1), 10)) - [1, 2, 4, 8, 16, 32, 64, 128, 256, 512] - - """ - while True: - yield start - start = func(start) - - -def with_iter(context_manager): - """Wrap an iterable in a ``with`` statement, so it closes once exhausted. - - For example, this will close the file when the iterator is exhausted:: - - upper_lines = (line.upper() for line in with_iter(open('foo'))) - - Any context manager which returns an iterable is a candidate for - ``with_iter``. - - """ - with context_manager as iterable: - yield from iterable - - -def one(iterable, too_short=None, too_long=None): - """Return the first item from *iterable*, which is expected to contain only - that item. Raise an exception if *iterable* is empty or has more than one - item. - - :func:`one` is useful for ensuring that an iterable contains only one item. - For example, it can be used to retrieve the result of a database query - that is expected to return a single row. - - If *iterable* is empty, ``ValueError`` will be raised. You may specify a - different exception with the *too_short* keyword: - - >>> it = [] - >>> one(it) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - ValueError: too many items in iterable (expected 1)' - >>> too_short = IndexError('too few items') - >>> one(it, too_short=too_short) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - IndexError: too few items - - Similarly, if *iterable* contains more than one item, ``ValueError`` will - be raised. You may specify a different exception with the *too_long* - keyword: - - >>> it = ['too', 'many'] - >>> one(it) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - ValueError: Expected exactly one item in iterable, but got 'too', - 'many', and perhaps more. - >>> too_long = RuntimeError - >>> one(it, too_long=too_long) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - RuntimeError - - Note that :func:`one` attempts to advance *iterable* twice to ensure there - is only one item. See :func:`spy` or :func:`peekable` to check iterable - contents less destructively. - - """ - it = iter(iterable) - - try: - first_value = next(it) - except StopIteration as e: - raise ( - too_short or ValueError('too few items in iterable (expected 1)') - ) from e - - try: - second_value = next(it) - except StopIteration: - pass - else: - msg = ( - 'Expected exactly one item in iterable, but got {!r}, {!r}, ' - 'and perhaps more.'.format(first_value, second_value) - ) - raise too_long or ValueError(msg) - - return first_value - - -def distinct_permutations(iterable, r=None): - """Yield successive distinct permutations of the elements in *iterable*. - - >>> sorted(distinct_permutations([1, 0, 1])) - [(0, 1, 1), (1, 0, 1), (1, 1, 0)] - - Equivalent to ``set(permutations(iterable))``, except duplicates are not - generated and thrown away. For larger input sequences this is much more - efficient. - - Duplicate permutations arise when there are duplicated elements in the - input iterable. The number of items returned is - `n! / (x_1! * x_2! * ... * x_n!)`, where `n` is the total number of - items input, and each `x_i` is the count of a distinct item in the input - sequence. - - If *r* is given, only the *r*-length permutations are yielded. - - >>> sorted(distinct_permutations([1, 0, 1], r=2)) - [(0, 1), (1, 0), (1, 1)] - >>> sorted(distinct_permutations(range(3), r=2)) - [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)] - - """ - # Algorithm: https://w.wiki/Qai - def _full(A): - while True: - # Yield the permutation we have - yield tuple(A) - - # Find the largest index i such that A[i] < A[i + 1] - for i in range(size - 2, -1, -1): - if A[i] < A[i + 1]: - break - # If no such index exists, this permutation is the last one - else: - return - - # Find the largest index j greater than j such that A[i] < A[j] - for j in range(size - 1, i, -1): - if A[i] < A[j]: - break - - # Swap the value of A[i] with that of A[j], then reverse the - # sequence from A[i + 1] to form the new permutation - A[i], A[j] = A[j], A[i] - A[i + 1 :] = A[: i - size : -1] # A[i + 1:][::-1] - - # Algorithm: modified from the above - def _partial(A, r): - # Split A into the first r items and the last r items - head, tail = A[:r], A[r:] - right_head_indexes = range(r - 1, -1, -1) - left_tail_indexes = range(len(tail)) - - while True: - # Yield the permutation we have - yield tuple(head) - - # Starting from the right, find the first index of the head with - # value smaller than the maximum value of the tail - call it i. - pivot = tail[-1] - for i in right_head_indexes: - if head[i] < pivot: - break - pivot = head[i] - else: - return - - # Starting from the left, find the first value of the tail - # with a value greater than head[i] and swap. - for j in left_tail_indexes: - if tail[j] > head[i]: - head[i], tail[j] = tail[j], head[i] - break - # If we didn't find one, start from the right and find the first - # index of the head with a value greater than head[i] and swap. - else: - for j in right_head_indexes: - if head[j] > head[i]: - head[i], head[j] = head[j], head[i] - break - - # Reverse head[i + 1:] and swap it with tail[:r - (i + 1)] - tail += head[: i - r : -1] # head[i + 1:][::-1] - i += 1 - head[i:], tail[:] = tail[: r - i], tail[r - i :] - - items = sorted(iterable) - - size = len(items) - if r is None: - r = size - - if 0 < r <= size: - return _full(items) if (r == size) else _partial(items, r) - - return iter(() if r else ((),)) - - -def intersperse(e, iterable, n=1): - """Intersperse filler element *e* among the items in *iterable*, leaving - *n* items between each filler element. - - >>> list(intersperse('!', [1, 2, 3, 4, 5])) - [1, '!', 2, '!', 3, '!', 4, '!', 5] - - >>> list(intersperse(None, [1, 2, 3, 4, 5], n=2)) - [1, 2, None, 3, 4, None, 5] - - """ - if n == 0: - raise ValueError('n must be > 0') - elif n == 1: - # interleave(repeat(e), iterable) -> e, x_0, e, e, x_1, e, x_2... - # islice(..., 1, None) -> x_0, e, e, x_1, e, x_2... - return islice(interleave(repeat(e), iterable), 1, None) - else: - # interleave(filler, chunks) -> [e], [x_0, x_1], [e], [x_2, x_3]... - # islice(..., 1, None) -> [x_0, x_1], [e], [x_2, x_3]... - # flatten(...) -> x_0, x_1, e, x_2, x_3... - filler = repeat([e]) - chunks = chunked(iterable, n) - return flatten(islice(interleave(filler, chunks), 1, None)) - - -def unique_to_each(*iterables): - """Return the elements from each of the input iterables that aren't in the - other input iterables. - - For example, suppose you have a set of packages, each with a set of - dependencies:: - - {'pkg_1': {'A', 'B'}, 'pkg_2': {'B', 'C'}, 'pkg_3': {'B', 'D'}} - - If you remove one package, which dependencies can also be removed? - - If ``pkg_1`` is removed, then ``A`` is no longer necessary - it is not - associated with ``pkg_2`` or ``pkg_3``. Similarly, ``C`` is only needed for - ``pkg_2``, and ``D`` is only needed for ``pkg_3``:: - - >>> unique_to_each({'A', 'B'}, {'B', 'C'}, {'B', 'D'}) - [['A'], ['C'], ['D']] - - If there are duplicates in one input iterable that aren't in the others - they will be duplicated in the output. Input order is preserved:: - - >>> unique_to_each("mississippi", "missouri") - [['p', 'p'], ['o', 'u', 'r']] - - It is assumed that the elements of each iterable are hashable. - - """ - pool = [list(it) for it in iterables] - counts = Counter(chain.from_iterable(map(set, pool))) - uniques = {element for element in counts if counts[element] == 1} - return [list(filter(uniques.__contains__, it)) for it in pool] - - -def windowed(seq, n, fillvalue=None, step=1): - """Return a sliding window of width *n* over the given iterable. - - >>> all_windows = windowed([1, 2, 3, 4, 5], 3) - >>> list(all_windows) - [(1, 2, 3), (2, 3, 4), (3, 4, 5)] - - When the window is larger than the iterable, *fillvalue* is used in place - of missing values: - - >>> list(windowed([1, 2, 3], 4)) - [(1, 2, 3, None)] - - Each window will advance in increments of *step*: - - >>> list(windowed([1, 2, 3, 4, 5, 6], 3, fillvalue='!', step=2)) - [(1, 2, 3), (3, 4, 5), (5, 6, '!')] - - To slide into the iterable's items, use :func:`chain` to add filler items - to the left: - - >>> iterable = [1, 2, 3, 4] - >>> n = 3 - >>> padding = [None] * (n - 1) - >>> list(windowed(chain(padding, iterable), 3)) - [(None, None, 1), (None, 1, 2), (1, 2, 3), (2, 3, 4)] - """ - if n < 0: - raise ValueError('n must be >= 0') - if n == 0: - yield tuple() - return - if step < 1: - raise ValueError('step must be >= 1') - - window = deque(maxlen=n) - i = n - for _ in map(window.append, seq): - i -= 1 - if not i: - i = step - yield tuple(window) - - size = len(window) - if size < n: - yield tuple(chain(window, repeat(fillvalue, n - size))) - elif 0 < i < min(step, n): - window += (fillvalue,) * i - yield tuple(window) - - -def substrings(iterable): - """Yield all of the substrings of *iterable*. - - >>> [''.join(s) for s in substrings('more')] - ['m', 'o', 'r', 'e', 'mo', 'or', 're', 'mor', 'ore', 'more'] - - Note that non-string iterables can also be subdivided. - - >>> list(substrings([0, 1, 2])) - [(0,), (1,), (2,), (0, 1), (1, 2), (0, 1, 2)] - - """ - # The length-1 substrings - seq = [] - for item in iter(iterable): - seq.append(item) - yield (item,) - seq = tuple(seq) - item_count = len(seq) - - # And the rest - for n in range(2, item_count + 1): - for i in range(item_count - n + 1): - yield seq[i : i + n] - - -def substrings_indexes(seq, reverse=False): - """Yield all substrings and their positions in *seq* - - The items yielded will be a tuple of the form ``(substr, i, j)``, where - ``substr == seq[i:j]``. - - This function only works for iterables that support slicing, such as - ``str`` objects. - - >>> for item in substrings_indexes('more'): - ... print(item) - ('m', 0, 1) - ('o', 1, 2) - ('r', 2, 3) - ('e', 3, 4) - ('mo', 0, 2) - ('or', 1, 3) - ('re', 2, 4) - ('mor', 0, 3) - ('ore', 1, 4) - ('more', 0, 4) - - Set *reverse* to ``True`` to yield the same items in the opposite order. - - - """ - r = range(1, len(seq) + 1) - if reverse: - r = reversed(r) - return ( - (seq[i : i + L], i, i + L) for L in r for i in range(len(seq) - L + 1) - ) - - -class bucket: - """Wrap *iterable* and return an object that buckets it iterable into - child iterables based on a *key* function. - - >>> iterable = ['a1', 'b1', 'c1', 'a2', 'b2', 'c2', 'b3'] - >>> s = bucket(iterable, key=lambda x: x[0]) # Bucket by 1st character - >>> sorted(list(s)) # Get the keys - ['a', 'b', 'c'] - >>> a_iterable = s['a'] - >>> next(a_iterable) - 'a1' - >>> next(a_iterable) - 'a2' - >>> list(s['b']) - ['b1', 'b2', 'b3'] - - The original iterable will be advanced and its items will be cached until - they are used by the child iterables. This may require significant storage. - - By default, attempting to select a bucket to which no items belong will - exhaust the iterable and cache all values. - If you specify a *validator* function, selected buckets will instead be - checked against it. - - >>> from itertools import count - >>> it = count(1, 2) # Infinite sequence of odd numbers - >>> key = lambda x: x % 10 # Bucket by last digit - >>> validator = lambda x: x in {1, 3, 5, 7, 9} # Odd digits only - >>> s = bucket(it, key=key, validator=validator) - >>> 2 in s - False - >>> list(s[2]) - [] - - """ - - def __init__(self, iterable, key, validator=None): - self._it = iter(iterable) - self._key = key - self._cache = defaultdict(deque) - self._validator = validator or (lambda x: True) - - def __contains__(self, value): - if not self._validator(value): - return False - - try: - item = next(self[value]) - except StopIteration: - return False - else: - self._cache[value].appendleft(item) - - return True - - def _get_values(self, value): - """ - Helper to yield items from the parent iterator that match *value*. - Items that don't match are stored in the local cache as they - are encountered. - """ - while True: - # If we've cached some items that match the target value, emit - # the first one and evict it from the cache. - if self._cache[value]: - yield self._cache[value].popleft() - # Otherwise we need to advance the parent iterator to search for - # a matching item, caching the rest. - else: - while True: - try: - item = next(self._it) - except StopIteration: - return - item_value = self._key(item) - if item_value == value: - yield item - break - elif self._validator(item_value): - self._cache[item_value].append(item) - - def __iter__(self): - for item in self._it: - item_value = self._key(item) - if self._validator(item_value): - self._cache[item_value].append(item) - - yield from self._cache.keys() - - def __getitem__(self, value): - if not self._validator(value): - return iter(()) - - return self._get_values(value) - - -def spy(iterable, n=1): - """Return a 2-tuple with a list containing the first *n* elements of - *iterable*, and an iterator with the same items as *iterable*. - This allows you to "look ahead" at the items in the iterable without - advancing it. - - There is one item in the list by default: - - >>> iterable = 'abcdefg' - >>> head, iterable = spy(iterable) - >>> head - ['a'] - >>> list(iterable) - ['a', 'b', 'c', 'd', 'e', 'f', 'g'] - - You may use unpacking to retrieve items instead of lists: - - >>> (head,), iterable = spy('abcdefg') - >>> head - 'a' - >>> (first, second), iterable = spy('abcdefg', 2) - >>> first - 'a' - >>> second - 'b' - - The number of items requested can be larger than the number of items in - the iterable: - - >>> iterable = [1, 2, 3, 4, 5] - >>> head, iterable = spy(iterable, 10) - >>> head - [1, 2, 3, 4, 5] - >>> list(iterable) - [1, 2, 3, 4, 5] - - """ - it = iter(iterable) - head = take(n, it) - - return head.copy(), chain(head, it) - - -def interleave(*iterables): - """Return a new iterable yielding from each iterable in turn, - until the shortest is exhausted. - - >>> list(interleave([1, 2, 3], [4, 5], [6, 7, 8])) - [1, 4, 6, 2, 5, 7] - - For a version that doesn't terminate after the shortest iterable is - exhausted, see :func:`interleave_longest`. - - """ - return chain.from_iterable(zip(*iterables)) - - -def interleave_longest(*iterables): - """Return a new iterable yielding from each iterable in turn, - skipping any that are exhausted. - - >>> list(interleave_longest([1, 2, 3], [4, 5], [6, 7, 8])) - [1, 4, 6, 2, 5, 7, 3, 8] - - This function produces the same output as :func:`roundrobin`, but may - perform better for some inputs (in particular when the number of iterables - is large). - - """ - i = chain.from_iterable(zip_longest(*iterables, fillvalue=_marker)) - return (x for x in i if x is not _marker) - - -def collapse(iterable, base_type=None, levels=None): - """Flatten an iterable with multiple levels of nesting (e.g., a list of - lists of tuples) into non-iterable types. - - >>> iterable = [(1, 2), ([3, 4], [[5], [6]])] - >>> list(collapse(iterable)) - [1, 2, 3, 4, 5, 6] - - Binary and text strings are not considered iterable and - will not be collapsed. - - To avoid collapsing other types, specify *base_type*: - - >>> iterable = ['ab', ('cd', 'ef'), ['gh', 'ij']] - >>> list(collapse(iterable, base_type=tuple)) - ['ab', ('cd', 'ef'), 'gh', 'ij'] - - Specify *levels* to stop flattening after a certain level: - - >>> iterable = [('a', ['b']), ('c', ['d'])] - >>> list(collapse(iterable)) # Fully flattened - ['a', 'b', 'c', 'd'] - >>> list(collapse(iterable, levels=1)) # Only one level flattened - ['a', ['b'], 'c', ['d']] - - """ - - def walk(node, level): - if ( - ((levels is not None) and (level > levels)) - or isinstance(node, (str, bytes)) - or ((base_type is not None) and isinstance(node, base_type)) - ): - yield node - return - - try: - tree = iter(node) - except TypeError: - yield node - return - else: - for child in tree: - yield from walk(child, level + 1) - - yield from walk(iterable, 0) - - -def side_effect(func, iterable, chunk_size=None, before=None, after=None): - """Invoke *func* on each item in *iterable* (or on each *chunk_size* group - of items) before yielding the item. - - `func` must be a function that takes a single argument. Its return value - will be discarded. - - *before* and *after* are optional functions that take no arguments. They - will be executed before iteration starts and after it ends, respectively. - - `side_effect` can be used for logging, updating progress bars, or anything - that is not functionally "pure." - - Emitting a status message: - - >>> from more_itertools import consume - >>> func = lambda item: print('Received {}'.format(item)) - >>> consume(side_effect(func, range(2))) - Received 0 - Received 1 - - Operating on chunks of items: - - >>> pair_sums = [] - >>> func = lambda chunk: pair_sums.append(sum(chunk)) - >>> list(side_effect(func, [0, 1, 2, 3, 4, 5], 2)) - [0, 1, 2, 3, 4, 5] - >>> list(pair_sums) - [1, 5, 9] - - Writing to a file-like object: - - >>> from io import StringIO - >>> from more_itertools import consume - >>> f = StringIO() - >>> func = lambda x: print(x, file=f) - >>> before = lambda: print(u'HEADER', file=f) - >>> after = f.close - >>> it = [u'a', u'b', u'c'] - >>> consume(side_effect(func, it, before=before, after=after)) - >>> f.closed - True - - """ - try: - if before is not None: - before() - - if chunk_size is None: - for item in iterable: - func(item) - yield item - else: - for chunk in chunked(iterable, chunk_size): - func(chunk) - yield from chunk - finally: - if after is not None: - after() - - -def sliced(seq, n, strict=False): - """Yield slices of length *n* from the sequence *seq*. - - >>> list(sliced((1, 2, 3, 4, 5, 6), 3)) - [(1, 2, 3), (4, 5, 6)] - - By the default, the last yielded slice will have fewer than *n* elements - if the length of *seq* is not divisible by *n*: - - >>> list(sliced((1, 2, 3, 4, 5, 6, 7, 8), 3)) - [(1, 2, 3), (4, 5, 6), (7, 8)] - - If the length of *seq* is not divisible by *n* and *strict* is - ``True``, then ``ValueError`` will be raised before the last - slice is yielded. - - This function will only work for iterables that support slicing. - For non-sliceable iterables, see :func:`chunked`. - - """ - iterator = takewhile(len, (seq[i : i + n] for i in count(0, n))) - if strict: - - def ret(): - for _slice in iterator: - if len(_slice) != n: - raise ValueError("seq is not divisible by n.") - yield _slice - - return iter(ret()) - else: - return iterator - - -def split_at(iterable, pred, maxsplit=-1, keep_separator=False): - """Yield lists of items from *iterable*, where each list is delimited by - an item where callable *pred* returns ``True``. - - >>> list(split_at('abcdcba', lambda x: x == 'b')) - [['a'], ['c', 'd', 'c'], ['a']] - - >>> list(split_at(range(10), lambda n: n % 2 == 1)) - [[0], [2], [4], [6], [8], []] - - At most *maxsplit* splits are done. If *maxsplit* is not specified or -1, - then there is no limit on the number of splits: - - >>> list(split_at(range(10), lambda n: n % 2 == 1, maxsplit=2)) - [[0], [2], [4, 5, 6, 7, 8, 9]] - - By default, the delimiting items are not included in the output. - The include them, set *keep_separator* to ``True``. - - >>> list(split_at('abcdcba', lambda x: x == 'b', keep_separator=True)) - [['a'], ['b'], ['c', 'd', 'c'], ['b'], ['a']] - - """ - if maxsplit == 0: - yield list(iterable) - return - - buf = [] - it = iter(iterable) - for item in it: - if pred(item): - yield buf - if keep_separator: - yield [item] - if maxsplit == 1: - yield list(it) - return - buf = [] - maxsplit -= 1 - else: - buf.append(item) - yield buf - - -def split_before(iterable, pred, maxsplit=-1): - """Yield lists of items from *iterable*, where each list ends just before - an item for which callable *pred* returns ``True``: - - >>> list(split_before('OneTwo', lambda s: s.isupper())) - [['O', 'n', 'e'], ['T', 'w', 'o']] - - >>> list(split_before(range(10), lambda n: n % 3 == 0)) - [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]] - - At most *maxsplit* splits are done. If *maxsplit* is not specified or -1, - then there is no limit on the number of splits: - - >>> list(split_before(range(10), lambda n: n % 3 == 0, maxsplit=2)) - [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]] - """ - if maxsplit == 0: - yield list(iterable) - return - - buf = [] - it = iter(iterable) - for item in it: - if pred(item) and buf: - yield buf - if maxsplit == 1: - yield [item] + list(it) - return - buf = [] - maxsplit -= 1 - buf.append(item) - if buf: - yield buf - - -def split_after(iterable, pred, maxsplit=-1): - """Yield lists of items from *iterable*, where each list ends with an - item where callable *pred* returns ``True``: - - >>> list(split_after('one1two2', lambda s: s.isdigit())) - [['o', 'n', 'e', '1'], ['t', 'w', 'o', '2']] - - >>> list(split_after(range(10), lambda n: n % 3 == 0)) - [[0], [1, 2, 3], [4, 5, 6], [7, 8, 9]] - - At most *maxsplit* splits are done. If *maxsplit* is not specified or -1, - then there is no limit on the number of splits: - - >>> list(split_after(range(10), lambda n: n % 3 == 0, maxsplit=2)) - [[0], [1, 2, 3], [4, 5, 6, 7, 8, 9]] - - """ - if maxsplit == 0: - yield list(iterable) - return - - buf = [] - it = iter(iterable) - for item in it: - buf.append(item) - if pred(item) and buf: - yield buf - if maxsplit == 1: - yield list(it) - return - buf = [] - maxsplit -= 1 - if buf: - yield buf - - -def split_when(iterable, pred, maxsplit=-1): - """Split *iterable* into pieces based on the output of *pred*. - *pred* should be a function that takes successive pairs of items and - returns ``True`` if the iterable should be split in between them. - - For example, to find runs of increasing numbers, split the iterable when - element ``i`` is larger than element ``i + 1``: - - >>> list(split_when([1, 2, 3, 3, 2, 5, 2, 4, 2], lambda x, y: x > y)) - [[1, 2, 3, 3], [2, 5], [2, 4], [2]] - - At most *maxsplit* splits are done. If *maxsplit* is not specified or -1, - then there is no limit on the number of splits: - - >>> list(split_when([1, 2, 3, 3, 2, 5, 2, 4, 2], - ... lambda x, y: x > y, maxsplit=2)) - [[1, 2, 3, 3], [2, 5], [2, 4, 2]] - - """ - if maxsplit == 0: - yield list(iterable) - return - - it = iter(iterable) - try: - cur_item = next(it) - except StopIteration: - return - - buf = [cur_item] - for next_item in it: - if pred(cur_item, next_item): - yield buf - if maxsplit == 1: - yield [next_item] + list(it) - return - buf = [] - maxsplit -= 1 - - buf.append(next_item) - cur_item = next_item - - yield buf - - -def split_into(iterable, sizes): - """Yield a list of sequential items from *iterable* of length 'n' for each - integer 'n' in *sizes*. - - >>> list(split_into([1,2,3,4,5,6], [1,2,3])) - [[1], [2, 3], [4, 5, 6]] - - If the sum of *sizes* is smaller than the length of *iterable*, then the - remaining items of *iterable* will not be returned. - - >>> list(split_into([1,2,3,4,5,6], [2,3])) - [[1, 2], [3, 4, 5]] - - If the sum of *sizes* is larger than the length of *iterable*, fewer items - will be returned in the iteration that overruns *iterable* and further - lists will be empty: - - >>> list(split_into([1,2,3,4], [1,2,3,4])) - [[1], [2, 3], [4], []] - - When a ``None`` object is encountered in *sizes*, the returned list will - contain items up to the end of *iterable* the same way that itertools.slice - does: - - >>> list(split_into([1,2,3,4,5,6,7,8,9,0], [2,3,None])) - [[1, 2], [3, 4, 5], [6, 7, 8, 9, 0]] - - :func:`split_into` can be useful for grouping a series of items where the - sizes of the groups are not uniform. An example would be where in a row - from a table, multiple columns represent elements of the same feature - (e.g. a point represented by x,y,z) but, the format is not the same for - all columns. - """ - # convert the iterable argument into an iterator so its contents can - # be consumed by islice in case it is a generator - it = iter(iterable) - - for size in sizes: - if size is None: - yield list(it) - return - else: - yield list(islice(it, size)) - - -def padded(iterable, fillvalue=None, n=None, next_multiple=False): - """Yield the elements from *iterable*, followed by *fillvalue*, such that - at least *n* items are emitted. - - >>> list(padded([1, 2, 3], '?', 5)) - [1, 2, 3, '?', '?'] - - If *next_multiple* is ``True``, *fillvalue* will be emitted until the - number of items emitted is a multiple of *n*:: - - >>> list(padded([1, 2, 3, 4], n=3, next_multiple=True)) - [1, 2, 3, 4, None, None] - - If *n* is ``None``, *fillvalue* will be emitted indefinitely. - - """ - it = iter(iterable) - if n is None: - yield from chain(it, repeat(fillvalue)) - elif n < 1: - raise ValueError('n must be at least 1') - else: - item_count = 0 - for item in it: - yield item - item_count += 1 - - remaining = (n - item_count) % n if next_multiple else n - item_count - for _ in range(remaining): - yield fillvalue - - -def repeat_last(iterable, default=None): - """After the *iterable* is exhausted, keep yielding its last element. - - >>> list(islice(repeat_last(range(3)), 5)) - [0, 1, 2, 2, 2] - - If the iterable is empty, yield *default* forever:: - - >>> list(islice(repeat_last(range(0), 42), 5)) - [42, 42, 42, 42, 42] - - """ - item = _marker - for item in iterable: - yield item - final = default if item is _marker else item - yield from repeat(final) - - -def distribute(n, iterable): - """Distribute the items from *iterable* among *n* smaller iterables. - - >>> group_1, group_2 = distribute(2, [1, 2, 3, 4, 5, 6]) - >>> list(group_1) - [1, 3, 5] - >>> list(group_2) - [2, 4, 6] - - If the length of *iterable* is not evenly divisible by *n*, then the - length of the returned iterables will not be identical: - - >>> children = distribute(3, [1, 2, 3, 4, 5, 6, 7]) - >>> [list(c) for c in children] - [[1, 4, 7], [2, 5], [3, 6]] - - If the length of *iterable* is smaller than *n*, then the last returned - iterables will be empty: - - >>> children = distribute(5, [1, 2, 3]) - >>> [list(c) for c in children] - [[1], [2], [3], [], []] - - This function uses :func:`itertools.tee` and may require significant - storage. If you need the order items in the smaller iterables to match the - original iterable, see :func:`divide`. - - """ - if n < 1: - raise ValueError('n must be at least 1') - - children = tee(iterable, n) - return [islice(it, index, None, n) for index, it in enumerate(children)] - - -def stagger(iterable, offsets=(-1, 0, 1), longest=False, fillvalue=None): - """Yield tuples whose elements are offset from *iterable*. - The amount by which the `i`-th item in each tuple is offset is given by - the `i`-th item in *offsets*. - - >>> list(stagger([0, 1, 2, 3])) - [(None, 0, 1), (0, 1, 2), (1, 2, 3)] - >>> list(stagger(range(8), offsets=(0, 2, 4))) - [(0, 2, 4), (1, 3, 5), (2, 4, 6), (3, 5, 7)] - - By default, the sequence will end when the final element of a tuple is the - last item in the iterable. To continue until the first element of a tuple - is the last item in the iterable, set *longest* to ``True``:: - - >>> list(stagger([0, 1, 2, 3], longest=True)) - [(None, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, None), (3, None, None)] - - By default, ``None`` will be used to replace offsets beyond the end of the - sequence. Specify *fillvalue* to use some other value. - - """ - children = tee(iterable, len(offsets)) - - return zip_offset( - *children, offsets=offsets, longest=longest, fillvalue=fillvalue - ) - - -class UnequalIterablesError(ValueError): - def __init__(self, details=None): - msg = 'Iterables have different lengths' - if details is not None: - msg += (': index 0 has length {}; index {} has length {}').format( - *details - ) - - super().__init__(msg) - - -def _zip_equal_generator(iterables): - for combo in zip_longest(*iterables, fillvalue=_marker): - for val in combo: - if val is _marker: - raise UnequalIterablesError() - yield combo - - -def zip_equal(*iterables): - """``zip`` the input *iterables* together, but raise - ``UnequalIterablesError`` if they aren't all the same length. - - >>> it_1 = range(3) - >>> it_2 = iter('abc') - >>> list(zip_equal(it_1, it_2)) - [(0, 'a'), (1, 'b'), (2, 'c')] - - >>> it_1 = range(3) - >>> it_2 = iter('abcd') - >>> list(zip_equal(it_1, it_2)) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - more_itertools.more.UnequalIterablesError: Iterables have different - lengths - - """ - if hexversion >= 0x30A00A6: - warnings.warn( - ( - 'zip_equal will be removed in a future version of ' - 'more-itertools. Use the builtin zip function with ' - 'strict=True instead.' - ), - DeprecationWarning, - ) - # Check whether the iterables are all the same size. - try: - first_size = len(iterables[0]) - for i, it in enumerate(iterables[1:], 1): - size = len(it) - if size != first_size: - break - else: - # If we didn't break out, we can use the built-in zip. - return zip(*iterables) - - # If we did break out, there was a mismatch. - raise UnequalIterablesError(details=(first_size, i, size)) - # If any one of the iterables didn't have a length, start reading - # them until one runs out. - except TypeError: - return _zip_equal_generator(iterables) - - -def zip_offset(*iterables, offsets, longest=False, fillvalue=None): - """``zip`` the input *iterables* together, but offset the `i`-th iterable - by the `i`-th item in *offsets*. - - >>> list(zip_offset('0123', 'abcdef', offsets=(0, 1))) - [('0', 'b'), ('1', 'c'), ('2', 'd'), ('3', 'e')] - - This can be used as a lightweight alternative to SciPy or pandas to analyze - data sets in which some series have a lead or lag relationship. - - By default, the sequence will end when the shortest iterable is exhausted. - To continue until the longest iterable is exhausted, set *longest* to - ``True``. - - >>> list(zip_offset('0123', 'abcdef', offsets=(0, 1), longest=True)) - [('0', 'b'), ('1', 'c'), ('2', 'd'), ('3', 'e'), (None, 'f')] - - By default, ``None`` will be used to replace offsets beyond the end of the - sequence. Specify *fillvalue* to use some other value. - - """ - if len(iterables) != len(offsets): - raise ValueError("Number of iterables and offsets didn't match") - - staggered = [] - for it, n in zip(iterables, offsets): - if n < 0: - staggered.append(chain(repeat(fillvalue, -n), it)) - elif n > 0: - staggered.append(islice(it, n, None)) - else: - staggered.append(it) - - if longest: - return zip_longest(*staggered, fillvalue=fillvalue) - - return zip(*staggered) - - -def sort_together(iterables, key_list=(0,), key=None, reverse=False): - """Return the input iterables sorted together, with *key_list* as the - priority for sorting. All iterables are trimmed to the length of the - shortest one. - - This can be used like the sorting function in a spreadsheet. If each - iterable represents a column of data, the key list determines which - columns are used for sorting. - - By default, all iterables are sorted using the ``0``-th iterable:: - - >>> iterables = [(4, 3, 2, 1), ('a', 'b', 'c', 'd')] - >>> sort_together(iterables) - [(1, 2, 3, 4), ('d', 'c', 'b', 'a')] - - Set a different key list to sort according to another iterable. - Specifying multiple keys dictates how ties are broken:: - - >>> iterables = [(3, 1, 2), (0, 1, 0), ('c', 'b', 'a')] - >>> sort_together(iterables, key_list=(1, 2)) - [(2, 3, 1), (0, 0, 1), ('a', 'c', 'b')] - - To sort by a function of the elements of the iterable, pass a *key* - function. Its arguments are the elements of the iterables corresponding to - the key list:: - - >>> names = ('a', 'b', 'c') - >>> lengths = (1, 2, 3) - >>> widths = (5, 2, 1) - >>> def area(length, width): - ... return length * width - >>> sort_together([names, lengths, widths], key_list=(1, 2), key=area) - [('c', 'b', 'a'), (3, 2, 1), (1, 2, 5)] - - Set *reverse* to ``True`` to sort in descending order. - - >>> sort_together([(1, 2, 3), ('c', 'b', 'a')], reverse=True) - [(3, 2, 1), ('a', 'b', 'c')] - - """ - if key is None: - # if there is no key function, the key argument to sorted is an - # itemgetter - key_argument = itemgetter(*key_list) - else: - # if there is a key function, call it with the items at the offsets - # specified by the key function as arguments - key_list = list(key_list) - if len(key_list) == 1: - # if key_list contains a single item, pass the item at that offset - # as the only argument to the key function - key_offset = key_list[0] - key_argument = lambda zipped_items: key(zipped_items[key_offset]) - else: - # if key_list contains multiple items, use itemgetter to return a - # tuple of items, which we pass as *args to the key function - get_key_items = itemgetter(*key_list) - key_argument = lambda zipped_items: key( - *get_key_items(zipped_items) - ) - - return list( - zip(*sorted(zip(*iterables), key=key_argument, reverse=reverse)) - ) - - -def unzip(iterable): - """The inverse of :func:`zip`, this function disaggregates the elements - of the zipped *iterable*. - - The ``i``-th iterable contains the ``i``-th element from each element - of the zipped iterable. The first element is used to to determine the - length of the remaining elements. - - >>> iterable = [('a', 1), ('b', 2), ('c', 3), ('d', 4)] - >>> letters, numbers = unzip(iterable) - >>> list(letters) - ['a', 'b', 'c', 'd'] - >>> list(numbers) - [1, 2, 3, 4] - - This is similar to using ``zip(*iterable)``, but it avoids reading - *iterable* into memory. Note, however, that this function uses - :func:`itertools.tee` and thus may require significant storage. - - """ - head, iterable = spy(iter(iterable)) - if not head: - # empty iterable, e.g. zip([], [], []) - return () - # spy returns a one-length iterable as head - head = head[0] - iterables = tee(iterable, len(head)) - - def itemgetter(i): - def getter(obj): - try: - return obj[i] - except IndexError: - # basically if we have an iterable like - # iter([(1, 2, 3), (4, 5), (6,)]) - # the second unzipped iterable would fail at the third tuple - # since it would try to access tup[1] - # same with the third unzipped iterable and the second tuple - # to support these "improperly zipped" iterables, - # we create a custom itemgetter - # which just stops the unzipped iterables - # at first length mismatch - raise StopIteration - - return getter - - return tuple(map(itemgetter(i), it) for i, it in enumerate(iterables)) - - -def divide(n, iterable): - """Divide the elements from *iterable* into *n* parts, maintaining - order. - - >>> group_1, group_2 = divide(2, [1, 2, 3, 4, 5, 6]) - >>> list(group_1) - [1, 2, 3] - >>> list(group_2) - [4, 5, 6] - - If the length of *iterable* is not evenly divisible by *n*, then the - length of the returned iterables will not be identical: - - >>> children = divide(3, [1, 2, 3, 4, 5, 6, 7]) - >>> [list(c) for c in children] - [[1, 2, 3], [4, 5], [6, 7]] - - If the length of the iterable is smaller than n, then the last returned - iterables will be empty: - - >>> children = divide(5, [1, 2, 3]) - >>> [list(c) for c in children] - [[1], [2], [3], [], []] - - This function will exhaust the iterable before returning and may require - significant storage. If order is not important, see :func:`distribute`, - which does not first pull the iterable into memory. - - """ - if n < 1: - raise ValueError('n must be at least 1') - - try: - iterable[:0] - except TypeError: - seq = tuple(iterable) - else: - seq = iterable - - q, r = divmod(len(seq), n) - - ret = [] - stop = 0 - for i in range(1, n + 1): - start = stop - stop += q + 1 if i <= r else q - ret.append(iter(seq[start:stop])) - - return ret - - -def always_iterable(obj, base_type=(str, bytes)): - """If *obj* is iterable, return an iterator over its items:: - - >>> obj = (1, 2, 3) - >>> list(always_iterable(obj)) - [1, 2, 3] - - If *obj* is not iterable, return a one-item iterable containing *obj*:: - - >>> obj = 1 - >>> list(always_iterable(obj)) - [1] - - If *obj* is ``None``, return an empty iterable: - - >>> obj = None - >>> list(always_iterable(None)) - [] - - By default, binary and text strings are not considered iterable:: - - >>> obj = 'foo' - >>> list(always_iterable(obj)) - ['foo'] - - If *base_type* is set, objects for which ``isinstance(obj, base_type)`` - returns ``True`` won't be considered iterable. - - >>> obj = {'a': 1} - >>> list(always_iterable(obj)) # Iterate over the dict's keys - ['a'] - >>> list(always_iterable(obj, base_type=dict)) # Treat dicts as a unit - [{'a': 1}] - - Set *base_type* to ``None`` to avoid any special handling and treat objects - Python considers iterable as iterable: - - >>> obj = 'foo' - >>> list(always_iterable(obj, base_type=None)) - ['f', 'o', 'o'] - """ - if obj is None: - return iter(()) - - if (base_type is not None) and isinstance(obj, base_type): - return iter((obj,)) - - try: - return iter(obj) - except TypeError: - return iter((obj,)) - - -def adjacent(predicate, iterable, distance=1): - """Return an iterable over `(bool, item)` tuples where the `item` is - drawn from *iterable* and the `bool` indicates whether - that item satisfies the *predicate* or is adjacent to an item that does. - - For example, to find whether items are adjacent to a ``3``:: - - >>> list(adjacent(lambda x: x == 3, range(6))) - [(False, 0), (False, 1), (True, 2), (True, 3), (True, 4), (False, 5)] - - Set *distance* to change what counts as adjacent. For example, to find - whether items are two places away from a ``3``: - - >>> list(adjacent(lambda x: x == 3, range(6), distance=2)) - [(False, 0), (True, 1), (True, 2), (True, 3), (True, 4), (True, 5)] - - This is useful for contextualizing the results of a search function. - For example, a code comparison tool might want to identify lines that - have changed, but also surrounding lines to give the viewer of the diff - context. - - The predicate function will only be called once for each item in the - iterable. - - See also :func:`groupby_transform`, which can be used with this function - to group ranges of items with the same `bool` value. - - """ - # Allow distance=0 mainly for testing that it reproduces results with map() - if distance < 0: - raise ValueError('distance must be at least 0') - - i1, i2 = tee(iterable) - padding = [False] * distance - selected = chain(padding, map(predicate, i1), padding) - adjacent_to_selected = map(any, windowed(selected, 2 * distance + 1)) - return zip(adjacent_to_selected, i2) - - -def groupby_transform(iterable, keyfunc=None, valuefunc=None, reducefunc=None): - """An extension of :func:`itertools.groupby` that can apply transformations - to the grouped data. - - * *keyfunc* is a function computing a key value for each item in *iterable* - * *valuefunc* is a function that transforms the individual items from - *iterable* after grouping - * *reducefunc* is a function that transforms each group of items - - >>> iterable = 'aAAbBBcCC' - >>> keyfunc = lambda k: k.upper() - >>> valuefunc = lambda v: v.lower() - >>> reducefunc = lambda g: ''.join(g) - >>> list(groupby_transform(iterable, keyfunc, valuefunc, reducefunc)) - [('A', 'aaa'), ('B', 'bbb'), ('C', 'ccc')] - - Each optional argument defaults to an identity function if not specified. - - :func:`groupby_transform` is useful when grouping elements of an iterable - using a separate iterable as the key. To do this, :func:`zip` the iterables - and pass a *keyfunc* that extracts the first element and a *valuefunc* - that extracts the second element:: - - >>> from operator import itemgetter - >>> keys = [0, 0, 1, 1, 1, 2, 2, 2, 3] - >>> values = 'abcdefghi' - >>> iterable = zip(keys, values) - >>> grouper = groupby_transform(iterable, itemgetter(0), itemgetter(1)) - >>> [(k, ''.join(g)) for k, g in grouper] - [(0, 'ab'), (1, 'cde'), (2, 'fgh'), (3, 'i')] - - Note that the order of items in the iterable is significant. - Only adjacent items are grouped together, so if you don't want any - duplicate groups, you should sort the iterable by the key function. - - """ - ret = groupby(iterable, keyfunc) - if valuefunc: - ret = ((k, map(valuefunc, g)) for k, g in ret) - if reducefunc: - ret = ((k, reducefunc(g)) for k, g in ret) - - return ret - - -class numeric_range(abc.Sequence, abc.Hashable): - """An extension of the built-in ``range()`` function whose arguments can - be any orderable numeric type. - - With only *stop* specified, *start* defaults to ``0`` and *step* - defaults to ``1``. The output items will match the type of *stop*: - - >>> list(numeric_range(3.5)) - [0.0, 1.0, 2.0, 3.0] - - With only *start* and *stop* specified, *step* defaults to ``1``. The - output items will match the type of *start*: - - >>> from decimal import Decimal - >>> start = Decimal('2.1') - >>> stop = Decimal('5.1') - >>> list(numeric_range(start, stop)) - [Decimal('2.1'), Decimal('3.1'), Decimal('4.1')] - - With *start*, *stop*, and *step* specified the output items will match - the type of ``start + step``: - - >>> from fractions import Fraction - >>> start = Fraction(1, 2) # Start at 1/2 - >>> stop = Fraction(5, 2) # End at 5/2 - >>> step = Fraction(1, 2) # Count by 1/2 - >>> list(numeric_range(start, stop, step)) - [Fraction(1, 2), Fraction(1, 1), Fraction(3, 2), Fraction(2, 1)] - - If *step* is zero, ``ValueError`` is raised. Negative steps are supported: - - >>> list(numeric_range(3, -1, -1.0)) - [3.0, 2.0, 1.0, 0.0] - - Be aware of the limitations of floating point numbers; the representation - of the yielded numbers may be surprising. - - ``datetime.datetime`` objects can be used for *start* and *stop*, if *step* - is a ``datetime.timedelta`` object: - - >>> import datetime - >>> start = datetime.datetime(2019, 1, 1) - >>> stop = datetime.datetime(2019, 1, 3) - >>> step = datetime.timedelta(days=1) - >>> items = iter(numeric_range(start, stop, step)) - >>> next(items) - datetime.datetime(2019, 1, 1, 0, 0) - >>> next(items) - datetime.datetime(2019, 1, 2, 0, 0) - - """ - - _EMPTY_HASH = hash(range(0, 0)) - - def __init__(self, *args): - argc = len(args) - if argc == 1: - (self._stop,) = args - self._start = type(self._stop)(0) - self._step = type(self._stop - self._start)(1) - elif argc == 2: - self._start, self._stop = args - self._step = type(self._stop - self._start)(1) - elif argc == 3: - self._start, self._stop, self._step = args - elif argc == 0: - raise TypeError( - 'numeric_range expected at least ' - '1 argument, got {}'.format(argc) - ) - else: - raise TypeError( - 'numeric_range expected at most ' - '3 arguments, got {}'.format(argc) - ) - - self._zero = type(self._step)(0) - if self._step == self._zero: - raise ValueError('numeric_range() arg 3 must not be zero') - self._growing = self._step > self._zero - self._init_len() - - def __bool__(self): - if self._growing: - return self._start < self._stop - else: - return self._start > self._stop - - def __contains__(self, elem): - if self._growing: - if self._start <= elem < self._stop: - return (elem - self._start) % self._step == self._zero - else: - if self._start >= elem > self._stop: - return (self._start - elem) % (-self._step) == self._zero - - return False - - def __eq__(self, other): - if isinstance(other, numeric_range): - empty_self = not bool(self) - empty_other = not bool(other) - if empty_self or empty_other: - return empty_self and empty_other # True if both empty - else: - return ( - self._start == other._start - and self._step == other._step - and self._get_by_index(-1) == other._get_by_index(-1) - ) - else: - return False - - def __getitem__(self, key): - if isinstance(key, int): - return self._get_by_index(key) - elif isinstance(key, slice): - step = self._step if key.step is None else key.step * self._step - - if key.start is None or key.start <= -self._len: - start = self._start - elif key.start >= self._len: - start = self._stop - else: # -self._len < key.start < self._len - start = self._get_by_index(key.start) - - if key.stop is None or key.stop >= self._len: - stop = self._stop - elif key.stop <= -self._len: - stop = self._start - else: # -self._len < key.stop < self._len - stop = self._get_by_index(key.stop) - - return numeric_range(start, stop, step) - else: - raise TypeError( - 'numeric range indices must be ' - 'integers or slices, not {}'.format(type(key).__name__) - ) - - def __hash__(self): - if self: - return hash((self._start, self._get_by_index(-1), self._step)) - else: - return self._EMPTY_HASH - - def __iter__(self): - values = (self._start + (n * self._step) for n in count()) - if self._growing: - return takewhile(partial(gt, self._stop), values) - else: - return takewhile(partial(lt, self._stop), values) - - def __len__(self): - return self._len - - def _init_len(self): - if self._growing: - start = self._start - stop = self._stop - step = self._step - else: - start = self._stop - stop = self._start - step = -self._step - distance = stop - start - if distance <= self._zero: - self._len = 0 - else: # distance > 0 and step > 0: regular euclidean division - q, r = divmod(distance, step) - self._len = int(q) + int(r != self._zero) - - def __reduce__(self): - return numeric_range, (self._start, self._stop, self._step) - - def __repr__(self): - if self._step == 1: - return "numeric_range({}, {})".format( - repr(self._start), repr(self._stop) - ) - else: - return "numeric_range({}, {}, {})".format( - repr(self._start), repr(self._stop), repr(self._step) - ) - - def __reversed__(self): - return iter( - numeric_range( - self._get_by_index(-1), self._start - self._step, -self._step - ) - ) - - def count(self, value): - return int(value in self) - - def index(self, value): - if self._growing: - if self._start <= value < self._stop: - q, r = divmod(value - self._start, self._step) - if r == self._zero: - return int(q) - else: - if self._start >= value > self._stop: - q, r = divmod(self._start - value, -self._step) - if r == self._zero: - return int(q) - - raise ValueError("{} is not in numeric range".format(value)) - - def _get_by_index(self, i): - if i < 0: - i += self._len - if i < 0 or i >= self._len: - raise IndexError("numeric range object index out of range") - return self._start + i * self._step - - -def count_cycle(iterable, n=None): - """Cycle through the items from *iterable* up to *n* times, yielding - the number of completed cycles along with each item. If *n* is omitted the - process repeats indefinitely. - - >>> list(count_cycle('AB', 3)) - [(0, 'A'), (0, 'B'), (1, 'A'), (1, 'B'), (2, 'A'), (2, 'B')] - - """ - iterable = tuple(iterable) - if not iterable: - return iter(()) - counter = count() if n is None else range(n) - return ((i, item) for i in counter for item in iterable) - - -def mark_ends(iterable): - """Yield 3-tuples of the form ``(is_first, is_last, item)``. - - >>> list(mark_ends('ABC')) - [(True, False, 'A'), (False, False, 'B'), (False, True, 'C')] - - Use this when looping over an iterable to take special action on its first - and/or last items: - - >>> iterable = ['Header', 100, 200, 'Footer'] - >>> total = 0 - >>> for is_first, is_last, item in mark_ends(iterable): - ... if is_first: - ... continue # Skip the header - ... if is_last: - ... continue # Skip the footer - ... total += item - >>> print(total) - 300 - """ - it = iter(iterable) - - try: - b = next(it) - except StopIteration: - return - - try: - for i in count(): - a = b - b = next(it) - yield i == 0, False, a - - except StopIteration: - yield i == 0, True, a - - -def locate(iterable, pred=bool, window_size=None): - """Yield the index of each item in *iterable* for which *pred* returns - ``True``. - - *pred* defaults to :func:`bool`, which will select truthy items: - - >>> list(locate([0, 1, 1, 0, 1, 0, 0])) - [1, 2, 4] - - Set *pred* to a custom function to, e.g., find the indexes for a particular - item. - - >>> list(locate(['a', 'b', 'c', 'b'], lambda x: x == 'b')) - [1, 3] - - If *window_size* is given, then the *pred* function will be called with - that many items. This enables searching for sub-sequences: - - >>> iterable = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3] - >>> pred = lambda *args: args == (1, 2, 3) - >>> list(locate(iterable, pred=pred, window_size=3)) - [1, 5, 9] - - Use with :func:`seekable` to find indexes and then retrieve the associated - items: - - >>> from itertools import count - >>> from more_itertools import seekable - >>> source = (3 * n + 1 if (n % 2) else n // 2 for n in count()) - >>> it = seekable(source) - >>> pred = lambda x: x > 100 - >>> indexes = locate(it, pred=pred) - >>> i = next(indexes) - >>> it.seek(i) - >>> next(it) - 106 - - """ - if window_size is None: - return compress(count(), map(pred, iterable)) - - if window_size < 1: - raise ValueError('window size must be at least 1') - - it = windowed(iterable, window_size, fillvalue=_marker) - return compress(count(), starmap(pred, it)) - - -def lstrip(iterable, pred): - """Yield the items from *iterable*, but strip any from the beginning - for which *pred* returns ``True``. - - For example, to remove a set of items from the start of an iterable: - - >>> iterable = (None, False, None, 1, 2, None, 3, False, None) - >>> pred = lambda x: x in {None, False, ''} - >>> list(lstrip(iterable, pred)) - [1, 2, None, 3, False, None] - - This function is analogous to to :func:`str.lstrip`, and is essentially - an wrapper for :func:`itertools.dropwhile`. - - """ - return dropwhile(pred, iterable) - - -def rstrip(iterable, pred): - """Yield the items from *iterable*, but strip any from the end - for which *pred* returns ``True``. - - For example, to remove a set of items from the end of an iterable: - - >>> iterable = (None, False, None, 1, 2, None, 3, False, None) - >>> pred = lambda x: x in {None, False, ''} - >>> list(rstrip(iterable, pred)) - [None, False, None, 1, 2, None, 3] - - This function is analogous to :func:`str.rstrip`. - - """ - cache = [] - cache_append = cache.append - cache_clear = cache.clear - for x in iterable: - if pred(x): - cache_append(x) - else: - yield from cache - cache_clear() - yield x - - -def strip(iterable, pred): - """Yield the items from *iterable*, but strip any from the - beginning and end for which *pred* returns ``True``. - - For example, to remove a set of items from both ends of an iterable: - - >>> iterable = (None, False, None, 1, 2, None, 3, False, None) - >>> pred = lambda x: x in {None, False, ''} - >>> list(strip(iterable, pred)) - [1, 2, None, 3] - - This function is analogous to :func:`str.strip`. - - """ - return rstrip(lstrip(iterable, pred), pred) - - -class islice_extended: - """An extension of :func:`itertools.islice` that supports negative values - for *stop*, *start*, and *step*. - - >>> iterable = iter('abcdefgh') - >>> list(islice_extended(iterable, -4, -1)) - ['e', 'f', 'g'] - - Slices with negative values require some caching of *iterable*, but this - function takes care to minimize the amount of memory required. - - For example, you can use a negative step with an infinite iterator: - - >>> from itertools import count - >>> list(islice_extended(count(), 110, 99, -2)) - [110, 108, 106, 104, 102, 100] - - You can also use slice notation directly: - - >>> iterable = map(str, count()) - >>> it = islice_extended(iterable)[10:20:2] - >>> list(it) - ['10', '12', '14', '16', '18'] - - """ - - def __init__(self, iterable, *args): - it = iter(iterable) - if args: - self._iterable = _islice_helper(it, slice(*args)) - else: - self._iterable = it - - def __iter__(self): - return self - - def __next__(self): - return next(self._iterable) - - def __getitem__(self, key): - if isinstance(key, slice): - return islice_extended(_islice_helper(self._iterable, key)) - - raise TypeError('islice_extended.__getitem__ argument must be a slice') - - -def _islice_helper(it, s): - start = s.start - stop = s.stop - if s.step == 0: - raise ValueError('step argument must be a non-zero integer or None.') - step = s.step or 1 - - if step > 0: - start = 0 if (start is None) else start - - if start < 0: - # Consume all but the last -start items - cache = deque(enumerate(it, 1), maxlen=-start) - len_iter = cache[-1][0] if cache else 0 - - # Adjust start to be positive - i = max(len_iter + start, 0) - - # Adjust stop to be positive - if stop is None: - j = len_iter - elif stop >= 0: - j = min(stop, len_iter) - else: - j = max(len_iter + stop, 0) - - # Slice the cache - n = j - i - if n <= 0: - return - - for index, item in islice(cache, 0, n, step): - yield item - elif (stop is not None) and (stop < 0): - # Advance to the start position - next(islice(it, start, start), None) - - # When stop is negative, we have to carry -stop items while - # iterating - cache = deque(islice(it, -stop), maxlen=-stop) - - for index, item in enumerate(it): - cached_item = cache.popleft() - if index % step == 0: - yield cached_item - cache.append(item) - else: - # When both start and stop are positive we have the normal case - yield from islice(it, start, stop, step) - else: - start = -1 if (start is None) else start - - if (stop is not None) and (stop < 0): - # Consume all but the last items - n = -stop - 1 - cache = deque(enumerate(it, 1), maxlen=n) - len_iter = cache[-1][0] if cache else 0 - - # If start and stop are both negative they are comparable and - # we can just slice. Otherwise we can adjust start to be negative - # and then slice. - if start < 0: - i, j = start, stop - else: - i, j = min(start - len_iter, -1), None - - for index, item in list(cache)[i:j:step]: - yield item - else: - # Advance to the stop position - if stop is not None: - m = stop + 1 - next(islice(it, m, m), None) - - # stop is positive, so if start is negative they are not comparable - # and we need the rest of the items. - if start < 0: - i = start - n = None - # stop is None and start is positive, so we just need items up to - # the start index. - elif stop is None: - i = None - n = start + 1 - # Both stop and start are positive, so they are comparable. - else: - i = None - n = start - stop - if n <= 0: - return - - cache = list(islice(it, n)) - - yield from cache[i::step] - - -def always_reversible(iterable): - """An extension of :func:`reversed` that supports all iterables, not - just those which implement the ``Reversible`` or ``Sequence`` protocols. - - >>> print(*always_reversible(x for x in range(3))) - 2 1 0 - - If the iterable is already reversible, this function returns the - result of :func:`reversed()`. If the iterable is not reversible, - this function will cache the remaining items in the iterable and - yield them in reverse order, which may require significant storage. - """ - try: - return reversed(iterable) - except TypeError: - return reversed(list(iterable)) - - -def consecutive_groups(iterable, ordering=lambda x: x): - """Yield groups of consecutive items using :func:`itertools.groupby`. - The *ordering* function determines whether two items are adjacent by - returning their position. - - By default, the ordering function is the identity function. This is - suitable for finding runs of numbers: - - >>> iterable = [1, 10, 11, 12, 20, 30, 31, 32, 33, 40] - >>> for group in consecutive_groups(iterable): - ... print(list(group)) - [1] - [10, 11, 12] - [20] - [30, 31, 32, 33] - [40] - - For finding runs of adjacent letters, try using the :meth:`index` method - of a string of letters: - - >>> from string import ascii_lowercase - >>> iterable = 'abcdfgilmnop' - >>> ordering = ascii_lowercase.index - >>> for group in consecutive_groups(iterable, ordering): - ... print(list(group)) - ['a', 'b', 'c', 'd'] - ['f', 'g'] - ['i'] - ['l', 'm', 'n', 'o', 'p'] - - Each group of consecutive items is an iterator that shares it source with - *iterable*. When an an output group is advanced, the previous group is - no longer available unless its elements are copied (e.g., into a ``list``). - - >>> iterable = [1, 2, 11, 12, 21, 22] - >>> saved_groups = [] - >>> for group in consecutive_groups(iterable): - ... saved_groups.append(list(group)) # Copy group elements - >>> saved_groups - [[1, 2], [11, 12], [21, 22]] - - """ - for k, g in groupby( - enumerate(iterable), key=lambda x: x[0] - ordering(x[1]) - ): - yield map(itemgetter(1), g) - - -def difference(iterable, func=sub, *, initial=None): - """This function is the inverse of :func:`itertools.accumulate`. By default - it will compute the first difference of *iterable* using - :func:`operator.sub`: - - >>> from itertools import accumulate - >>> iterable = accumulate([0, 1, 2, 3, 4]) # produces 0, 1, 3, 6, 10 - >>> list(difference(iterable)) - [0, 1, 2, 3, 4] - - *func* defaults to :func:`operator.sub`, but other functions can be - specified. They will be applied as follows:: - - A, B, C, D, ... --> A, func(B, A), func(C, B), func(D, C), ... - - For example, to do progressive division: - - >>> iterable = [1, 2, 6, 24, 120] - >>> func = lambda x, y: x // y - >>> list(difference(iterable, func)) - [1, 2, 3, 4, 5] - - If the *initial* keyword is set, the first element will be skipped when - computing successive differences. - - >>> it = [10, 11, 13, 16] # from accumulate([1, 2, 3], initial=10) - >>> list(difference(it, initial=10)) - [1, 2, 3] - - """ - a, b = tee(iterable) - try: - first = [next(b)] - except StopIteration: - return iter([]) - - if initial is not None: - first = [] - - return chain(first, starmap(func, zip(b, a))) - - -class SequenceView(Sequence): - """Return a read-only view of the sequence object *target*. - - :class:`SequenceView` objects are analogous to Python's built-in - "dictionary view" types. They provide a dynamic view of a sequence's items, - meaning that when the sequence updates, so does the view. - - >>> seq = ['0', '1', '2'] - >>> view = SequenceView(seq) - >>> view - SequenceView(['0', '1', '2']) - >>> seq.append('3') - >>> view - SequenceView(['0', '1', '2', '3']) - - Sequence views support indexing, slicing, and length queries. They act - like the underlying sequence, except they don't allow assignment: - - >>> view[1] - '1' - >>> view[1:-1] - ['1', '2'] - >>> len(view) - 4 - - Sequence views are useful as an alternative to copying, as they don't - require (much) extra storage. - - """ - - def __init__(self, target): - if not isinstance(target, Sequence): - raise TypeError - self._target = target - - def __getitem__(self, index): - return self._target[index] - - def __len__(self): - return len(self._target) - - def __repr__(self): - return '{}({})'.format(self.__class__.__name__, repr(self._target)) - - -class seekable: - """Wrap an iterator to allow for seeking backward and forward. This - progressively caches the items in the source iterable so they can be - re-visited. - - Call :meth:`seek` with an index to seek to that position in the source - iterable. - - To "reset" an iterator, seek to ``0``: - - >>> from itertools import count - >>> it = seekable((str(n) for n in count())) - >>> next(it), next(it), next(it) - ('0', '1', '2') - >>> it.seek(0) - >>> next(it), next(it), next(it) - ('0', '1', '2') - >>> next(it) - '3' - - You can also seek forward: - - >>> it = seekable((str(n) for n in range(20))) - >>> it.seek(10) - >>> next(it) - '10' - >>> it.seek(20) # Seeking past the end of the source isn't a problem - >>> list(it) - [] - >>> it.seek(0) # Resetting works even after hitting the end - >>> next(it), next(it), next(it) - ('0', '1', '2') - - Call :meth:`peek` to look ahead one item without advancing the iterator: - - >>> it = seekable('1234') - >>> it.peek() - '1' - >>> list(it) - ['1', '2', '3', '4'] - >>> it.peek(default='empty') - 'empty' - - Before the iterator is at its end, calling :func:`bool` on it will return - ``True``. After it will return ``False``: - - >>> it = seekable('5678') - >>> bool(it) - True - >>> list(it) - ['5', '6', '7', '8'] - >>> bool(it) - False - - You may view the contents of the cache with the :meth:`elements` method. - That returns a :class:`SequenceView`, a view that updates automatically: - - >>> it = seekable((str(n) for n in range(10))) - >>> next(it), next(it), next(it) - ('0', '1', '2') - >>> elements = it.elements() - >>> elements - SequenceView(['0', '1', '2']) - >>> next(it) - '3' - >>> elements - SequenceView(['0', '1', '2', '3']) - - By default, the cache grows as the source iterable progresses, so beware of - wrapping very large or infinite iterables. Supply *maxlen* to limit the - size of the cache (this of course limits how far back you can seek). - - >>> from itertools import count - >>> it = seekable((str(n) for n in count()), maxlen=2) - >>> next(it), next(it), next(it), next(it) - ('0', '1', '2', '3') - >>> list(it.elements()) - ['2', '3'] - >>> it.seek(0) - >>> next(it), next(it), next(it), next(it) - ('2', '3', '4', '5') - >>> next(it) - '6' - - """ - - def __init__(self, iterable, maxlen=None): - self._source = iter(iterable) - if maxlen is None: - self._cache = [] - else: - self._cache = deque([], maxlen) - self._index = None - - def __iter__(self): - return self - - def __next__(self): - if self._index is not None: - try: - item = self._cache[self._index] - except IndexError: - self._index = None - else: - self._index += 1 - return item - - item = next(self._source) - self._cache.append(item) - return item - - def __bool__(self): - try: - self.peek() - except StopIteration: - return False - return True - - def peek(self, default=_marker): - try: - peeked = next(self) - except StopIteration: - if default is _marker: - raise - return default - if self._index is None: - self._index = len(self._cache) - self._index -= 1 - return peeked - - def elements(self): - return SequenceView(self._cache) - - def seek(self, index): - self._index = index - remainder = index - len(self._cache) - if remainder > 0: - consume(self, remainder) - - -class run_length: - """ - :func:`run_length.encode` compresses an iterable with run-length encoding. - It yields groups of repeated items with the count of how many times they - were repeated: - - >>> uncompressed = 'abbcccdddd' - >>> list(run_length.encode(uncompressed)) - [('a', 1), ('b', 2), ('c', 3), ('d', 4)] - - :func:`run_length.decode` decompresses an iterable that was previously - compressed with run-length encoding. It yields the items of the - decompressed iterable: - - >>> compressed = [('a', 1), ('b', 2), ('c', 3), ('d', 4)] - >>> list(run_length.decode(compressed)) - ['a', 'b', 'b', 'c', 'c', 'c', 'd', 'd', 'd', 'd'] - - """ - - @staticmethod - def encode(iterable): - return ((k, ilen(g)) for k, g in groupby(iterable)) - - @staticmethod - def decode(iterable): - return chain.from_iterable(repeat(k, n) for k, n in iterable) - - -def exactly_n(iterable, n, predicate=bool): - """Return ``True`` if exactly ``n`` items in the iterable are ``True`` - according to the *predicate* function. - - >>> exactly_n([True, True, False], 2) - True - >>> exactly_n([True, True, False], 1) - False - >>> exactly_n([0, 1, 2, 3, 4, 5], 3, lambda x: x < 3) - True - - The iterable will be advanced until ``n + 1`` truthy items are encountered, - so avoid calling it on infinite iterables. - - """ - return len(take(n + 1, filter(predicate, iterable))) == n - - -def circular_shifts(iterable): - """Return a list of circular shifts of *iterable*. - - >>> circular_shifts(range(4)) - [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)] - """ - lst = list(iterable) - return take(len(lst), windowed(cycle(lst), len(lst))) - - -def make_decorator(wrapping_func, result_index=0): - """Return a decorator version of *wrapping_func*, which is a function that - modifies an iterable. *result_index* is the position in that function's - signature where the iterable goes. - - This lets you use itertools on the "production end," i.e. at function - definition. This can augment what the function returns without changing the - function's code. - - For example, to produce a decorator version of :func:`chunked`: - - >>> from more_itertools import chunked - >>> chunker = make_decorator(chunked, result_index=0) - >>> @chunker(3) - ... def iter_range(n): - ... return iter(range(n)) - ... - >>> list(iter_range(9)) - [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - - To only allow truthy items to be returned: - - >>> truth_serum = make_decorator(filter, result_index=1) - >>> @truth_serum(bool) - ... def boolean_test(): - ... return [0, 1, '', ' ', False, True] - ... - >>> list(boolean_test()) - [1, ' ', True] - - The :func:`peekable` and :func:`seekable` wrappers make for practical - decorators: - - >>> from more_itertools import peekable - >>> peekable_function = make_decorator(peekable) - >>> @peekable_function() - ... def str_range(*args): - ... return (str(x) for x in range(*args)) - ... - >>> it = str_range(1, 20, 2) - >>> next(it), next(it), next(it) - ('1', '3', '5') - >>> it.peek() - '7' - >>> next(it) - '7' - - """ - # See https://sites.google.com/site/bbayles/index/decorator_factory for - # notes on how this works. - def decorator(*wrapping_args, **wrapping_kwargs): - def outer_wrapper(f): - def inner_wrapper(*args, **kwargs): - result = f(*args, **kwargs) - wrapping_args_ = list(wrapping_args) - wrapping_args_.insert(result_index, result) - return wrapping_func(*wrapping_args_, **wrapping_kwargs) - - return inner_wrapper - - return outer_wrapper - - return decorator - - -def map_reduce(iterable, keyfunc, valuefunc=None, reducefunc=None): - """Return a dictionary that maps the items in *iterable* to categories - defined by *keyfunc*, transforms them with *valuefunc*, and - then summarizes them by category with *reducefunc*. - - *valuefunc* defaults to the identity function if it is unspecified. - If *reducefunc* is unspecified, no summarization takes place: - - >>> keyfunc = lambda x: x.upper() - >>> result = map_reduce('abbccc', keyfunc) - >>> sorted(result.items()) - [('A', ['a']), ('B', ['b', 'b']), ('C', ['c', 'c', 'c'])] - - Specifying *valuefunc* transforms the categorized items: - - >>> keyfunc = lambda x: x.upper() - >>> valuefunc = lambda x: 1 - >>> result = map_reduce('abbccc', keyfunc, valuefunc) - >>> sorted(result.items()) - [('A', [1]), ('B', [1, 1]), ('C', [1, 1, 1])] - - Specifying *reducefunc* summarizes the categorized items: - - >>> keyfunc = lambda x: x.upper() - >>> valuefunc = lambda x: 1 - >>> reducefunc = sum - >>> result = map_reduce('abbccc', keyfunc, valuefunc, reducefunc) - >>> sorted(result.items()) - [('A', 1), ('B', 2), ('C', 3)] - - You may want to filter the input iterable before applying the map/reduce - procedure: - - >>> all_items = range(30) - >>> items = [x for x in all_items if 10 <= x <= 20] # Filter - >>> keyfunc = lambda x: x % 2 # Evens map to 0; odds to 1 - >>> categories = map_reduce(items, keyfunc=keyfunc) - >>> sorted(categories.items()) - [(0, [10, 12, 14, 16, 18, 20]), (1, [11, 13, 15, 17, 19])] - >>> summaries = map_reduce(items, keyfunc=keyfunc, reducefunc=sum) - >>> sorted(summaries.items()) - [(0, 90), (1, 75)] - - Note that all items in the iterable are gathered into a list before the - summarization step, which may require significant storage. - - The returned object is a :obj:`collections.defaultdict` with the - ``default_factory`` set to ``None``, such that it behaves like a normal - dictionary. - - """ - valuefunc = (lambda x: x) if (valuefunc is None) else valuefunc - - ret = defaultdict(list) - for item in iterable: - key = keyfunc(item) - value = valuefunc(item) - ret[key].append(value) - - if reducefunc is not None: - for key, value_list in ret.items(): - ret[key] = reducefunc(value_list) - - ret.default_factory = None - return ret - - -def rlocate(iterable, pred=bool, window_size=None): - """Yield the index of each item in *iterable* for which *pred* returns - ``True``, starting from the right and moving left. - - *pred* defaults to :func:`bool`, which will select truthy items: - - >>> list(rlocate([0, 1, 1, 0, 1, 0, 0])) # Truthy at 1, 2, and 4 - [4, 2, 1] - - Set *pred* to a custom function to, e.g., find the indexes for a particular - item: - - >>> iterable = iter('abcb') - >>> pred = lambda x: x == 'b' - >>> list(rlocate(iterable, pred)) - [3, 1] - - If *window_size* is given, then the *pred* function will be called with - that many items. This enables searching for sub-sequences: - - >>> iterable = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3] - >>> pred = lambda *args: args == (1, 2, 3) - >>> list(rlocate(iterable, pred=pred, window_size=3)) - [9, 5, 1] - - Beware, this function won't return anything for infinite iterables. - If *iterable* is reversible, ``rlocate`` will reverse it and search from - the right. Otherwise, it will search from the left and return the results - in reverse order. - - See :func:`locate` to for other example applications. - - """ - if window_size is None: - try: - len_iter = len(iterable) - return (len_iter - i - 1 for i in locate(reversed(iterable), pred)) - except TypeError: - pass - - return reversed(list(locate(iterable, pred, window_size))) - - -def replace(iterable, pred, substitutes, count=None, window_size=1): - """Yield the items from *iterable*, replacing the items for which *pred* - returns ``True`` with the items from the iterable *substitutes*. - - >>> iterable = [1, 1, 0, 1, 1, 0, 1, 1] - >>> pred = lambda x: x == 0 - >>> substitutes = (2, 3) - >>> list(replace(iterable, pred, substitutes)) - [1, 1, 2, 3, 1, 1, 2, 3, 1, 1] - - If *count* is given, the number of replacements will be limited: - - >>> iterable = [1, 1, 0, 1, 1, 0, 1, 1, 0] - >>> pred = lambda x: x == 0 - >>> substitutes = [None] - >>> list(replace(iterable, pred, substitutes, count=2)) - [1, 1, None, 1, 1, None, 1, 1, 0] - - Use *window_size* to control the number of items passed as arguments to - *pred*. This allows for locating and replacing subsequences. - - >>> iterable = [0, 1, 2, 5, 0, 1, 2, 5] - >>> window_size = 3 - >>> pred = lambda *args: args == (0, 1, 2) # 3 items passed to pred - >>> substitutes = [3, 4] # Splice in these items - >>> list(replace(iterable, pred, substitutes, window_size=window_size)) - [3, 4, 5, 3, 4, 5] - - """ - if window_size < 1: - raise ValueError('window_size must be at least 1') - - # Save the substitutes iterable, since it's used more than once - substitutes = tuple(substitutes) - - # Add padding such that the number of windows matches the length of the - # iterable - it = chain(iterable, [_marker] * (window_size - 1)) - windows = windowed(it, window_size) - - n = 0 - for w in windows: - # If the current window matches our predicate (and we haven't hit - # our maximum number of replacements), splice in the substitutes - # and then consume the following windows that overlap with this one. - # For example, if the iterable is (0, 1, 2, 3, 4...) - # and the window size is 2, we have (0, 1), (1, 2), (2, 3)... - # If the predicate matches on (0, 1), we need to zap (0, 1) and (1, 2) - if pred(*w): - if (count is None) or (n < count): - n += 1 - yield from substitutes - consume(windows, window_size - 1) - continue - - # If there was no match (or we've reached the replacement limit), - # yield the first item from the window. - if w and (w[0] is not _marker): - yield w[0] - - -def partitions(iterable): - """Yield all possible order-preserving partitions of *iterable*. - - >>> iterable = 'abc' - >>> for part in partitions(iterable): - ... print([''.join(p) for p in part]) - ['abc'] - ['a', 'bc'] - ['ab', 'c'] - ['a', 'b', 'c'] - - This is unrelated to :func:`partition`. - - """ - sequence = list(iterable) - n = len(sequence) - for i in powerset(range(1, n)): - yield [sequence[i:j] for i, j in zip((0,) + i, i + (n,))] - - -def set_partitions(iterable, k=None): - """ - Yield the set partitions of *iterable* into *k* parts. Set partitions are - not order-preserving. - - >>> iterable = 'abc' - >>> for part in set_partitions(iterable, 2): - ... print([''.join(p) for p in part]) - ['a', 'bc'] - ['ab', 'c'] - ['b', 'ac'] - - - If *k* is not given, every set partition is generated. - - >>> iterable = 'abc' - >>> for part in set_partitions(iterable): - ... print([''.join(p) for p in part]) - ['abc'] - ['a', 'bc'] - ['ab', 'c'] - ['b', 'ac'] - ['a', 'b', 'c'] - - """ - L = list(iterable) - n = len(L) - if k is not None: - if k < 1: - raise ValueError( - "Can't partition in a negative or zero number of groups" - ) - elif k > n: - return - - def set_partitions_helper(L, k): - n = len(L) - if k == 1: - yield [L] - elif n == k: - yield [[s] for s in L] - else: - e, *M = L - for p in set_partitions_helper(M, k - 1): - yield [[e], *p] - for p in set_partitions_helper(M, k): - for i in range(len(p)): - yield p[:i] + [[e] + p[i]] + p[i + 1 :] - - if k is None: - for k in range(1, n + 1): - yield from set_partitions_helper(L, k) - else: - yield from set_partitions_helper(L, k) - - -class time_limited: - """ - Yield items from *iterable* until *limit_seconds* have passed. - If the time limit expires before all items have been yielded, the - ``timed_out`` parameter will be set to ``True``. - - >>> from time import sleep - >>> def generator(): - ... yield 1 - ... yield 2 - ... sleep(0.2) - ... yield 3 - >>> iterable = time_limited(0.1, generator()) - >>> list(iterable) - [1, 2] - >>> iterable.timed_out - True - - Note that the time is checked before each item is yielded, and iteration - stops if the time elapsed is greater than *limit_seconds*. If your time - limit is 1 second, but it takes 2 seconds to generate the first item from - the iterable, the function will run for 2 seconds and not yield anything. - - """ - - def __init__(self, limit_seconds, iterable): - if limit_seconds < 0: - raise ValueError('limit_seconds must be positive') - self.limit_seconds = limit_seconds - self._iterable = iter(iterable) - self._start_time = monotonic() - self.timed_out = False - - def __iter__(self): - return self - - def __next__(self): - item = next(self._iterable) - if monotonic() - self._start_time > self.limit_seconds: - self.timed_out = True - raise StopIteration - - return item - - -def only(iterable, default=None, too_long=None): - """If *iterable* has only one item, return it. - If it has zero items, return *default*. - If it has more than one item, raise the exception given by *too_long*, - which is ``ValueError`` by default. - - >>> only([], default='missing') - 'missing' - >>> only([1]) - 1 - >>> only([1, 2]) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - ValueError: Expected exactly one item in iterable, but got 1, 2, - and perhaps more.' - >>> only([1, 2], too_long=TypeError) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - TypeError - - Note that :func:`only` attempts to advance *iterable* twice to ensure there - is only one item. See :func:`spy` or :func:`peekable` to check - iterable contents less destructively. - """ - it = iter(iterable) - first_value = next(it, default) - - try: - second_value = next(it) - except StopIteration: - pass - else: - msg = ( - 'Expected exactly one item in iterable, but got {!r}, {!r}, ' - 'and perhaps more.'.format(first_value, second_value) - ) - raise too_long or ValueError(msg) - - return first_value - - -def ichunked(iterable, n): - """Break *iterable* into sub-iterables with *n* elements each. - :func:`ichunked` is like :func:`chunked`, but it yields iterables - instead of lists. - - If the sub-iterables are read in order, the elements of *iterable* - won't be stored in memory. - If they are read out of order, :func:`itertools.tee` is used to cache - elements as necessary. - - >>> from itertools import count - >>> all_chunks = ichunked(count(), 4) - >>> c_1, c_2, c_3 = next(all_chunks), next(all_chunks), next(all_chunks) - >>> list(c_2) # c_1's elements have been cached; c_3's haven't been - [4, 5, 6, 7] - >>> list(c_1) - [0, 1, 2, 3] - >>> list(c_3) - [8, 9, 10, 11] - - """ - source = iter(iterable) - - while True: - # Check to see whether we're at the end of the source iterable - item = next(source, _marker) - if item is _marker: - return - - # Clone the source and yield an n-length slice - source, it = tee(chain([item], source)) - yield islice(it, n) - - # Advance the source iterable - consume(source, n) - - -def distinct_combinations(iterable, r): - """Yield the distinct combinations of *r* items taken from *iterable*. - - >>> list(distinct_combinations([0, 0, 1], 2)) - [(0, 0), (0, 1)] - - Equivalent to ``set(combinations(iterable))``, except duplicates are not - generated and thrown away. For larger input sequences this is much more - efficient. - - """ - if r < 0: - raise ValueError('r must be non-negative') - elif r == 0: - yield () - return - pool = tuple(iterable) - generators = [unique_everseen(enumerate(pool), key=itemgetter(1))] - current_combo = [None] * r - level = 0 - while generators: - try: - cur_idx, p = next(generators[-1]) - except StopIteration: - generators.pop() - level -= 1 - continue - current_combo[level] = p - if level + 1 == r: - yield tuple(current_combo) - else: - generators.append( - unique_everseen( - enumerate(pool[cur_idx + 1 :], cur_idx + 1), - key=itemgetter(1), - ) - ) - level += 1 - - -def filter_except(validator, iterable, *exceptions): - """Yield the items from *iterable* for which the *validator* function does - not raise one of the specified *exceptions*. - - *validator* is called for each item in *iterable*. - It should be a function that accepts one argument and raises an exception - if that item is not valid. - - >>> iterable = ['1', '2', 'three', '4', None] - >>> list(filter_except(int, iterable, ValueError, TypeError)) - ['1', '2', '4'] - - If an exception other than one given by *exceptions* is raised by - *validator*, it is raised like normal. - """ - for item in iterable: - try: - validator(item) - except exceptions: - pass - else: - yield item - - -def map_except(function, iterable, *exceptions): - """Transform each item from *iterable* with *function* and yield the - result, unless *function* raises one of the specified *exceptions*. - - *function* is called to transform each item in *iterable*. - It should be a accept one argument. - - >>> iterable = ['1', '2', 'three', '4', None] - >>> list(map_except(int, iterable, ValueError, TypeError)) - [1, 2, 4] - - If an exception other than one given by *exceptions* is raised by - *function*, it is raised like normal. - """ - for item in iterable: - try: - yield function(item) - except exceptions: - pass - - -def _sample_unweighted(iterable, k): - # Implementation of "Algorithm L" from the 1994 paper by Kim-Hung Li: - # "Reservoir-Sampling Algorithms of Time Complexity O(n(1+log(N/n)))". - - # Fill up the reservoir (collection of samples) with the first `k` samples - reservoir = take(k, iterable) - - # Generate random number that's the largest in a sample of k U(0,1) numbers - # Largest order statistic: https://en.wikipedia.org/wiki/Order_statistic - W = exp(log(random()) / k) - - # The number of elements to skip before changing the reservoir is a random - # number with a geometric distribution. Sample it using random() and logs. - next_index = k + floor(log(random()) / log(1 - W)) - - for index, element in enumerate(iterable, k): - - if index == next_index: - reservoir[randrange(k)] = element - # The new W is the largest in a sample of k U(0, `old_W`) numbers - W *= exp(log(random()) / k) - next_index += floor(log(random()) / log(1 - W)) + 1 - - return reservoir - - -def _sample_weighted(iterable, k, weights): - # Implementation of "A-ExpJ" from the 2006 paper by Efraimidis et al. : - # "Weighted random sampling with a reservoir". - - # Log-transform for numerical stability for weights that are small/large - weight_keys = (log(random()) / weight for weight in weights) - - # Fill up the reservoir (collection of samples) with the first `k` - # weight-keys and elements, then heapify the list. - reservoir = take(k, zip(weight_keys, iterable)) - heapify(reservoir) - - # The number of jumps before changing the reservoir is a random variable - # with an exponential distribution. Sample it using random() and logs. - smallest_weight_key, _ = reservoir[0] - weights_to_skip = log(random()) / smallest_weight_key - - for weight, element in zip(weights, iterable): - if weight >= weights_to_skip: - # The notation here is consistent with the paper, but we store - # the weight-keys in log-space for better numerical stability. - smallest_weight_key, _ = reservoir[0] - t_w = exp(weight * smallest_weight_key) - r_2 = uniform(t_w, 1) # generate U(t_w, 1) - weight_key = log(r_2) / weight - heapreplace(reservoir, (weight_key, element)) - smallest_weight_key, _ = reservoir[0] - weights_to_skip = log(random()) / smallest_weight_key - else: - weights_to_skip -= weight - - # Equivalent to [element for weight_key, element in sorted(reservoir)] - return [heappop(reservoir)[1] for _ in range(k)] - - -def sample(iterable, k, weights=None): - """Return a *k*-length list of elements chosen (without replacement) - from the *iterable*. Like :func:`random.sample`, but works on iterables - of unknown length. - - >>> iterable = range(100) - >>> sample(iterable, 5) # doctest: +SKIP - [81, 60, 96, 16, 4] - - An iterable with *weights* may also be given: - - >>> iterable = range(100) - >>> weights = (i * i + 1 for i in range(100)) - >>> sampled = sample(iterable, 5, weights=weights) # doctest: +SKIP - [79, 67, 74, 66, 78] - - The algorithm can also be used to generate weighted random permutations. - The relative weight of each item determines the probability that it - appears late in the permutation. - - >>> data = "abcdefgh" - >>> weights = range(1, len(data) + 1) - >>> sample(data, k=len(data), weights=weights) # doctest: +SKIP - ['c', 'a', 'b', 'e', 'g', 'd', 'h', 'f'] - """ - if k == 0: - return [] - - iterable = iter(iterable) - if weights is None: - return _sample_unweighted(iterable, k) - else: - weights = iter(weights) - return _sample_weighted(iterable, k, weights) - - -def is_sorted(iterable, key=None, reverse=False): - """Returns ``True`` if the items of iterable are in sorted order, and - ``False`` otherwise. *key* and *reverse* have the same meaning that they do - in the built-in :func:`sorted` function. - - >>> is_sorted(['1', '2', '3', '4', '5'], key=int) - True - >>> is_sorted([5, 4, 3, 1, 2], reverse=True) - False - - The function returns ``False`` after encountering the first out-of-order - item. If there are no out-of-order items, the iterable is exhausted. - """ - - compare = lt if reverse else gt - it = iterable if (key is None) else map(key, iterable) - return not any(starmap(compare, pairwise(it))) - - -class AbortThread(BaseException): - pass - - -class callback_iter: - """Convert a function that uses callbacks to an iterator. - - Let *func* be a function that takes a `callback` keyword argument. - For example: - - >>> def func(callback=None): - ... for i, c in [(1, 'a'), (2, 'b'), (3, 'c')]: - ... if callback: - ... callback(i, c) - ... return 4 - - - Use ``with callback_iter(func)`` to get an iterator over the parameters - that are delivered to the callback. - - >>> with callback_iter(func) as it: - ... for args, kwargs in it: - ... print(args) - (1, 'a') - (2, 'b') - (3, 'c') - - The function will be called in a background thread. The ``done`` property - indicates whether it has completed execution. - - >>> it.done - True - - If it completes successfully, its return value will be available - in the ``result`` property. - - >>> it.result - 4 - - Notes: - - * If the function uses some keyword argument besides ``callback``, supply - *callback_kwd*. - * If it finished executing, but raised an exception, accessing the - ``result`` property will raise the same exception. - * If it hasn't finished executing, accessing the ``result`` - property from within the ``with`` block will raise ``RuntimeError``. - * If it hasn't finished executing, accessing the ``result`` property from - outside the ``with`` block will raise a - ``more_itertools.AbortThread`` exception. - * Provide *wait_seconds* to adjust how frequently the it is polled for - output. - - """ - - def __init__(self, func, callback_kwd='callback', wait_seconds=0.1): - self._func = func - self._callback_kwd = callback_kwd - self._aborted = False - self._future = None - self._wait_seconds = wait_seconds - self._executor = __import__("concurrent.futures").futures.ThreadPoolExecutor(max_workers=1) - self._iterator = self._reader() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - self._aborted = True - self._executor.shutdown() - - def __iter__(self): - return self - - def __next__(self): - return next(self._iterator) - - @property - def done(self): - if self._future is None: - return False - return self._future.done() - - @property - def result(self): - if not self.done: - raise RuntimeError('Function has not yet completed') - - return self._future.result() - - def _reader(self): - q = Queue() - - def callback(*args, **kwargs): - if self._aborted: - raise AbortThread('canceled by user') - - q.put((args, kwargs)) - - self._future = self._executor.submit( - self._func, **{self._callback_kwd: callback} - ) - - while True: - try: - item = q.get(timeout=self._wait_seconds) - except Empty: - pass - else: - q.task_done() - yield item - - if self._future.done(): - break - - remaining = [] - while True: - try: - item = q.get_nowait() - except Empty: - break - else: - q.task_done() - remaining.append(item) - q.join() - yield from remaining - - -def windowed_complete(iterable, n): - """ - Yield ``(beginning, middle, end)`` tuples, where: - - * Each ``middle`` has *n* items from *iterable* - * Each ``beginning`` has the items before the ones in ``middle`` - * Each ``end`` has the items after the ones in ``middle`` - - >>> iterable = range(7) - >>> n = 3 - >>> for beginning, middle, end in windowed_complete(iterable, n): - ... print(beginning, middle, end) - () (0, 1, 2) (3, 4, 5, 6) - (0,) (1, 2, 3) (4, 5, 6) - (0, 1) (2, 3, 4) (5, 6) - (0, 1, 2) (3, 4, 5) (6,) - (0, 1, 2, 3) (4, 5, 6) () - - Note that *n* must be at least 0 and most equal to the length of - *iterable*. - - This function will exhaust the iterable and may require significant - storage. - """ - if n < 0: - raise ValueError('n must be >= 0') - - seq = tuple(iterable) - size = len(seq) - - if n > size: - raise ValueError('n must be <= len(seq)') - - for i in range(size - n + 1): - beginning = seq[:i] - middle = seq[i : i + n] - end = seq[i + n :] - yield beginning, middle, end - - -def all_unique(iterable, key=None): - """ - Returns ``True`` if all the elements of *iterable* are unique (no two - elements are equal). - - >>> all_unique('ABCB') - False - - If a *key* function is specified, it will be used to make comparisons. - - >>> all_unique('ABCb') - True - >>> all_unique('ABCb', str.lower) - False - - The function returns as soon as the first non-unique element is - encountered. Iterables with a mix of hashable and unhashable items can - be used, but the function will be slower for unhashable items. - """ - seenset = set() - seenset_add = seenset.add - seenlist = [] - seenlist_add = seenlist.append - for element in map(key, iterable) if key else iterable: - try: - if element in seenset: - return False - seenset_add(element) - except TypeError: - if element in seenlist: - return False - seenlist_add(element) - return True - - -def nth_product(index, *args): - """Equivalent to ``list(product(*args))[index]``. - - The products of *args* can be ordered lexicographically. - :func:`nth_product` computes the product at sort position *index* without - computing the previous products. - - >>> nth_product(8, range(2), range(2), range(2), range(2)) - (1, 0, 0, 0) - - ``IndexError`` will be raised if the given *index* is invalid. - """ - pools = list(map(tuple, reversed(args))) - ns = list(map(len, pools)) - - c = reduce(mul, ns) - - if index < 0: - index += c - - if not 0 <= index < c: - raise IndexError - - result = [] - for pool, n in zip(pools, ns): - result.append(pool[index % n]) - index //= n - - return tuple(reversed(result)) - - -def nth_permutation(iterable, r, index): - """Equivalent to ``list(permutations(iterable, r))[index]``` - - The subsequences of *iterable* that are of length *r* where order is - important can be ordered lexicographically. :func:`nth_permutation` - computes the subsequence at sort position *index* directly, without - computing the previous subsequences. - - >>> nth_permutation('ghijk', 2, 5) - ('h', 'i') - - ``ValueError`` will be raised If *r* is negative or greater than the length - of *iterable*. - ``IndexError`` will be raised if the given *index* is invalid. - """ - pool = list(iterable) - n = len(pool) - - if r is None or r == n: - r, c = n, factorial(n) - elif not 0 <= r < n: - raise ValueError - else: - c = factorial(n) // factorial(n - r) - - if index < 0: - index += c - - if not 0 <= index < c: - raise IndexError - - if c == 0: - return tuple() - - result = [0] * r - q = index * factorial(n) // c if r < n else index - for d in range(1, n + 1): - q, i = divmod(q, d) - if 0 <= n - d < r: - result[n - d] = i - if q == 0: - break - - return tuple(map(pool.pop, result)) - - -def value_chain(*args): - """Yield all arguments passed to the function in the same order in which - they were passed. If an argument itself is iterable then iterate over its - values. - - >>> list(value_chain(1, 2, 3, [4, 5, 6])) - [1, 2, 3, 4, 5, 6] - - Binary and text strings are not considered iterable and are emitted - as-is: - - >>> list(value_chain('12', '34', ['56', '78'])) - ['12', '34', '56', '78'] - - - Multiple levels of nesting are not flattened. - - """ - for value in args: - if isinstance(value, (str, bytes)): - yield value - continue - try: - yield from value - except TypeError: - yield value - - -def product_index(element, *args): - """Equivalent to ``list(product(*args)).index(element)`` - - The products of *args* can be ordered lexicographically. - :func:`product_index` computes the first index of *element* without - computing the previous products. - - >>> product_index([8, 2], range(10), range(5)) - 42 - - ``ValueError`` will be raised if the given *element* isn't in the product - of *args*. - """ - index = 0 - - for x, pool in zip_longest(element, args, fillvalue=_marker): - if x is _marker or pool is _marker: - raise ValueError('element is not a product of args') - - pool = tuple(pool) - index = index * len(pool) + pool.index(x) - - return index - - -def combination_index(element, iterable): - """Equivalent to ``list(combinations(iterable, r)).index(element)`` - - The subsequences of *iterable* that are of length *r* can be ordered - lexicographically. :func:`combination_index` computes the index of the - first *element*, without computing the previous combinations. - - >>> combination_index('adf', 'abcdefg') - 10 - - ``ValueError`` will be raised if the given *element* isn't one of the - combinations of *iterable*. - """ - element = enumerate(element) - k, y = next(element, (None, None)) - if k is None: - return 0 - - indexes = [] - pool = enumerate(iterable) - for n, x in pool: - if x == y: - indexes.append(n) - tmp, y = next(element, (None, None)) - if tmp is None: - break - else: - k = tmp - else: - raise ValueError('element is not a combination of iterable') - - n, _ = last(pool, default=(n, None)) - - # Python versiosn below 3.8 don't have math.comb - index = 1 - for i, j in enumerate(reversed(indexes), start=1): - j = n - j - if i <= j: - index += factorial(j) // (factorial(i) * factorial(j - i)) - - return factorial(n + 1) // (factorial(k + 1) * factorial(n - k)) - index - - -def permutation_index(element, iterable): - """Equivalent to ``list(permutations(iterable, r)).index(element)``` - - The subsequences of *iterable* that are of length *r* where order is - important can be ordered lexicographically. :func:`permutation_index` - computes the index of the first *element* directly, without computing - the previous permutations. - - >>> permutation_index([1, 3, 2], range(5)) - 19 - - ``ValueError`` will be raised if the given *element* isn't one of the - permutations of *iterable*. - """ - index = 0 - pool = list(iterable) - for i, x in zip(range(len(pool), -1, -1), element): - r = pool.index(x) - index = index * i + r - del pool[r] - - return index - - -class countable: - """Wrap *iterable* and keep a count of how many items have been consumed. - - The ``items_seen`` attribute starts at ``0`` and increments as the iterable - is consumed: - - >>> iterable = map(str, range(10)) - >>> it = countable(iterable) - >>> it.items_seen - 0 - >>> next(it), next(it) - ('0', '1') - >>> list(it) - ['2', '3', '4', '5', '6', '7', '8', '9'] - >>> it.items_seen - 10 - """ - - def __init__(self, iterable): - self._it = iter(iterable) - self.items_seen = 0 - - def __iter__(self): - return self - - def __next__(self): - item = next(self._it) - self.items_seen += 1 - - return item diff --git a/spaces/Thafx/sdrv1_3/README.md b/spaces/Thafx/sdrv1_3/README.md deleted file mode 100644 index ca223a685b180ecd52288c0ea4d7602b1cd9b2df..0000000000000000000000000000000000000000 --- a/spaces/Thafx/sdrv1_3/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Realistic Vision v1.3 -emoji: 👀 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: true -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- realistic-vision -models: -- SG161222/Realistic_Vision_V1.3 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Theivaprakasham/yolov6/deploy/OpenVINO/README.md b/spaces/Theivaprakasham/yolov6/deploy/OpenVINO/README.md deleted file mode 100644 index 76c25d011d59868d9f70ce5d9e7e7771273c3299..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/yolov6/deploy/OpenVINO/README.md +++ /dev/null @@ -1,18 +0,0 @@ -## Export OpenVINO Model - -### Check requirements -```shell -pip install --upgrade pip -pip install openvino-dev -``` - -### Export script -```shell -python deploy/OpenVINO/export_openvino.py --weights yolov6s.pt --img 640 --batch 1 - -``` - -### Download -* [YOLOv6-nano](https://github.com/meituan/YOLOv6/releases/download/0.1.0/yolov6n_openvino.tar.gz) -* [YOLOv6-tiny](https://github.com/meituan/YOLOv6/releases/download/0.1.0/yolov6n_openvino.tar.gz) -* [YOLOv6-s](https://github.com/meituan/YOLOv6/releases/download/0.1.0/yolov6n_openvino.tar.gz) \ No newline at end of file diff --git a/spaces/Tonic/cybermints/theme_dropdown.py b/spaces/Tonic/cybermints/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/Tonic/cybermints/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/Vageesh1/clip_gpt2/neuralnet/dataset.py b/spaces/Vageesh1/clip_gpt2/neuralnet/dataset.py deleted file mode 100644 index b05b529218133f958e44436f9e69a026d2e5f704..0000000000000000000000000000000000000000 --- a/spaces/Vageesh1/clip_gpt2/neuralnet/dataset.py +++ /dev/null @@ -1,139 +0,0 @@ -import os # when loading file paths -import pandas as pd # for lookup in annotation file -import spacy # for tokenizer -import torch -from torch.nn.utils.rnn import pad_sequence # pad batch -from torch.utils.data import DataLoader, Dataset -from PIL import Image # Load img -import torchvision.transforms as transforms -import json - -# Download with: python -m spacy download en -spacy_eng = spacy.load("en_core_web_sm") - - -class Vocabulary: - def __init__(self, freq_threshold): - self.itos = {0: "", 1: "", 2: "", 3: ""} - self.stoi = {"": 0, "": 1, "": 2, "": 3} - self.freq_threshold = freq_threshold - - def __len__(self): - return len(self.stoi) - - @staticmethod - def tokenizer_eng(text): - return [tok.text.lower() for tok in spacy_eng.tokenizer(text)] - - def build_vocabulary(self, sentence_list): - frequencies = {} - idx = 4 - - for sentence in sentence_list: - for word in self.tokenizer_eng(sentence): - if word not in frequencies: - frequencies[word] = 1 - - else: - frequencies[word] += 1 - - if frequencies[word] == self.freq_threshold: - self.stoi[word] = idx - self.itos[idx] = word - idx += 1 - - def numericalize(self, text): - tokenized_text = self.tokenizer_eng(text) - - return [ - self.stoi[token] if token in self.stoi else self.stoi[""] - for token in tokenized_text - ] - - -class FlickrDataset(Dataset): - def __init__(self, root_dir, captions_file, transform=None, freq_threshold=5): - self.root_dir = root_dir - self.df = pd.read_csv(captions_file) - self.transform = transform - - # Get img, caption columns - self.imgs = self.df["image_name"] - self.captions = self.df["comment"] - - # Initialize vocabulary and build vocab - self.vocab = Vocabulary(freq_threshold) - self.vocab.build_vocabulary(self.captions.tolist()) - - def __len__(self): - return len(self.df) - - def __getitem__(self, index): - caption = self.captions[index] - img_id = self.imgs[index] - img = Image.open(os.path.join(self.root_dir, img_id)).convert("RGB") - - if self.transform is not None: - img = self.transform(img) - - numericalized_caption = [self.vocab.stoi[""]] - numericalized_caption += self.vocab.numericalize(caption) - numericalized_caption.append(self.vocab.stoi[""]) - - return img, torch.tensor(numericalized_caption) - - -class MyCollate: - def __init__(self, pad_idx): - self.pad_idx = pad_idx - - def __call__(self, batch): - imgs = [item[0].unsqueeze(0) for item in batch] - imgs = torch.cat(imgs, dim=0) - targets = [item[1] for item in batch] - targets = pad_sequence(targets, batch_first=False, padding_value=self.pad_idx) - - return imgs, targets - - -def get_loader( - root_folder, - annotation_file, - transform, - batch_size=64, - num_workers=2, - shuffle=True, - pin_memory=True, -): - dataset = FlickrDataset(root_folder, annotation_file, transform=transform) - - pad_idx = dataset.vocab.stoi[""] - - loader = DataLoader( - dataset=dataset, - batch_size=batch_size, - num_workers=num_workers, - shuffle=shuffle, - pin_memory=pin_memory, - collate_fn=MyCollate(pad_idx=pad_idx), - ) - - return loader, dataset - - -if __name__ == "__main__": - transform = transforms.Compose( - [transforms.Resize((224, 224)), transforms.ToTensor(),] - ) - - loader, dataset = get_loader( - "/home/koushik/vscode/Projects/pytorch/img2text_v1/flickr30k/flickr30k_images/", "/home/koushik/vscode/Projects/pytorch/img2text_v1/flickr30k/results.csv", transform=transform - ) - - for idx, (imgs, captions) in enumerate(loader): - print(imgs.shape) - print(captions.shape) - print(len(dataset.vocab)) - test = {"itos":dataset.vocab.itos, "stoi": dataset.vocab.stoi} - json.dump(test, open('test.json', 'w')) - break diff --git a/spaces/XAI/CHM-Corr/README.md b/spaces/XAI/CHM-Corr/README.md deleted file mode 100644 index 4d83111445f2455f02a103fc26c43ba50bb47ba8..0000000000000000000000000000000000000000 --- a/spaces/XAI/CHM-Corr/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: CHM-Corr -emoji: 🐨 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: mit ---- - -[Paper](https://arxiv.org/abs/2208.00780) - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Xenova/the-tokenizer-playground/assets/index-a6787aa0.js b/spaces/Xenova/the-tokenizer-playground/assets/index-a6787aa0.js deleted file mode 100644 index 4756c4161b7a48e5799ddb5187cd582e2a5a16e0..0000000000000000000000000000000000000000 --- a/spaces/Xenova/the-tokenizer-playground/assets/index-a6787aa0.js +++ /dev/null @@ -1,41 +0,0 @@ -(function(){const n=document.createElement("link").relList;if(n&&n.supports&&n.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const o of l)if(o.type==="childList")for(const u of o.addedNodes)u.tagName==="LINK"&&u.rel==="modulepreload"&&r(u)}).observe(document,{childList:!0,subtree:!0});function t(l){const o={};return l.integrity&&(o.integrity=l.integrity),l.referrerPolicy&&(o.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?o.credentials="include":l.crossOrigin==="anonymous"?o.credentials="omit":o.credentials="same-origin",o}function r(l){if(l.ep)return;l.ep=!0;const o=t(l);fetch(l.href,o)}})();function lc(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}var Wi={exports:{}},el={},Qi={exports:{}},L={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var Yt=Symbol.for("react.element"),oc=Symbol.for("react.portal"),uc=Symbol.for("react.fragment"),ic=Symbol.for("react.strict_mode"),sc=Symbol.for("react.profiler"),ac=Symbol.for("react.provider"),cc=Symbol.for("react.context"),fc=Symbol.for("react.forward_ref"),dc=Symbol.for("react.suspense"),pc=Symbol.for("react.memo"),mc=Symbol.for("react.lazy"),Mu=Symbol.iterator;function hc(e){return e===null||typeof e!="object"?null:(e=Mu&&e[Mu]||e["@@iterator"],typeof e=="function"?e:null)}var Ki={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},Xi=Object.assign,Yi={};function ot(e,n,t){this.props=e,this.context=n,this.refs=Yi,this.updater=t||Ki}ot.prototype.isReactComponent={};ot.prototype.setState=function(e,n){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,n,"setState")};ot.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function Gi(){}Gi.prototype=ot.prototype;function $o(e,n,t){this.props=e,this.context=n,this.refs=Yi,this.updater=t||Ki}var Ao=$o.prototype=new Gi;Ao.constructor=$o;Xi(Ao,ot.prototype);Ao.isPureReactComponent=!0;var Du=Array.isArray,Zi=Object.prototype.hasOwnProperty,Vo={current:null},Ji={key:!0,ref:!0,__self:!0,__source:!0};function qi(e,n,t){var r,l={},o=null,u=null;if(n!=null)for(r in n.ref!==void 0&&(u=n.ref),n.key!==void 0&&(o=""+n.key),n)Zi.call(n,r)&&!Ji.hasOwnProperty(r)&&(l[r]=n[r]);var i=arguments.length-2;if(i===1)l.children=t;else if(1>>1,G=E[W];if(0>>1;Wl(gl,z))gnl(er,gl)?(E[W]=er,E[gn]=z,W=gn):(E[W]=gl,E[yn]=z,W=yn);else if(gnl(er,z))E[W]=er,E[gn]=z,W=gn;else break e}}return P}function l(E,P){var z=E.sortIndex-P.sortIndex;return z!==0?z:E.id-P.id}if(typeof performance=="object"&&typeof performance.now=="function"){var o=performance;e.unstable_now=function(){return o.now()}}else{var u=Date,i=u.now();e.unstable_now=function(){return u.now()-i}}var s=[],f=[],h=1,m=null,p=3,g=!1,w=!1,k=!1,j=typeof setTimeout=="function"?setTimeout:null,c=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(E){for(var P=t(f);P!==null;){if(P.callback===null)r(f);else if(P.startTime<=E)r(f),P.sortIndex=P.expirationTime,n(s,P);else break;P=t(f)}}function v(E){if(k=!1,d(E),!w)if(t(s)!==null)w=!0,vl(x);else{var P=t(f);P!==null&&yl(v,P.startTime-E)}}function x(E,P){w=!1,k&&(k=!1,c(N),N=-1),g=!0;var z=p;try{for(d(P),m=t(s);m!==null&&(!(m.expirationTime>P)||E&&!Pe());){var W=m.callback;if(typeof W=="function"){m.callback=null,p=m.priorityLevel;var G=W(m.expirationTime<=P);P=e.unstable_now(),typeof G=="function"?m.callback=G:m===t(s)&&r(s),d(P)}else r(s);m=t(s)}if(m!==null)var bt=!0;else{var yn=t(f);yn!==null&&yl(v,yn.startTime-P),bt=!1}return bt}finally{m=null,p=z,g=!1}}var C=!1,_=null,N=-1,H=5,R=-1;function Pe(){return!(e.unstable_now()-RE||125W?(E.sortIndex=z,n(f,E),t(s)===null&&E===t(f)&&(k?(c(N),N=-1):k=!0,yl(v,z-W))):(E.sortIndex=G,n(s,E),w||g||(w=!0,vl(x))),E},e.unstable_shouldYield=Pe,e.unstable_wrapCallback=function(E){var P=p;return function(){var z=p;p=P;try{return E.apply(this,arguments)}finally{p=z}}}})(ts);ns.exports=ts;var Pc=ns.exports;/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var rs=ae,ge=Pc;function y(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),Kl=Object.prototype.hasOwnProperty,zc=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Fu={},Uu={};function Tc(e){return Kl.call(Uu,e)?!0:Kl.call(Fu,e)?!1:zc.test(e)?Uu[e]=!0:(Fu[e]=!0,!1)}function Lc(e,n,t,r){if(t!==null&&t.type===0)return!1;switch(typeof n){case"function":case"symbol":return!0;case"boolean":return r?!1:t!==null?!t.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function Rc(e,n,t,r){if(n===null||typeof n>"u"||Lc(e,n,t,r))return!0;if(r)return!1;if(t!==null)switch(t.type){case 3:return!n;case 4:return n===!1;case 5:return isNaN(n);case 6:return isNaN(n)||1>n}return!1}function se(e,n,t,r,l,o,u){this.acceptsBooleans=n===2||n===3||n===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=t,this.propertyName=e,this.type=n,this.sanitizeURL=o,this.removeEmptyString=u}var ee={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ee[e]=new se(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var n=e[0];ee[n]=new se(n,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ee[e]=new se(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ee[e]=new se(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ee[e]=new se(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ee[e]=new se(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ee[e]=new se(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ee[e]=new se(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ee[e]=new se(e,5,!1,e.toLowerCase(),null,!1,!1)});var Ho=/[\-:]([a-z])/g;function Wo(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ee[e]=new se(e,1,!1,e.toLowerCase(),null,!1,!1)});ee.xlinkHref=new se("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ee[e]=new se(e,1,!1,e.toLowerCase(),null,!0,!0)});function Qo(e,n,t,r){var l=ee.hasOwnProperty(n)?ee[n]:null;(l!==null?l.type!==0:r||!(2i||l[u]!==o[i]){var s=` -`+l[u].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=u&&0<=i);break}}}finally{Sl=!1,Error.prepareStackTrace=t}return(e=e?e.displayName||e.name:"")?gt(e):""}function jc(e){switch(e.tag){case 5:return gt(e.type);case 16:return gt("Lazy");case 13:return gt("Suspense");case 19:return gt("SuspenseList");case 0:case 2:case 15:return e=xl(e.type,!1),e;case 11:return e=xl(e.type.render,!1),e;case 1:return e=xl(e.type,!0),e;default:return""}}function Zl(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Dn:return"Fragment";case Mn:return"Portal";case Xl:return"Profiler";case Ko:return"StrictMode";case Yl:return"Suspense";case Gl:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case us:return(e.displayName||"Context")+".Consumer";case os:return(e._context.displayName||"Context")+".Provider";case Xo:var n=e.render;return e=e.displayName,e||(e=n.displayName||n.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case Yo:return n=e.displayName||null,n!==null?n:Zl(e.type)||"Memo";case Je:n=e._payload,e=e._init;try{return Zl(e(n))}catch{}}return null}function Oc(e){var n=e.type;switch(e.tag){case 24:return"Cache";case 9:return(n.displayName||"Context")+".Consumer";case 10:return(n._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=n.render,e=e.displayName||e.name||"",n.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return n;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return Zl(n);case 8:return n===Ko?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof n=="function")return n.displayName||n.name||null;if(typeof n=="string")return n}return null}function dn(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ss(e){var n=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(n==="checkbox"||n==="radio")}function Mc(e){var n=ss(e)?"checked":"value",t=Object.getOwnPropertyDescriptor(e.constructor.prototype,n),r=""+e[n];if(!e.hasOwnProperty(n)&&typeof t<"u"&&typeof t.get=="function"&&typeof t.set=="function"){var l=t.get,o=t.set;return Object.defineProperty(e,n,{configurable:!0,get:function(){return l.call(this)},set:function(u){r=""+u,o.call(this,u)}}),Object.defineProperty(e,n,{enumerable:t.enumerable}),{getValue:function(){return r},setValue:function(u){r=""+u},stopTracking:function(){e._valueTracker=null,delete e[n]}}}}function rr(e){e._valueTracker||(e._valueTracker=Mc(e))}function as(e){if(!e)return!1;var n=e._valueTracker;if(!n)return!0;var t=n.getValue(),r="";return e&&(r=ss(e)?e.checked?"true":"false":e.value),e=r,e!==t?(n.setValue(e),!0):!1}function Lr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function Jl(e,n){var t=n.checked;return V({},n,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:t??e._wrapperState.initialChecked})}function Au(e,n){var t=n.defaultValue==null?"":n.defaultValue,r=n.checked!=null?n.checked:n.defaultChecked;t=dn(n.value!=null?n.value:t),e._wrapperState={initialChecked:r,initialValue:t,controlled:n.type==="checkbox"||n.type==="radio"?n.checked!=null:n.value!=null}}function cs(e,n){n=n.checked,n!=null&&Qo(e,"checked",n,!1)}function ql(e,n){cs(e,n);var t=dn(n.value),r=n.type;if(t!=null)r==="number"?(t===0&&e.value===""||e.value!=t)&&(e.value=""+t):e.value!==""+t&&(e.value=""+t);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}n.hasOwnProperty("value")?bl(e,n.type,t):n.hasOwnProperty("defaultValue")&&bl(e,n.type,dn(n.defaultValue)),n.checked==null&&n.defaultChecked!=null&&(e.defaultChecked=!!n.defaultChecked)}function Vu(e,n,t){if(n.hasOwnProperty("value")||n.hasOwnProperty("defaultValue")){var r=n.type;if(!(r!=="submit"&&r!=="reset"||n.value!==void 0&&n.value!==null))return;n=""+e._wrapperState.initialValue,t||n===e.value||(e.value=n),e.defaultValue=n}t=e.name,t!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,t!==""&&(e.name=t)}function bl(e,n,t){(n!=="number"||Lr(e.ownerDocument)!==e)&&(t==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+t&&(e.defaultValue=""+t))}var wt=Array.isArray;function Kn(e,n,t,r){if(e=e.options,n){n={};for(var l=0;l"+n.valueOf().toString()+"",n=lr.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;n.firstChild;)e.appendChild(n.firstChild)}});function jt(e,n){if(n){var t=e.firstChild;if(t&&t===e.lastChild&&t.nodeType===3){t.nodeValue=n;return}}e.textContent=n}var xt={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},Dc=["Webkit","ms","Moz","O"];Object.keys(xt).forEach(function(e){Dc.forEach(function(n){n=n+e.charAt(0).toUpperCase()+e.substring(1),xt[n]=xt[e]})});function ms(e,n,t){return n==null||typeof n=="boolean"||n===""?"":t||typeof n!="number"||n===0||xt.hasOwnProperty(e)&&xt[e]?(""+n).trim():n+"px"}function hs(e,n){e=e.style;for(var t in n)if(n.hasOwnProperty(t)){var r=t.indexOf("--")===0,l=ms(t,n[t],r);t==="float"&&(t="cssFloat"),r?e.setProperty(t,l):e[t]=l}}var Ic=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function to(e,n){if(n){if(Ic[e]&&(n.children!=null||n.dangerouslySetInnerHTML!=null))throw Error(y(137,e));if(n.dangerouslySetInnerHTML!=null){if(n.children!=null)throw Error(y(60));if(typeof n.dangerouslySetInnerHTML!="object"||!("__html"in n.dangerouslySetInnerHTML))throw Error(y(61))}if(n.style!=null&&typeof n.style!="object")throw Error(y(62))}}function ro(e,n){if(e.indexOf("-")===-1)return typeof n.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var lo=null;function Go(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var oo=null,Xn=null,Yn=null;function Wu(e){if(e=Jt(e)){if(typeof oo!="function")throw Error(y(280));var n=e.stateNode;n&&(n=ol(n),oo(e.stateNode,e.type,n))}}function vs(e){Xn?Yn?Yn.push(e):Yn=[e]:Xn=e}function ys(){if(Xn){var e=Xn,n=Yn;if(Yn=Xn=null,Wu(e),n)for(e=0;e>>=0,e===0?32:31-(Xc(e)/Yc|0)|0}var or=64,ur=4194304;function kt(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function Mr(e,n){var t=e.pendingLanes;if(t===0)return 0;var r=0,l=e.suspendedLanes,o=e.pingedLanes,u=t&268435455;if(u!==0){var i=u&~l;i!==0?r=kt(i):(o&=u,o!==0&&(r=kt(o)))}else u=t&~l,u!==0?r=kt(u):o!==0&&(r=kt(o));if(r===0)return 0;if(n!==0&&n!==r&&!(n&l)&&(l=r&-r,o=n&-n,l>=o||l===16&&(o&4194240)!==0))return n;if(r&4&&(r|=t&16),n=e.entangledLanes,n!==0)for(e=e.entanglements,n&=r;0t;t++)n.push(e);return n}function Gt(e,n,t){e.pendingLanes|=n,n!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,n=31-je(n),e[n]=t}function qc(e,n){var t=e.pendingLanes&~n;e.pendingLanes=n,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=n,e.mutableReadLanes&=n,e.entangledLanes&=n,n=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Ct),bu=String.fromCharCode(32),ei=!1;function Fs(e,n){switch(e){case"keyup":return Pf.indexOf(n.keyCode)!==-1;case"keydown":return n.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Us(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var In=!1;function Tf(e,n){switch(e){case"compositionend":return Us(n);case"keypress":return n.which!==32?null:(ei=!0,bu);case"textInput":return e=n.data,e===bu&&ei?null:e;default:return null}}function Lf(e,n){if(In)return e==="compositionend"||!ru&&Fs(e,n)?(e=Ds(),Sr=eu=nn=null,In=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(n.ctrlKey||n.altKey||n.metaKey)||n.ctrlKey&&n.altKey){if(n.char&&1=n)return{node:t,offset:n-e};e=r}e:{for(;t;){if(t.nextSibling){t=t.nextSibling;break e}t=t.parentNode}t=void 0}t=li(t)}}function Bs(e,n){return e&&n?e===n?!0:e&&e.nodeType===3?!1:n&&n.nodeType===3?Bs(e,n.parentNode):"contains"in e?e.contains(n):e.compareDocumentPosition?!!(e.compareDocumentPosition(n)&16):!1:!1}function Hs(){for(var e=window,n=Lr();n instanceof e.HTMLIFrameElement;){try{var t=typeof n.contentWindow.location.href=="string"}catch{t=!1}if(t)e=n.contentWindow;else break;n=Lr(e.document)}return n}function lu(e){var n=e&&e.nodeName&&e.nodeName.toLowerCase();return n&&(n==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||n==="textarea"||e.contentEditable==="true")}function $f(e){var n=Hs(),t=e.focusedElem,r=e.selectionRange;if(n!==t&&t&&t.ownerDocument&&Bs(t.ownerDocument.documentElement,t)){if(r!==null&&lu(t)){if(n=r.start,e=r.end,e===void 0&&(e=n),"selectionStart"in t)t.selectionStart=n,t.selectionEnd=Math.min(e,t.value.length);else if(e=(n=t.ownerDocument||document)&&n.defaultView||window,e.getSelection){e=e.getSelection();var l=t.textContent.length,o=Math.min(r.start,l);r=r.end===void 0?o:Math.min(r.end,l),!e.extend&&o>r&&(l=r,r=o,o=l),l=oi(t,o);var u=oi(t,r);l&&u&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==u.node||e.focusOffset!==u.offset)&&(n=n.createRange(),n.setStart(l.node,l.offset),e.removeAllRanges(),o>r?(e.addRange(n),e.extend(u.node,u.offset)):(n.setEnd(u.node,u.offset),e.addRange(n)))}}for(n=[],e=t;e=e.parentNode;)e.nodeType===1&&n.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof t.focus=="function"&&t.focus(),t=0;t=document.documentMode,Fn=null,fo=null,Nt=null,po=!1;function ui(e,n,t){var r=t.window===t?t.document:t.nodeType===9?t:t.ownerDocument;po||Fn==null||Fn!==Lr(r)||(r=Fn,"selectionStart"in r&&lu(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Nt&&Ut(Nt,r)||(Nt=r,r=Fr(fo,"onSelect"),0An||(e.current=wo[An],wo[An]=null,An--)}function D(e,n){An++,wo[An]=e.current,e.current=n}var pn={},le=hn(pn),de=hn(!1),Nn=pn;function bn(e,n){var t=e.type.contextTypes;if(!t)return pn;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===n)return r.__reactInternalMemoizedMaskedChildContext;var l={},o;for(o in t)l[o]=n[o];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=n,e.__reactInternalMemoizedMaskedChildContext=l),l}function pe(e){return e=e.childContextTypes,e!=null}function $r(){F(de),F(le)}function pi(e,n,t){if(le.current!==pn)throw Error(y(168));D(le,n),D(de,t)}function qs(e,n,t){var r=e.stateNode;if(n=n.childContextTypes,typeof r.getChildContext!="function")return t;r=r.getChildContext();for(var l in r)if(!(l in n))throw Error(y(108,Oc(e)||"Unknown",l));return V({},t,r)}function Ar(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||pn,Nn=le.current,D(le,e),D(de,de.current),!0}function mi(e,n,t){var r=e.stateNode;if(!r)throw Error(y(169));t?(e=qs(e,n,Nn),r.__reactInternalMemoizedMergedChildContext=e,F(de),F(le),D(le,e)):F(de),D(de,t)}var Ve=null,ul=!1,Il=!1;function bs(e){Ve===null?Ve=[e]:Ve.push(e)}function Jf(e){ul=!0,bs(e)}function vn(){if(!Il&&Ve!==null){Il=!0;var e=0,n=M;try{var t=Ve;for(M=1;e>=u,l-=u,Be=1<<32-je(n)+l|t<N?(H=_,_=null):H=_.sibling;var R=p(c,_,d[N],v);if(R===null){_===null&&(_=H);break}e&&_&&R.alternate===null&&n(c,_),a=o(R,a,N),C===null?x=R:C.sibling=R,C=R,_=H}if(N===d.length)return t(c,_),U&&wn(c,N),x;if(_===null){for(;NN?(H=_,_=null):H=_.sibling;var Pe=p(c,_,R.value,v);if(Pe===null){_===null&&(_=H);break}e&&_&&Pe.alternate===null&&n(c,_),a=o(Pe,a,N),C===null?x=Pe:C.sibling=Pe,C=Pe,_=H}if(R.done)return t(c,_),U&&wn(c,N),x;if(_===null){for(;!R.done;N++,R=d.next())R=m(c,R.value,v),R!==null&&(a=o(R,a,N),C===null?x=R:C.sibling=R,C=R);return U&&wn(c,N),x}for(_=r(c,_);!R.done;N++,R=d.next())R=g(_,c,N,R.value,v),R!==null&&(e&&R.alternate!==null&&_.delete(R.key===null?N:R.key),a=o(R,a,N),C===null?x=R:C.sibling=R,C=R);return e&&_.forEach(function(st){return n(c,st)}),U&&wn(c,N),x}function j(c,a,d,v){if(typeof d=="object"&&d!==null&&d.type===Dn&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case tr:e:{for(var x=d.key,C=a;C!==null;){if(C.key===x){if(x=d.type,x===Dn){if(C.tag===7){t(c,C.sibling),a=l(C,d.props.children),a.return=c,c=a;break e}}else if(C.elementType===x||typeof x=="object"&&x!==null&&x.$$typeof===Je&&Si(x)===C.type){t(c,C.sibling),a=l(C,d.props),a.ref=ht(c,C,d),a.return=c,c=a;break e}t(c,C);break}else n(c,C);C=C.sibling}d.type===Dn?(a=_n(d.props.children,c.mode,v,d.key),a.return=c,c=a):(v=Tr(d.type,d.key,d.props,null,c.mode,v),v.ref=ht(c,a,d),v.return=c,c=v)}return u(c);case Mn:e:{for(C=d.key;a!==null;){if(a.key===C)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){t(c,a.sibling),a=l(a,d.children||[]),a.return=c,c=a;break e}else{t(c,a);break}else n(c,a);a=a.sibling}a=Wl(d,c.mode,v),a.return=c,c=a}return u(c);case Je:return C=d._init,j(c,a,C(d._payload),v)}if(wt(d))return w(c,a,d,v);if(ct(d))return k(c,a,d,v);pr(c,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(t(c,a.sibling),a=l(a,d),a.return=c,c=a):(t(c,a),a=Hl(d,c.mode,v),a.return=c,c=a),u(c)):t(c,a)}return j}var nt=ia(!0),sa=ia(!1),qt={},$e=hn(qt),Bt=hn(qt),Ht=hn(qt);function En(e){if(e===qt)throw Error(y(174));return e}function pu(e,n){switch(D(Ht,n),D(Bt,e),D($e,qt),e=n.nodeType,e){case 9:case 11:n=(n=n.documentElement)?n.namespaceURI:no(null,"");break;default:e=e===8?n.parentNode:n,n=e.namespaceURI||null,e=e.tagName,n=no(n,e)}F($e),D($e,n)}function tt(){F($e),F(Bt),F(Ht)}function aa(e){En(Ht.current);var n=En($e.current),t=no(n,e.type);n!==t&&(D(Bt,e),D($e,t))}function mu(e){Bt.current===e&&(F($e),F(Bt))}var $=hn(0);function Kr(e){for(var n=e;n!==null;){if(n.tag===13){var t=n.memoizedState;if(t!==null&&(t=t.dehydrated,t===null||t.data==="$?"||t.data==="$!"))return n}else if(n.tag===19&&n.memoizedProps.revealOrder!==void 0){if(n.flags&128)return n}else if(n.child!==null){n.child.return=n,n=n.child;continue}if(n===e)break;for(;n.sibling===null;){if(n.return===null||n.return===e)return null;n=n.return}n.sibling.return=n.return,n=n.sibling}return null}var Fl=[];function hu(){for(var e=0;et?t:4,e(!0);var r=Ul.transition;Ul.transition={};try{e(!1),n()}finally{M=t,Ul.transition=r}}function _a(){return Ne().memoizedState}function nd(e,n,t){var r=cn(e);if(t={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null},Na(e))Pa(n,t);else if(t=ra(e,n,t,r),t!==null){var l=ue();Oe(t,e,r,l),za(t,n,r)}}function td(e,n,t){var r=cn(e),l={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null};if(Na(e))Pa(n,l);else{var o=e.alternate;if(e.lanes===0&&(o===null||o.lanes===0)&&(o=n.lastRenderedReducer,o!==null))try{var u=n.lastRenderedState,i=o(u,t);if(l.hasEagerState=!0,l.eagerState=i,Me(i,u)){var s=n.interleaved;s===null?(l.next=l,fu(n)):(l.next=s.next,s.next=l),n.interleaved=l;return}}catch{}finally{}t=ra(e,n,l,r),t!==null&&(l=ue(),Oe(t,e,r,l),za(t,n,r))}}function Na(e){var n=e.alternate;return e===A||n!==null&&n===A}function Pa(e,n){Pt=Xr=!0;var t=e.pending;t===null?n.next=n:(n.next=t.next,t.next=n),e.pending=n}function za(e,n,t){if(t&4194240){var r=n.lanes;r&=e.pendingLanes,t|=r,n.lanes=t,Jo(e,t)}}var Yr={readContext:_e,useCallback:ne,useContext:ne,useEffect:ne,useImperativeHandle:ne,useInsertionEffect:ne,useLayoutEffect:ne,useMemo:ne,useReducer:ne,useRef:ne,useState:ne,useDebugValue:ne,useDeferredValue:ne,useTransition:ne,useMutableSource:ne,useSyncExternalStore:ne,useId:ne,unstable_isNewReconciler:!1},rd={readContext:_e,useCallback:function(e,n){return Ie().memoizedState=[e,n===void 0?null:n],e},useContext:_e,useEffect:Ei,useImperativeHandle:function(e,n,t){return t=t!=null?t.concat([e]):null,_r(4194308,4,ka.bind(null,n,e),t)},useLayoutEffect:function(e,n){return _r(4194308,4,e,n)},useInsertionEffect:function(e,n){return _r(4,2,e,n)},useMemo:function(e,n){var t=Ie();return n=n===void 0?null:n,e=e(),t.memoizedState=[e,n],e},useReducer:function(e,n,t){var r=Ie();return n=t!==void 0?t(n):n,r.memoizedState=r.baseState=n,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:n},r.queue=e,e=e.dispatch=nd.bind(null,A,e),[r.memoizedState,e]},useRef:function(e){var n=Ie();return e={current:e},n.memoizedState=e},useState:xi,useDebugValue:ku,useDeferredValue:function(e){return Ie().memoizedState=e},useTransition:function(){var e=xi(!1),n=e[0];return e=ed.bind(null,e[1]),Ie().memoizedState=e,[n,e]},useMutableSource:function(){},useSyncExternalStore:function(e,n,t){var r=A,l=Ie();if(U){if(t===void 0)throw Error(y(407));t=t()}else{if(t=n(),J===null)throw Error(y(349));zn&30||da(r,n,t)}l.memoizedState=t;var o={value:t,getSnapshot:n};return l.queue=o,Ei(ma.bind(null,r,o,e),[e]),r.flags|=2048,Kt(9,pa.bind(null,r,o,t,n),void 0,null),t},useId:function(){var e=Ie(),n=J.identifierPrefix;if(U){var t=He,r=Be;t=(r&~(1<<32-je(r)-1)).toString(32)+t,n=":"+n+"R"+t,t=Wt++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=u.createElement(t,{is:r.is}):(e=u.createElement(t),t==="select"&&(u=e,r.multiple?u.multiple=!0:r.size&&(u.size=r.size))):e=u.createElementNS(e,t),e[Fe]=n,e[Vt]=r,Fa(e,n,!1,!1),n.stateNode=e;e:{switch(u=ro(t,r),t){case"dialog":I("cancel",e),I("close",e),l=r;break;case"iframe":case"object":case"embed":I("load",e),l=r;break;case"video":case"audio":for(l=0;llt&&(n.flags|=128,r=!0,vt(o,!1),n.lanes=4194304)}else{if(!r)if(e=Kr(u),e!==null){if(n.flags|=128,r=!0,t=e.updateQueue,t!==null&&(n.updateQueue=t,n.flags|=4),vt(o,!0),o.tail===null&&o.tailMode==="hidden"&&!u.alternate&&!U)return te(n),null}else 2*Q()-o.renderingStartTime>lt&&t!==1073741824&&(n.flags|=128,r=!0,vt(o,!1),n.lanes=4194304);o.isBackwards?(u.sibling=n.child,n.child=u):(t=o.last,t!==null?t.sibling=u:n.child=u,o.last=u)}return o.tail!==null?(n=o.tail,o.rendering=n,o.tail=n.sibling,o.renderingStartTime=Q(),n.sibling=null,t=$.current,D($,r?t&1|2:t&1),n):(te(n),null);case 22:case 23:return Nu(),r=n.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(n.flags|=8192),r&&n.mode&1?he&1073741824&&(te(n),n.subtreeFlags&6&&(n.flags|=8192)):te(n),null;case 24:return null;case 25:return null}throw Error(y(156,n.tag))}function fd(e,n){switch(uu(n),n.tag){case 1:return pe(n.type)&&$r(),e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 3:return tt(),F(de),F(le),hu(),e=n.flags,e&65536&&!(e&128)?(n.flags=e&-65537|128,n):null;case 5:return mu(n),null;case 13:if(F($),e=n.memoizedState,e!==null&&e.dehydrated!==null){if(n.alternate===null)throw Error(y(340));et()}return e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 19:return F($),null;case 4:return tt(),null;case 10:return cu(n.type._context),null;case 22:case 23:return Nu(),null;case 24:return null;default:return null}}var hr=!1,re=!1,dd=typeof WeakSet=="function"?WeakSet:Set,S=null;function Wn(e,n){var t=e.ref;if(t!==null)if(typeof t=="function")try{t(null)}catch(r){B(e,n,r)}else t.current=null}function Ro(e,n,t){try{t()}catch(r){B(e,n,r)}}var ji=!1;function pd(e,n){if(mo=Dr,e=Hs(),lu(e)){if("selectionStart"in e)var t={start:e.selectionStart,end:e.selectionEnd};else e:{t=(t=e.ownerDocument)&&t.defaultView||window;var r=t.getSelection&&t.getSelection();if(r&&r.rangeCount!==0){t=r.anchorNode;var l=r.anchorOffset,o=r.focusNode;r=r.focusOffset;try{t.nodeType,o.nodeType}catch{t=null;break e}var u=0,i=-1,s=-1,f=0,h=0,m=e,p=null;n:for(;;){for(var g;m!==t||l!==0&&m.nodeType!==3||(i=u+l),m!==o||r!==0&&m.nodeType!==3||(s=u+r),m.nodeType===3&&(u+=m.nodeValue.length),(g=m.firstChild)!==null;)p=m,m=g;for(;;){if(m===e)break n;if(p===t&&++f===l&&(i=u),p===o&&++h===r&&(s=u),(g=m.nextSibling)!==null)break;m=p,p=m.parentNode}m=g}t=i===-1||s===-1?null:{start:i,end:s}}else t=null}t=t||{start:0,end:0}}else t=null;for(ho={focusedElem:e,selectionRange:t},Dr=!1,S=n;S!==null;)if(n=S,e=n.child,(n.subtreeFlags&1028)!==0&&e!==null)e.return=n,S=e;else for(;S!==null;){n=S;try{var w=n.alternate;if(n.flags&1024)switch(n.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,j=w.memoizedState,c=n.stateNode,a=c.getSnapshotBeforeUpdate(n.elementType===n.type?k:Te(n.type,k),j);c.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=n.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(y(163))}}catch(v){B(n,n.return,v)}if(e=n.sibling,e!==null){e.return=n.return,S=e;break}S=n.return}return w=ji,ji=!1,w}function zt(e,n,t){var r=n.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var o=l.destroy;l.destroy=void 0,o!==void 0&&Ro(n,t,o)}l=l.next}while(l!==r)}}function al(e,n){if(n=n.updateQueue,n=n!==null?n.lastEffect:null,n!==null){var t=n=n.next;do{if((t.tag&e)===e){var r=t.create;t.destroy=r()}t=t.next}while(t!==n)}}function jo(e){var n=e.ref;if(n!==null){var t=e.stateNode;switch(e.tag){case 5:e=t;break;default:e=t}typeof n=="function"?n(e):n.current=e}}function Aa(e){var n=e.alternate;n!==null&&(e.alternate=null,Aa(n)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(n=e.stateNode,n!==null&&(delete n[Fe],delete n[Vt],delete n[go],delete n[Gf],delete n[Zf])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Va(e){return e.tag===5||e.tag===3||e.tag===4}function Oi(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Va(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function Oo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.nodeType===8?t.parentNode.insertBefore(e,n):t.insertBefore(e,n):(t.nodeType===8?(n=t.parentNode,n.insertBefore(e,t)):(n=t,n.appendChild(e)),t=t._reactRootContainer,t!=null||n.onclick!==null||(n.onclick=Ur));else if(r!==4&&(e=e.child,e!==null))for(Oo(e,n,t),e=e.sibling;e!==null;)Oo(e,n,t),e=e.sibling}function Mo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.insertBefore(e,n):t.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Mo(e,n,t),e=e.sibling;e!==null;)Mo(e,n,t),e=e.sibling}var q=null,Le=!1;function Ze(e,n,t){for(t=t.child;t!==null;)Ba(e,n,t),t=t.sibling}function Ba(e,n,t){if(Ue&&typeof Ue.onCommitFiberUnmount=="function")try{Ue.onCommitFiberUnmount(nl,t)}catch{}switch(t.tag){case 5:re||Wn(t,n);case 6:var r=q,l=Le;q=null,Ze(e,n,t),q=r,Le=l,q!==null&&(Le?(e=q,t=t.stateNode,e.nodeType===8?e.parentNode.removeChild(t):e.removeChild(t)):q.removeChild(t.stateNode));break;case 18:q!==null&&(Le?(e=q,t=t.stateNode,e.nodeType===8?Dl(e.parentNode,t):e.nodeType===1&&Dl(e,t),It(e)):Dl(q,t.stateNode));break;case 4:r=q,l=Le,q=t.stateNode.containerInfo,Le=!0,Ze(e,n,t),q=r,Le=l;break;case 0:case 11:case 14:case 15:if(!re&&(r=t.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var o=l,u=o.destroy;o=o.tag,u!==void 0&&(o&2||o&4)&&Ro(t,n,u),l=l.next}while(l!==r)}Ze(e,n,t);break;case 1:if(!re&&(Wn(t,n),r=t.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=t.memoizedProps,r.state=t.memoizedState,r.componentWillUnmount()}catch(i){B(t,n,i)}Ze(e,n,t);break;case 21:Ze(e,n,t);break;case 22:t.mode&1?(re=(r=re)||t.memoizedState!==null,Ze(e,n,t),re=r):Ze(e,n,t);break;default:Ze(e,n,t)}}function Mi(e){var n=e.updateQueue;if(n!==null){e.updateQueue=null;var t=e.stateNode;t===null&&(t=e.stateNode=new dd),n.forEach(function(r){var l=xd.bind(null,e,r);t.has(r)||(t.add(r),r.then(l,l))})}}function ze(e,n){var t=n.deletions;if(t!==null)for(var r=0;rl&&(l=u),r&=~o}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*hd(r/1960))-r,10e?16:e,tn===null)var r=!1;else{if(e=tn,tn=null,Jr=0,O&6)throw Error(y(331));var l=O;for(O|=4,S=e.current;S!==null;){var o=S,u=o.child;if(S.flags&16){var i=o.deletions;if(i!==null){for(var s=0;sQ()-Cu?Cn(e,0):Eu|=t),me(e,n)}function Za(e,n){n===0&&(e.mode&1?(n=ur,ur<<=1,!(ur&130023424)&&(ur=4194304)):n=1);var t=ue();e=Xe(e,n),e!==null&&(Gt(e,n,t),me(e,t))}function Sd(e){var n=e.memoizedState,t=0;n!==null&&(t=n.retryLane),Za(e,t)}function xd(e,n){var t=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(t=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(n),Za(e,t)}var Ja;Ja=function(e,n,t){if(e!==null)if(e.memoizedProps!==n.pendingProps||de.current)fe=!0;else{if(!(e.lanes&t)&&!(n.flags&128))return fe=!1,ad(e,n,t);fe=!!(e.flags&131072)}else fe=!1,U&&n.flags&1048576&&ea(n,Br,n.index);switch(n.lanes=0,n.tag){case 2:var r=n.type;Nr(e,n),e=n.pendingProps;var l=bn(n,le.current);Zn(n,t),l=yu(null,n,r,e,l,t);var o=gu();return n.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(n.tag=1,n.memoizedState=null,n.updateQueue=null,pe(r)?(o=!0,Ar(n)):o=!1,n.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,du(n),l.updater=il,n.stateNode=l,l._reactInternals=n,Co(n,r,e,t),n=Po(null,n,r,!0,o,t)):(n.tag=0,U&&o&&ou(n),oe(null,n,l,t),n=n.child),n;case 16:r=n.elementType;e:{switch(Nr(e,n),e=n.pendingProps,l=r._init,r=l(r._payload),n.type=r,l=n.tag=Cd(r),e=Te(r,e),l){case 0:n=No(null,n,r,e,t);break e;case 1:n=Ti(null,n,r,e,t);break e;case 11:n=Pi(null,n,r,e,t);break e;case 14:n=zi(null,n,r,Te(r.type,e),t);break e}throw Error(y(306,r,""))}return n;case 0:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),No(e,n,r,l,t);case 1:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Ti(e,n,r,l,t);case 3:e:{if(Ma(n),e===null)throw Error(y(387));r=n.pendingProps,o=n.memoizedState,l=o.element,la(e,n),Qr(n,r,null,t);var u=n.memoizedState;if(r=u.element,o.isDehydrated)if(o={element:r,isDehydrated:!1,cache:u.cache,pendingSuspenseBoundaries:u.pendingSuspenseBoundaries,transitions:u.transitions},n.updateQueue.baseState=o,n.memoizedState=o,n.flags&256){l=rt(Error(y(423)),n),n=Li(e,n,r,t,l);break e}else if(r!==l){l=rt(Error(y(424)),n),n=Li(e,n,r,t,l);break e}else for(ve=un(n.stateNode.containerInfo.firstChild),ye=n,U=!0,Re=null,t=sa(n,null,r,t),n.child=t;t;)t.flags=t.flags&-3|4096,t=t.sibling;else{if(et(),r===l){n=Ye(e,n,t);break e}oe(e,n,r,t)}n=n.child}return n;case 5:return aa(n),e===null&&So(n),r=n.type,l=n.pendingProps,o=e!==null?e.memoizedProps:null,u=l.children,vo(r,l)?u=null:o!==null&&vo(r,o)&&(n.flags|=32),Oa(e,n),oe(e,n,u,t),n.child;case 6:return e===null&&So(n),null;case 13:return Da(e,n,t);case 4:return pu(n,n.stateNode.containerInfo),r=n.pendingProps,e===null?n.child=nt(n,null,r,t):oe(e,n,r,t),n.child;case 11:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Pi(e,n,r,l,t);case 7:return oe(e,n,n.pendingProps,t),n.child;case 8:return oe(e,n,n.pendingProps.children,t),n.child;case 12:return oe(e,n,n.pendingProps.children,t),n.child;case 10:e:{if(r=n.type._context,l=n.pendingProps,o=n.memoizedProps,u=l.value,D(Hr,r._currentValue),r._currentValue=u,o!==null)if(Me(o.value,u)){if(o.children===l.children&&!de.current){n=Ye(e,n,t);break e}}else for(o=n.child,o!==null&&(o.return=n);o!==null;){var i=o.dependencies;if(i!==null){u=o.child;for(var s=i.firstContext;s!==null;){if(s.context===r){if(o.tag===1){s=We(-1,t&-t),s.tag=2;var f=o.updateQueue;if(f!==null){f=f.shared;var h=f.pending;h===null?s.next=s:(s.next=h.next,h.next=s),f.pending=s}}o.lanes|=t,s=o.alternate,s!==null&&(s.lanes|=t),xo(o.return,t,n),i.lanes|=t;break}s=s.next}}else if(o.tag===10)u=o.type===n.type?null:o.child;else if(o.tag===18){if(u=o.return,u===null)throw Error(y(341));u.lanes|=t,i=u.alternate,i!==null&&(i.lanes|=t),xo(u,t,n),u=o.sibling}else u=o.child;if(u!==null)u.return=o;else for(u=o;u!==null;){if(u===n){u=null;break}if(o=u.sibling,o!==null){o.return=u.return,u=o;break}u=u.return}o=u}oe(e,n,l.children,t),n=n.child}return n;case 9:return l=n.type,r=n.pendingProps.children,Zn(n,t),l=_e(l),r=r(l),n.flags|=1,oe(e,n,r,t),n.child;case 14:return r=n.type,l=Te(r,n.pendingProps),l=Te(r.type,l),zi(e,n,r,l,t);case 15:return Ra(e,n,n.type,n.pendingProps,t);case 17:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Nr(e,n),n.tag=1,pe(r)?(e=!0,Ar(n)):e=!1,Zn(n,t),ua(n,r,l),Co(n,r,l,t),Po(null,n,r,!0,e,t);case 19:return Ia(e,n,t);case 22:return ja(e,n,t)}throw Error(y(156,n.tag))};function qa(e,n){return Cs(e,n)}function Ed(e,n,t,r){this.tag=e,this.key=t,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=n,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ee(e,n,t,r){return new Ed(e,n,t,r)}function zu(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Cd(e){if(typeof e=="function")return zu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===Xo)return 11;if(e===Yo)return 14}return 2}function fn(e,n){var t=e.alternate;return t===null?(t=Ee(e.tag,n,e.key,e.mode),t.elementType=e.elementType,t.type=e.type,t.stateNode=e.stateNode,t.alternate=e,e.alternate=t):(t.pendingProps=n,t.type=e.type,t.flags=0,t.subtreeFlags=0,t.deletions=null),t.flags=e.flags&14680064,t.childLanes=e.childLanes,t.lanes=e.lanes,t.child=e.child,t.memoizedProps=e.memoizedProps,t.memoizedState=e.memoizedState,t.updateQueue=e.updateQueue,n=e.dependencies,t.dependencies=n===null?null:{lanes:n.lanes,firstContext:n.firstContext},t.sibling=e.sibling,t.index=e.index,t.ref=e.ref,t}function Tr(e,n,t,r,l,o){var u=2;if(r=e,typeof e=="function")zu(e)&&(u=1);else if(typeof e=="string")u=5;else e:switch(e){case Dn:return _n(t.children,l,o,n);case Ko:u=8,l|=8;break;case Xl:return e=Ee(12,t,n,l|2),e.elementType=Xl,e.lanes=o,e;case Yl:return e=Ee(13,t,n,l),e.elementType=Yl,e.lanes=o,e;case Gl:return e=Ee(19,t,n,l),e.elementType=Gl,e.lanes=o,e;case is:return fl(t,l,o,n);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case os:u=10;break e;case us:u=9;break e;case Xo:u=11;break e;case Yo:u=14;break e;case Je:u=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return n=Ee(u,t,n,l),n.elementType=e,n.type=r,n.lanes=o,n}function _n(e,n,t,r){return e=Ee(7,e,r,n),e.lanes=t,e}function fl(e,n,t,r){return e=Ee(22,e,r,n),e.elementType=is,e.lanes=t,e.stateNode={isHidden:!1},e}function Hl(e,n,t){return e=Ee(6,e,null,n),e.lanes=t,e}function Wl(e,n,t){return n=Ee(4,e.children!==null?e.children:[],e.key,n),n.lanes=t,n.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},n}function _d(e,n,t,r,l){this.tag=n,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Cl(0),this.expirationTimes=Cl(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Cl(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Tu(e,n,t,r,l,o,u,i,s){return e=new _d(e,n,t,i,s),n===1?(n=1,o===!0&&(n|=8)):n=0,o=Ee(3,null,null,n),e.current=o,o.stateNode=e,o.memoizedState={element:r,isDehydrated:t,cache:null,transitions:null,pendingSuspenseBoundaries:null},du(o),e}function Nd(e,n,t){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(tc)}catch(e){console.error(e)}}tc(),es.exports=we;var Rd=es.exports,Bi=Rd;Ql.createRoot=Bi.createRoot,Ql.hydrateRoot=Bi.hydrateRoot;const Hi=["bg-purple-300","bg-green-300","bg-yellow-300","bg-red-300","bg-blue-300"];function jd({text:e,position:n,margin:t}){return e!==` -`?T.jsx("span",{className:`ml-${t} leading-5 inline-block ${Hi[n%Hi.length]}`,children:e}):T.jsx("br",{})}function Od(){var k;const[e,n]=ae.useState([]),[t,r]=ae.useState([]),[l,o]=ae.useState([]),[u,i]=ae.useState("text"),[s,f]=ae.useState("Xenova/gpt-4"),h=ae.useRef(null),m=ae.useRef(null),p=ae.useRef(null);ae.useEffect(()=>{p.current||(p.current=new Worker(new URL("/assets/worker-5bbef2b6.js",self.location),{type:"module"}));const j=c=>{n(c.data.token_ids),r(c.data.decoded),o(c.data.margins)};return p.current.addEventListener("message",j),()=>p.current.removeEventListener("message",j)},[]);const g=ae.useCallback(j=>{const c=s,a=j.target.value;a.length>1e4&&(i(null),console.log("User most likely pasted in a large body of text (> 10k chars), so we hide the output (until specifically requested by the user).")),p.current.postMessage({model_id:c,text:a})},[s]),w=ae.useCallback(j=>{const c=j.target.value;f(c),p.current.postMessage({model_id:c,text:h.current.value})},[]);return T.jsxs("div",{className:"w-full max-w-[720px] flex flex-col gap-4 items-center",children:[T.jsxs("div",{children:[T.jsx("h1",{className:"text-5xl font-bold mb-2",children:"The Tokenizer Playground"}),T.jsxs("h2",{className:"text-lg font-normal",children:["Experiment with different tokenizers (running ",T.jsx("a",{className:"text-gray-900 underline",href:"https://github.com/xenova/transformers.js",children:"locally"})," in your browser)."]})]}),T.jsx("div",{children:T.jsxs("select",{value:s,onChange:w,className:"bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2",children:[T.jsx("option",{value:"Xenova/gpt-4",children:"gpt-4 / gpt-3.5-turbo / text-embedding-ada-002"}),T.jsx("option",{value:"Xenova/text-davinci-003",children:"text-davinci-003 / text-davinci-002"}),T.jsx("option",{value:"Xenova/gpt-3",children:"gpt-3"}),T.jsx("option",{value:"hf-internal-testing/llama-tokenizer",children:"LLaMA / Llama 2"}),T.jsx("option",{value:"Xenova/t5-small",children:"T5"}),T.jsx("option",{value:"Xenova/bert-base-cased",children:"bert-base-cased"})]})}),T.jsx("textarea",{ref:h,onChange:g,rows:"8",className:"font-mono text-lg block w-full p-2.5 text-gray-900 bg-gray-50 rounded-lg border border-gray-200",placeholder:"Enter some text"}),T.jsxs("div",{className:"flex justify-center gap-5",children:[T.jsxs("div",{className:"flex flex-col",children:[T.jsx("h2",{className:"font-semibold uppercase leading-4",children:"Tokens"}),T.jsx("h3",{className:"font-semibold text-3xl",children:e.length.toLocaleString()})]}),T.jsxs("div",{className:"flex flex-col",children:[T.jsx("h2",{className:"font-semibold uppercase leading-4",children:"Characters"}),T.jsx("h3",{className:"font-semibold text-3xl",children:(((k=h.current)==null?void 0:k.value.length)??0).toLocaleString()})]})]}),T.jsx("div",{ref:m,className:"font-mono text-lg p-2.5 w-full bg-gray-100 rounded-lg border border-gray-200 whitespace-pre-wrap text-left h-[200px] overflow-y-auto",children:u==="text"?t.map((j,c)=>T.jsx(jd,{text:j,position:c,margin:l[c]},c)):u==="token_ids"?`[${e.join(", ")}]`:null}),T.jsxs("div",{className:"flex items-center gap-2 self-end",children:[T.jsxs("div",{className:"flex items-center",children:[T.jsx("input",{checked:u==="text",onChange:()=>i("text"),id:"output-radio-1",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),T.jsx("label",{htmlFor:"output-radio-1",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Text"})]}),T.jsxs("div",{className:"flex items-center",children:[T.jsx("input",{checked:u==="token_ids",onChange:()=>i("token_ids"),id:"output-radio-2",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),T.jsx("label",{htmlFor:"output-radio-2",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Token IDs"})]}),T.jsxs("div",{className:"flex items-center",children:[T.jsx("input",{checked:u===null,onChange:()=>i(null),id:"output-radio-3",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),T.jsx("label",{htmlFor:"output-radio-3",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Hide"})]})]})]})}Ql.createRoot(document.getElementById("root")).render(T.jsx(kc.StrictMode,{children:T.jsx(Od,{})})); diff --git a/spaces/XzJosh/Bella-Bert-VITS2/train_ms.py b/spaces/XzJosh/Bella-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bella-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/Y-T-G/Blur-Anything/tracker/model/losses.py b/spaces/Y-T-G/Blur-Anything/tracker/model/losses.py deleted file mode 100644 index 15f7835f66235ca4ad0918126b08e75ca8b7853c..0000000000000000000000000000000000000000 --- a/spaces/Y-T-G/Blur-Anything/tracker/model/losses.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from collections import defaultdict - - -def dice_loss(input_mask, cls_gt): - num_objects = input_mask.shape[1] - losses = [] - for i in range(num_objects): - mask = input_mask[:, i].flatten(start_dim=1) - # background not in mask, so we add one to cls_gt - gt = (cls_gt == (i + 1)).float().flatten(start_dim=1) - numerator = 2 * (mask * gt).sum(-1) - denominator = mask.sum(-1) + gt.sum(-1) - loss = 1 - (numerator + 1) / (denominator + 1) - losses.append(loss) - return torch.cat(losses).mean() - - -# https://stackoverflow.com/questions/63735255/how-do-i-compute-bootstrapped-cross-entropy-loss-in-pytorch -class BootstrappedCE(nn.Module): - def __init__(self, start_warm, end_warm, top_p=0.15): - super().__init__() - - self.start_warm = start_warm - self.end_warm = end_warm - self.top_p = top_p - - def forward(self, input, target, it): - if it < self.start_warm: - return F.cross_entropy(input, target), 1.0 - - raw_loss = F.cross_entropy(input, target, reduction="none").view(-1) - num_pixels = raw_loss.numel() - - if it > self.end_warm: - this_p = self.top_p - else: - this_p = self.top_p + (1 - self.top_p) * ( - (self.end_warm - it) / (self.end_warm - self.start_warm) - ) - loss, _ = torch.topk(raw_loss, int(num_pixels * this_p), sorted=False) - return loss.mean(), this_p - - -class LossComputer: - def __init__(self, config): - super().__init__() - self.config = config - self.bce = BootstrappedCE(config["start_warm"], config["end_warm"]) - - def compute(self, data, num_objects, it): - losses = defaultdict(int) - - b, t = data["rgb"].shape[:2] - - losses["total_loss"] = 0 - for ti in range(1, t): - for bi in range(b): - loss, p = self.bce( - data[f"logits_{ti}"][bi : bi + 1, : num_objects[bi] + 1], - data["cls_gt"][bi : bi + 1, ti, 0], - it, - ) - losses["p"] += p / b / (t - 1) - losses[f"ce_loss_{ti}"] += loss / b - - losses["total_loss"] += losses["ce_loss_%d" % ti] - losses[f"dice_loss_{ti}"] = dice_loss( - data[f"masks_{ti}"], data["cls_gt"][:, ti, 0] - ) - losses["total_loss"] += losses[f"dice_loss_{ti}"] - - return losses diff --git a/spaces/YUANAI/DiffspeechResearch/docs/prepare_vocoder.md b/spaces/YUANAI/DiffspeechResearch/docs/prepare_vocoder.md deleted file mode 100644 index 349c8f10888fa7595642b4c730a1313b5fbc4360..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/docs/prepare_vocoder.md +++ /dev/null @@ -1,49 +0,0 @@ -# Prepare Vocoder - -We use [HiFi-GAN](https://github.com/jik876/hifi-gan) as the default vocoder. - -## LJSpeech - -### Use Pretrained Model - -```bash -wget https://github.com/xx/xx/releases/download/pretrain-model/hifi_lj.zip -unzip hifi_lj.zip -mv hifi_lj checkpoints/hifi_lj -``` - -### Train Your Vocoder - -#### Set Config Path and Experiment Name - -```bash -export CONFIG_NAME=egs/datasets/audio/lj/hifigan.yaml -export MY_EXP_NAME=my_hifigan_exp -``` - -#### Prepare Dataset - -Prepare dataset following [prepare_data.md](./prepare_data.md). - -If you have run the `prepare_data` step of the acoustic -model (e.g., PortaSpeech and DiffSpeech), you only need to binarize the dataset for the vocoder training: - -```bash -python data_gen/tts/runs/binarize.py --config $CONFIG_NAME -``` - -#### Training - -```bash -CUDA_VISIBLE_DEVICES=0 python tasks/run.py --config $CONFIG_NAME --exp_name $MY_EXP_NAME --reset -``` - -#### Inference (Testing) - -```bash -CUDA_VISIBLE_DEVICES=0 python tasks/run.py --config $PS_CONFIG --exp_name $MY_EXP_NAME --infer -``` - -#### Use the trained vocoder -Modify the `vocoder_ckpt` in config files of acoustic models (e.g., `egs/datasets/audio/lj/base_text2mel.yaml`) to $MY_EXP_NAME (e.g., `vocoder_ckpt: checkpoints/my_hifigan_exp`) - diff --git a/spaces/YangHao520/testCreateFile/file.py b/spaces/YangHao520/testCreateFile/file.py deleted file mode 100644 index b38b3e50ea23d23687726db646cfd4c9a3505c0b..0000000000000000000000000000000000000000 --- a/spaces/YangHao520/testCreateFile/file.py +++ /dev/null @@ -1,48 +0,0 @@ -import os - -import gradio as gr -import tempfile -import shutil -def generate_file(file_obj): - global tmpdir - print('临时文件夹地址:{}'.format(tmpdir)) - print('上传文件的地址:{}'.format(file_obj.name)) # 输出上传后的文件在gradio中保存的绝对地址 - - #获取到上传后的文件的绝对路径后,其余的操作就和平常一致了 - - # 将文件复制到临时目录中 - shutil.copy(file_obj.name, tmpdir) - - # 获取上传Gradio的文件名称 - FileName=os.path.basename(file_obj.name) - - # 获取拷贝在临时目录的新的文件地址 - NewfilePath=os.path.join(tmpdir,FileName) - print(NewfilePath) - - # 打开复制到新路径后的文件 - with open(NewfilePath, 'rb') as file_obj: - - #在本地电脑打开一个新的文件,并且将上传文件内容写入到新文件 - outputPath=os.path.join(tmpdir,"New"+FileName) - with open(outputPath,'wb') as w: - w.write(file_obj.read()) - - # 返回新文件的的地址(注意这里) - return outputPath -def main(): - global tmpdir - with tempfile.TemporaryDirectory(dir='.') as tmpdir: - # 定义输入和输出 - inputs = gr.components.File(label="上传文件") - outputs = gr.components.File(label="下载文件") - - # 创建 Gradio 应用程序g - app = gr.Interface(fn=generate_file, inputs=inputs, outputs=outputs, title="文件上传、并生成可下载文件demo", - description="上传任何文件都可以,只要大小别超过你电脑的内存即可" - ) - - # 启动应用程序 - app.launch(share=True) -if __name__=="__main__": - main() \ No newline at end of file diff --git a/spaces/Yiqin/ChatVID/model/fastchat/train/llama_flash_attn_monkey_patch.py b/spaces/Yiqin/ChatVID/model/fastchat/train/llama_flash_attn_monkey_patch.py deleted file mode 100644 index 00fc39edff8f3e8b23bc5083e82db162153bb916..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/fastchat/train/llama_flash_attn_monkey_patch.py +++ /dev/null @@ -1,114 +0,0 @@ -from typing import List, Optional, Tuple - -import torch -from torch import nn - -import transformers -from transformers.models.llama.modeling_llama import apply_rotary_pos_emb - -from einops import rearrange - -from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_func -from flash_attn.bert_padding import unpad_input, pad_input - - -def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, -) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel - - attention_mask: [bsz, q_len] - """ - bsz, q_len, _ = hidden_states.size() - - query_states = ( - self.q_proj(hidden_states) - .view(bsz, q_len, self.num_heads, self.head_dim) - .transpose(1, 2) - ) - key_states = ( - self.k_proj(hidden_states) - .view(bsz, q_len, self.num_heads, self.head_dim) - .transpose(1, 2) - ) - value_states = ( - self.v_proj(hidden_states) - .view(bsz, q_len, self.num_heads, self.head_dim) - .transpose(1, 2) - ) - # [bsz, q_len, nh, hd] - # [bsz, nh, q_len, hd] - - kv_seq_len = key_states.shape[-2] - assert past_key_value is None, "past_key_value is not supported" - - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = apply_rotary_pos_emb( - query_states, key_states, cos, sin, position_ids - ) - # [bsz, nh, t, hd] - assert not output_attentions, "output_attentions is not supported" - assert not use_cache, "use_cache is not supported" - - # Flash attention codes from - # https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/flash_attention.py - - # transform the data into the format required by flash attention - qkv = torch.stack( - [query_states, key_states, value_states], dim=2 - ) # [bsz, nh, 3, q_len, hd] - qkv = qkv.transpose(1, 3) # [bsz, q_len, 3, nh, hd] - # We have disabled _prepare_decoder_attention_mask in LlamaModel - # the attention_mask should be the same as the key_padding_mask - key_padding_mask = attention_mask - - if key_padding_mask is None: - qkv = rearrange(qkv, "b s ... -> (b s) ...") - max_s = q_len - cu_q_lens = torch.arange( - 0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=qkv.device - ) - output = flash_attn_unpadded_qkvpacked_func( - qkv, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True - ) - output = rearrange(output, "(b s) ... -> b s ...", b=bsz) - else: - nheads = qkv.shape[-2] - x = rearrange(qkv, "b s three h d -> b s (three h d)") - x_unpad, indices, cu_q_lens, max_s = unpad_input(x, key_padding_mask) - x_unpad = rearrange( - x_unpad, "nnz (three h d) -> nnz three h d", three=3, h=nheads - ) - output_unpad = flash_attn_unpadded_qkvpacked_func( - x_unpad, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True - ) - output = rearrange( - pad_input( - rearrange(output_unpad, "nnz h d -> nnz (h d)"), indices, bsz, q_len - ), - "b s (h d) -> b s h d", - h=nheads, - ) - return self.o_proj(rearrange(output, "b s h d -> b s (h d)")), None, None - - -# Disable the transformation of the attention mask in LlamaModel as the flash attention -# requires the attention mask to be the same as the key_padding_mask -def _prepare_decoder_attention_mask( - self, attention_mask, input_shape, inputs_embeds, past_key_values_length -): - # [bsz, seq_len] - return attention_mask - - -def replace_llama_attn_with_flash_attn(): - transformers.models.llama.modeling_llama.LlamaModel._prepare_decoder_attention_mask = ( - _prepare_decoder_attention_mask - ) - transformers.models.llama.modeling_llama.LlamaAttention.forward = forward diff --git a/spaces/YuAnthony/Audio-Caption/create_audio_npy.py b/spaces/YuAnthony/Audio-Caption/create_audio_npy.py deleted file mode 100644 index 64a1fae7574ba850d91168fe7d6c6de969a02410..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Audio-Caption/create_audio_npy.py +++ /dev/null @@ -1,44 +0,0 @@ -import glob -import librosa -import numpy as np - -from tools.features_log_mel_bands import feature_extraction - -from tools.file_io import load_yaml_file -from tools.argument_parsing import get_argument_parser -from pathlib import Path - -from concurrent.futures import ProcessPoolExecutor -from multiprocessing import cpu_count -from tqdm import tqdm -from functools import partial - -executor = ProcessPoolExecutor(max_workers=cpu_count()) - - -def wav_to_mel(wav_file_path): - - args = get_argument_parser().parse_args(args=[]) - file_dir = args.file_dir - config_file = args.config_file - file_ext = args.file_ext - settings = load_yaml_file(Path( - file_dir, f'{config_file}.{file_ext}')) - - settings_audio = settings['dataset_creation_settings']['audio'] - settings_features = settings['feature_extraction_settings'] - - y = librosa.load(path=wav_file_path, sr=int(settings_audio['sr']), mono=settings_audio['to_mono'])[0] -# print("wav") -# print(y) - - mel = feature_extraction(y, **settings_features['process']) -# print("feature") -# print(mel) - - return mel - - - - - diff --git a/spaces/Zeltoria/anime-voice-generator/monotonic_align/core.py b/spaces/Zeltoria/anime-voice-generator/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/Zeltoria/anime-voice-generator/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file diff --git a/spaces/abdellatif/pokemon-detector/app.py b/spaces/abdellatif/pokemon-detector/app.py deleted file mode 100644 index 652adb720d0ab8ef74928df7d42e9a451b4cd437..0000000000000000000000000000000000000000 --- a/spaces/abdellatif/pokemon-detector/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import os -import fastbook -from pathlib import Path -from fastai.vision.widgets import * - -fastbook.setup_book() - -examples = ['./examples/pikachu.webp', './examples/charizard.webp', - './examples/mewtwo.jpg', './examples/rayquaza.jpeg', './examples/cinderace.webp'] - -path = Path() -path.ls(file_exts='.pkl') -learn_inf = fastbook.load_learner(path/'pokemon-detector.pkl') - -labels = learn_inf.dls.vocab - -def pokemon_classifier(image): - image = fastbook.PILImage.create(image) - pred, pred_id, probs = learn_inf.predict(image) - output = {labels[i]: float(probs[i]) for i in range(len(labels))} - # limit the output to the top 5 results - output = dict(sorted(output.items(), key=lambda item: item[1], reverse=True)[:5]) - return output - - -iface = gr.Interface(fn=pokemon_classifier, inputs="image", outputs="label", examples=examples) -iface.launch() \ No newline at end of file diff --git a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/subword/learn_joint_bpe_and_vocab.py b/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/subword/learn_joint_bpe_and_vocab.py deleted file mode 100644 index 21def81477611032ea4820df3325a3738a19e55c..0000000000000000000000000000000000000000 --- a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/subword/learn_joint_bpe_and_vocab.py +++ /dev/null @@ -1,166 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Author: Rico Sennrich - -"""Use byte pair encoding (BPE) to learn a variable-length encoding of the vocabulary in a text. -This script learns BPE jointly on a concatenation of a list of texts (typically the source and target side of a parallel corpus, -applies the learned operation to each and (optionally) returns the resulting vocabulary of each text. -The vocabulary can be used in apply_bpe.py to avoid producing symbols that are rare or OOV in a training text. - -Reference: -Rico Sennrich, Barry Haddow and Alexandra Birch (2016). Neural Machine Translation of Rare Words with Subword Units. -Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany. -""" - -from __future__ import unicode_literals - -import sys -import os -import inspect -import codecs -import argparse -import tempfile -import warnings -from collections import Counter -from multiprocessing import cpu_count - -#hack to get imports working if running this as a script, or within a package -if __name__ == '__main__': - import learn_bpe - import apply_bpe -else: - from . import learn_bpe - from . import apply_bpe - -# hack for python2/3 compatibility -from io import open -argparse.open = open - -def create_parser(subparsers=None): - - if subparsers: - parser = subparsers.add_parser('learn-joint-bpe-and-vocab', - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - else: - parser = argparse.ArgumentParser( - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - - parser.add_argument( - '--input', '-i', type=argparse.FileType('r'), required=True, nargs = '+', - metavar='PATH', - help="Input texts (multiple allowed).") - parser.add_argument( - '--output', '-o', type=argparse.FileType('w'), required=True, - metavar='PATH', - help="Output file for BPE codes.") - parser.add_argument( - '--symbols', '-s', type=int, default=10000, - help="Create this many new symbols (each representing a character n-gram) (default: %(default)s)") - parser.add_argument( - '--separator', type=str, default='@@', metavar='STR', - help="Separator between non-final subword units (default: '%(default)s')") - parser.add_argument( - '--write-vocabulary', type=argparse.FileType('w'), required=True, nargs = '+', default=None, - metavar='PATH', dest='vocab', - help='Write to these vocabulary files after applying BPE. One per input text. Used for filtering in apply_bpe.py') - parser.add_argument( - '--min-frequency', type=int, default=2, metavar='FREQ', - help='Stop if no symbol pair has frequency >= FREQ (default: %(default)s)') - parser.add_argument( - '--total-symbols', '-t', action="store_true", - help="subtract number of characters from the symbols to be generated (so that '--symbols' becomes an estimate for the total number of symbols needed to encode text).") - parser.add_argument( - '--num-workers', type=int, default=1, - help="Number of processors to process texts, only supported in Python3. If -1, set `multiprocessing.cpu_count()`. (default: %(default)s)") - parser.add_argument( - '--verbose', '-v', action="store_true", - help="verbose mode.") - - return parser - -def learn_joint_bpe_and_vocab(args): - - if args.vocab and len(args.input) != len(args.vocab): - sys.stderr.write('Error: number of input files and vocabulary files must match\n') - sys.exit(1) - - # read/write files as UTF-8 - args.input = [codecs.open(f.name, encoding='UTF-8') for f in args.input] - args.vocab = [codecs.open(f.name, 'w', encoding='UTF-8') for f in args.vocab] - - # get combined vocabulary of all input texts - full_vocab = Counter() - for f in args.input: - full_vocab += learn_bpe.get_vocabulary(f, num_workers=args.num_workers) - f.seek(0) - - vocab_list = ['{0} {1}'.format(key, freq) for (key, freq) in full_vocab.items()] - - # learn BPE on combined vocabulary - with codecs.open(args.output.name, 'w', encoding='UTF-8') as output: - learn_bpe.learn_bpe(vocab_list, output, args.symbols, args.min_frequency, args.verbose, is_dict=True, total_symbols=args.total_symbols) - - with codecs.open(args.output.name, encoding='UTF-8') as codes: - bpe = apply_bpe.BPE(codes, separator=args.separator) - - # apply BPE to each training corpus and get vocabulary - for train_file, vocab_file in zip(args.input, args.vocab): - - tmp = tempfile.NamedTemporaryFile(delete=False) - tmp.close() - - tmpout = codecs.open(tmp.name, 'w', encoding='UTF-8') - - train_file.seek(0) - bpe.process_lines(train_file.name, tmpout, num_workers=args.num_workers) - - tmpout.close() - tmpin = codecs.open(tmp.name, encoding='UTF-8') - - vocab = learn_bpe.get_vocabulary(tmpin, num_workers=args.num_workers) - tmpin.close() - os.remove(tmp.name) - - for key, freq in sorted(vocab.items(), key=lambda x: x[1], reverse=True): - vocab_file.write("{0} {1}\n".format(key, freq)) - train_file.close() - vocab_file.close() - - -if __name__ == '__main__': - - currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) - newdir = os.path.join(currentdir, 'subword_nmt') - if os.path.isdir(newdir): - warnings.warn( - "this script's location has moved to {0}. This symbolic link will be removed in a future version. Please point to the new location, or install the package and use the command 'subword-nmt'".format(newdir), - DeprecationWarning - ) - - # python 2/3 compatibility - if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) - else: - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer) - - parser = create_parser() - args = parser.parse_args() - - if args.num_workers <= 0: - args.num_workers = cpu_count() - - if sys.version_info < (3, 0): - args.separator = args.separator.decode('UTF-8') - if args.num_workers > 1: - args.num_workers = 1 - warnings.warn("Parallel mode is only supported in Python3. Using 1 processor instead.") - - assert(len(args.input) == len(args.vocab)) - - learn_joint_bpe_and_vocab(args) diff --git a/spaces/abhishek/first-order-motion-model/sync_batchnorm/replicate.py b/spaces/abhishek/first-order-motion-model/sync_batchnorm/replicate.py deleted file mode 100644 index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000 --- a/spaces/abhishek/first-order-motion-model/sync_batchnorm/replicate.py +++ /dev/null @@ -1,94 +0,0 @@ -# -*- coding: utf-8 -*- -# File : replicate.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import functools - -from torch.nn.parallel.data_parallel import DataParallel - -__all__ = [ - 'CallbackContext', - 'execute_replication_callbacks', - 'DataParallelWithCallback', - 'patch_replication_callback' -] - - -class CallbackContext(object): - pass - - -def execute_replication_callbacks(modules): - """ - Execute an replication callback `__data_parallel_replicate__` on each module created by original replication. - - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Note that, as all modules are isomorphism, we assign each sub-module with a context - (shared among multiple copies of this module on different devices). - Through this context, different copies can share some information. - - We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback - of any slave copies. - """ - master_copy = modules[0] - nr_modules = len(list(master_copy.modules())) - ctxs = [CallbackContext() for _ in range(nr_modules)] - - for i, module in enumerate(modules): - for j, m in enumerate(module.modules()): - if hasattr(m, '__data_parallel_replicate__'): - m.__data_parallel_replicate__(ctxs[j], i) - - -class DataParallelWithCallback(DataParallel): - """ - Data Parallel with a replication callback. - - An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by - original `replicate` function. - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - # sync_bn.__data_parallel_replicate__ will be invoked. - """ - - def replicate(self, module, device_ids): - modules = super(DataParallelWithCallback, self).replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - -def patch_replication_callback(data_parallel): - """ - Monkey-patch an existing `DataParallel` object. Add the replication callback. - Useful when you have customized `DataParallel` implementation. - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) - > patch_replication_callback(sync_bn) - # this is equivalent to - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - """ - - assert isinstance(data_parallel, DataParallel) - - old_replicate = data_parallel.replicate - - @functools.wraps(old_replicate) - def new_replicate(module, device_ids): - modules = old_replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - data_parallel.replicate = new_replicate diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/anchor/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/anchor/__init__.py deleted file mode 100644 index 5838ff3eefb03bc83928fa13848cea9ff8647827..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/anchor/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -from .anchor_generator import (AnchorGenerator, LegacyAnchorGenerator, - YOLOAnchorGenerator) -from .builder import ANCHOR_GENERATORS, build_anchor_generator -from .point_generator import PointGenerator -from .utils import anchor_inside_flags, calc_region, images_to_levels - -__all__ = [ - 'AnchorGenerator', 'LegacyAnchorGenerator', 'anchor_inside_flags', - 'PointGenerator', 'images_to_levels', 'calc_region', - 'build_anchor_generator', 'ANCHOR_GENERATORS', 'YOLOAnchorGenerator' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py deleted file mode 100644 index df85a0112d27d97301fff56189f99bee0bf8efa5..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py +++ /dev/null @@ -1,27 +0,0 @@ -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock -from .fused_semantic_head import FusedSemanticHead - - -@HEADS.register_module() -class SCNetSemanticHead(FusedSemanticHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. - """ - - def __init__(self, conv_to_res=True, **kwargs): - super(SCNetSemanticHead, self).__init__(**kwargs) - self.conv_to_res = conv_to_res - if self.conv_to_res: - num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/scnet.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/scnet.py deleted file mode 100644 index 04a2347c4ec1efcbfda59a134cddd8bde620d983..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/scnet.py +++ /dev/null @@ -1,10 +0,0 @@ -from ..builder import DETECTORS -from .cascade_rcnn import CascadeRCNN - - -@DETECTORS.register_module() -class SCNet(CascadeRCNN): - """Implementation of `SCNet `_""" - - def __init__(self, **kwargs): - super(SCNet, self).__init__(**kwargs) diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/tests/unit/test_scenes.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/tests/unit/test_scenes.py deleted file mode 100644 index d85dd714cb5d842ea12dee4140adfd7db55c9c01..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/tests/unit/test_scenes.py +++ /dev/null @@ -1,235 +0,0 @@ -import numpy as np -import pytest -import trimesh - -from pyrender import (Mesh, PerspectiveCamera, DirectionalLight, - SpotLight, PointLight, Scene, Node, OrthographicCamera) - - -def test_scenes(): - - # Basics - s = Scene() - assert np.allclose(s.bg_color, np.ones(4)) - assert np.allclose(s.ambient_light, np.zeros(3)) - assert len(s.nodes) == 0 - assert s.name is None - s.name = 'asdf' - s.bg_color = None - s.ambient_light = None - assert np.allclose(s.bg_color, np.ones(4)) - assert np.allclose(s.ambient_light, np.zeros(3)) - - assert s.nodes == set() - assert s.cameras == set() - assert s.lights == set() - assert s.point_lights == set() - assert s.spot_lights == set() - assert s.directional_lights == set() - assert s.meshes == set() - assert s.camera_nodes == set() - assert s.light_nodes == set() - assert s.point_light_nodes == set() - assert s.spot_light_nodes == set() - assert s.directional_light_nodes == set() - assert s.mesh_nodes == set() - assert s.main_camera_node is None - assert np.all(s.bounds == 0) - assert np.all(s.centroid == 0) - assert np.all(s.extents == 0) - assert np.all(s.scale == 0) - - # From trimesh scene - tms = trimesh.load('tests/data/WaterBottle.glb') - s = Scene.from_trimesh_scene(tms) - assert len(s.meshes) == 1 - assert len(s.mesh_nodes) == 1 - - # Test bg color formatting - s = Scene(bg_color=[0, 1.0, 0]) - assert np.allclose(s.bg_color, np.array([0.0, 1.0, 0.0, 1.0])) - - # Test constructor for nodes - n1 = Node() - n2 = Node() - n3 = Node() - nodes = [n1, n2, n3] - s = Scene(nodes=nodes) - n1.children.append(n2) - s = Scene(nodes=nodes) - n3.children.append(n2) - with pytest.raises(ValueError): - s = Scene(nodes=nodes) - n3.children = [] - n2.children.append(n3) - n3.children.append(n2) - with pytest.raises(ValueError): - s = Scene(nodes=nodes) - - # Test node accessors - n1 = Node() - n2 = Node() - n3 = Node() - nodes = [n1, n2] - s = Scene(nodes=nodes) - assert s.has_node(n1) - assert s.has_node(n2) - assert not s.has_node(n3) - - # Test node poses - for n in nodes: - assert np.allclose(s.get_pose(n), np.eye(4)) - with pytest.raises(ValueError): - s.get_pose(n3) - with pytest.raises(ValueError): - s.set_pose(n3, np.eye(4)) - tf = np.eye(4) - tf[:3,3] = np.ones(3) - s.set_pose(n1, tf) - assert np.allclose(s.get_pose(n1), tf) - assert np.allclose(s.get_pose(n2), np.eye(4)) - - nodes = [n1, n2, n3] - tf2 = np.eye(4) - tf2[:3,:3] = np.diag([-1,-1,1]) - n1.children.append(n2) - n1.matrix = tf - n2.matrix = tf2 - s = Scene(nodes=nodes) - assert np.allclose(s.get_pose(n1), tf) - assert np.allclose(s.get_pose(n2), tf.dot(tf2)) - assert np.allclose(s.get_pose(n3), np.eye(4)) - - n1 = Node() - n2 = Node() - n3 = Node() - n1.children.append(n2) - s = Scene() - s.add_node(n1) - with pytest.raises(ValueError): - s.add_node(n2) - s.set_pose(n1, tf) - assert np.allclose(s.get_pose(n1), tf) - assert np.allclose(s.get_pose(n2), tf) - s.set_pose(n2, tf2) - assert np.allclose(s.get_pose(n2), tf.dot(tf2)) - - # Test node removal - n1 = Node() - n2 = Node() - n3 = Node() - n1.children.append(n2) - n2.children.append(n3) - s = Scene(nodes=[n1, n2, n3]) - s.remove_node(n2) - assert len(s.nodes) == 1 - assert n1 in s.nodes - assert len(n1.children) == 0 - assert len(n2.children) == 1 - s.add_node(n2, parent_node=n1) - assert len(n1.children) == 1 - n1.matrix = tf - n3.matrix = tf2 - assert np.allclose(s.get_pose(n3), tf.dot(tf2)) - - # Now test ADD function - s = Scene() - m = Mesh([], name='m') - cp = PerspectiveCamera(yfov=2.0) - co = OrthographicCamera(xmag=1.0, ymag=1.0) - dl = DirectionalLight() - pl = PointLight() - sl = SpotLight() - - n1 = s.add(m, name='mn') - assert n1.mesh == m - assert len(s.nodes) == 1 - assert len(s.mesh_nodes) == 1 - assert n1 in s.mesh_nodes - assert len(s.meshes) == 1 - assert m in s.meshes - assert len(s.get_nodes(node=n2)) == 0 - n2 = s.add(m, pose=tf) - assert len(s.nodes) == len(s.mesh_nodes) == 2 - assert len(s.meshes) == 1 - assert len(s.get_nodes(node=n1)) == 1 - assert len(s.get_nodes(node=n1, name='mn')) == 1 - assert len(s.get_nodes(name='mn')) == 1 - assert len(s.get_nodes(obj=m)) == 2 - assert len(s.get_nodes(obj=m, obj_name='m')) == 2 - assert len(s.get_nodes(obj=co)) == 0 - nsl = s.add(sl, name='sln') - npl = s.add(pl, parent_name='sln') - assert nsl.children[0] == npl - ndl = s.add(dl, parent_node=npl) - assert npl.children[0] == ndl - nco = s.add(co) - ncp = s.add(cp) - - assert len(s.light_nodes) == len(s.lights) == 3 - assert len(s.point_light_nodes) == len(s.point_lights) == 1 - assert npl in s.point_light_nodes - assert len(s.spot_light_nodes) == len(s.spot_lights) == 1 - assert nsl in s.spot_light_nodes - assert len(s.directional_light_nodes) == len(s.directional_lights) == 1 - assert ndl in s.directional_light_nodes - assert len(s.cameras) == len(s.camera_nodes) == 2 - assert s.main_camera_node == nco - s.main_camera_node = ncp - s.remove_node(ncp) - assert len(s.cameras) == len(s.camera_nodes) == 1 - assert s.main_camera_node == nco - s.remove_node(n2) - assert len(s.meshes) == 1 - s.remove_node(n1) - assert len(s.meshes) == 0 - s.remove_node(nsl) - assert len(s.lights) == 0 - s.remove_node(nco) - assert s.main_camera_node is None - - s.add_node(n1) - s.clear() - assert len(s.nodes) == 0 - - # Trigger final errors - with pytest.raises(ValueError): - s.main_camera_node = None - with pytest.raises(ValueError): - s.main_camera_node = ncp - with pytest.raises(ValueError): - s.add(m, parent_node=n1) - with pytest.raises(ValueError): - s.add(m, name='asdf') - s.add(m, name='asdf') - s.add(m, parent_name='asdf') - with pytest.raises(ValueError): - s.add(m, parent_name='asfd') - with pytest.raises(TypeError): - s.add(None) - - s.clear() - # Test bounds - m1 = Mesh.from_trimesh(trimesh.creation.box()) - m2 = Mesh.from_trimesh(trimesh.creation.box()) - m3 = Mesh.from_trimesh(trimesh.creation.box()) - n1 = Node(mesh=m1) - n2 = Node(mesh=m2, translation=[1.0, 0.0, 0.0]) - n3 = Node(mesh=m3, translation=[0.5, 0.0, 1.0]) - s.add_node(n1) - s.add_node(n2) - s.add_node(n3) - assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [1.5, 0.5, 1.5]]) - s.clear() - s.add_node(n1) - s.add_node(n2, parent_node=n1) - s.add_node(n3, parent_node=n2) - assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [2.0, 0.5, 1.5]]) - tf = np.eye(4) - tf[:3,3] = np.ones(3) - s.set_pose(n3, tf) - assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [2.5, 1.5, 1.5]]) - s.remove_node(n2) - assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [0.5, 0.5, 0.5]]) - s.clear() - assert np.allclose(s.bounds, 0.0) diff --git a/spaces/abyildirim/inst-inpaint/ldm/models/diffusion/ddpm.py b/spaces/abyildirim/inst-inpaint/ldm/models/diffusion/ddpm.py deleted file mode 100644 index f0b4591f6854e98b6c616df5439aca15ff89c457..0000000000000000000000000000000000000000 --- a/spaces/abyildirim/inst-inpaint/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,1062 +0,0 @@ -################################################################################################## -# Adapted from: https://github.com/CompVis/latent-diffusion/blob/main/ldm/models/diffusion/ddpm.py -################################################################################################## -# Utilized resources: -# - https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# - https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -# - https://github.com/CompVis/taming-transformers -################################################################################################## - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager -from functools import partial -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only -from ldm.util import log_txt_as_img, exists, default, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import VQModelInterface -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler -from PIL import Image -from ldm.util import seed_everything - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - -class DDPM(pl.LightningModule): - # DDPM with Gaussian diffusion in image space. - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # Weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # All assuming fixed variance schedules - scheduler_config=None, - learn_logvar=False, - logvar_init=0. - ): - super().__init__() - assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size - self.channels = channels - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # Calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # Calculations for posterior q(x_{t-1} | x_t, x_0) - # Equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) + self.v_posterior * betas - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # Log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain. - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) - else: - raise NotImplementedError("mu not supported") - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # No noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - else: - raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if isinstance(x, list): - return x - x = x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # Getting diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # Getting denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - - -class LatentDiffusion(DDPM): - def __init__(self, - first_stage_config, - cond_stage_config, - cond_stage_instruction_embedder_config=None, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_instruction_key=None, - cond_stage_trainable=False, - cond_stage_instruction_embedder_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - *args, **kwargs): - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # For backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - self.cond_stage_instruction_key = cond_stage_instruction_key - self.cond_stage_instruction_embedder_config = cond_stage_instruction_embedder_config - self.cond_stage_instruction_embedder_trainable = cond_stage_instruction_embedder_trainable - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.instantiate_cond_stage_instruction_embedder(cond_stage_instruction_embedder_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - def keep_attn_map_dict(self, keep_attn_maps): - self.model.keep_attn_map_dict(keep_attn_maps) - - def get_attn_map_dict(self): - return self.model.attn_dict - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # Only for the very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # Set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. / z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def instantiate_cond_stage_instruction_embedder(self, config): - if self.cond_stage_instruction_embedder_config is not None: - assert self.cond_stage_instruction_key is not None - self.cond_stage_instruction_embedder = instantiate_from_config(config) - if not self.cond_stage_instruction_embedder_trainable: - self.cond_stage_instruction_embedder = self.cond_stage_instruction_embedder.eval() - self.cond_stage_instruction_embedder.train = disabled_train - for param in self.cond_stage_instruction_embedder.parameters(): - param.requires_grad = False - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - @torch.no_grad() - def get_main_input(self, batch, k, return_first_stage_outputs, force_c_encode, - cond_key, return_original_cond, bs): - x = super().get_input(batch, k) - check_condition_modification = False - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None: - check_condition_modification = True - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - else: - c = None - xc = None - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out, check_condition_modification - - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None): - - out, check_condition_modification = self.get_main_input(batch, k, return_first_stage_outputs, force_c_encode, - cond_key, return_original_cond, bs) - c = out[1] - # Implemented for inpainting model - if check_condition_modification: - if self.cond_stage_instruction_key and self.model.conditioning_key == "concat": - instructions = super().get_input(batch, self.cond_stage_instruction_key) - c = self.cond_stage_instruction_embedder(c, instructions) - - if self.cond_stage_instruction_key and self.model.conditioning_key == "hybrid": - instructions = super().get_input(batch, self.cond_stage_instruction_key) - # Condition image feature is sent as None to the instruction embedder (instruction embedding is not concatenated) - instruction_embedding = self.cond_stage_instruction_embedder(None, instructions) - c = {'c_concat': c, 'c_crossattn': instruction_embedding} - - out[1] = c - return out - - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def apply_model(self, x_noisy, t, cond, index=None): - # self.model.conditioning_key is not hybrid - if not isinstance(cond, dict): - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - x_recon = self.model(x_noisy, t, **cond, index=index) - - if isinstance(x_recon, tuple): - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - - loss = loss_simple / torch.exp(logvar_t) + logvar_t - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) - if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - - if return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - - if return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # No noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # Spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None,**kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, plot_progressive_rows=True, instruction_img_size=256, **kwargs): - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - if self.cond_stage_instruction_key and self.model.conditioning_key == "concat": - c_cond = c[:,:-self.cond_stage_instruction_embedder.out_size,:,:] - else: - c_cond = c - if isinstance(c_cond, dict): - c_cond = c_cond["c_concat"] - xc = self.cond_stage_model.decode(c_cond) - log["conditioning"] = xc - elif isimage(xc): - log["conditioning"] = xc - - if self.cond_stage_instruction_key is not None: - instructions = super().get_input(batch, self.cond_stage_instruction_key) - instructions_img = log_txt_as_img((instruction_img_size, instruction_img_size), instructions) - log['instructions'] = instructions_img - - if plot_progressive_rows: - with self.ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - return log - - @torch.no_grad() - def inpaint(self, image, instruction, num_steps=50, device="cuda", return_pil=True, seed=0): - assert len(image.shape) == 4 and image.shape[0] == 1, "Input image should be a tensor object with batch size 1" - assert isinstance(instruction, str), "Input instruction type should be String" - assert self.model.conditioning_key == "hybrid", "Inpaint function is only available for hybrid conditioning" - - image = image.to(device) - sampler = DDIMSampler(self, device=device) - - seed_everything(seed) - with torch.no_grad(): - with self.ema_scope(): - c = self.get_first_stage_encoding(self.cond_stage_model.encode(image)) - shape = c.shape[1:] - instruction_embedding = self.cond_stage_instruction_embedder(None, [instruction]) - c = {'c_concat': c, 'c_crossattn': instruction_embedding} - batch_size=c["c_concat"].shape[0] - output_latent, _ = sampler.sample(S=num_steps, - conditioning=c, - batch_size=batch_size, - shape=shape, - verbose=False) - output_image_tensor = self.decode_first_stage(output_latent)[0] - output_image_tensor = torch.clip(output_image_tensor, -1, 1) - output_image_np = ((output_image_tensor + 1) * 127.5).cpu().numpy() - output_image = Image.fromarray(output_image_np.transpose(1,2,0).astype(np.uint8)) - - if return_pil: - return output_image - return output_image_tensor - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.cond_stage_instruction_embedder_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditionaer (instruction embedder) params!") - params = params + list(self.cond_stage_instruction_embedder.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm'] - self.attn_dict = None - self.keep_attn_maps = False - - def keep_attn_map_dict(self, keep_attn_maps): - self.keep_attn_maps = keep_attn_maps - if keep_attn_maps: - if self.attn_dict is None: - self.attn_dict = {} - else: - self.attn_dict.clear() - else: - self.attn_dict = None - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_mask: list = None, index=None): - if self.keep_attn_maps: - assert index is not None - if index not in self.attn_dict: - self.attn_dict[index] = {} - else: - raise Exception("Attention maps of the current time index has already been assigned.") - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - if not isinstance(c_concat, list): - c_concat = [c_concat] - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - cc = torch.cat(c_crossattn, 1) - if self.keep_attn_maps: - out = self.diffusion_model(x, t, context=cc, attn_dict=self.attn_dict[index]) - else: - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - if not isinstance(c_concat, list): - c_concat = [c_concat] - if not isinstance(c_crossattn, list): - c_crossattn = [c_crossattn] - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - if self.keep_attn_maps: - out = self.diffusion_model(xc, t, context=cc, attn_dict=self.attn_dict[index]) - else: - out = self.diffusion_model(xc, t, context=cc) - else: - raise NotImplementedError() - return out diff --git a/spaces/adirik/stylemc-demo/README.md b/spaces/adirik/stylemc-demo/README.md deleted file mode 100644 index f950eaeee6f5178da18f00e74e92aa9e8c921e94..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: StyleMC Demo -emoji: 🏢 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/afcruzs/perceiver-image-classification-spanish/app.py b/spaces/afcruzs/perceiver-image-classification-spanish/app.py deleted file mode 100644 index b56c9edadef0f4dd56d81e6dc6675e0a1e76f63b..0000000000000000000000000000000000000000 --- a/spaces/afcruzs/perceiver-image-classification-spanish/app.py +++ /dev/null @@ -1,38 +0,0 @@ -import gradio as gr -from transformers import ImageClassificationPipeline, PerceiverForImageClassificationConvProcessing, PerceiverFeatureExtractor -import torch - -torch.hub.download_url_to_file('http://images.cocodataset.org/val2017/000000039769.jpg', 'cats.jpg') -torch.hub.download_url_to_file('https://storage.googleapis.com/perceiver_io/dalmation.jpg', 'dog.jpg') - -feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-conv") -model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv") - -image_pipe = ImageClassificationPipeline(model=model, feature_extractor=feature_extractor) - -with open('labels_translation.txt') as f: - labels_translation = [x.strip() for x in f.readlines()] - -with open('english_labels.txt') as f: - english_labels = [x.strip() for x in f.readlines()] - -english_to_spanish = {a: b for a, b in zip(english_labels, labels_translation)} - -def classify_image(image): - results = image_pipe(image) - # convert to format Gradio expects - output = {} - for prediction in results: - predicted_label = english_to_spanish[prediction['label']] - score = prediction['score'] - output[predicted_label] = score - return output - -image = gr.inputs.Image(type="pil") -label = gr.outputs.Label(num_top_classes=5) -examples = [["cats.jpg"], ["dog.jpg"]] -title = "Interactive demo: Perceiver for image classification" -description = "Demo for classifying images with Perceiver IO. To use it, simply upload an image or use the example images below and click 'submit' to let the model predict the 5 most probable ImageNet classes. Results will show up in a few seconds. This is based on this space: This space is based on: https://huggingface.co/spaces/nielsr/perceiver-image-classification, image net labels are machine translated from english to spanish." -article = "

    Perceiver IO: A General Architecture for Structured Inputs & Outputs | Official blog

    " - -gr.Interface(fn=classify_image, inputs=image, outputs=label, title=title, description=description, examples=examples, enable_queue=True).launch(debug=True) \ No newline at end of file diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/utterance.py b/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/utterance.py deleted file mode 100644 index 0768c3420f422a7464f305b4c1fb6752c57ceda7..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/utterance.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy as np - - -class Utterance: - def __init__(self, frames_fpath, wave_fpath): - self.frames_fpath = frames_fpath - self.wave_fpath = wave_fpath - - def get_frames(self): - return np.load(self.frames_fpath) - - def random_partial(self, n_frames): - """ - Crops the frames into a partial utterance of n_frames - - :param n_frames: The number of frames of the partial utterance - :return: the partial utterance frames and a tuple indicating the start and end of the - partial utterance in the complete utterance. - """ - frames = self.get_frames() - if frames.shape[0] == n_frames: - start = 0 - else: - start = np.random.randint(0, frames.shape[0] - n_frames) - end = start + n_frames - return frames[start:end], (start, end) \ No newline at end of file diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Models/Optimizers/RAdam.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/Models/Optimizers/RAdam.py deleted file mode 100644 index b74642c2f8870d37d0faa9a4824f2bb8c5fbe331..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Models/Optimizers/RAdam.py +++ /dev/null @@ -1,247 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. - -import math -import torch -from torch.optim.optimizer import Optimizer, required - - -class RAdam(Optimizer): - """ - @article{liu2019radam, - title={On the Variance of the Adaptive Learning Rate and Beyond}, - author={Liu, Liyuan and Jiang, Haoming and He, Pengcheng and Chen, Weizhu and Liu, Xiaodong and Gao, Jianfeng and Han, Jiawei}, - journal={arXiv preprint arXiv:1908.03265}, - year={2019} - } - """ - - def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0): - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - self.buffer = [[None, None, None] for ind in range(10)] - super(RAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(RAdam, self).__setstate__(state) - - def step(self, closure=None): - - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError("RAdam does not support sparse gradients") - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state["step"] = 0 - state["exp_avg"] = torch.zeros_like(p_data_fp32) - state["exp_avg_sq"] = torch.zeros_like(p_data_fp32) - else: - state["exp_avg"] = state["exp_avg"].type_as(p_data_fp32) - state["exp_avg_sq"] = state["exp_avg_sq"].type_as(p_data_fp32) - - exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"] - beta1, beta2 = group["betas"] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state["step"] += 1 - buffered = self.buffer[int(state["step"] % 10)] - if state["step"] == buffered[0]: - N_sma, step_size = buffered[1], buffered[2] - else: - buffered[0] = state["step"] - beta2_t = beta2 ** state["step"] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state["step"] * beta2_t / (1 - beta2_t) - buffered[1] = N_sma - - # more conservative since it's an approximated value - if N_sma >= 5: - step_size = ( - group["lr"] - * math.sqrt( - (1 - beta2_t) - * (N_sma - 4) - / (N_sma_max - 4) - * (N_sma - 2) - / N_sma - * N_sma_max - / (N_sma_max - 2) - ) - / (1 - beta1 ** state["step"]) - ) - else: - step_size = group["lr"] / (1 - beta1 ** state["step"]) - buffered[2] = step_size - - if group["weight_decay"] != 0: - p_data_fp32.add_(-group["weight_decay"] * group["lr"], p_data_fp32) - - # more conservative since it's an approximated value - if N_sma >= 5: - denom = exp_avg_sq.sqrt().add_(group["eps"]) - p_data_fp32.addcdiv_(-step_size, exp_avg, denom) - else: - p_data_fp32.add_(-step_size, exp_avg) - - p.data.copy_(p_data_fp32) - - return loss - - -class PlainRAdam(Optimizer): - def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0): - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - - super(PlainRAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(PlainRAdam, self).__setstate__(state) - - def step(self, closure=None): - - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError("RAdam does not support sparse gradients") - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state["step"] = 0 - state["exp_avg"] = torch.zeros_like(p_data_fp32) - state["exp_avg_sq"] = torch.zeros_like(p_data_fp32) - else: - state["exp_avg"] = state["exp_avg"].type_as(p_data_fp32) - state["exp_avg_sq"] = state["exp_avg_sq"].type_as(p_data_fp32) - - exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"] - beta1, beta2 = group["betas"] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state["step"] += 1 - beta2_t = beta2 ** state["step"] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state["step"] * beta2_t / (1 - beta2_t) - - if group["weight_decay"] != 0: - p_data_fp32.add_(-group["weight_decay"] * group["lr"], p_data_fp32) - - # more conservative since it's an approximated value - if N_sma >= 5: - step_size = ( - group["lr"] - * math.sqrt( - (1 - beta2_t) - * (N_sma - 4) - / (N_sma_max - 4) - * (N_sma - 2) - / N_sma - * N_sma_max - / (N_sma_max - 2) - ) - / (1 - beta1 ** state["step"]) - ) - denom = exp_avg_sq.sqrt().add_(group["eps"]) - p_data_fp32.addcdiv_(-step_size, exp_avg, denom) - else: - step_size = group["lr"] / (1 - beta1 ** state["step"]) - p_data_fp32.add_(-step_size, exp_avg) - - p.data.copy_(p_data_fp32) - - return loss - - -class AdamW(Optimizer): - def __init__( - self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, warmup=0 - ): - defaults = dict( - lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, warmup=warmup - ) - super(AdamW, self).__init__(params, defaults) - - def __setstate__(self, state): - super(AdamW, self).__setstate__(state) - - def step(self, closure=None): - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError( - "Adam does not support sparse gradients, please consider SparseAdam instead" - ) - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state["step"] = 0 - state["exp_avg"] = torch.zeros_like(p_data_fp32) - state["exp_avg_sq"] = torch.zeros_like(p_data_fp32) - else: - state["exp_avg"] = state["exp_avg"].type_as(p_data_fp32) - state["exp_avg_sq"] = state["exp_avg_sq"].type_as(p_data_fp32) - - exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"] - beta1, beta2 = group["betas"] - - state["step"] += 1 - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - denom = exp_avg_sq.sqrt().add_(group["eps"]) - bias_correction1 = 1 - beta1 ** state["step"] - bias_correction2 = 1 - beta2 ** state["step"] - - if group["warmup"] > state["step"]: - scheduled_lr = 1e-8 + state["step"] * group["lr"] / group["warmup"] - else: - scheduled_lr = group["lr"] - - step_size = group["lr"] * math.sqrt(bias_correction2) / bias_correction1 - - if group["weight_decay"] != 0: - p_data_fp32.add_(-group["weight_decay"] * scheduled_lr, p_data_fp32) - - p_data_fp32.addcdiv_(-step_size, exp_avg, denom) - - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/akhaliq/hubert-xlarge-ls960-ft/README.md b/spaces/akhaliq/hubert-xlarge-ls960-ft/README.md deleted file mode 100644 index e4f85f8d0c59ff12d3ac765af6cf56b37241ad4d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/hubert-xlarge-ls960-ft/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Hubert-xlarge-ls960-ft -emoji: 📚 -colorFrom: purple -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/alecinvan/image-captioning-tts/app.py b/spaces/alecinvan/image-captioning-tts/app.py deleted file mode 100644 index 723e36e4f218c1c4c9ba38b8b101e1bd7208277e..0000000000000000000000000000000000000000 --- a/spaces/alecinvan/image-captioning-tts/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -from transformers import BlipForConditionalGeneration, BlipProcessor -import torch -import tempfile -from gtts import gTTS - -# Load models -device = "cpu" -processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") -model_image_captioning = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to(device) - -def generate_caption_tts(image): - - inputs = processor(images=image, return_tensors="pt") - inputs["max_length"] = 20 - inputs["num_beams"] = 5 - outputs = model_image_captioning.generate(**inputs) - - caption = processor.batch_decode(outputs, skip_special_tokens=True)[0] - - speech = gTTS(caption, lang="en") - tmp_file = tempfile.mkstemp()[1] - speech.save(tmp_file) - - return (caption, tmp_file) - - -title ="李飒博士作品 - AI图像理解交互机器人" - -description = "BLPM模型:引导性语言图像预训练以实现统一视觉语言理解和生成。 请上传您的图像" - -iface = gr.Interface( - fn=generate_caption_tts, - title=title, - description=description, - inputs=gr.inputs.Image(shape=(224,224)), - outputs=["text", "audio"] -) - - -#iface.launch(share=True, debug=True) -iface.launch() \ No newline at end of file diff --git a/spaces/alexandrainst/zero-shot-classification/app.py b/spaces/alexandrainst/zero-shot-classification/app.py deleted file mode 100644 index 69140b34557d44f9184c7fd8b9d73b6f83fbe717..0000000000000000000000000000000000000000 --- a/spaces/alexandrainst/zero-shot-classification/app.py +++ /dev/null @@ -1,263 +0,0 @@ -"""Gradio app that showcases Scandinavian zero-shot text classification models.""" - -from typing import Dict, Tuple -import gradio as gr -from gradio.components import Dropdown, Textbox, Row, Column, Button, Label, Markdown -from types import MethodType -from transformers import pipeline, AutoModelForSequenceClassification, AutoTokenizer -from luga import language as detect_language -import torch -import re -import os -import torch._dynamo - - -def main(): - # Disable tokenizers parallelism - os.environ["TOKENIZERS_PARALLELISM"] = "false" - - # Load the zero-shot classification pipeline - global classifier, model, tokenizer - model_id = "alexandrainst/scandi-nli-large" - model = AutoModelForSequenceClassification.from_pretrained(model_id) - tokenizer = AutoTokenizer.from_pretrained(model_id) - model = torch.compile(model=model, backend="aot_eager") - model.eval() - classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer) - classifier.get_inference_context = MethodType( - lambda self: torch.no_grad, classifier - ) - - # Create dictionary of descriptions for each task, containing the hypothesis template - # and candidate labels - task_configs: Dict[str, Tuple[str, str, str, str, str, str]] = { - "Sentiment classification": ( - "Dette eksempel er {}.", - "positivt, negativt, neutralt", - "Detta exempel är {}.", - "positivt, negativt, neutralt", - "Dette eksemplet er {}.", - "positivt, negativt, nøytralt", - ), - "News topic classification": ( - "Denne nyhedsartikel handler primært om {}.", - "krig, politik, uddannelse, sundhed, økonomi, mode, sport", - "Den här nyhetsartikeln handlar främst om {}.", - "krig, politik, utbildning, hälsa, ekonomi, mode, sport", - "Denne nyhetsartikkelen handler først og fremst om {}.", - "krig, politikk, utdanning, helse, økonomi, mote, sport", - ), - "Spam detection": ( - "Denne e-mail ligner {}.", - "en spam e-mail, ikke en spam e-mail", - "Det här e-postmeddelandet ser {}.", - "ut som ett skräppostmeddelande, inte ut som ett skräppostmeddelande", - "Denne e-posten ser {}.", - "ut som en spam-e-post, ikke ut som en spam-e-post", - ), - "Product feedback detection": ( - "Denne kommentar er {}.", - "en anmeldelse af et produkt, ikke en anmeldelse af et produkt", - "Den här kommentaren är {}.", - "en recension av en produkt, inte en recension av en produkt", - "Denne kommentaren er {}.", - "en anmeldelse av et produkt, ikke en anmeldelse av et produkt", - ), - "Define your own task!": ( - "Dette eksempel er {}.", - "", - "Detta exempel är {}.", - "", - "Dette eksemplet er {}.", - "", - ), - } - - def set_task_setup(task: str) -> Tuple[str, str, str, str, str, str]: - return task_configs[task] - - with gr.Blocks() as demo: - - # Create title and description - Markdown("# Scandinavian Zero-shot Text Classification") - Markdown(""" - Classify text in Danish, Swedish or Norwegian into categories, without - finetuning on any training data! - - Select one of the tasks from the dropdown menu on the left, and try - entering some input text (in Danish, Swedish or Norwegian) in the input - text box and press submit, to see the model in action! The labels are - generated by putting in each candidate label into the hypothesis template, - and then running the classifier on each label separately. Feel free to - change the "hypothesis template" and "candidate labels" on the left as you - please as well, and try to come up with your own tasks too 😊 - - _Also, be patient, as this demo is running on a CPU!_ - """) - - with Row(): - - # Input column - with Column(): - - # Create a dropdown menu for the task - dropdown = Dropdown( - label="Task", - choices=[ - "Sentiment classification", - "News topic classification", - "Spam detection", - "Product feedback detection", - "Define your own task!", - ], - value="Sentiment classification", - ) - - with Row(variant="compact"): - da_hypothesis_template = Textbox( - label="Danish hypothesis template", - value="Dette eksempel er {}.", - ) - da_candidate_labels = Textbox( - label="Danish candidate labels (comma separated)", - value="positivt, negativt, neutralt", - ) - - with Row(variant="compact"): - sv_hypothesis_template = Textbox( - label="Swedish hypothesis template", - value="Detta exempel är {}.", - ) - sv_candidate_labels = Textbox( - label="Swedish candidate labels (comma separated)", - value="positivt, negativt, neutralt", - ) - - with Row(variant="compact"): - no_hypothesis_template = Textbox( - label="Norwegian hypothesis template", - value="Dette eksemplet er {}.", - ) - no_candidate_labels = Textbox( - label="Norwegian candidate labels (comma separated)", - value="positivt, negativt, nøytralt", - ) - - # When a new task is chosen, update the description - dropdown.change( - fn=set_task_setup, - inputs=dropdown, - outputs=[ - da_hypothesis_template, - da_candidate_labels, - sv_hypothesis_template, - sv_candidate_labels, - no_hypothesis_template, - no_candidate_labels, - ], - ) - - # Output column - with Column(): - - # Create a text box for the input text - input_textbox = Textbox( - label="Input text", value="Jeg er helt vild med fodbolden 😊" - ) - - with Row(): - clear_btn = Button(value="Clear") - submit_btn = Button(value="Submit", variant="primary") - - # When the clear button is clicked, clear the input text box - clear_btn.click( - fn=lambda _: "", inputs=input_textbox, outputs=input_textbox - ) - - - with Column(): - - # Create output text box - output_textbox = Label(label="Result") - - # When the submit button is clicked, run the classifier on the input text - # and display the result in the output text box - submit_btn.click( - fn=classification, - inputs=[ - input_textbox, - da_hypothesis_template, - da_candidate_labels, - sv_hypothesis_template, - sv_candidate_labels, - no_hypothesis_template, - no_candidate_labels, - ], - outputs=output_textbox, - ) - - # Run the app - demo.launch(width=.5) - - -@torch.compile() -def classification( - doc: str, - da_hypothesis_template: str, - da_candidate_labels: str, - sv_hypothesis_template: str, - sv_candidate_labels: str, - no_hypothesis_template: str, - no_candidate_labels: str, - ) -> Dict[str, float]: - """Classify text into categories. - - Args: - doc (str): - Text to classify. - da_hypothesis_template (str): - Template for the hypothesis to be used for Danish classification. - da_candidate_labels (str): - Comma-separated list of candidate labels for Danish classification. - sv_hypothesis_template (str): - Template for the hypothesis to be used for Swedish classification. - sv_candidate_labels (str): - Comma-separated list of candidate labels for Swedish classification. - no_hypothesis_template (str): - Template for the hypothesis to be used for Norwegian classification. - no_candidate_labels (str): - Comma-separated list of candidate labels for Norwegian classification. - - Returns: - dict of str to float: - The predicted label and the confidence score. - """ - # Detect the language of the text - language = detect_language(doc.replace('\n', ' ')).name - - # Set the hypothesis template and candidate labels based on the detected language - if language == "sv": - hypothesis_template = sv_hypothesis_template - candidate_labels = re.split(r', *', sv_candidate_labels) - elif language == "no": - hypothesis_template = no_hypothesis_template - candidate_labels = re.split(r', *', no_candidate_labels) - else: - hypothesis_template = da_hypothesis_template - candidate_labels = re.split(r', *', da_candidate_labels) - - # Run the classifier on the text - result = classifier( - doc, - candidate_labels=candidate_labels, - hypothesis_template=hypothesis_template, - ) - - print(result) - - # Return the predicted label - return {lbl: score for lbl, score in zip(result["labels"], result["scores"])} - - -if __name__ == "__main__": - main() diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/index/package_finder.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/index/package_finder.py deleted file mode 100644 index 223d06df67e21ff59ae191613d8c905ea646e877..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/index/package_finder.py +++ /dev/null @@ -1,1004 +0,0 @@ -"""Routines related to PyPI, indexes""" - -# The following comment should be removed at some point in the future. -# mypy: strict-optional=False - -import functools -import itertools -import logging -import re -from typing import FrozenSet, Iterable, List, Optional, Set, Tuple, Union - -from pip._vendor.packaging import specifiers -from pip._vendor.packaging.tags import Tag -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.packaging.version import _BaseVersion -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.exceptions import ( - BestVersionAlreadyInstalled, - DistributionNotFound, - InvalidWheelFilename, - UnsupportedWheel, -) -from pip._internal.index.collector import LinkCollector, parse_links -from pip._internal.models.candidate import InstallationCandidate -from pip._internal.models.format_control import FormatControl -from pip._internal.models.link import Link -from pip._internal.models.search_scope import SearchScope -from pip._internal.models.selection_prefs import SelectionPreferences -from pip._internal.models.target_python import TargetPython -from pip._internal.models.wheel import Wheel -from pip._internal.req import InstallRequirement -from pip._internal.utils._log import getLogger -from pip._internal.utils.filetypes import WHEEL_EXTENSION -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.logging import indent_log -from pip._internal.utils.misc import build_netloc -from pip._internal.utils.packaging import check_requires_python -from pip._internal.utils.unpacking import SUPPORTED_EXTENSIONS - -__all__ = ["FormatControl", "BestCandidateResult", "PackageFinder"] - - -logger = getLogger(__name__) - -BuildTag = Union[Tuple[()], Tuple[int, str]] -CandidateSortingKey = Tuple[int, int, int, _BaseVersion, Optional[int], BuildTag] - - -def _check_link_requires_python( - link: Link, - version_info: Tuple[int, int, int], - ignore_requires_python: bool = False, -) -> bool: - """ - Return whether the given Python version is compatible with a link's - "Requires-Python" value. - - :param version_info: A 3-tuple of ints representing the Python - major-minor-micro version to check. - :param ignore_requires_python: Whether to ignore the "Requires-Python" - value if the given Python version isn't compatible. - """ - try: - is_compatible = check_requires_python( - link.requires_python, - version_info=version_info, - ) - except specifiers.InvalidSpecifier: - logger.debug( - "Ignoring invalid Requires-Python (%r) for link: %s", - link.requires_python, - link, - ) - else: - if not is_compatible: - version = ".".join(map(str, version_info)) - if not ignore_requires_python: - logger.verbose( - "Link requires a different Python (%s not in: %r): %s", - version, - link.requires_python, - link, - ) - return False - - logger.debug( - "Ignoring failed Requires-Python check (%s not in: %r) for link: %s", - version, - link.requires_python, - link, - ) - - return True - - -class LinkEvaluator: - - """ - Responsible for evaluating links for a particular project. - """ - - _py_version_re = re.compile(r"-py([123]\.?[0-9]?)$") - - # Don't include an allow_yanked default value to make sure each call - # site considers whether yanked releases are allowed. This also causes - # that decision to be made explicit in the calling code, which helps - # people when reading the code. - def __init__( - self, - project_name: str, - canonical_name: str, - formats: FrozenSet[str], - target_python: TargetPython, - allow_yanked: bool, - ignore_requires_python: Optional[bool] = None, - ) -> None: - """ - :param project_name: The user supplied package name. - :param canonical_name: The canonical package name. - :param formats: The formats allowed for this package. Should be a set - with 'binary' or 'source' or both in it. - :param target_python: The target Python interpreter to use when - evaluating link compatibility. This is used, for example, to - check wheel compatibility, as well as when checking the Python - version, e.g. the Python version embedded in a link filename - (or egg fragment) and against an HTML link's optional PEP 503 - "data-requires-python" attribute. - :param allow_yanked: Whether files marked as yanked (in the sense - of PEP 592) are permitted to be candidates for install. - :param ignore_requires_python: Whether to ignore incompatible - PEP 503 "data-requires-python" values in HTML links. Defaults - to False. - """ - if ignore_requires_python is None: - ignore_requires_python = False - - self._allow_yanked = allow_yanked - self._canonical_name = canonical_name - self._ignore_requires_python = ignore_requires_python - self._formats = formats - self._target_python = target_python - - self.project_name = project_name - - def evaluate_link(self, link: Link) -> Tuple[bool, Optional[str]]: - """ - Determine whether a link is a candidate for installation. - - :return: A tuple (is_candidate, result), where `result` is (1) a - version string if `is_candidate` is True, and (2) if - `is_candidate` is False, an optional string to log the reason - the link fails to qualify. - """ - version = None - if link.is_yanked and not self._allow_yanked: - reason = link.yanked_reason or "" - return (False, f"yanked for reason: {reason}") - - if link.egg_fragment: - egg_info = link.egg_fragment - ext = link.ext - else: - egg_info, ext = link.splitext() - if not ext: - return (False, "not a file") - if ext not in SUPPORTED_EXTENSIONS: - return (False, f"unsupported archive format: {ext}") - if "binary" not in self._formats and ext == WHEEL_EXTENSION: - reason = "No binaries permitted for {}".format(self.project_name) - return (False, reason) - if "macosx10" in link.path and ext == ".zip": - return (False, "macosx10 one") - if ext == WHEEL_EXTENSION: - try: - wheel = Wheel(link.filename) - except InvalidWheelFilename: - return (False, "invalid wheel filename") - if canonicalize_name(wheel.name) != self._canonical_name: - reason = "wrong project name (not {})".format(self.project_name) - return (False, reason) - - supported_tags = self._target_python.get_tags() - if not wheel.supported(supported_tags): - # Include the wheel's tags in the reason string to - # simplify troubleshooting compatibility issues. - file_tags = wheel.get_formatted_file_tags() - reason = ( - "none of the wheel's tags ({}) are compatible " - "(run pip debug --verbose to show compatible tags)".format( - ", ".join(file_tags) - ) - ) - return (False, reason) - - version = wheel.version - - # This should be up by the self.ok_binary check, but see issue 2700. - if "source" not in self._formats and ext != WHEEL_EXTENSION: - reason = f"No sources permitted for {self.project_name}" - return (False, reason) - - if not version: - version = _extract_version_from_fragment( - egg_info, - self._canonical_name, - ) - if not version: - reason = f"Missing project version for {self.project_name}" - return (False, reason) - - match = self._py_version_re.search(version) - if match: - version = version[: match.start()] - py_version = match.group(1) - if py_version != self._target_python.py_version: - return (False, "Python version is incorrect") - - supports_python = _check_link_requires_python( - link, - version_info=self._target_python.py_version_info, - ignore_requires_python=self._ignore_requires_python, - ) - if not supports_python: - # Return None for the reason text to suppress calling - # _log_skipped_link(). - return (False, None) - - logger.debug("Found link %s, version: %s", link, version) - - return (True, version) - - -def filter_unallowed_hashes( - candidates: List[InstallationCandidate], - hashes: Hashes, - project_name: str, -) -> List[InstallationCandidate]: - """ - Filter out candidates whose hashes aren't allowed, and return a new - list of candidates. - - If at least one candidate has an allowed hash, then all candidates with - either an allowed hash or no hash specified are returned. Otherwise, - the given candidates are returned. - - Including the candidates with no hash specified when there is a match - allows a warning to be logged if there is a more preferred candidate - with no hash specified. Returning all candidates in the case of no - matches lets pip report the hash of the candidate that would otherwise - have been installed (e.g. permitting the user to more easily update - their requirements file with the desired hash). - """ - if not hashes: - logger.debug( - "Given no hashes to check %s links for project %r: " - "discarding no candidates", - len(candidates), - project_name, - ) - # Make sure we're not returning back the given value. - return list(candidates) - - matches_or_no_digest = [] - # Collect the non-matches for logging purposes. - non_matches = [] - match_count = 0 - for candidate in candidates: - link = candidate.link - if not link.has_hash: - pass - elif link.is_hash_allowed(hashes=hashes): - match_count += 1 - else: - non_matches.append(candidate) - continue - - matches_or_no_digest.append(candidate) - - if match_count: - filtered = matches_or_no_digest - else: - # Make sure we're not returning back the given value. - filtered = list(candidates) - - if len(filtered) == len(candidates): - discard_message = "discarding no candidates" - else: - discard_message = "discarding {} non-matches:\n {}".format( - len(non_matches), - "\n ".join(str(candidate.link) for candidate in non_matches), - ) - - logger.debug( - "Checked %s links for project %r against %s hashes " - "(%s matches, %s no digest): %s", - len(candidates), - project_name, - hashes.digest_count, - match_count, - len(matches_or_no_digest) - match_count, - discard_message, - ) - - return filtered - - -class CandidatePreferences: - - """ - Encapsulates some of the preferences for filtering and sorting - InstallationCandidate objects. - """ - - def __init__( - self, - prefer_binary: bool = False, - allow_all_prereleases: bool = False, - ) -> None: - """ - :param allow_all_prereleases: Whether to allow all pre-releases. - """ - self.allow_all_prereleases = allow_all_prereleases - self.prefer_binary = prefer_binary - - -class BestCandidateResult: - """A collection of candidates, returned by `PackageFinder.find_best_candidate`. - - This class is only intended to be instantiated by CandidateEvaluator's - `compute_best_candidate()` method. - """ - - def __init__( - self, - candidates: List[InstallationCandidate], - applicable_candidates: List[InstallationCandidate], - best_candidate: Optional[InstallationCandidate], - ) -> None: - """ - :param candidates: A sequence of all available candidates found. - :param applicable_candidates: The applicable candidates. - :param best_candidate: The most preferred candidate found, or None - if no applicable candidates were found. - """ - assert set(applicable_candidates) <= set(candidates) - - if best_candidate is None: - assert not applicable_candidates - else: - assert best_candidate in applicable_candidates - - self._applicable_candidates = applicable_candidates - self._candidates = candidates - - self.best_candidate = best_candidate - - def iter_all(self) -> Iterable[InstallationCandidate]: - """Iterate through all candidates.""" - return iter(self._candidates) - - def iter_applicable(self) -> Iterable[InstallationCandidate]: - """Iterate through the applicable candidates.""" - return iter(self._applicable_candidates) - - -class CandidateEvaluator: - - """ - Responsible for filtering and sorting candidates for installation based - on what tags are valid. - """ - - @classmethod - def create( - cls, - project_name: str, - target_python: Optional[TargetPython] = None, - prefer_binary: bool = False, - allow_all_prereleases: bool = False, - specifier: Optional[specifiers.BaseSpecifier] = None, - hashes: Optional[Hashes] = None, - ) -> "CandidateEvaluator": - """Create a CandidateEvaluator object. - - :param target_python: The target Python interpreter to use when - checking compatibility. If None (the default), a TargetPython - object will be constructed from the running Python. - :param specifier: An optional object implementing `filter` - (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable - versions. - :param hashes: An optional collection of allowed hashes. - """ - if target_python is None: - target_python = TargetPython() - if specifier is None: - specifier = specifiers.SpecifierSet() - - supported_tags = target_python.get_tags() - - return cls( - project_name=project_name, - supported_tags=supported_tags, - specifier=specifier, - prefer_binary=prefer_binary, - allow_all_prereleases=allow_all_prereleases, - hashes=hashes, - ) - - def __init__( - self, - project_name: str, - supported_tags: List[Tag], - specifier: specifiers.BaseSpecifier, - prefer_binary: bool = False, - allow_all_prereleases: bool = False, - hashes: Optional[Hashes] = None, - ) -> None: - """ - :param supported_tags: The PEP 425 tags supported by the target - Python in order of preference (most preferred first). - """ - self._allow_all_prereleases = allow_all_prereleases - self._hashes = hashes - self._prefer_binary = prefer_binary - self._project_name = project_name - self._specifier = specifier - self._supported_tags = supported_tags - # Since the index of the tag in the _supported_tags list is used - # as a priority, precompute a map from tag to index/priority to be - # used in wheel.find_most_preferred_tag. - self._wheel_tag_preferences = { - tag: idx for idx, tag in enumerate(supported_tags) - } - - def get_applicable_candidates( - self, - candidates: List[InstallationCandidate], - ) -> List[InstallationCandidate]: - """ - Return the applicable candidates from a list of candidates. - """ - # Using None infers from the specifier instead. - allow_prereleases = self._allow_all_prereleases or None - specifier = self._specifier - versions = { - str(v) - for v in specifier.filter( - # We turn the version object into a str here because otherwise - # when we're debundled but setuptools isn't, Python will see - # packaging.version.Version and - # pkg_resources._vendor.packaging.version.Version as different - # types. This way we'll use a str as a common data interchange - # format. If we stop using the pkg_resources provided specifier - # and start using our own, we can drop the cast to str(). - (str(c.version) for c in candidates), - prereleases=allow_prereleases, - ) - } - - # Again, converting version to str to deal with debundling. - applicable_candidates = [c for c in candidates if str(c.version) in versions] - - filtered_applicable_candidates = filter_unallowed_hashes( - candidates=applicable_candidates, - hashes=self._hashes, - project_name=self._project_name, - ) - - return sorted(filtered_applicable_candidates, key=self._sort_key) - - def _sort_key(self, candidate: InstallationCandidate) -> CandidateSortingKey: - """ - Function to pass as the `key` argument to a call to sorted() to sort - InstallationCandidates by preference. - - Returns a tuple such that tuples sorting as greater using Python's - default comparison operator are more preferred. - - The preference is as follows: - - First and foremost, candidates with allowed (matching) hashes are - always preferred over candidates without matching hashes. This is - because e.g. if the only candidate with an allowed hash is yanked, - we still want to use that candidate. - - Second, excepting hash considerations, candidates that have been - yanked (in the sense of PEP 592) are always less preferred than - candidates that haven't been yanked. Then: - - If not finding wheels, they are sorted by version only. - If finding wheels, then the sort order is by version, then: - 1. existing installs - 2. wheels ordered via Wheel.support_index_min(self._supported_tags) - 3. source archives - If prefer_binary was set, then all wheels are sorted above sources. - - Note: it was considered to embed this logic into the Link - comparison operators, but then different sdist links - with the same version, would have to be considered equal - """ - valid_tags = self._supported_tags - support_num = len(valid_tags) - build_tag: BuildTag = () - binary_preference = 0 - link = candidate.link - if link.is_wheel: - # can raise InvalidWheelFilename - wheel = Wheel(link.filename) - try: - pri = -( - wheel.find_most_preferred_tag( - valid_tags, self._wheel_tag_preferences - ) - ) - except ValueError: - raise UnsupportedWheel( - "{} is not a supported wheel for this platform. It " - "can't be sorted.".format(wheel.filename) - ) - if self._prefer_binary: - binary_preference = 1 - if wheel.build_tag is not None: - match = re.match(r"^(\d+)(.*)$", wheel.build_tag) - build_tag_groups = match.groups() - build_tag = (int(build_tag_groups[0]), build_tag_groups[1]) - else: # sdist - pri = -(support_num) - has_allowed_hash = int(link.is_hash_allowed(self._hashes)) - yank_value = -1 * int(link.is_yanked) # -1 for yanked. - return ( - has_allowed_hash, - yank_value, - binary_preference, - candidate.version, - pri, - build_tag, - ) - - def sort_best_candidate( - self, - candidates: List[InstallationCandidate], - ) -> Optional[InstallationCandidate]: - """ - Return the best candidate per the instance's sort order, or None if - no candidate is acceptable. - """ - if not candidates: - return None - best_candidate = max(candidates, key=self._sort_key) - return best_candidate - - def compute_best_candidate( - self, - candidates: List[InstallationCandidate], - ) -> BestCandidateResult: - """ - Compute and return a `BestCandidateResult` instance. - """ - applicable_candidates = self.get_applicable_candidates(candidates) - - best_candidate = self.sort_best_candidate(applicable_candidates) - - return BestCandidateResult( - candidates, - applicable_candidates=applicable_candidates, - best_candidate=best_candidate, - ) - - -class PackageFinder: - """This finds packages. - - This is meant to match easy_install's technique for looking for - packages, by reading pages and looking for appropriate links. - """ - - def __init__( - self, - link_collector: LinkCollector, - target_python: TargetPython, - allow_yanked: bool, - use_deprecated_html5lib: bool, - format_control: Optional[FormatControl] = None, - candidate_prefs: Optional[CandidatePreferences] = None, - ignore_requires_python: Optional[bool] = None, - ) -> None: - """ - This constructor is primarily meant to be used by the create() class - method and from tests. - - :param format_control: A FormatControl object, used to control - the selection of source packages / binary packages when consulting - the index and links. - :param candidate_prefs: Options to use when creating a - CandidateEvaluator object. - """ - if candidate_prefs is None: - candidate_prefs = CandidatePreferences() - - format_control = format_control or FormatControl(set(), set()) - - self._allow_yanked = allow_yanked - self._candidate_prefs = candidate_prefs - self._ignore_requires_python = ignore_requires_python - self._link_collector = link_collector - self._target_python = target_python - self._use_deprecated_html5lib = use_deprecated_html5lib - - self.format_control = format_control - - # These are boring links that have already been logged somehow. - self._logged_links: Set[Link] = set() - - # Don't include an allow_yanked default value to make sure each call - # site considers whether yanked releases are allowed. This also causes - # that decision to be made explicit in the calling code, which helps - # people when reading the code. - @classmethod - def create( - cls, - link_collector: LinkCollector, - selection_prefs: SelectionPreferences, - target_python: Optional[TargetPython] = None, - *, - use_deprecated_html5lib: bool, - ) -> "PackageFinder": - """Create a PackageFinder. - - :param selection_prefs: The candidate selection preferences, as a - SelectionPreferences object. - :param target_python: The target Python interpreter to use when - checking compatibility. If None (the default), a TargetPython - object will be constructed from the running Python. - """ - if target_python is None: - target_python = TargetPython() - - candidate_prefs = CandidatePreferences( - prefer_binary=selection_prefs.prefer_binary, - allow_all_prereleases=selection_prefs.allow_all_prereleases, - ) - - return cls( - candidate_prefs=candidate_prefs, - link_collector=link_collector, - target_python=target_python, - allow_yanked=selection_prefs.allow_yanked, - format_control=selection_prefs.format_control, - ignore_requires_python=selection_prefs.ignore_requires_python, - use_deprecated_html5lib=use_deprecated_html5lib, - ) - - @property - def target_python(self) -> TargetPython: - return self._target_python - - @property - def search_scope(self) -> SearchScope: - return self._link_collector.search_scope - - @search_scope.setter - def search_scope(self, search_scope: SearchScope) -> None: - self._link_collector.search_scope = search_scope - - @property - def find_links(self) -> List[str]: - return self._link_collector.find_links - - @property - def index_urls(self) -> List[str]: - return self.search_scope.index_urls - - @property - def trusted_hosts(self) -> Iterable[str]: - for host_port in self._link_collector.session.pip_trusted_origins: - yield build_netloc(*host_port) - - @property - def allow_all_prereleases(self) -> bool: - return self._candidate_prefs.allow_all_prereleases - - def set_allow_all_prereleases(self) -> None: - self._candidate_prefs.allow_all_prereleases = True - - @property - def prefer_binary(self) -> bool: - return self._candidate_prefs.prefer_binary - - def set_prefer_binary(self) -> None: - self._candidate_prefs.prefer_binary = True - - def make_link_evaluator(self, project_name: str) -> LinkEvaluator: - canonical_name = canonicalize_name(project_name) - formats = self.format_control.get_allowed_formats(canonical_name) - - return LinkEvaluator( - project_name=project_name, - canonical_name=canonical_name, - formats=formats, - target_python=self._target_python, - allow_yanked=self._allow_yanked, - ignore_requires_python=self._ignore_requires_python, - ) - - def _sort_links(self, links: Iterable[Link]) -> List[Link]: - """ - Returns elements of links in order, non-egg links first, egg links - second, while eliminating duplicates - """ - eggs, no_eggs = [], [] - seen: Set[Link] = set() - for link in links: - if link not in seen: - seen.add(link) - if link.egg_fragment: - eggs.append(link) - else: - no_eggs.append(link) - return no_eggs + eggs - - def _log_skipped_link(self, link: Link, reason: str) -> None: - if link not in self._logged_links: - # Put the link at the end so the reason is more visible and because - # the link string is usually very long. - logger.debug("Skipping link: %s: %s", reason, link) - self._logged_links.add(link) - - def get_install_candidate( - self, link_evaluator: LinkEvaluator, link: Link - ) -> Optional[InstallationCandidate]: - """ - If the link is a candidate for install, convert it to an - InstallationCandidate and return it. Otherwise, return None. - """ - is_candidate, result = link_evaluator.evaluate_link(link) - if not is_candidate: - if result: - self._log_skipped_link(link, reason=result) - return None - - return InstallationCandidate( - name=link_evaluator.project_name, - link=link, - version=result, - ) - - def evaluate_links( - self, link_evaluator: LinkEvaluator, links: Iterable[Link] - ) -> List[InstallationCandidate]: - """ - Convert links that are candidates to InstallationCandidate objects. - """ - candidates = [] - for link in self._sort_links(links): - candidate = self.get_install_candidate(link_evaluator, link) - if candidate is not None: - candidates.append(candidate) - - return candidates - - def process_project_url( - self, project_url: Link, link_evaluator: LinkEvaluator - ) -> List[InstallationCandidate]: - logger.debug( - "Fetching project page and analyzing links: %s", - project_url, - ) - html_page = self._link_collector.fetch_page(project_url) - if html_page is None: - return [] - - page_links = list(parse_links(html_page, self._use_deprecated_html5lib)) - - with indent_log(): - package_links = self.evaluate_links( - link_evaluator, - links=page_links, - ) - - return package_links - - @functools.lru_cache(maxsize=None) - def find_all_candidates(self, project_name: str) -> List[InstallationCandidate]: - """Find all available InstallationCandidate for project_name - - This checks index_urls and find_links. - All versions found are returned as an InstallationCandidate list. - - See LinkEvaluator.evaluate_link() for details on which files - are accepted. - """ - link_evaluator = self.make_link_evaluator(project_name) - - collected_sources = self._link_collector.collect_sources( - project_name=project_name, - candidates_from_page=functools.partial( - self.process_project_url, - link_evaluator=link_evaluator, - ), - ) - - page_candidates_it = itertools.chain.from_iterable( - source.page_candidates() - for sources in collected_sources - for source in sources - if source is not None - ) - page_candidates = list(page_candidates_it) - - file_links_it = itertools.chain.from_iterable( - source.file_links() - for sources in collected_sources - for source in sources - if source is not None - ) - file_candidates = self.evaluate_links( - link_evaluator, - sorted(file_links_it, reverse=True), - ) - - if logger.isEnabledFor(logging.DEBUG) and file_candidates: - paths = [] - for candidate in file_candidates: - assert candidate.link.url # we need to have a URL - try: - paths.append(candidate.link.file_path) - except Exception: - paths.append(candidate.link.url) # it's not a local file - - logger.debug("Local files found: %s", ", ".join(paths)) - - # This is an intentional priority ordering - return file_candidates + page_candidates - - def make_candidate_evaluator( - self, - project_name: str, - specifier: Optional[specifiers.BaseSpecifier] = None, - hashes: Optional[Hashes] = None, - ) -> CandidateEvaluator: - """Create a CandidateEvaluator object to use.""" - candidate_prefs = self._candidate_prefs - return CandidateEvaluator.create( - project_name=project_name, - target_python=self._target_python, - prefer_binary=candidate_prefs.prefer_binary, - allow_all_prereleases=candidate_prefs.allow_all_prereleases, - specifier=specifier, - hashes=hashes, - ) - - @functools.lru_cache(maxsize=None) - def find_best_candidate( - self, - project_name: str, - specifier: Optional[specifiers.BaseSpecifier] = None, - hashes: Optional[Hashes] = None, - ) -> BestCandidateResult: - """Find matches for the given project and specifier. - - :param specifier: An optional object implementing `filter` - (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable - versions. - - :return: A `BestCandidateResult` instance. - """ - candidates = self.find_all_candidates(project_name) - candidate_evaluator = self.make_candidate_evaluator( - project_name=project_name, - specifier=specifier, - hashes=hashes, - ) - return candidate_evaluator.compute_best_candidate(candidates) - - def find_requirement( - self, req: InstallRequirement, upgrade: bool - ) -> Optional[InstallationCandidate]: - """Try to find a Link matching req - - Expects req, an InstallRequirement and upgrade, a boolean - Returns a InstallationCandidate if found, - Raises DistributionNotFound or BestVersionAlreadyInstalled otherwise - """ - hashes = req.hashes(trust_internet=False) - best_candidate_result = self.find_best_candidate( - req.name, - specifier=req.specifier, - hashes=hashes, - ) - best_candidate = best_candidate_result.best_candidate - - installed_version: Optional[_BaseVersion] = None - if req.satisfied_by is not None: - installed_version = req.satisfied_by.version - - def _format_versions(cand_iter: Iterable[InstallationCandidate]) -> str: - # This repeated parse_version and str() conversion is needed to - # handle different vendoring sources from pip and pkg_resources. - # If we stop using the pkg_resources provided specifier and start - # using our own, we can drop the cast to str(). - return ( - ", ".join( - sorted( - {str(c.version) for c in cand_iter}, - key=parse_version, - ) - ) - or "none" - ) - - if installed_version is None and best_candidate is None: - logger.critical( - "Could not find a version that satisfies the requirement %s " - "(from versions: %s)", - req, - _format_versions(best_candidate_result.iter_all()), - ) - - raise DistributionNotFound( - "No matching distribution found for {}".format(req) - ) - - best_installed = False - if installed_version and ( - best_candidate is None or best_candidate.version <= installed_version - ): - best_installed = True - - if not upgrade and installed_version is not None: - if best_installed: - logger.debug( - "Existing installed version (%s) is most up-to-date and " - "satisfies requirement", - installed_version, - ) - else: - logger.debug( - "Existing installed version (%s) satisfies requirement " - "(most up-to-date version is %s)", - installed_version, - best_candidate.version, - ) - return None - - if best_installed: - # We have an existing version, and its the best version - logger.debug( - "Installed version (%s) is most up-to-date (past versions: %s)", - installed_version, - _format_versions(best_candidate_result.iter_applicable()), - ) - raise BestVersionAlreadyInstalled - - logger.debug( - "Using version %s (newest of versions: %s)", - best_candidate.version, - _format_versions(best_candidate_result.iter_applicable()), - ) - return best_candidate - - -def _find_name_version_sep(fragment: str, canonical_name: str) -> int: - """Find the separator's index based on the package's canonical name. - - :param fragment: A + filename "fragment" (stem) or - egg fragment. - :param canonical_name: The package's canonical name. - - This function is needed since the canonicalized name does not necessarily - have the same length as the egg info's name part. An example:: - - >>> fragment = 'foo__bar-1.0' - >>> canonical_name = 'foo-bar' - >>> _find_name_version_sep(fragment, canonical_name) - 8 - """ - # Project name and version must be separated by one single dash. Find all - # occurrences of dashes; if the string in front of it matches the canonical - # name, this is the one separating the name and version parts. - for i, c in enumerate(fragment): - if c != "-": - continue - if canonicalize_name(fragment[:i]) == canonical_name: - return i - raise ValueError(f"{fragment} does not match {canonical_name}") - - -def _extract_version_from_fragment(fragment: str, canonical_name: str) -> Optional[str]: - """Parse the version string from a + filename - "fragment" (stem) or egg fragment. - - :param fragment: The string to parse. E.g. foo-2.1 - :param canonical_name: The canonicalized name of the package this - belongs to. - """ - try: - version_start = _find_name_version_sep(fragment, canonical_name) + 1 - except ValueError: - return None - version = fragment[version_start:] - if not version: - return None - return version diff --git a/spaces/ali-ghamdan/deoldify/fastai/vision/models/xresnet2.py b/spaces/ali-ghamdan/deoldify/fastai/vision/models/xresnet2.py deleted file mode 100644 index 58c1a94154062de89cfe6ee10f1526a0375819d0..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/vision/models/xresnet2.py +++ /dev/null @@ -1,202 +0,0 @@ -import torch.nn as nn -import torch -import math -import torch.utils.model_zoo as model_zoo -from ...torch_core import Module - - -__all__ = ['XResNet', 'xresnet18', 'xresnet34_2', 'xresnet50_2', 'xresnet101', 'xresnet152'] - - -def conv3x3(in_planes, out_planes, stride=1): - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False) - - -class BasicBlock(Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - -def conv2d(ni, nf, stride): - return nn.Sequential(nn.Conv2d(ni, nf, kernel_size=3, stride=stride, padding=1, bias=False), - nn.BatchNorm2d(nf), nn.ReLU(inplace=True)) - -class XResNet(Module): - - def __init__(self, block, layers, c_out=1000): - self.inplanes = 64 - super(XResNet, self).__init__() - self.conv1 = conv2d(3, 32, 2) - self.conv2 = conv2d(32, 32, 1) - self.conv3 = conv2d(32, 64, 1) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - self.avgpool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Linear(512 * block.expansion, c_out) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - for m in self.modules(): - if isinstance(m, BasicBlock): m.bn2.weight = nn.Parameter(torch.zeros_like(m.bn2.weight)) - if isinstance(m, Bottleneck): m.bn3.weight = nn.Parameter(torch.zeros_like(m.bn3.weight)) - if isinstance(m, nn.Linear): m.weight.data.normal_(0, 0.01) - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - layers = [] - if stride==2: layers.append(nn.AvgPool2d(kernel_size=2, stride=2)) - layers += [ - nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=1, bias=False), - nn.BatchNorm2d(planes * block.expansion) ] - downsample = nn.Sequential(*layers) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): layers.append(block(self.inplanes, planes)) - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.conv2(x) - x = self.conv3(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - - return x - - -def xresnet18(pretrained=False, **kwargs): - """Constructs a XResNet-18 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = XResNet(BasicBlock, [2, 2, 2, 2], **kwargs) - if pretrained: model.load_state_dict(model_zoo.load_url(model_urls['xresnet18'])) - return model - - -def xresnet34_2(pretrained=False, **kwargs): - """Constructs a XResNet-34 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = XResNet(BasicBlock, [3, 4, 6, 3], **kwargs) - if pretrained: model.load_state_dict(model_zoo.load_url(model_urls['xresnet34'])) - return model - - -def xresnet50_2(pretrained=False, **kwargs): - """Constructs a XResNet-50 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = XResNet(Bottleneck, [3, 4, 6, 3], **kwargs) - if pretrained: model.load_state_dict(model_zoo.load_url(model_urls['xresnet50'])) - return model - - -def xresnet101(pretrained=False, **kwargs): - """Constructs a XResNet-101 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = XResNet(Bottleneck, [3, 4, 23, 3], **kwargs) - if pretrained: model.load_state_dict(model_zoo.load_url(model_urls['xresnet101'])) - return model - - -def xresnet152(pretrained=False, **kwargs): - """Constructs a XResNet-152 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = XResNet(Bottleneck, [3, 8, 36, 3], **kwargs) - if pretrained: model.load_state_dict(model_zoo.load_url(model_urls['xresnet152'])) - return model - diff --git a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/model/msa_transformer.py b/spaces/alphunt/diffdock-alphunt-demo/esm/esm/model/msa_transformer.py deleted file mode 100644 index ef21cf550231bb9b0eb7f0933bec43d9d6fbbeba..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/model/msa_transformer.py +++ /dev/null @@ -1,238 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - -from ..modules import ( - AxialTransformerLayer, - LearnedPositionalEmbedding, - RobertaLMHead, - ESM1bLayerNorm, - ContactPredictionHead, -) - -from ..axial_attention import RowSelfAttention, ColumnSelfAttention - - - -class MSATransformer(nn.Module): - @classmethod - def add_args(cls, parser): - # fmt: off - parser.add_argument( - "--num_layers", - default=12, - type=int, - metavar="N", - help="number of layers" - ) - parser.add_argument( - "--embed_dim", - default=768, - type=int, - metavar="N", - help="embedding dimension" - ) - parser.add_argument( - "--logit_bias", - action="store_true", - help="whether to apply bias to logits" - ) - parser.add_argument( - "--ffn_embed_dim", - default=3072, - type=int, - metavar="N", - help="embedding dimension for FFN", - ) - parser.add_argument( - "--attention_heads", - default=12, - type=int, - metavar="N", - help="number of attention heads", - ) - parser.add_argument( - "--dropout", - default=0.1, - type=float, - help="Dropout to apply." - ) - parser.add_argument( - "--attention_dropout", - default=0.1, - type=float, - help="Dropout to apply." - ) - parser.add_argument( - "--activation_dropout", - default=0.1, - type=float, - help="Dropout to apply." - ) - parser.add_argument( - "--max_tokens_per_msa", - default=2 ** 14, - type=int, - help=( - "Used during inference to batch attention computations in a single " - "forward pass. This allows increased input sizes with less memory." - ), - ) - # fmt: on - - def __init__(self, args, alphabet): - super().__init__() - self.args = args - self.alphabet_size = len(alphabet) - self.padding_idx = alphabet.padding_idx - self.mask_idx = alphabet.mask_idx - self.cls_idx = alphabet.cls_idx - self.eos_idx = alphabet.eos_idx - self.prepend_bos = alphabet.prepend_bos - self.append_eos = alphabet.append_eos - - self.embed_tokens = nn.Embedding( - self.alphabet_size, self.args.embed_dim, padding_idx=self.padding_idx - ) - - if getattr(self.args, "embed_positions_msa", False): - emb_dim = getattr(self.args, "embed_positions_msa_dim", self.args.embed_dim) - self.msa_position_embedding = nn.Parameter( - 0.01 * torch.randn(1, 1024, 1, emb_dim), - requires_grad=True, - ) - else: - self.register_parameter("msa_position_embedding", None) - - self.dropout_module = nn.Dropout(self.args.dropout) - self.layers = nn.ModuleList( - [ - AxialTransformerLayer( - self.args.embed_dim, - self.args.ffn_embed_dim, - self.args.attention_heads, - self.args.dropout, - self.args.attention_dropout, - self.args.activation_dropout, - getattr(self.args, "max_tokens_per_msa", self.args.max_tokens), - ) - for _ in range(self.args.layers) - ] - ) - - self.contact_head = ContactPredictionHead( - self.args.layers * self.args.attention_heads, - self.prepend_bos, - self.append_eos, - eos_idx=self.eos_idx, - ) - self.embed_positions = LearnedPositionalEmbedding( - self.args.max_positions, - self.args.embed_dim, - self.padding_idx, - ) - self.emb_layer_norm_before = ESM1bLayerNorm(self.args.embed_dim) - self.emb_layer_norm_after = ESM1bLayerNorm(self.args.embed_dim) - self.lm_head = RobertaLMHead( - embed_dim=self.args.embed_dim, - output_dim=self.alphabet_size, - weight=self.embed_tokens.weight, - ) - - def forward(self, tokens, repr_layers=[], need_head_weights=False, return_contacts=False): - if return_contacts: - need_head_weights = True - - assert tokens.ndim == 3 - batch_size, num_alignments, seqlen = tokens.size() - padding_mask = tokens.eq(self.padding_idx) # B, R, C - if not padding_mask.any(): - padding_mask = None - - x = self.embed_tokens(tokens) - x += self.embed_positions(tokens.view(batch_size * num_alignments, seqlen)).view(x.size()) - if self.msa_position_embedding is not None: - if x.size(1) > 1024: - raise RuntimeError( - "Using model with MSA position embedding trained on maximum MSA " - f"depth of 1024, but received {x.size(1)} alignments." - ) - x += self.msa_position_embedding[:, :num_alignments] - - x = self.emb_layer_norm_before(x) - - x = self.dropout_module(x) - - if padding_mask is not None: - x = x * (1 - padding_mask.unsqueeze(-1).type_as(x)) - - repr_layers = set(repr_layers) - hidden_representations = {} - if 0 in repr_layers: - hidden_representations[0] = x - - if need_head_weights: - row_attn_weights = [] - col_attn_weights = [] - - # B x R x C x D -> R x C x B x D - x = x.permute(1, 2, 0, 3) - - for layer_idx, layer in enumerate(self.layers): - x = layer( - x, - self_attn_padding_mask=padding_mask, - need_head_weights=need_head_weights, - ) - if need_head_weights: - x, col_attn, row_attn = x - # H x C x B x R x R -> B x H x C x R x R - col_attn_weights.append(col_attn.permute(2, 0, 1, 3, 4)) - # H x B x C x C -> B x H x C x C - row_attn_weights.append(row_attn.permute(1, 0, 2, 3)) - if (layer_idx + 1) in repr_layers: - hidden_representations[layer_idx + 1] = x.permute(2, 0, 1, 3) - - x = self.emb_layer_norm_after(x) - x = x.permute(2, 0, 1, 3) # R x C x B x D -> B x R x C x D - - # last hidden representation should have layer norm applied - if (layer_idx + 1) in repr_layers: - hidden_representations[layer_idx + 1] = x - x = self.lm_head(x) - - result = {"logits": x, "representations": hidden_representations} - if need_head_weights: - # col_attentions: B x L x H x C x R x R - col_attentions = torch.stack(col_attn_weights, 1) - # row_attentions: B x L x H x C x C - row_attentions = torch.stack(row_attn_weights, 1) - result["col_attentions"] = col_attentions - result["row_attentions"] = row_attentions - if return_contacts: - contacts = self.contact_head(tokens, row_attentions) - result["contacts"] = contacts - - return result - - def predict_contacts(self, tokens): - return self(tokens, return_contacts=True)["contacts"] - - @property - def num_layers(self): - return self.args.layers - - def max_tokens_per_msa_(self, value: int) -> None: - """The MSA Transformer automatically batches attention computations when - gradients are disabled to allow you to pass in larger MSAs at test time than - you can fit in GPU memory. By default this occurs when more than 2^14 tokens - are passed in the input MSA. You can set this value to infinity to disable - this behavior. - """ - for module in self.modules(): - if isinstance(module, (RowSelfAttention, ColumnSelfAttention)): - module.max_tokens_per_msa = value diff --git a/spaces/alsrbdni/copy-ai.com/README.md b/spaces/alsrbdni/copy-ai.com/README.md deleted file mode 100644 index b4ee2811fed32b070f76a033ec3f35d41318853a..0000000000000000000000000000000000000000 --- a/spaces/alsrbdni/copy-ai.com/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Magic Prompt -emoji: 🎆 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: alsrbdni/magic-to-diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_sine_c++.cpp b/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_sine_c++.cpp deleted file mode 100644 index 5d965222b119c9c4ec822773b76a6cb873986aed..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_sine_c++.cpp +++ /dev/null @@ -1,275 +0,0 @@ -/** @file paex_sine.c - @ingroup examples_src - @brief Play a sine wave for several seconds. - @author Ross Bencina - @author Phil Burk -*/ -/* - * $Id: paex_sine.c 1752 2011-09-08 03:21:55Z philburk $ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com/ - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ -#include -#include -#include "portaudio.h" - -#define NUM_SECONDS (5) -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (64) - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -#define TABLE_SIZE (200) - -class Sine -{ -public: - Sine() : stream(0), left_phase(0), right_phase(0) - { - /* initialise sinusoidal wavetable */ - for( int i=0; iname); - } - - outputParameters.channelCount = 2; /* stereo output */ - outputParameters.sampleFormat = paFloat32; /* 32 bit floating point output */ - outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - PaError err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - paFramesPerBufferUnspecified, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - &Sine::paCallback, - this /* Using 'this' for userData so we can cast to Sine* in paCallback method */ - ); - - if (err != paNoError) - { - /* Failed to open stream to device !!! */ - return false; - } - - err = Pa_SetStreamFinishedCallback( stream, &Sine::paStreamFinished ); - - if (err != paNoError) - { - Pa_CloseStream( stream ); - stream = 0; - - return false; - } - - return true; - } - - bool close() - { - if (stream == 0) - return false; - - PaError err = Pa_CloseStream( stream ); - stream = 0; - - return (err == paNoError); - } - - - bool start() - { - if (stream == 0) - return false; - - PaError err = Pa_StartStream( stream ); - - return (err == paNoError); - } - - bool stop() - { - if (stream == 0) - return false; - - PaError err = Pa_StopStream( stream ); - - return (err == paNoError); - } - -private: - /* The instance callback, where we have access to every method/variable in object of class Sine */ - int paCallbackMethod(const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags) - { - float *out = (float*)outputBuffer; - unsigned long i; - - (void) timeInfo; /* Prevent unused variable warnings. */ - (void) statusFlags; - (void) inputBuffer; - - for( i=0; i= TABLE_SIZE ) left_phase -= TABLE_SIZE; - right_phase += 3; /* higher pitch so we can distinguish left and right. */ - if( right_phase >= TABLE_SIZE ) right_phase -= TABLE_SIZE; - } - - return paContinue; - - } - - /* This routine will be called by the PortAudio engine when audio is needed. - ** It may called at interrupt level on some machines so don't do anything - ** that could mess up the system like calling malloc() or free(). - */ - static int paCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) - { - /* Here we cast userData to Sine* type so we can call the instance method paCallbackMethod, we can do that since - we called Pa_OpenStream with 'this' for userData */ - return ((Sine*)userData)->paCallbackMethod(inputBuffer, outputBuffer, - framesPerBuffer, - timeInfo, - statusFlags); - } - - - void paStreamFinishedMethod() - { - printf( "Stream Completed: %s\n", message ); - } - - /* - * This routine is called by portaudio when playback is done. - */ - static void paStreamFinished(void* userData) - { - return ((Sine*)userData)->paStreamFinishedMethod(); - } - - PaStream *stream; - float sine[TABLE_SIZE]; - int left_phase; - int right_phase; - char message[20]; -}; - -class ScopedPaHandler -{ -public: - ScopedPaHandler() - : _result(Pa_Initialize()) - { - } - ~ScopedPaHandler() - { - if (_result == paNoError) - { - Pa_Terminate(); - } - } - - PaError result() const { return _result; } - -private: - PaError _result; -}; - - -/*******************************************************************/ -int main(void); -int main(void) -{ - Sine sine; - - printf("PortAudio Test: output sine wave. SR = %d, BufSize = %d\n", SAMPLE_RATE, FRAMES_PER_BUFFER); - - ScopedPaHandler paInit; - if( paInit.result() != paNoError ) goto error; - - if (sine.open(Pa_GetDefaultOutputDevice())) - { - if (sine.start()) - { - printf("Play for %d seconds.\n", NUM_SECONDS ); - Pa_Sleep( NUM_SECONDS * 1000 ); - - sine.stop(); - } - - sine.close(); - } - - printf("Test finished.\n"); - return paNoError; - -error: - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", paInit.result() ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( paInit.result() ) ); - return 1; -} diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/utils.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/utils.py deleted file mode 100644 index d5ab41c79b44ab81e1843d209cb342bd83dafb42..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/utils.py +++ /dev/null @@ -1,49 +0,0 @@ -import browser_cookie3 - - -class Utils: - browsers = [ - browser_cookie3.chrome, # 62.74% market share - browser_cookie3.safari, # 24.12% market share - browser_cookie3.firefox, # 4.56% market share - browser_cookie3.edge, # 2.85% market share - browser_cookie3.opera, # 1.69% market share - browser_cookie3.brave, # 0.96% market share - browser_cookie3.opera_gx, # 0.64% market share - browser_cookie3.vivaldi, # 0.32% market share - ] - - def get_cookies(domain: str, setName: str = None, setBrowser: str = False) -> dict: - cookies = {} - - if setBrowser != False: - for browser in Utils.browsers: - if browser.__name__ == setBrowser: - try: - for c in browser(domain_name=domain): - if c.name not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - else: - for browser in Utils.browsers: - try: - for c in browser(domain_name=domain): - if c.name not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - if setName: - try: - return {setName: cookies[setName]} - - except ValueError: - print(f'Error: could not find {setName} cookie in any browser.') - exit(1) - - else: - return cookies diff --git a/spaces/antonovmaxim/text-generation-webui-space/css/chat_style-messenger.css b/spaces/antonovmaxim/text-generation-webui-space/css/chat_style-messenger.css deleted file mode 100644 index 4d4bba0d902fe0d544a0203a8bf68d9c243ccbf6..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/css/chat_style-messenger.css +++ /dev/null @@ -1,124 +0,0 @@ -.chat { - margin-left: auto; - margin-right: auto; - max-width: 800px; - height: calc(100vh - 306px); - overflow-y: auto; - padding-right: 20px; - display: flex; - flex-direction: column-reverse; - word-break: break-word; - overflow-wrap: anywhere; -} - -.message { - padding-bottom: 25px; - font-size: 15px; - font-family: Helvetica, Arial, sans-serif; - line-height: 1.428571429; -} - -.circle-you { - width: 50px; - height: 50px; - background-color: rgb(238, 78, 59); - border-radius: 50%; -} - -.circle-bot { - width: 50px; - height: 50px; - background-color: rgb(59, 78, 244); - border-radius: 50%; - float: left; - margin-right: 10px; - margin-top: 5px; -} - -.circle-bot img, -.circle-you img { - border-radius: 50%; - width: 100%; - height: 100%; - object-fit: cover; -} -.circle-you { - margin-top: 5px; - float: right; -} -.circle-bot + .text, .circle-you + .text { - border-radius: 18px; - padding: 8px 12px; -} - -.circle-bot + .text { - background-color: #E4E6EB; - float: left; -} - -.circle-you + .text { - float: right; - background-color: rgb(0, 132, 255); - margin-right: 10px; -} - -.circle-you + .text div, .circle-you + .text *, .dark .circle-you + .text div, .dark .circle-you + .text * { - color: #FFF !important; -} -.circle-you + .text .username { - text-align: right; -} - -.dark .circle-bot + .text div, .dark .circle-bot + .text * { - color: #000; -} - -.text { - max-width: 80%; -} - -.text p { - margin-top: 5px; -} - -.username { - font-weight: bold; -} - -.message-body {} - -.message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; -} - -.message-body p { - margin-bottom: 0 !important; - font-size: 15px !important; - line-height: 1.428571429 !important; -} - -.message-body li { - margin-top: 0.5em !important; - margin-bottom: 0.5em !important; -} - -.message-body li > p { - display: inline !important; -} - -.message-body code { - overflow-x: auto; -} -.message-body :not(pre) > code { - white-space: normal !important; -} - -.dark .message-body p em { - color: rgb(138, 138, 138) !important; -} - -.message-body p em { - color: rgb(110, 110, 110) !important; -} diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py deleted file mode 100644 index eac7e896bbe85a670824bfe8ef487d0535d5bd99..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py +++ /dev/null @@ -1,186 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -Various positional encodings for the transformer. -""" -import math - -import torch -from torch import nn - -from groundingdino.util.misc import NestedTensor - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. - """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - mask = tensor_list.mask - assert mask is not None - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - # if os.environ.get("SHILONG_AMP", None) == '1': - # eps = 1e-4 - # else: - # eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - -class PositionEmbeddingSineHW(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. - """ - - def __init__( - self, num_pos_feats=64, temperatureH=10000, temperatureW=10000, normalize=False, scale=None - ): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperatureH = temperatureH - self.temperatureW = temperatureW - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - mask = tensor_list.mask - assert mask is not None - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - - # import ipdb; ipdb.set_trace() - - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_tx = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_tx = self.temperatureW ** (2 * (torch.div(dim_tx, 2, rounding_mode='floor')) / self.num_pos_feats) - pos_x = x_embed[:, :, :, None] / dim_tx - - dim_ty = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_ty = self.temperatureH ** (2 * (torch.div(dim_ty, 2, rounding_mode='floor')) / self.num_pos_feats) - pos_y = y_embed[:, :, :, None] / dim_ty - - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - - # import ipdb; ipdb.set_trace() - - return pos - - -class PositionEmbeddingLearned(nn.Module): - """ - Absolute pos embedding, learned. - """ - - def __init__(self, num_pos_feats=256): - super().__init__() - self.row_embed = nn.Embedding(50, num_pos_feats) - self.col_embed = nn.Embedding(50, num_pos_feats) - self.reset_parameters() - - def reset_parameters(self): - nn.init.uniform_(self.row_embed.weight) - nn.init.uniform_(self.col_embed.weight) - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - h, w = x.shape[-2:] - i = torch.arange(w, device=x.device) - j = torch.arange(h, device=x.device) - x_emb = self.col_embed(i) - y_emb = self.row_embed(j) - pos = ( - torch.cat( - [ - x_emb.unsqueeze(0).repeat(h, 1, 1), - y_emb.unsqueeze(1).repeat(1, w, 1), - ], - dim=-1, - ) - .permute(2, 0, 1) - .unsqueeze(0) - .repeat(x.shape[0], 1, 1, 1) - ) - return pos - - -def build_position_encoding(args): - N_steps = args.hidden_dim // 2 - if args.position_embedding in ("v2", "sine"): - # TODO find a better way of exposing other arguments - position_embedding = PositionEmbeddingSineHW( - N_steps, - temperatureH=args.pe_temperatureH, - temperatureW=args.pe_temperatureW, - normalize=True, - ) - elif args.position_embedding in ("v3", "learned"): - position_embedding = PositionEmbeddingLearned(N_steps) - else: - raise ValueError(f"not supported {args.position_embedding}") - - return position_embedding diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugging.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugging.py deleted file mode 100644 index edb3f4e8ca582e4f0bc938c761b1fdf1f9d56f6b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugging.py +++ /dev/null @@ -1,20 +0,0 @@ -############################################### -# -# Odds and ends for debugging -# -############################################### - -def print_call_chain(*args): - import sys - print(" ".join(map(str, args))) - f = sys._getframe(1) - while f: - name = f.f_code.co_name - s = f.f_locals.get('self', None) - if s: - c = getattr(s, "__class__", None) - if c: - name = "%s.%s" % (c.__name__, name) - print("Called from: %s %s" % (name, f.f_lineno)) - f = f.f_back - print("-" * 70) diff --git a/spaces/aryadytm/remove-photo-background/src/trainer.py b/spaces/aryadytm/remove-photo-background/src/trainer.py deleted file mode 100644 index bd3d8be4eeeaf5cde08be16239bc7cdcb2d38bae..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/remove-photo-background/src/trainer.py +++ /dev/null @@ -1,299 +0,0 @@ -import math -import scipy -import numpy as np -from scipy.ndimage import grey_dilation, grey_erosion - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -__all__ = [ - 'supervised_training_iter', - 'soc_adaptation_iter', -] - - -# ---------------------------------------------------------------------------------- -# Tool Classes/Functions -# ---------------------------------------------------------------------------------- - -class GaussianBlurLayer(nn.Module): - """ Add Gaussian Blur to a 4D tensors - This layer takes a 4D tensor of {N, C, H, W} as input. - The Gaussian blur will be performed in given channel number (C) splitly. - """ - - def __init__(self, channels, kernel_size): - """ - Arguments: - channels (int): Channel for input tensor - kernel_size (int): Size of the kernel used in blurring - """ - - super(GaussianBlurLayer, self).__init__() - self.channels = channels - self.kernel_size = kernel_size - assert self.kernel_size % 2 != 0 - - self.op = nn.Sequential( - nn.ReflectionPad2d(math.floor(self.kernel_size / 2)), - nn.Conv2d(channels, channels, self.kernel_size, - stride=1, padding=0, bias=None, groups=channels) - ) - - self._init_kernel() - - def forward(self, x): - """ - Arguments: - x (torch.Tensor): input 4D tensor - Returns: - torch.Tensor: Blurred version of the input - """ - - if not len(list(x.shape)) == 4: - print('\'GaussianBlurLayer\' requires a 4D tensor as input\n') - exit() - elif not x.shape[1] == self.channels: - print('In \'GaussianBlurLayer\', the required channel ({0}) is' - 'not the same as input ({1})\n'.format(self.channels, x.shape[1])) - exit() - - return self.op(x) - - def _init_kernel(self): - sigma = 0.3 * ((self.kernel_size - 1) * 0.5 - 1) + 0.8 - - n = np.zeros((self.kernel_size, self.kernel_size)) - i = math.floor(self.kernel_size / 2) - n[i, i] = 1 - kernel = scipy.ndimage.gaussian_filter(n, sigma) - - for name, param in self.named_parameters(): - param.data.copy_(torch.from_numpy(kernel)) - -# ---------------------------------------------------------------------------------- - - -# ---------------------------------------------------------------------------------- -# MODNet Training Functions -# ---------------------------------------------------------------------------------- - -blurer = GaussianBlurLayer(1, 3).cuda() - - -def supervised_training_iter( - modnet, optimizer, image, trimap, gt_matte, - semantic_scale=10.0, detail_scale=10.0, matte_scale=1.0): - """ Supervised training iteration of MODNet - This function trains MODNet for one iteration in a labeled dataset. - - Arguments: - modnet (torch.nn.Module): instance of MODNet - optimizer (torch.optim.Optimizer): optimizer for supervised training - image (torch.autograd.Variable): input RGB image - its pixel values should be normalized - trimap (torch.autograd.Variable): trimap used to calculate the losses - its pixel values can be 0, 0.5, or 1 - (foreground=1, background=0, unknown=0.5) - gt_matte (torch.autograd.Variable): ground truth alpha matte - its pixel values are between [0, 1] - semantic_scale (float): scale of the semantic loss - NOTE: please adjust according to your dataset - detail_scale (float): scale of the detail loss - NOTE: please adjust according to your dataset - matte_scale (float): scale of the matte loss - NOTE: please adjust according to your dataset - - Returns: - semantic_loss (torch.Tensor): loss of the semantic estimation [Low-Resolution (LR) Branch] - detail_loss (torch.Tensor): loss of the detail prediction [High-Resolution (HR) Branch] - matte_loss (torch.Tensor): loss of the semantic-detail fusion [Fusion Branch] - - Example: - import torch - from src.models.modnet import MODNet - from src.trainer import supervised_training_iter - - bs = 16 # batch size - lr = 0.01 # learn rate - epochs = 40 # total epochs - - modnet = torch.nn.DataParallel(MODNet()).cuda() - optimizer = torch.optim.SGD(modnet.parameters(), lr=lr, momentum=0.9) - lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=int(0.25 * epochs), gamma=0.1) - - dataloader = CREATE_YOUR_DATALOADER(bs) # NOTE: please finish this function - - for epoch in range(0, epochs): - for idx, (image, trimap, gt_matte) in enumerate(dataloader): - semantic_loss, detail_loss, matte_loss = \ - supervised_training_iter(modnet, optimizer, image, trimap, gt_matte) - lr_scheduler.step() - """ - - global blurer - - # set the model to train mode and clear the optimizer - modnet.train() - optimizer.zero_grad() - - # forward the model - pred_semantic, pred_detail, pred_matte = modnet(image, False) - - # calculate the boundary mask from the trimap - boundaries = (trimap < 0.5) + (trimap > 0.5) - - # calculate the semantic loss - gt_semantic = F.interpolate(gt_matte, scale_factor=1/16, mode='bilinear') - gt_semantic = blurer(gt_semantic) - semantic_loss = torch.mean(F.mse_loss(pred_semantic, gt_semantic)) - semantic_loss = semantic_scale * semantic_loss - - # calculate the detail loss - pred_boundary_detail = torch.where(boundaries, trimap, pred_detail) - gt_detail = torch.where(boundaries, trimap, gt_matte) - detail_loss = torch.mean(F.l1_loss(pred_boundary_detail, gt_detail)) - detail_loss = detail_scale * detail_loss - - # calculate the matte loss - pred_boundary_matte = torch.where(boundaries, trimap, pred_matte) - matte_l1_loss = F.l1_loss(pred_matte, gt_matte) + 4.0 * F.l1_loss(pred_boundary_matte, gt_matte) - matte_compositional_loss = F.l1_loss(image * pred_matte, image * gt_matte) \ - + 4.0 * F.l1_loss(image * pred_boundary_matte, image * gt_matte) - matte_loss = torch.mean(matte_l1_loss + matte_compositional_loss) - matte_loss = matte_scale * matte_loss - - # calculate the final loss, backward the loss, and update the model - loss = semantic_loss + detail_loss + matte_loss - loss.backward() - optimizer.step() - - # for test - return semantic_loss, detail_loss, matte_loss - - -def soc_adaptation_iter( - modnet, backup_modnet, optimizer, image, - soc_semantic_scale=100.0, soc_detail_scale=1.0): - """ Self-Supervised sub-objective consistency (SOC) adaptation iteration of MODNet - This function fine-tunes MODNet for one iteration in an unlabeled dataset. - Note that SOC can only fine-tune a converged MODNet, i.e., MODNet that has been - trained in a labeled dataset. - - Arguments: - modnet (torch.nn.Module): instance of MODNet - backup_modnet (torch.nn.Module): backup of the trained MODNet - optimizer (torch.optim.Optimizer): optimizer for self-supervised SOC - image (torch.autograd.Variable): input RGB image - its pixel values should be normalized - soc_semantic_scale (float): scale of the SOC semantic loss - NOTE: please adjust according to your dataset - soc_detail_scale (float): scale of the SOC detail loss - NOTE: please adjust according to your dataset - - Returns: - soc_semantic_loss (torch.Tensor): loss of the semantic SOC - soc_detail_loss (torch.Tensor): loss of the detail SOC - - Example: - import copy - import torch - from src.models.modnet import MODNet - from src.trainer import soc_adaptation_iter - - bs = 1 # batch size - lr = 0.00001 # learn rate - epochs = 10 # total epochs - - modnet = torch.nn.DataParallel(MODNet()).cuda() - modnet = LOAD_TRAINED_CKPT() # NOTE: please finish this function - - optimizer = torch.optim.Adam(modnet.parameters(), lr=lr, betas=(0.9, 0.99)) - dataloader = CREATE_YOUR_DATALOADER(bs) # NOTE: please finish this function - - for epoch in range(0, epochs): - backup_modnet = copy.deepcopy(modnet) - for idx, (image) in enumerate(dataloader): - soc_semantic_loss, soc_detail_loss = \ - soc_adaptation_iter(modnet, backup_modnet, optimizer, image) - """ - - global blurer - - # set the backup model to eval mode - backup_modnet.eval() - - # set the main model to train mode and freeze its norm layers - modnet.train() - modnet.module.freeze_norm() - - # clear the optimizer - optimizer.zero_grad() - - # forward the main model - pred_semantic, pred_detail, pred_matte = modnet(image, False) - - # forward the backup model - with torch.no_grad(): - _, pred_backup_detail, pred_backup_matte = backup_modnet(image, False) - - # calculate the boundary mask from `pred_matte` and `pred_semantic` - pred_matte_fg = (pred_matte.detach() > 0.1).float() - pred_semantic_fg = (pred_semantic.detach() > 0.1).float() - pred_semantic_fg = F.interpolate(pred_semantic_fg, scale_factor=16, mode='bilinear') - pred_fg = pred_matte_fg * pred_semantic_fg - - n, c, h, w = pred_matte.shape - np_pred_fg = pred_fg.data.cpu().numpy() - np_boundaries = np.zeros([n, c, h, w]) - for sdx in range(0, n): - sample_np_boundaries = np_boundaries[sdx, 0, ...] - sample_np_pred_fg = np_pred_fg[sdx, 0, ...] - - side = int((h + w) / 2 * 0.05) - dilated = grey_dilation(sample_np_pred_fg, size=(side, side)) - eroded = grey_erosion(sample_np_pred_fg, size=(side, side)) - - sample_np_boundaries[np.where(dilated - eroded != 0)] = 1 - np_boundaries[sdx, 0, ...] = sample_np_boundaries - - boundaries = torch.tensor(np_boundaries).float().cuda() - - # sub-objectives consistency between `pred_semantic` and `pred_matte` - # generate pseudo ground truth for `pred_semantic` - downsampled_pred_matte = blurer(F.interpolate(pred_matte, scale_factor=1/16, mode='bilinear')) - pseudo_gt_semantic = downsampled_pred_matte.detach() - pseudo_gt_semantic = pseudo_gt_semantic * (pseudo_gt_semantic > 0.01).float() - - # generate pseudo ground truth for `pred_matte` - pseudo_gt_matte = pred_semantic.detach() - pseudo_gt_matte = pseudo_gt_matte * (pseudo_gt_matte > 0.01).float() - - # calculate the SOC semantic loss - soc_semantic_loss = F.mse_loss(pred_semantic, pseudo_gt_semantic) + F.mse_loss(downsampled_pred_matte, pseudo_gt_matte) - soc_semantic_loss = soc_semantic_scale * torch.mean(soc_semantic_loss) - - # NOTE: using the formulas in our paper to calculate the following losses has similar results - # sub-objectives consistency between `pred_detail` and `pred_backup_detail` (on boundaries only) - backup_detail_loss = boundaries * F.l1_loss(pred_detail, pred_backup_detail, reduction='none') - backup_detail_loss = torch.sum(backup_detail_loss, dim=(1,2,3)) / torch.sum(boundaries, dim=(1,2,3)) - backup_detail_loss = torch.mean(backup_detail_loss) - - # sub-objectives consistency between pred_matte` and `pred_backup_matte` (on boundaries only) - backup_matte_loss = boundaries * F.l1_loss(pred_matte, pred_backup_matte, reduction='none') - backup_matte_loss = torch.sum(backup_matte_loss, dim=(1,2,3)) / torch.sum(boundaries, dim=(1,2,3)) - backup_matte_loss = torch.mean(backup_matte_loss) - - soc_detail_loss = soc_detail_scale * (backup_detail_loss + backup_matte_loss) - - # calculate the final loss, backward the loss, and update the model - loss = soc_semantic_loss + soc_detail_loss - - loss.backward() - optimizer.step() - - return soc_semantic_loss, soc_detail_loss - -# ---------------------------------------------------------------------------------- diff --git a/spaces/ashercn97/AsherTesting/extensions/ngrok/script.py b/spaces/ashercn97/AsherTesting/extensions/ngrok/script.py deleted file mode 100644 index 46f39bd327b6046f8e0d38ef266fc7d3687640da..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/extensions/ngrok/script.py +++ /dev/null @@ -1,36 +0,0 @@ -# Adds ngrok ingress, to use add `--extension ngrok` to the command line options -# -# Parameters can be customized in settings.json of webui, e.g.: -# {"ngrok": {"basic_auth":"user:password"} } -# or -# {"ngrok": {"oauth_provider":"google", "oauth_allow_emails":["asdf@asdf.com"]} } -# -# See this example for full list of options: https://github.com/ngrok/ngrok-py/blob/main/examples/ngrok-connect-full.py -# or the README.md in this directory. - -import logging -from modules import shared - -# Pick up host/port command line arguments -host = shared.args.listen_host if shared.args.listen_host and shared.args.listen else '127.0.0.1' -port = shared.args.listen_port if shared.args.listen_port else '7860' - -# Default options -options = { - 'addr': f"{host}:{port}", - 'authtoken_from_env': True, - 'session_metadata': 'text-generation-webui', -} - - -def ui(): - settings = shared.settings.get("ngrok") - if settings: - options.update(settings) - - try: - import ngrok - tunnel = ngrok.connect(**options) - logging.info(f"Ingress established at: {tunnel.url()}") - except ModuleNotFoundError: - logging.error("===> ngrok library not found, please run `pip install -r extensions/ngrok/requirements.txt`") diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/models/diffusion/ldm.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/models/diffusion/ldm.py deleted file mode 100644 index 78fa65862d848a3fa49ff8c2b7bc475067175891..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/models/diffusion/ldm.py +++ /dev/null @@ -1,88 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -from tqdm import tqdm -from ldm.util import default -from ldm.modules.diffusionmodules.util import extract_into_tensor -from .ddpm import DDPM - - - -class LatentDiffusion(DDPM): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - # hardcoded - self.clip_denoised = False - - - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - - "Does not support DDPM sampling anymore. Only do DDIM or PLMS" - - # = = = = = = = = = = = = Below is for sampling = = = = = = = = = = = = # - - # def predict_start_from_noise(self, x_t, t, noise): - # return ( extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - # extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise ) - - # def q_posterior(self, x_start, x_t, t): - # posterior_mean = ( - # extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - # extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - # ) - # posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - # posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - # return posterior_mean, posterior_variance, posterior_log_variance_clipped - - - # def p_mean_variance(self, model, x, c, t): - - # model_out = model(x, t, c) - # x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - - # if self.clip_denoised: - # x_recon.clamp_(-1., 1.) - - # model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - # return model_mean, posterior_variance, posterior_log_variance, x_recon - - - # @torch.no_grad() - # def p_sample(self, model, x, c, t): - # b, *_, device = *x.shape, x.device - # model_mean, _, model_log_variance, x0 = self.p_mean_variance(model, x=x, c=c, t=t, ) - # noise = torch.randn_like(x) - - # # no noise when t == 0 - # nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - # return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - - - # @torch.no_grad() - # def p_sample_loop(self, model, shape, c): - # device = self.betas.device - # b = shape[0] - # img = torch.randn(shape, device=device) - - # iterator = tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps) - # for i in iterator: - # ts = torch.full((b,), i, device=device, dtype=torch.long) - # img, x0 = self.p_sample(model, img, c, ts) - - # return img - - - # @torch.no_grad() - # def sample(self, model, shape, c, uc=None, guidance_scale=None): - # return self.p_sample_loop(model, shape, c) - - - - - diff --git a/spaces/augmented-surveys/retrodict/README.md b/spaces/augmented-surveys/retrodict/README.md deleted file mode 100644 index 9493c1617accdbac22b0637721e057ce4528eb7c..0000000000000000000000000000000000000000 --- a/spaces/augmented-surveys/retrodict/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AI-Augmented Social Surveys -emoji: 📈 -colorFrom: green -colorTo: indigo -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/autosummproject/autosumm/extractor/extract.py b/spaces/autosummproject/autosumm/extractor/extract.py deleted file mode 100644 index 623754f3caa0cfd43642eb76621bc71d76aba3e8..0000000000000000000000000000000000000000 --- a/spaces/autosummproject/autosumm/extractor/extract.py +++ /dev/null @@ -1,66 +0,0 @@ -from ._utils import FewDocumentsError -from ._utils import document_extraction, paragraph_extraction, semantic_search -from utils.timing import Timer -from corpora import gen_corpus -from nltk.corpus import stopwords -from nltk.tokenize import word_tokenize -import string - -@Timer.time_it('extração', 'extraction') -def extract(query: str, search_model, n: int=3, extracted_documents: list=None) -> str: - """Extract n paragraphs from the corpus using the given query. - - Parameters: - query (str): Sentence used to search the corpus for relevant documents - n (int): Number of paragraphs to return - - Returns: - str: String containing the n most relevant paragraphs joined by line breaks - """ - # Open corpus - with Timer('geração do corpus', 'corpus generation'): - corpus = gen_corpus(query) - - # Setup query - stop_words = set(stopwords.words('english')) - query_tokens = word_tokenize(query.lower()) - tokens_without_sw = [word for word in query_tokens if not word in stop_words] - keywords = [keyword for keyword in tokens_without_sw if keyword not in string.punctuation] - - # Gross search - with Timer('busca exaustiva', 'exhaustive search'): - if not extracted_documents: - extracted_documents, documents_empty, documents_sizes = document_extraction( - dataset=corpus, - query=query, - keywords=keywords, - min_document_size=0, - min_just_one_paragraph_size=0 - ) - - # First semantc search (over documents) - with Timer('busca semantica nos documentos', 'semantic search over documents'): - selected_documents, documents_distances = semantic_search( - model=search_model, - query=query, - files=extracted_documents, - number_of_similar_files=10 - ) - - # Second semantic search (over paragraphs) - with Timer('busca semantica nos parágrafos', 'semantic search over paragraphs'): - paragraphs = paragraph_extraction( - documents=selected_documents, - min_paragraph_size=20, - ) - selected_paragraphs, paragraphs_distances = semantic_search( - model=search_model, - query=query, - files=paragraphs, - number_of_similar_files=10 - ) - - text = '\n'.join(selected_paragraphs[:n]) - - return text - diff --git a/spaces/awacke1/ASRGenerateStory/app.py b/spaces/awacke1/ASRGenerateStory/app.py deleted file mode 100644 index 802d78aff8e7fa6fc5ed4494c961c6cf4b75cebb..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ASRGenerateStory/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import gradio as gr -from transformers import pipeline -import io, base64 -from PIL import Image -import numpy as np -import tensorflow as tf -import mediapy -import os -import sys -from huggingface_hub import snapshot_download - -import streamlit as st -import firebase_admin -from firebase_admin import credentials -from firebase_admin import firestore -import datetime -import tempfile -from typing import Optional -import numpy as np -from TTS.utils.manage import ModelManager -from TTS.utils.synthesizer import Synthesizer - - -# firestore singleton is a cached multiuser instance to persist shared crowdsource memory -@st.experimental_singleton -def get_db_firestore(): - cred = credentials.Certificate('test.json') - firebase_admin.initialize_app(cred, {'projectId': u'clinical-nlp-b9117',}) - db = firestore.client() - return db - -#start firestore singleton -db = get_db_firestore() - -# create ASR ML pipeline -asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h") - -# create Text Classification pipeline -classifier = pipeline("text-classification") - -# create text generator pipeline -story_gen = pipeline("text-generation", "pranavpsv/gpt2-genre-story-generator") - -# transcribe function -def transcribe(audio): - text = asr(audio)["text"] - return text - -def speech_to_text(speech): - text = asr(speech)["text"] - return text - -def text_to_sentiment(text): - sentiment = classifier(text)[0]["label"] - return sentiment - -def upsert(text): - date_time =str(datetime.datetime.today()) - doc_ref = db.collection('Text2SpeechSentimentSave').document(date_time) - doc_ref.set({u'firefield': 'Recognize Speech', u'first': 'https://huggingface.co/spaces/awacke1/Text2SpeechSentimentSave', u'last': text, u'born': date_time,}) - saved = select('Text2SpeechSentimentSave', date_time) - # check it here: https://console.firebase.google.com/u/0/project/clinical-nlp-b9117/firestore/data/~2FStreamlitSpaces - return saved - -def select(collection, document): - doc_ref = db.collection(collection).document(document) - doc = doc_ref.get() - docid = ("The id is: ", doc.id) - contents = ("The contents are: ", doc.to_dict()) - return contents - -def selectall(text): - docs = db.collection('Text2SpeechSentimentSave').stream() - doclist='' - for doc in docs: - r=(f'{doc.id} => {doc.to_dict()}') - doclist += r - return doclist - -# story gen -def generate_story(choice, input_text): - query = " <{0}> {1}".format(choice, input_text) - generated_text = story_gen(query) - generated_text = generated_text[0]['generated_text'] - generated_text = generated_text.split('> ')[2] - return generated_text - -# images gen -def generate_images(text): - steps=50 - width=256 - height=256 - num_images=4 - diversity=6 - image_bytes = image_gen(text, steps, width, height, num_images, diversity) - generated_images = [] - for image in image_bytes[1]: - image_str = image[0] - image_str = image_str.replace("data:image/png;base64,","") - decoded_bytes = base64.decodebytes(bytes(image_str, "utf-8")) - img = Image.open(io.BytesIO(decoded_bytes)) - generated_images.append(img) - return generated_images - -# reductionism - interpolate 4 images - todo - unhardcode the pattern -def generate_interpolation(gallery): - times_to_interpolate = 4 - generated_images = [] - for image_str in gallery: - image_str = image_str.replace("data:image/png;base64,","") - decoded_bytes = base64.decodebytes(bytes(image_str, "utf-8")) - img = Image.open(io.BytesIO(decoded_bytes)) - generated_images.append(img) - generated_images[0].save('frame_0.png') - generated_images[1].save('frame_1.png') - generated_images[2].save('frame_2.png') - generated_images[3].save('frame_3.png') - input_frames = ["frame_0.png", "frame_1.png", "frame_2.png", "frame_3.png"] - frames = list(util.interpolate_recursively_from_files(input_frames, times_to_interpolate, interpolator)) - mediapy.write_video("out.mp4", frames, fps=15) - return "out.mp4" - -# image generator -image_gen = gr.Interface.load("spaces/multimodalart/latentdiffusion") - -# video generator -os.system("git clone https://github.com/google-research/frame-interpolation") -sys.path.append("frame-interpolation") -from eval import interpolator, util - -ffmpeg_path = util.get_ffmpeg_path() -mediapy.set_ffmpeg(ffmpeg_path) -model = snapshot_download(repo_id="akhaliq/frame-interpolation-film-style") -interpolator = interpolator.Interpolator(model, None) - -demo = gr.Blocks() -with demo: - - audio_file = gr.inputs.Audio(source="microphone", type="filepath") - text = gr.Textbox() - label = gr.Label() - saved = gr.Textbox() - savedAll = gr.Textbox() - audio = gr.Audio(label="Output", interactive=False) - - b1 = gr.Button("Recognize Speech") - b2 = gr.Button("Classify Sentiment") - b3 = gr.Button("Save Speech to Text") - b4 = gr.Button("Retrieve All") - - input_story_type = gr.Radio(choices=['superhero', 'action', 'drama', 'horror', 'thriller', 'sci_fi'], value='sci_fi', label="Genre") - input_start_text = gr.Textbox(placeholder='A teddy bear outer space', label="Starting Text") - - gr.Markdown("1. Select a type of story, then write some starting text! Then hit the 'Generate Story' button to generate a story! Feel free to edit the generated story afterwards!") - button_gen_story = gr.Button("Generate Story") - gr.Markdown("2. After generating a story, hit the 'Generate Images' button to create some visuals for your story! (Can re-run multiple times!)") - button_gen_images = gr.Button("Generate Images") - gr.Markdown("3. After generating some images, hit the 'Generate Video' button to create a short video by interpolating the previously generated visuals!") - button_gen_video = gr.Button("Generate Video") - output_generated_story = gr.Textbox(label="Generated Story") - output_gallery = gr.Gallery(label="Generated Story Images") - output_interpolation = gr.Video(label="Generated Video") - - # Bind functions to buttons - button_gen_story.click(fn=generate_story, inputs=[input_story_type , input_start_text], outputs=output_generated_story) - button_gen_images.click(fn=generate_images, inputs=output_generated_story, outputs=output_gallery) - button_gen_video.click(fn=generate_interpolation, inputs=output_gallery, outputs=output_interpolation) - - b1.click(speech_to_text, inputs=audio_file, outputs=input_start_text ) - b2.click(text_to_sentiment, inputs=text, outputs=label) - b3.click(upsert, inputs=text, outputs=saved) - b4.click(selectall, inputs=text, outputs=savedAll) - -demo.launch(debug=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/awacke1/HTML5-BabylonJS-Javascript-3DAnimation/style.css b/spaces/awacke1/HTML5-BabylonJS-Javascript-3DAnimation/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5-BabylonJS-Javascript-3DAnimation/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/TextImg2Art/app.py b/spaces/awacke1/TextImg2Art/app.py deleted file mode 100644 index 2fda15ae70ccd460395debafe8df7098f2688fd2..0000000000000000000000000000000000000000 --- a/spaces/awacke1/TextImg2Art/app.py +++ /dev/null @@ -1,186 +0,0 @@ -import os - -os.system("git clone --recursive https://github.com/JD-P/cloob-latent-diffusion") -os.system("cd cloob-latent-diffusion;pip install omegaconf pillow pytorch-lightning einops wandb ftfy regex ./CLIP") - -import argparse -from functools import partial -from pathlib import Path -import sys - -sys.path.append('./cloob-latent-diffusion') -sys.path.append('./cloob-latent-diffusion/cloob-training') -sys.path.append('./cloob-latent-diffusion/latent-diffusion') -sys.path.append('./cloob-latent-diffusion/taming-transformers') -sys.path.append('./cloob-latent-diffusion/v-diffusion-pytorch') - -from omegaconf import OmegaConf -from PIL import Image - -import torch -from torch import nn -from torch.nn import functional as F -from torchvision import transforms -from torchvision.transforms import functional as TF -from tqdm import trange -from CLIP import clip -from cloob_training import model_pt, pretrained - -import ldm.models.autoencoder -from diffusion import sampling, utils - -import train_latent_diffusion as train -from huggingface_hub import hf_hub_url, cached_download - -import random - -# Download the model files -checkpoint = cached_download(hf_hub_url("huggan/distill-ccld-wa", filename="model_student.ckpt")) -ae_model_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.ckpt")) -ae_config_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.yaml")) - -# Define a few utility functions - -def parse_prompt(prompt, default_weight=3.): - if prompt.startswith('http://') or prompt.startswith('https://'): - vals = prompt.rsplit(':', 2) - vals = [vals[0] + ':' + vals[1], *vals[2:]] - else: - vals = prompt.rsplit(':', 1) - vals = vals + ['', default_weight][len(vals):] - return vals[0], float(vals[1]) - - -def resize_and_center_crop(image, size): - fac = max(size[0] / image.size[0], size[1] / image.size[1]) - image = image.resize((int(fac * image.size[0]), int(fac * image.size[1])), Image.LANCZOS) - return TF.center_crop(image, size[::-1]) - - -# Load the models -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') -print('Using device:', device) -print('loading models') - -# autoencoder -ae_config = OmegaConf.load(ae_config_path) -ae_model = ldm.models.autoencoder.AutoencoderKL(**ae_config.model.params) -ae_model.eval().requires_grad_(False).to(device) -ae_model.load_state_dict(torch.load(ae_model_path)) -n_ch, side_y, side_x = 4, 32, 32 - -# diffusion model -model = train.DiffusionModel(192, [1,1,2,2], autoencoder_scale=torch.tensor(4.3084)) -model.load_state_dict(torch.load(checkpoint, map_location='cpu')) -model = model.to(device).eval().requires_grad_(False) - -# CLOOB -cloob_config = pretrained.get_config('cloob_laion_400m_vit_b_16_16_epochs') -cloob = model_pt.get_pt_model(cloob_config) -checkpoint = pretrained.download_checkpoint(cloob_config) -cloob.load_state_dict(model_pt.get_pt_params(cloob_config, checkpoint)) -cloob.eval().requires_grad_(False).to(device) - - -# The key function: returns a list of n PIL images -def generate(n=1, prompts=['a red circle'], images=[], seed=42, steps=15, - method='plms', eta=None): - zero_embed = torch.zeros([1, cloob.config['d_embed']], device=device) - target_embeds, weights = [zero_embed], [] - - for prompt in prompts: - txt, weight = parse_prompt(prompt) - target_embeds.append(cloob.text_encoder(cloob.tokenize(txt).to(device)).float()) - weights.append(weight) - - for prompt in images: - path, weight = parse_prompt(prompt) - img = Image.open(utils.fetch(path)).convert('RGB') - clip_size = cloob.config['image_encoder']['image_size'] - img = resize_and_center_crop(img, (clip_size, clip_size)) - batch = TF.to_tensor(img)[None].to(device) - embed = F.normalize(cloob.image_encoder(cloob.normalize(batch)).float(), dim=-1) - target_embeds.append(embed) - weights.append(weight) - - weights = torch.tensor([1 - sum(weights), *weights], device=device) - - torch.manual_seed(seed) - - def cfg_model_fn(x, t): - n = x.shape[0] - n_conds = len(target_embeds) - x_in = x.repeat([n_conds, 1, 1, 1]) - t_in = t.repeat([n_conds]) - clip_embed_in = torch.cat([*target_embeds]).repeat_interleave(n, 0) - vs = model(x_in, t_in, clip_embed_in).view([n_conds, n, *x.shape[1:]]) - v = vs.mul(weights[:, None, None, None, None]).sum(0) - return v - - def run(x, steps): - if method == 'ddpm': - return sampling.sample(cfg_model_fn, x, steps, 1., {}) - if method == 'ddim': - return sampling.sample(cfg_model_fn, x, steps, eta, {}) - if method == 'prk': - return sampling.prk_sample(cfg_model_fn, x, steps, {}) - if method == 'plms': - return sampling.plms_sample(cfg_model_fn, x, steps, {}) - if method == 'pie': - return sampling.pie_sample(cfg_model_fn, x, steps, {}) - if method == 'plms2': - return sampling.plms2_sample(cfg_model_fn, x, steps, {}) - assert False - - batch_size = n - x = torch.randn([n, n_ch, side_y, side_x], device=device) - t = torch.linspace(1, 0, steps + 1, device=device)[:-1] - steps = utils.get_spliced_ddpm_cosine_schedule(t) - pil_ims = [] - for i in trange(0, n, batch_size): - cur_batch_size = min(n - i, batch_size) - out_latents = run(x[i:i+cur_batch_size], steps) - outs = ae_model.decode(out_latents * torch.tensor(2.55).to(device)) - for j, out in enumerate(outs): - pil_ims.append(utils.to_pil_image(out)) - - return pil_ims - - -import gradio as gr - -def gen_ims(prompt, im_prompt=None, seed=None, n_steps=10, method='plms'): - if seed == None : - seed = random.randint(0, 10000) - print( prompt, im_prompt, seed, n_steps) - prompts = [prompt] - im_prompts = [] - if im_prompt != None: - im_prompts = [im_prompt] - pil_ims = generate(n=1, prompts=prompts, images=im_prompts, seed=seed, steps=n_steps, method=method) - return pil_ims[0] - -iface = gr.Interface(fn=gen_ims, - inputs=[#gr.inputs.Slider(minimum=1, maximum=1, step=1, default=1,label="Number of images"), - #gr.inputs.Slider(minimum=0, maximum=200, step=1, label='Random seed', default=0), - gr.inputs.Textbox(label="Text prompt"), - gr.inputs.Image(optional=True, label="Image prompt", type='filepath'), - #gr.inputs.Slider(minimum=10, maximum=35, step=1, default=15,label="Number of steps") - ], - outputs=[gr.outputs.Image(type="pil", label="Generated Image")], - examples=[ - ["rembrandt, angel, painting"], - ["aivazovsky, romantic, painting"], - ['nicholas, dragon, symbolism, painting'], - ['monet, van-gogh pegasus, painting'], - ['landscape, unicorn, painting'], - ["picasso, lighthouse, reflections landscape"], - ["rembrandt, portrait, angel, AI"] - ], - title='Art from Text or Image:', - description="Sourced from [WikiArt] (https://huggingface.co/datasets/huggan/wikiart) dataset", - article = 'Distilled version of a cloob-conditioned latent diffusion model [model card](https://huggingface.co/huggan/distill-ccld-wa)' - -) - -iface.launch(enable_queue=True) # , debug=True for colab debugging \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327093426.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327093426.py deleted file mode 100644 index e7f6d806358f15ac7768e960fdbf29399937ac2c..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327093426.py +++ /dev/null @@ -1,66 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - #return Image.fromarray(restored_faces[0][:,:,::-1]) - return Image.fromarray(restored_img[:, :, ::-1]) - -title = "让美好回忆更清晰" - - -description = "上传老照片,点击Submit,稍等片刻,右侧Output将照片另存为即可。" -article = "

    | | Github Repo

    visitor badge
    " - -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True,share=True) - - diff --git a/spaces/bennydou/gitea/docker/root.sh b/spaces/bennydou/gitea/docker/root.sh deleted file mode 100644 index 86b7901c4622e7a14385d8af6d354af03f1f405d..0000000000000000000000000000000000000000 --- a/spaces/bennydou/gitea/docker/root.sh +++ /dev/null @@ -1,53 +0,0 @@ -#!/bin/bash - - -apt-get update -y - -# Include libssl1.1 if available -if [[ ! -z $(apt-cache --names-only search ^libssl1.1$) ]]; then - apt-get install -y --no-install-recommends libssl1.1 -fi - -# Include libssl3 if available -if [[ ! -z $(apt-cache --names-only search ^libssl3$) ]]; then - apt-get install -y --no-install-recommends libssl3 -fi - -# base packages -apt-get install --no-install-recommends -y \ - build-essential \ - ca-certificates \ - curl \ - dnsutils \ - git \ - git-lfs \ - iputils-ping \ - less \ - lsof \ - net-tools \ - nmap \ - openssh-client \ - procps \ - rsync \ - tcpdump \ - tig \ - tree \ - unzip \ - wget - -# Ensure at least the en_US.UTF-8 UTF-8 locale is available = common need for both applications and things like the agnoster ZSH theme. -if ! grep -o -E '^\s*en_US.UTF-8\s+UTF-8' /etc/locale.gen > /dev/null; then - echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen - apt-get -y install --no-install-recommends locales - locale-gen -fi - -# Clean up -apt-get -y clean -rm -rf /var/lib/apt/lists/* - -groupadd git -useradd -s /bin/bash --gid git -m git - -wget -O /usr/bin/gitea https://dl.gitea.com/gitea/main/gitea-main-nightly-linux-amd64 -chmod +x /usr/bin/gitea diff --git a/spaces/bkhmsi/Font-To-Sketch/code/save_svg.py b/spaces/bkhmsi/Font-To-Sketch/code/save_svg.py deleted file mode 100644 index 29533485423a0797f7f19cc7f5c59ea19c00b02d..0000000000000000000000000000000000000000 --- a/spaces/bkhmsi/Font-To-Sketch/code/save_svg.py +++ /dev/null @@ -1,155 +0,0 @@ -import torch -import pydiffvg -import xml.etree.ElementTree as etree -from xml.dom import minidom -def prettify(elem): - """Return a pretty-printed XML string for the Element. - """ - rough_string = etree.tostring(elem, 'utf-8') - reparsed = minidom.parseString(rough_string) - return reparsed.toprettyxml(indent=" ") -def save_svg(filename, width, height, shapes, shape_groups, use_gamma = False, background=None): - root = etree.Element('svg') - root.set('version', '1.1') - root.set('xmlns', 'http://www.w3.org/2000/svg') - root.set('width', str(width)) - root.set('height', str(height)) - if background is not None: - print(f"setting background to {background}") - root.set('style', str(background)) - defs = etree.SubElement(root, 'defs') - g = etree.SubElement(root, 'g') - if use_gamma: - f = etree.SubElement(defs, 'filter') - f.set('id', 'gamma') - f.set('x', '0') - f.set('y', '0') - f.set('width', '100%') - f.set('height', '100%') - gamma = etree.SubElement(f, 'feComponentTransfer') - gamma.set('color-interpolation-filters', 'sRGB') - feFuncR = etree.SubElement(gamma, 'feFuncR') - feFuncR.set('type', 'gamma') - feFuncR.set('amplitude', str(1)) - feFuncR.set('exponent', str(1/2.2)) - feFuncG = etree.SubElement(gamma, 'feFuncG') - feFuncG.set('type', 'gamma') - feFuncG.set('amplitude', str(1)) - feFuncG.set('exponent', str(1/2.2)) - feFuncB = etree.SubElement(gamma, 'feFuncB') - feFuncB.set('type', 'gamma') - feFuncB.set('amplitude', str(1)) - feFuncB.set('exponent', str(1/2.2)) - feFuncA = etree.SubElement(gamma, 'feFuncA') - feFuncA.set('type', 'gamma') - feFuncA.set('amplitude', str(1)) - feFuncA.set('exponent', str(1/2.2)) - g.set('style', 'filter:url(#gamma)') - # Store color - for i, shape_group in enumerate(shape_groups): - def add_color(shape_color, name): - if isinstance(shape_color, pydiffvg.LinearGradient): - lg = shape_color - color = etree.SubElement(defs, 'linearGradient') - color.set('id', name) - color.set('x1', str(lg.begin[0].item()/width)) - color.set('y1', str(lg.begin[1].item()/height)) - color.set('x2', str(lg.end[0].item()/width)) - color.set('y2', str(lg.end[1].item()/height)) - offsets = lg.offsets.data.cpu().numpy() - stop_colors = lg.stop_colors.data.cpu().numpy() - for j in range(offsets.shape[0]): - stop = etree.SubElement(color, 'stop') - stop.set('offset', str(offsets[j])) - c = lg.stop_colors[j, :] - stop.set('stop-color', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - stop.set('stop-opacity', '{}'.format(c[3])) - if isinstance(shape_color, pydiffvg.RadialGradient): - lg = shape_color - color = etree.SubElement(defs, 'radialGradient') - color.set('id', name) - color.set('cx', str(lg.center[0].item()/width)) - color.set('cy', str(lg.center[1].item()/height)) - # this only support width=height - color.set('r', str(lg.radius[0].item()/width)) - offsets = lg.offsets.data.cpu().numpy() - stop_colors = lg.stop_colors.data.cpu().numpy() - for j in range(offsets.shape[0]): - stop = etree.SubElement(color, 'stop') - stop.set('offset', str(offsets[j])) - c = lg.stop_colors[j, :] - stop.set('stop-color', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - stop.set('stop-opacity', '{}'.format(c[3])) - if shape_group.fill_color is not None: - add_color(shape_group.fill_color, 'shape_{}_fill'.format(i)) - if shape_group.stroke_color is not None: - add_color(shape_group.stroke_color, 'shape_{}_stroke'.format(i)) - for i, shape_group in enumerate(shape_groups): - # shape = shapes[shape_group.shape_ids[0]] - for j,id in enumerate(shape_group.shape_ids): - shape = shapes[id] - if isinstance(shape, pydiffvg.Path): - if j == 0: - shape_node = etree.SubElement(g, 'path') - path_str = '' - # shape_node = etree.SubElement(g, 'path') - num_segments = shape.num_control_points.shape[0] - num_control_points = shape.num_control_points.data.cpu().numpy() - points = shape.points.data.cpu().numpy() - num_points = shape.points.shape[0] - path_str += 'M {} {}'.format(points[0, 0], points[0, 1]) - point_id = 1 - for j in range(0, num_segments): - if num_control_points[j] == 0: - p = point_id % num_points - path_str += ' L {} {}'.format(\ - points[p, 0], points[p, 1]) - point_id += 1 - elif num_control_points[j] == 1: - p1 = (point_id + 1) % num_points - path_str += ' Q {} {} {} {}'.format(\ - points[point_id, 0], points[point_id, 1], - points[p1, 0], points[p1, 1]) - point_id += 2 - elif num_control_points[j] == 2: - p2 = (point_id + 2) % num_points - path_str += ' C {} {} {} {} {} {}'.format(\ - points[point_id, 0], points[point_id, 1], - points[point_id + 1, 0], points[point_id + 1, 1], - points[p2, 0], points[p2, 1]) - point_id += 3 - else: - assert(False) - # shape_node.set('stroke-width', str(2 * shape.stroke_width.data.cpu().item())) - shape_node.set('stroke-width', str(0)) # no strokes - if shape_group.fill_color is not None: - if isinstance(shape_group.fill_color, pydiffvg.LinearGradient): - shape_node.set('fill', 'url(#shape_{}_fill)'.format(i)) - elif isinstance(shape_group.fill_color, pydiffvg.RadialGradient): - shape_node.set('fill', 'url(#shape_{}_fill)'.format(i)) - else: - c = shape_group.fill_color.data.cpu().numpy() - shape_node.set('fill', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - shape_node.set('opacity', str(c[3])) - else: - shape_node.set('fill', 'none') - if shape_group.stroke_color is not None: - if isinstance(shape_group.stroke_color, pydiffvg.LinearGradient): - shape_node.set('stroke', 'url(#shape_{}_stroke)'.format(i)) - elif isinstance(shape_group.stroke_color, pydiffvg.LinearGradient): - shape_node.set('stroke', 'url(#shape_{}_stroke)'.format(i)) - else: - c = shape_group.stroke_color.data.cpu().numpy() - shape_node.set('stroke', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - shape_node.set('stroke-opacity', str(c[3])) - shape_node.set('stroke-linecap', 'round') - shape_node.set('stroke-linejoin', 'round') - - shape_node.set('d', path_str) - - with open(filename, "w") as f: - f.write(prettify(root)) diff --git a/spaces/bobmunzir/meta-llama-Llama-2-70b-hf/README.md b/spaces/bobmunzir/meta-llama-Llama-2-70b-hf/README.md deleted file mode 100644 index 8936dea47ba363c737d8348383e5809f781bc5fd..0000000000000000000000000000000000000000 --- a/spaces/bobmunzir/meta-llama-Llama-2-70b-hf/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chat Ui Template -emoji: 🚀 -colorFrom: indigo -colorTo: blue -sdk: docker -pinned: false -app_port: 3000 -suggested_hardware: a10g-small -duplicated_from: huggingchat/chat-ui-template ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bodah/RVC-Models-bo/mygit.sh b/spaces/bodah/RVC-Models-bo/mygit.sh deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/bodrum/bodrumfenisleri/README.md b/spaces/bodrum/bodrumfenisleri/README.md deleted file mode 100644 index 9e7b4feccdaa4aeddffd89e270254c5004544114..0000000000000000000000000000000000000000 --- a/spaces/bodrum/bodrumfenisleri/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bodrumfen -emoji: 🌍 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/botlik100/kaki/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/botlik100/kaki/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/losses.py b/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/train.py b/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/train.py deleted file mode 100644 index 4dff8b280d76c53abdfc2fbce83cafaf3022ab96..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/train.py +++ /dev/null @@ -1,301 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import librosa -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) - -import commons -import utils -from data_utils import ( - TextAudioLoader, - TextAudioCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '8000' - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32,300,400,500,600,700,800,900,1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioCollate() - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank==0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(train_loader): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask,\ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths) - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank==0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - old_g=os.path.join(hps.model_dir, "G_{}.pth".format(global_step-2000)) - old_d=os.path.join(hps.model_dir, "D_{}.pth".format(global_step-2000)) - if os.path.exists(old_g): - os.remove(old_g) - if os.path.exists(old_d): - os.remove(old_d) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000) - y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0,:,:y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/candlend/vits-hoshimi/sovits/vdecoder/hifigan/env.py b/spaces/candlend/vits-hoshimi/sovits/vdecoder/hifigan/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/vdecoder/hifigan/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/data/meshes/catalog.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/data/meshes/catalog.py deleted file mode 100644 index b258f3ce11a90666b9c764541ce299384cfddf4e..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/data/meshes/catalog.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import logging -from collections import UserDict -from dataclasses import dataclass -from typing import Iterable, Optional - -from ..utils import maybe_prepend_base_path - - -@dataclass -class MeshInfo: - name: str - data: str - geodists: Optional[str] = None - symmetry: Optional[str] = None - texcoords: Optional[str] = None - - -class _MeshCatalog(UserDict): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.mesh_ids = {} - self.mesh_names = {} - self.max_mesh_id = -1 - - def __setitem__(self, key, value): - if key in self: - logger = logging.getLogger(__name__) - logger.warning( - f"Overwriting mesh catalog entry '{key}': old value {self[key]}" - f", new value {value}" - ) - mesh_id = self.mesh_ids[key] - else: - self.max_mesh_id += 1 - mesh_id = self.max_mesh_id - super().__setitem__(key, value) - self.mesh_ids[key] = mesh_id - self.mesh_names[mesh_id] = key - - def get_mesh_id(self, shape_name: str) -> int: - return self.mesh_ids[shape_name] - - def get_mesh_name(self, mesh_id: int) -> str: - return self.mesh_names[mesh_id] - - -MeshCatalog = _MeshCatalog() - - -def register_mesh(mesh_info: MeshInfo, base_path: Optional[str]) -> None: - geodists, symmetry, texcoords = mesh_info.geodists, mesh_info.symmetry, mesh_info.texcoords - if geodists: - geodists = maybe_prepend_base_path(base_path, geodists) - if symmetry: - symmetry = maybe_prepend_base_path(base_path, symmetry) - if texcoords: - texcoords = maybe_prepend_base_path(base_path, texcoords) - MeshCatalog[mesh_info.name] = MeshInfo( - name=mesh_info.name, - data=maybe_prepend_base_path(base_path, mesh_info.data), - geodists=geodists, - symmetry=symmetry, - texcoords=texcoords, - ) - - -def register_meshes(mesh_infos: Iterable[MeshInfo], base_path: Optional[str]) -> None: - for mesh_info in mesh_infos: - register_mesh(mesh_info, base_path) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/README.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/README.md deleted file mode 100644 index 75ce084530d192a522824d01b98a474d77863e68..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/README.md +++ /dev/null @@ -1,41 +0,0 @@ -# Pointly-Supervised Instance Segmentation - -Bowen Cheng, Omkar Parkhi, Alexander Kirillov - -[[`arXiv`](https://arxiv.org/abs/2104.06404)] [[`Project`](https://bowenc0221.github.io/point-sup)] [[`BibTeX`](#CitingPointSup)] - -
    - -

    - -## Data preparation -Please follow these steps to prepare your datasets: -1. Follow official Detectron2 instruction to prepare COCO dataset. Set up `DETECTRON2_DATASETS` environment variable to the location of your Detectron2 dataset. -2. Generate 10-points annotations for COCO by running: `python tools/prepare_coco_point_annotations_without_masks.py 10` - -## Training - -To train a model with 8 GPUs run: -```bash -python train_net.py --config-file configs/mask_rcnn_R_50_FPN_3x_point_sup_point_aug_coco.yaml --num-gpus 8 -``` - -## Evaluation - -Model evaluation can be done similarly: -```bash -python train_net.py --config-file configs/mask_rcnn_R_50_FPN_3x_point_sup_point_aug_coco.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint -``` - -## Citing Pointly-Supervised Instance Segmentation - -If you use PointSup, please use the following BibTeX entry. - -```BibTeX -@article{cheng2021pointly, - title={Pointly-Supervised Instance Segmentation}, - author={Bowen Cheng and Omkar Parkhi and Alexander Kirillov}, - journal={arXiv}, - year={2021} -} -``` diff --git a/spaces/cat630/ChuanhuChatGPT/app.py b/spaces/cat630/ChuanhuChatGPT/app.py deleted file mode 100644 index 4f8fc1aa1f8bee0d27ffc098fcf4c36bf2b54085..0000000000000000000000000000000000000000 --- a/spaces/cat630/ChuanhuChatGPT/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import gradio as gr -# import openai -import os -import sys -from utils import * -from presets import * - -my_api_key = "" # 在这里输入你的 API 密钥 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True - -gr.Chatbot.postprocess = postprocess - -#if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - print("Please give a api key!") - sys.exit(1) - #auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if os.path.exists("api_key.txt"): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -with gr.Blocks(css=customCSS) as demo: - gr.HTML(title) - gr.HTML('''
    复制 Space强烈建议点击上面的按钮复制一份这个Space,在你自己的Space里运行,响应更迅速、也更安全👆
    ''') - keyTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入你的OpenAI API-key...", - value=my_api_key, label="API Key", type="password", visible=not HIDE_MY_KEY).style(container=True) - chatbot = gr.Chatbot() # .style(color_map=("#1D51EE", "#585A5B")) - history = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=12): - txt = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - with gr.Row(): - emptyBtn = gr.Button("🧹 新的对话") - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除上条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - statusDisplay = gr.Markdown("status: ready") - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", - label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件(.csv)", choices=get_template_names(plain=True), multiselect=False, value=get_template_names(plain=True)[0]) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - templaeFileReadBtn = gr.Button("📂 读入模板") - with gr.Row(): - with gr.Column(scale=6): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template(get_template_names(plain=True)[0], mode=1), multiselect=False, value=load_template(get_template_names(plain=True)[0], mode=1)[0]) - with gr.Column(scale=1): - templateApplyBtn = gr.Button("⬇️ 应用") - with gr.Accordion(label="保存/加载对话历史记录", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False, value=get_history_names(plain=True)[0]) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - historyReadBtn = gr.Button("📂 读入对话") - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("参数", open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0, - step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - gr.Markdown(description) - - - txt.submit(predict, [txt, top_p, temperature, keyTxt, - chatbot, history, systemPromptTxt], [chatbot, history, statusDisplay]) - txt.submit(reset_textbox, [], [txt]) - submitBtn.click(predict, [txt, top_p, temperature, keyTxt, chatbot, - history, systemPromptTxt], [chatbot, history, statusDisplay], show_progress=True) - submitBtn.click(reset_textbox, [], [txt]) - emptyBtn.click(reset_state, outputs=[chatbot, history]) - retryBtn.click(predict, [txt, top_p, temperature, keyTxt, chatbot, history, - systemPromptTxt, TRUECOMSTANT], [chatbot, history, statusDisplay], show_progress=True) - delLastBtn.click(delete_last_conversation, [chatbot, history], [ - chatbot, history], show_progress=True) - reduceTokenBtn.click(predict, [txt, top_p, temperature, keyTxt, chatbot, history, - systemPromptTxt, FALSECONSTANT, TRUECOMSTANT], [chatbot, history, statusDisplay], show_progress=True) - saveHistoryBtn.click(save_chat_history, [ - saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyReadBtn.click(load_chat_history, [historyFileSelectDropdown, systemPromptTxt, history, chatbot], [saveFileName, systemPromptTxt, history, chatbot], show_progress=True) - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templaeFileReadBtn.click(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True) - templateApplyBtn.click(get_template_content, [promptTemplates, templateSelectDropdown, systemPromptTxt], [systemPromptTxt], show_progress=True) - -print("川虎的温馨提示:访问 http://localhost:7860 查看界面") -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -#if running in Docker -if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) -#if not running in Docker -else: - if authflag: - demo.queue().launch(share=False, auth=(username, password)) - else: - demo.queue().launch(share=False) # 改为 share=True 可以创建公开分享链接 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - #demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/ccolas/TastyPiano/src/cocktails/pipeline/__init__.py b/spaces/ccolas/TastyPiano/src/cocktails/pipeline/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/updates_note.md b/spaces/chendl/compositional_test/multimodal/YOLOX/docs/updates_note.md deleted file mode 100644 index f675f43fcc36130d3294ab2c95210a56fdfb5c8e..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/updates_note.md +++ /dev/null @@ -1,55 +0,0 @@ - -# Updates notes - -## 【2021/08/19】 - -* Support image caching for faster training, which requires large system RAM. -* Remove the dependence of apex and support torch amp training. -* Optimize the preprocessing for faster training -* Replace the older distort augmentation with new HSV aug for faster training and better performance. - -### 2X Faster training - -We optimize the data preprocess and support image caching with `--cache` flag: - -```shell -python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o [--cache] - yolox-m - yolox-l - yolox-x -``` -* -d: number of gpu devices -* -b: total batch size, the recommended number for -b is num-gpu * 8 -* --fp16: mixed precision training -* --cache: caching imgs into RAM to accelarate training, which need large system RAM. - -### Higher performance - -New models achieve **~1%** higher performance! See [Model_Zoo](model_zoo.md) for more details. - -### Support torch amp - -We now support torch.cuda.amp training and Apex is not used anymore. - -### Breaking changes - -We remove the normalization operation like -mean/std. This will make the old weights **incompatible**. - -If you still want to use old weights, you can add `--legacy' in demo and eval: - -```shell -python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu] [--legacy] -``` - -and - -```shell -python tools/eval.py -n yolox-s -c yolox_s.pth -b 64 -d 8 --conf 0.001 [--fp16] [--fuse] [--legacy] - yolox-m - yolox-l - yolox-x -``` - -But for deployment demo, we don't support the old weights anymore. Users could checkout to YOLOX version 0.1.0 to use legacy weights for deployment - - diff --git a/spaces/chendl/compositional_test/multimodal/tools/prepare_pile.py b/spaces/chendl/compositional_test/multimodal/tools/prepare_pile.py deleted file mode 100644 index e35fba8e1cecb33f551e57b190756b40139bafee..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/tools/prepare_pile.py +++ /dev/null @@ -1,31 +0,0 @@ -import datasets -import os -from tqdm import tqdm -import webdataset as wds -import json - -DATASET_ROOT = "/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/the_pile/all/train" -OUT_DIR = "/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/the_pile" -SAMPLE_PER_SHARD = 100000 - -if __name__ == "__main__": - os.makedirs(OUT_DIR) - print("load dataset...") - pile = datasets.load_from_disk(DATASET_ROOT) - total_num = pile.num_rows - print("total num:", total_num) - num = 0 - pbar = tqdm(total=total_num) - with wds.ShardWriter(OUT_DIR+"/%05d.tar", maxcount=SAMPLE_PER_SHARD, encoder=False) as sink: - for sample in pile.iter(4096): - for text, meta in zip(sample["text"], sample["meta"]): - pbar.update(1) - if meta.get("pile_set_name", None) == "Github": - continue - num += 1 - sink.write({ - '__key__': str(num), - 'txt': text.encode("utf-8"), - 'json': json.dumps(meta, indent=4).encode("utf-8"), - }) - print(f"{num} out of {total_num} is written") diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/run_distributed_eval.py b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/run_distributed_eval.py deleted file mode 100644 index 55f3839d736483440bf142f9681819928363bbcb..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/run_distributed_eval.py +++ /dev/null @@ -1,262 +0,0 @@ -#!/usr/bin/env python -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -import shutil -import time -from json import JSONDecodeError -from logging import getLogger -from pathlib import Path -from typing import Dict, List - -import torch -from torch.utils.data import DataLoader -from tqdm import tqdm - -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer -from utils import ( - Seq2SeqDataset, - calculate_bleu, - calculate_rouge, - chunks, - lmap, - load_json, - parse_numeric_n_bool_cl_kwargs, - save_json, - use_task_specific_params, - write_txt_file, -) - - -logger = getLogger(__name__) - - -def eval_data_dir( - data_dir, - save_dir: str, - model_name: str, - bs: int = 8, - max_source_length: int = 1024, - type_path="val", - n_obs=None, - fp16=False, - task="summarization", - local_rank=None, - num_return_sequences=1, - dataset_kwargs: Dict = None, - prefix="", - **generate_kwargs, -) -> Dict: - """Run evaluation on part of the data for one gpu and save to {save_dir}/rank_{rank}_output.json""" - model_name = str(model_name) - assert local_rank is not None - torch.distributed.init_process_group(backend="nccl", rank=local_rank) - - save_dir = Path(save_dir) - save_path = save_dir.joinpath(f"rank_{local_rank}_output.json") - torch.cuda.set_device(local_rank) - model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda() - if fp16: - model = model.half() - # determine if we need to increase num_beams - use_task_specific_params(model, task) # update config with task specific params - num_beams = generate_kwargs.pop("num_beams", model.config.num_beams) # AttributeError risk? - if num_return_sequences > num_beams: - num_beams = num_return_sequences - - tokenizer = AutoTokenizer.from_pretrained(model_name) - logger.info(f"Inferred tokenizer type: {tokenizer.__class__}") # if this is wrong, check config.model_type. - - if max_source_length is None: - max_source_length = tokenizer.model_max_length - if prefix is None: - prefix = prefix or getattr(model.config, "prefix", "") or "" - ds = Seq2SeqDataset( - tokenizer, - data_dir, - max_source_length, - max_target_length=1024, - type_path=type_path, - n_obs=n_obs, - prefix=prefix, - **dataset_kwargs, - ) - # I set shuffle=True for a more accurate progress bar. - # If all the longest samples are first, the prog bar estimate is too high at the beginning. - sampler = ds.make_sortish_sampler(bs, distributed=True, add_extra_examples=False, shuffle=True) - data_loader = DataLoader(ds, sampler=sampler, batch_size=bs, collate_fn=ds.collate_fn) - results = [] - for batch in tqdm(data_loader): - summaries = model.generate( - input_ids=batch["input_ids"].to(model.device), - attention_mask=batch["attention_mask"].to(model.device), - num_return_sequences=num_return_sequences, - num_beams=num_beams, - **generate_kwargs, - ) - preds = tokenizer.batch_decode(summaries, skip_special_tokens=True, clean_up_tokenization_spaces=False) - ids = batch["ids"] - if num_return_sequences > 1: - preds = chunks(preds, num_return_sequences) # batch size chunks, each of size num_return_seq - for i, pred in enumerate(preds): - results.append({"pred": pred, "id": ids[i].item()}) - save_json(results, save_path) - return results, sampler.num_replicas - - -def run_generate(): - parser = argparse.ArgumentParser( - epilog="Unspecified args like --num_beams=2 --decoder_start_token_id=4 are passed to model.generate" - ) - parser.add_argument("--data_dir", type=str, help="like cnn_dm/test.source") - parser.add_argument( - "--model_name", - type=str, - help="like facebook/bart-large-cnn,t5-base, etc.", - default="sshleifer/distilbart-xsum-12-3", - ) - parser.add_argument("--save_dir", type=str, help="where to save", default="tmp_gen") - parser.add_argument("--max_source_length", type=int, default=None) - parser.add_argument( - "--type_path", type=str, default="test", help="which subset to evaluate typically train/val/test" - ) - parser.add_argument("--task", type=str, default="summarization", help="used for task_specific_params + metrics") - parser.add_argument("--bs", type=int, default=8, required=False, help="batch size") - parser.add_argument( - "--local_rank", type=int, default=-1, required=False, help="should be passed by distributed.launch" - ) - - parser.add_argument( - "--n_obs", type=int, default=None, required=False, help="How many observations. Defaults to all." - ) - parser.add_argument( - "--num_return_sequences", type=int, default=1, required=False, help="How many sequences to return" - ) - parser.add_argument( - "--sync_timeout", - type=int, - default=600, - required=False, - help="How long should master process wait for other processes to finish.", - ) - parser.add_argument("--src_lang", type=str, default=None, required=False) - parser.add_argument("--tgt_lang", type=str, default=None, required=False) - parser.add_argument( - "--prefix", type=str, required=False, default=None, help="will be added to the begininng of src examples" - ) - parser.add_argument("--fp16", action="store_true") - parser.add_argument("--debug", action="store_true") - start_time = time.time() - args, rest = parser.parse_known_args() - generate_kwargs = parse_numeric_n_bool_cl_kwargs(rest) - if generate_kwargs and args.local_rank <= 0: - print(f"parsed the following generate kwargs: {generate_kwargs}") - json_save_dir = Path(args.save_dir + "_tmp") - Path(json_save_dir).mkdir(exist_ok=True) # this handles locking. - intermediate_files = list(json_save_dir.glob("rank_*.json")) - if intermediate_files: - raise ValueError(f"Found files at {json_save_dir} please move or remove them.") - # In theory, a node could finish and save before another node hits this. If this happens, we can address later. - dataset_kwargs = {} - if args.src_lang is not None: - dataset_kwargs["src_lang"] = args.src_lang - if args.tgt_lang is not None: - dataset_kwargs["tgt_lang"] = args.tgt_lang - - Path(args.save_dir).mkdir(exist_ok=True) - results, num_replicas = eval_data_dir( - args.data_dir, - json_save_dir, - args.model_name, - type_path=args.type_path, - bs=args.bs, - fp16=args.fp16, - task=args.task, - local_rank=args.local_rank, - n_obs=args.n_obs, - max_source_length=args.max_source_length, - num_return_sequences=args.num_return_sequences, - prefix=args.prefix, - dataset_kwargs=dataset_kwargs, - **generate_kwargs, - ) - - if args.local_rank <= 0: - save_dir = Path(args.save_dir) - save_dir.mkdir(exist_ok=True) - partial_results = gather_results_from_each_node(num_replicas, json_save_dir, args.sync_timeout) - preds = combine_partial_results(partial_results) - if args.num_return_sequences > 1: - save_path = save_dir.joinpath("pseudolabel_results.json") - print(f"Saving aggregated results at {save_path}, intermediate in {json_save_dir}/") - save_json(preds, save_path) - return - tgt_file = Path(args.data_dir).joinpath(args.type_path + ".target") - with open(tgt_file) as f: - labels = [x.rstrip() for x in f.readlines()][: len(preds)] - - # Calculate metrics, save metrics, and save _generations.txt - calc_bleu = "translation" in args.task - score_fn = calculate_bleu if calc_bleu else calculate_rouge - metric_name = "bleu" if calc_bleu else "rouge" - metrics: Dict = score_fn(preds, labels) - metrics["n_obs"] = len(preds) - runtime = time.time() - start_time - metrics["seconds_per_sample"] = round(runtime / metrics["n_obs"], 4) - metrics["n_gpus"] = num_replicas - # TODO(@stas00): add whatever metadata to metrics - metrics_save_path = save_dir.joinpath(f"{args.type_path}_{metric_name}.json") - save_json(metrics, metrics_save_path, indent=None) - print(metrics) - write_txt_file(preds, save_dir.joinpath(f"{args.type_path}_generations.txt")) - if args.debug: - write_txt_file(labels, save_dir.joinpath(f"{args.type_path}.target")) - else: - shutil.rmtree(json_save_dir) - - -def combine_partial_results(partial_results) -> List: - """Concatenate partial results into one file, then sort it by id.""" - records = [] - for partial_result in partial_results: - records.extend(partial_result) - records = sorted(records, key=lambda x: x["id"]) - preds = [x["pred"] for x in records] - return preds - - -def gather_results_from_each_node(num_replicas, save_dir, timeout) -> List[Dict[str, List]]: - # WAIT FOR lots of .json files - start_wait = time.time() - logger.info("waiting for all nodes to finish") - json_data = None - while (time.time() - start_wait) < timeout: - json_files = list(save_dir.glob("rank_*.json")) - if len(json_files) < num_replicas: - continue - try: - # make sure all json files are fully saved - json_data = lmap(load_json, json_files) - return json_data - except JSONDecodeError: - continue - else: - raise TimeoutError("Rank 0 gave up on waiting for other processes") - # Unreachable - - -if __name__ == "__main__": - # Usage for MT: - run_generate() diff --git a/spaces/chompionsawelo/whisper_transcribe/ui/ui_component.py b/spaces/chompionsawelo/whisper_transcribe/ui/ui_component.py deleted file mode 100644 index c059e91341338afc2e4fc210818418b2af96f722..0000000000000000000000000000000000000000 --- a/spaces/chompionsawelo/whisper_transcribe/ui/ui_component.py +++ /dev/null @@ -1,66 +0,0 @@ -from ui.lang_dictionary import get_ui_dict -import gradio as gr - -# Display available langauges and set default UI language -ui_lang_index = 1 -available_ui_lang = ["English", "Bahasa Indonesia"] -current_ui_lang = get_ui_dict(ui_lang_index) - -lang_radio_choice = 1 -model_dropdown_choice = 2 - -# Transcribe components -ui_lang_radio = gr.Radio( - available_ui_lang, type="index", value=available_ui_lang[ui_lang_index], interactive=True, show_label=False) -top_markdown = gr.Markdown( - current_ui_lang["top_markdown"]) -input_url = gr.Textbox( - max_lines=1, label=current_ui_lang["input_url_label"], info=current_ui_lang["input_url_info"], interactive=True) -url_download_button = gr.Button( - current_ui_lang["download_button_value"], size='sm', interactive=True) -input_video = gr.Video( - label=current_ui_lang["input_video_label"], interactive=True) -start_time = gr.Textbox( - "00:00:00", max_lines=1, placeholder="00:00:00", label=current_ui_lang["start_time_label"], interactive=True) -end_time = gr.Textbox( - "00:15:00", max_lines=1, placeholder="99:99:99", label=current_ui_lang["end_time_label"], interactive=True) -lang_radio = gr.Radio( - current_ui_lang["lang_radio_choices"], label=current_ui_lang["lang_radio_label"], info=current_ui_lang["lang_radio_info"], type='index', interactive=True) -model_dropdown = gr.Dropdown( - current_ui_lang["model_dropdown_choices"], label=current_ui_lang["model_dropdown_label"], info=current_ui_lang["model_dropdown_info"], type='index', interactive=True) -start_button = gr.Button( - current_ui_lang["start_button_value"], variant="primary", interactive=True) - -# Adjust components -middle_markdown = gr.Markdown( - current_ui_lang["middle_markdown"]) -adjust_audio = gr.Audio( - interactive=False) -adjust_speaker = gr.Textbox( - label=current_ui_lang["adjust_speaker_value"], interactive=False) -prev_button = gr.Button( - current_ui_lang["prev_button_value"], interactive=False) -next_button = gr.Button( - current_ui_lang["next_button_value"], interactive=False) -adjust_button = gr.Button( - current_ui_lang["adjust_button_value"], variant="primary", interactive=False) - -# Result components -bottom_markdown = gr.Markdown( - current_ui_lang["bottom_markdown"]) -output_video = gr.Video( - label=current_ui_lang["output_video_label"], interactive=False) -download_video_subtitle_button = gr.Button( - current_ui_lang["download_video_button_value"], interactive=False, size='sm') -output_file = gr.File( - file_count="multiple", interactive=False) -output_transcribe = gr.Textbox( - label=current_ui_lang["output_transcribe_label"], interactive=False, show_copy_button=True) - -# Summary components -summary_markdown = gr.Markdown( - current_ui_lang["summary_markdown"]) -summary_button = gr.Button( - current_ui_lang["summary_button_value"], variant="primary", interactive=False) -output_summary = gr.Textbox( - label=current_ui_lang["output_summary_label"], interactive=False, show_copy_button=True) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/exceptions.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/exceptions.py deleted file mode 100644 index c1692f396127a4cb5ffa38568be70ad67192fd59..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/exceptions.py +++ /dev/null @@ -1,49 +0,0 @@ -from typing import Any, Dict, Optional, Sequence, Type - -from pydantic import BaseModel, create_model -from starlette.exceptions import HTTPException as StarletteHTTPException -from starlette.exceptions import WebSocketException as WebSocketException # noqa: F401 - - -class HTTPException(StarletteHTTPException): - def __init__( - self, - status_code: int, - detail: Any = None, - headers: Optional[Dict[str, str]] = None, - ) -> None: - super().__init__(status_code=status_code, detail=detail, headers=headers) - - -RequestErrorModel: Type[BaseModel] = create_model("Request") -WebSocketErrorModel: Type[BaseModel] = create_model("WebSocket") - - -class FastAPIError(RuntimeError): - """ - A generic, FastAPI-specific error. - """ - - -class ValidationException(Exception): - def __init__(self, errors: Sequence[Any]) -> None: - self._errors = errors - - def errors(self) -> Sequence[Any]: - return self._errors - - -class RequestValidationError(ValidationException): - def __init__(self, errors: Sequence[Any], *, body: Any = None) -> None: - super().__init__(errors) - self.body = body - - -class WebSocketRequestValidationError(ValidationException): - pass - - -class ResponseValidationError(ValidationException): - def __init__(self, errors: Sequence[Any], *, body: Any = None) -> None: - super().__init__(errors) - self.body = body diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/colorLib/builder.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/colorLib/builder.py deleted file mode 100644 index 442bc20e4223827d8e28c9fbb0290dac6f1553dc..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/colorLib/builder.py +++ /dev/null @@ -1,659 +0,0 @@ -""" -colorLib.builder: Build COLR/CPAL tables from scratch - -""" -import collections -import copy -import enum -from functools import partial -from math import ceil, log -from typing import ( - Any, - Dict, - Generator, - Iterable, - List, - Mapping, - Optional, - Sequence, - Tuple, - Type, - TypeVar, - Union, -) -from fontTools.misc.arrayTools import intRect -from fontTools.misc.fixedTools import fixedToFloat -from fontTools.misc.treeTools import build_n_ary_tree -from fontTools.ttLib.tables import C_O_L_R_ -from fontTools.ttLib.tables import C_P_A_L_ -from fontTools.ttLib.tables import _n_a_m_e -from fontTools.ttLib.tables import otTables as ot -from fontTools.ttLib.tables.otTables import ExtendMode, CompositeMode -from .errors import ColorLibError -from .geometry import round_start_circle_stable_containment -from .table_builder import BuildCallback, TableBuilder - - -# TODO move type aliases to colorLib.types? -T = TypeVar("T") -_Kwargs = Mapping[str, Any] -_PaintInput = Union[int, _Kwargs, ot.Paint, Tuple[str, "_PaintInput"]] -_PaintInputList = Sequence[_PaintInput] -_ColorGlyphsDict = Dict[str, Union[_PaintInputList, _PaintInput]] -_ColorGlyphsV0Dict = Dict[str, Sequence[Tuple[str, int]]] -_ClipBoxInput = Union[ - Tuple[int, int, int, int, int], # format 1, variable - Tuple[int, int, int, int], # format 0, non-variable - ot.ClipBox, -] - - -MAX_PAINT_COLR_LAYER_COUNT = 255 -_DEFAULT_ALPHA = 1.0 -_MAX_REUSE_LEN = 32 - - -def _beforeBuildPaintRadialGradient(paint, source): - x0 = source["x0"] - y0 = source["y0"] - r0 = source["r0"] - x1 = source["x1"] - y1 = source["y1"] - r1 = source["r1"] - - # TODO apparently no builder_test confirms this works (?) - - # avoid abrupt change after rounding when c0 is near c1's perimeter - c = round_start_circle_stable_containment((x0, y0), r0, (x1, y1), r1) - x0, y0 = c.centre - r0 = c.radius - - # update source to ensure paint is built with corrected values - source["x0"] = x0 - source["y0"] = y0 - source["r0"] = r0 - source["x1"] = x1 - source["y1"] = y1 - source["r1"] = r1 - - return paint, source - - -def _defaultColorStop(): - colorStop = ot.ColorStop() - colorStop.Alpha = _DEFAULT_ALPHA - return colorStop - - -def _defaultVarColorStop(): - colorStop = ot.VarColorStop() - colorStop.Alpha = _DEFAULT_ALPHA - return colorStop - - -def _defaultColorLine(): - colorLine = ot.ColorLine() - colorLine.Extend = ExtendMode.PAD - return colorLine - - -def _defaultVarColorLine(): - colorLine = ot.VarColorLine() - colorLine.Extend = ExtendMode.PAD - return colorLine - - -def _defaultPaintSolid(): - paint = ot.Paint() - paint.Alpha = _DEFAULT_ALPHA - return paint - - -def _buildPaintCallbacks(): - return { - ( - BuildCallback.BEFORE_BUILD, - ot.Paint, - ot.PaintFormat.PaintRadialGradient, - ): _beforeBuildPaintRadialGradient, - ( - BuildCallback.BEFORE_BUILD, - ot.Paint, - ot.PaintFormat.PaintVarRadialGradient, - ): _beforeBuildPaintRadialGradient, - (BuildCallback.CREATE_DEFAULT, ot.ColorStop): _defaultColorStop, - (BuildCallback.CREATE_DEFAULT, ot.VarColorStop): _defaultVarColorStop, - (BuildCallback.CREATE_DEFAULT, ot.ColorLine): _defaultColorLine, - (BuildCallback.CREATE_DEFAULT, ot.VarColorLine): _defaultVarColorLine, - ( - BuildCallback.CREATE_DEFAULT, - ot.Paint, - ot.PaintFormat.PaintSolid, - ): _defaultPaintSolid, - ( - BuildCallback.CREATE_DEFAULT, - ot.Paint, - ot.PaintFormat.PaintVarSolid, - ): _defaultPaintSolid, - } - - -def populateCOLRv0( - table: ot.COLR, - colorGlyphsV0: _ColorGlyphsV0Dict, - glyphMap: Optional[Mapping[str, int]] = None, -): - """Build v0 color layers and add to existing COLR table. - - Args: - table: a raw ``otTables.COLR()`` object (not ttLib's ``table_C_O_L_R_``). - colorGlyphsV0: map of base glyph names to lists of (layer glyph names, - color palette index) tuples. Can be empty. - glyphMap: a map from glyph names to glyph indices, as returned from - ``TTFont.getReverseGlyphMap()``, to optionally sort base records by GID. - """ - if glyphMap is not None: - colorGlyphItems = sorted( - colorGlyphsV0.items(), key=lambda item: glyphMap[item[0]] - ) - else: - colorGlyphItems = colorGlyphsV0.items() - baseGlyphRecords = [] - layerRecords = [] - for baseGlyph, layers in colorGlyphItems: - baseRec = ot.BaseGlyphRecord() - baseRec.BaseGlyph = baseGlyph - baseRec.FirstLayerIndex = len(layerRecords) - baseRec.NumLayers = len(layers) - baseGlyphRecords.append(baseRec) - - for layerGlyph, paletteIndex in layers: - layerRec = ot.LayerRecord() - layerRec.LayerGlyph = layerGlyph - layerRec.PaletteIndex = paletteIndex - layerRecords.append(layerRec) - - table.BaseGlyphRecordArray = table.LayerRecordArray = None - if baseGlyphRecords: - table.BaseGlyphRecordArray = ot.BaseGlyphRecordArray() - table.BaseGlyphRecordArray.BaseGlyphRecord = baseGlyphRecords - if layerRecords: - table.LayerRecordArray = ot.LayerRecordArray() - table.LayerRecordArray.LayerRecord = layerRecords - table.BaseGlyphRecordCount = len(baseGlyphRecords) - table.LayerRecordCount = len(layerRecords) - - -def buildCOLR( - colorGlyphs: _ColorGlyphsDict, - version: Optional[int] = None, - *, - glyphMap: Optional[Mapping[str, int]] = None, - varStore: Optional[ot.VarStore] = None, - varIndexMap: Optional[ot.DeltaSetIndexMap] = None, - clipBoxes: Optional[Dict[str, _ClipBoxInput]] = None, - allowLayerReuse: bool = True, -) -> C_O_L_R_.table_C_O_L_R_: - """Build COLR table from color layers mapping. - - Args: - - colorGlyphs: map of base glyph name to, either list of (layer glyph name, - color palette index) tuples for COLRv0; or a single ``Paint`` (dict) or - list of ``Paint`` for COLRv1. - version: the version of COLR table. If None, the version is determined - by the presence of COLRv1 paints or variation data (varStore), which - require version 1; otherwise, if all base glyphs use only simple color - layers, version 0 is used. - glyphMap: a map from glyph names to glyph indices, as returned from - TTFont.getReverseGlyphMap(), to optionally sort base records by GID. - varStore: Optional ItemVarationStore for deltas associated with v1 layer. - varIndexMap: Optional DeltaSetIndexMap for deltas associated with v1 layer. - clipBoxes: Optional map of base glyph name to clip box 4- or 5-tuples: - (xMin, yMin, xMax, yMax) or (xMin, yMin, xMax, yMax, varIndexBase). - - Returns: - A new COLR table. - """ - self = C_O_L_R_.table_C_O_L_R_() - - if varStore is not None and version == 0: - raise ValueError("Can't add VarStore to COLRv0") - - if version in (None, 0) and not varStore: - # split color glyphs into v0 and v1 and encode separately - colorGlyphsV0, colorGlyphsV1 = _split_color_glyphs_by_version(colorGlyphs) - if version == 0 and colorGlyphsV1: - raise ValueError("Can't encode COLRv1 glyphs in COLRv0") - else: - # unless explicitly requested for v1 or have variations, in which case - # we encode all color glyph as v1 - colorGlyphsV0, colorGlyphsV1 = {}, colorGlyphs - - colr = ot.COLR() - - populateCOLRv0(colr, colorGlyphsV0, glyphMap) - - colr.LayerList, colr.BaseGlyphList = buildColrV1( - colorGlyphsV1, - glyphMap, - allowLayerReuse=allowLayerReuse, - ) - - if version is None: - version = 1 if (varStore or colorGlyphsV1) else 0 - elif version not in (0, 1): - raise NotImplementedError(version) - self.version = colr.Version = version - - if version == 0: - self.ColorLayers = self._decompileColorLayersV0(colr) - else: - colr.ClipList = buildClipList(clipBoxes) if clipBoxes else None - colr.VarIndexMap = varIndexMap - colr.VarStore = varStore - self.table = colr - - return self - - -def buildClipList(clipBoxes: Dict[str, _ClipBoxInput]) -> ot.ClipList: - clipList = ot.ClipList() - clipList.Format = 1 - clipList.clips = {name: buildClipBox(box) for name, box in clipBoxes.items()} - return clipList - - -def buildClipBox(clipBox: _ClipBoxInput) -> ot.ClipBox: - if isinstance(clipBox, ot.ClipBox): - return clipBox - n = len(clipBox) - clip = ot.ClipBox() - if n not in (4, 5): - raise ValueError(f"Invalid ClipBox: expected 4 or 5 values, found {n}") - clip.xMin, clip.yMin, clip.xMax, clip.yMax = intRect(clipBox[:4]) - clip.Format = int(n == 5) + 1 - if n == 5: - clip.VarIndexBase = int(clipBox[4]) - return clip - - -class ColorPaletteType(enum.IntFlag): - USABLE_WITH_LIGHT_BACKGROUND = 0x0001 - USABLE_WITH_DARK_BACKGROUND = 0x0002 - - @classmethod - def _missing_(cls, value): - # enforce reserved bits - if isinstance(value, int) and (value < 0 or value & 0xFFFC != 0): - raise ValueError(f"{value} is not a valid {cls.__name__}") - return super()._missing_(value) - - -# None, 'abc' or {'en': 'abc', 'de': 'xyz'} -_OptionalLocalizedString = Union[None, str, Dict[str, str]] - - -def buildPaletteLabels( - labels: Iterable[_OptionalLocalizedString], nameTable: _n_a_m_e.table__n_a_m_e -) -> List[Optional[int]]: - return [ - nameTable.addMultilingualName(l, mac=False) - if isinstance(l, dict) - else C_P_A_L_.table_C_P_A_L_.NO_NAME_ID - if l is None - else nameTable.addMultilingualName({"en": l}, mac=False) - for l in labels - ] - - -def buildCPAL( - palettes: Sequence[Sequence[Tuple[float, float, float, float]]], - paletteTypes: Optional[Sequence[ColorPaletteType]] = None, - paletteLabels: Optional[Sequence[_OptionalLocalizedString]] = None, - paletteEntryLabels: Optional[Sequence[_OptionalLocalizedString]] = None, - nameTable: Optional[_n_a_m_e.table__n_a_m_e] = None, -) -> C_P_A_L_.table_C_P_A_L_: - """Build CPAL table from list of color palettes. - - Args: - palettes: list of lists of colors encoded as tuples of (R, G, B, A) floats - in the range [0..1]. - paletteTypes: optional list of ColorPaletteType, one for each palette. - paletteLabels: optional list of palette labels. Each lable can be either: - None (no label), a string (for for default English labels), or a - localized string (as a dict keyed with BCP47 language codes). - paletteEntryLabels: optional list of palette entry labels, one for each - palette entry (see paletteLabels). - nameTable: optional name table where to store palette and palette entry - labels. Required if either paletteLabels or paletteEntryLabels is set. - - Return: - A new CPAL v0 or v1 table, if custom palette types or labels are specified. - """ - if len({len(p) for p in palettes}) != 1: - raise ColorLibError("color palettes have different lengths") - - if (paletteLabels or paletteEntryLabels) and not nameTable: - raise TypeError( - "nameTable is required if palette or palette entries have labels" - ) - - cpal = C_P_A_L_.table_C_P_A_L_() - cpal.numPaletteEntries = len(palettes[0]) - - cpal.palettes = [] - for i, palette in enumerate(palettes): - colors = [] - for j, color in enumerate(palette): - if not isinstance(color, tuple) or len(color) != 4: - raise ColorLibError( - f"In palette[{i}][{j}]: expected (R, G, B, A) tuple, got {color!r}" - ) - if any(v > 1 or v < 0 for v in color): - raise ColorLibError( - f"palette[{i}][{j}] has invalid out-of-range [0..1] color: {color!r}" - ) - # input colors are RGBA, CPAL encodes them as BGRA - red, green, blue, alpha = color - colors.append( - C_P_A_L_.Color(*(round(v * 255) for v in (blue, green, red, alpha))) - ) - cpal.palettes.append(colors) - - if any(v is not None for v in (paletteTypes, paletteLabels, paletteEntryLabels)): - cpal.version = 1 - - if paletteTypes is not None: - if len(paletteTypes) != len(palettes): - raise ColorLibError( - f"Expected {len(palettes)} paletteTypes, got {len(paletteTypes)}" - ) - cpal.paletteTypes = [ColorPaletteType(t).value for t in paletteTypes] - else: - cpal.paletteTypes = [C_P_A_L_.table_C_P_A_L_.DEFAULT_PALETTE_TYPE] * len( - palettes - ) - - if paletteLabels is not None: - if len(paletteLabels) != len(palettes): - raise ColorLibError( - f"Expected {len(palettes)} paletteLabels, got {len(paletteLabels)}" - ) - cpal.paletteLabels = buildPaletteLabels(paletteLabels, nameTable) - else: - cpal.paletteLabels = [C_P_A_L_.table_C_P_A_L_.NO_NAME_ID] * len(palettes) - - if paletteEntryLabels is not None: - if len(paletteEntryLabels) != cpal.numPaletteEntries: - raise ColorLibError( - f"Expected {cpal.numPaletteEntries} paletteEntryLabels, " - f"got {len(paletteEntryLabels)}" - ) - cpal.paletteEntryLabels = buildPaletteLabels(paletteEntryLabels, nameTable) - else: - cpal.paletteEntryLabels = [ - C_P_A_L_.table_C_P_A_L_.NO_NAME_ID - ] * cpal.numPaletteEntries - else: - cpal.version = 0 - - return cpal - - -# COLR v1 tables -# See draft proposal at: https://github.com/googlefonts/colr-gradients-spec - - -def _is_colrv0_layer(layer: Any) -> bool: - # Consider as COLRv0 layer any sequence of length 2 (be it tuple or list) in which - # the first element is a str (the layerGlyph) and the second element is an int - # (CPAL paletteIndex). - # https://github.com/googlefonts/ufo2ft/issues/426 - try: - layerGlyph, paletteIndex = layer - except (TypeError, ValueError): - return False - else: - return isinstance(layerGlyph, str) and isinstance(paletteIndex, int) - - -def _split_color_glyphs_by_version( - colorGlyphs: _ColorGlyphsDict, -) -> Tuple[_ColorGlyphsV0Dict, _ColorGlyphsDict]: - colorGlyphsV0 = {} - colorGlyphsV1 = {} - for baseGlyph, layers in colorGlyphs.items(): - if all(_is_colrv0_layer(l) for l in layers): - colorGlyphsV0[baseGlyph] = layers - else: - colorGlyphsV1[baseGlyph] = layers - - # sanity check - assert set(colorGlyphs) == (set(colorGlyphsV0) | set(colorGlyphsV1)) - - return colorGlyphsV0, colorGlyphsV1 - - -def _reuse_ranges(num_layers: int) -> Generator[Tuple[int, int], None, None]: - # TODO feels like something itertools might have already - for lbound in range(num_layers): - # Reuse of very large #s of layers is relatively unlikely - # +2: we want sequences of at least 2 - # otData handles single-record duplication - for ubound in range( - lbound + 2, min(num_layers + 1, lbound + 2 + _MAX_REUSE_LEN) - ): - yield (lbound, ubound) - - -class LayerReuseCache: - reusePool: Mapping[Tuple[Any, ...], int] - tuples: Mapping[int, Tuple[Any, ...]] - keepAlive: List[ot.Paint] # we need id to remain valid - - def __init__(self): - self.reusePool = {} - self.tuples = {} - self.keepAlive = [] - - def _paint_tuple(self, paint: ot.Paint): - # start simple, who even cares about cyclic graphs or interesting field types - def _tuple_safe(value): - if isinstance(value, enum.Enum): - return value - elif hasattr(value, "__dict__"): - return tuple( - (k, _tuple_safe(v)) for k, v in sorted(value.__dict__.items()) - ) - elif isinstance(value, collections.abc.MutableSequence): - return tuple(_tuple_safe(e) for e in value) - return value - - # Cache the tuples for individual Paint instead of the whole sequence - # because the seq could be a transient slice - result = self.tuples.get(id(paint), None) - if result is None: - result = _tuple_safe(paint) - self.tuples[id(paint)] = result - self.keepAlive.append(paint) - return result - - def _as_tuple(self, paints: Sequence[ot.Paint]) -> Tuple[Any, ...]: - return tuple(self._paint_tuple(p) for p in paints) - - def try_reuse(self, layers: List[ot.Paint]) -> List[ot.Paint]: - found_reuse = True - while found_reuse: - found_reuse = False - - ranges = sorted( - _reuse_ranges(len(layers)), - key=lambda t: (t[1] - t[0], t[1], t[0]), - reverse=True, - ) - for lbound, ubound in ranges: - reuse_lbound = self.reusePool.get( - self._as_tuple(layers[lbound:ubound]), -1 - ) - if reuse_lbound == -1: - continue - new_slice = ot.Paint() - new_slice.Format = int(ot.PaintFormat.PaintColrLayers) - new_slice.NumLayers = ubound - lbound - new_slice.FirstLayerIndex = reuse_lbound - layers = layers[:lbound] + [new_slice] + layers[ubound:] - found_reuse = True - break - return layers - - def add(self, layers: List[ot.Paint], first_layer_index: int): - for lbound, ubound in _reuse_ranges(len(layers)): - self.reusePool[self._as_tuple(layers[lbound:ubound])] = ( - lbound + first_layer_index - ) - - -class LayerListBuilder: - layers: List[ot.Paint] - cache: LayerReuseCache - allowLayerReuse: bool - - def __init__(self, *, allowLayerReuse=True): - self.layers = [] - if allowLayerReuse: - self.cache = LayerReuseCache() - else: - self.cache = None - - # We need to intercept construction of PaintColrLayers - callbacks = _buildPaintCallbacks() - callbacks[ - ( - BuildCallback.BEFORE_BUILD, - ot.Paint, - ot.PaintFormat.PaintColrLayers, - ) - ] = self._beforeBuildPaintColrLayers - self.tableBuilder = TableBuilder(callbacks) - - # COLR layers is unusual in that it modifies shared state - # so we need a callback into an object - def _beforeBuildPaintColrLayers(self, dest, source): - # Sketchy gymnastics: a sequence input will have dropped it's layers - # into NumLayers; get it back - if isinstance(source.get("NumLayers", None), collections.abc.Sequence): - layers = source["NumLayers"] - else: - layers = source["Layers"] - - # Convert maps seqs or whatever into typed objects - layers = [self.buildPaint(l) for l in layers] - - # No reason to have a colr layers with just one entry - if len(layers) == 1: - return layers[0], {} - - if self.cache is not None: - # Look for reuse, with preference to longer sequences - # This may make the layer list smaller - layers = self.cache.try_reuse(layers) - - # The layer list is now final; if it's too big we need to tree it - is_tree = len(layers) > MAX_PAINT_COLR_LAYER_COUNT - layers = build_n_ary_tree(layers, n=MAX_PAINT_COLR_LAYER_COUNT) - - # We now have a tree of sequences with Paint leaves. - # Convert the sequences into PaintColrLayers. - def listToColrLayers(layer): - if isinstance(layer, collections.abc.Sequence): - return self.buildPaint( - { - "Format": ot.PaintFormat.PaintColrLayers, - "Layers": [listToColrLayers(l) for l in layer], - } - ) - return layer - - layers = [listToColrLayers(l) for l in layers] - - # No reason to have a colr layers with just one entry - if len(layers) == 1: - return layers[0], {} - - paint = ot.Paint() - paint.Format = int(ot.PaintFormat.PaintColrLayers) - paint.NumLayers = len(layers) - paint.FirstLayerIndex = len(self.layers) - self.layers.extend(layers) - - # Register our parts for reuse provided we aren't a tree - # If we are a tree the leaves registered for reuse and that will suffice - if self.cache is not None and not is_tree: - self.cache.add(layers, paint.FirstLayerIndex) - - # we've fully built dest; empty source prevents generalized build from kicking in - return paint, {} - - def buildPaint(self, paint: _PaintInput) -> ot.Paint: - return self.tableBuilder.build(ot.Paint, paint) - - def build(self) -> Optional[ot.LayerList]: - if not self.layers: - return None - layers = ot.LayerList() - layers.LayerCount = len(self.layers) - layers.Paint = self.layers - return layers - - -def buildBaseGlyphPaintRecord( - baseGlyph: str, layerBuilder: LayerListBuilder, paint: _PaintInput -) -> ot.BaseGlyphList: - self = ot.BaseGlyphPaintRecord() - self.BaseGlyph = baseGlyph - self.Paint = layerBuilder.buildPaint(paint) - return self - - -def _format_glyph_errors(errors: Mapping[str, Exception]) -> str: - lines = [] - for baseGlyph, error in sorted(errors.items()): - lines.append(f" {baseGlyph} => {type(error).__name__}: {error}") - return "\n".join(lines) - - -def buildColrV1( - colorGlyphs: _ColorGlyphsDict, - glyphMap: Optional[Mapping[str, int]] = None, - *, - allowLayerReuse: bool = True, -) -> Tuple[Optional[ot.LayerList], ot.BaseGlyphList]: - if glyphMap is not None: - colorGlyphItems = sorted( - colorGlyphs.items(), key=lambda item: glyphMap[item[0]] - ) - else: - colorGlyphItems = colorGlyphs.items() - - errors = {} - baseGlyphs = [] - layerBuilder = LayerListBuilder(allowLayerReuse=allowLayerReuse) - for baseGlyph, paint in colorGlyphItems: - try: - baseGlyphs.append(buildBaseGlyphPaintRecord(baseGlyph, layerBuilder, paint)) - - except (ColorLibError, OverflowError, ValueError, TypeError) as e: - errors[baseGlyph] = e - - if errors: - failed_glyphs = _format_glyph_errors(errors) - exc = ColorLibError(f"Failed to build BaseGlyphList:\n{failed_glyphs}") - exc.errors = errors - raise exc from next(iter(errors.values())) - - layers = layerBuilder.build() - glyphs = ot.BaseGlyphList() - glyphs.BaseGlyphCount = len(baseGlyphs) - glyphs.BaseGlyphPaintRecord = baseGlyphs - return (layers, glyphs) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/webhdfs.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/webhdfs.py deleted file mode 100644 index cc595934f9a0161be24d3e300260fb73d4fd9784..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/webhdfs.py +++ /dev/null @@ -1,447 +0,0 @@ -# https://hadoop.apache.org/docs/r1.0.4/webhdfs.html - -import logging -import os -import secrets -import shutil -import tempfile -import uuid -from contextlib import suppress -from urllib.parse import quote - -import requests - -from ..spec import AbstractBufferedFile, AbstractFileSystem -from ..utils import infer_storage_options, tokenize - -logger = logging.getLogger("webhdfs") - - -class WebHDFS(AbstractFileSystem): - """ - Interface to HDFS over HTTP using the WebHDFS API. Supports also HttpFS gateways. - - Three auth mechanisms are supported: - - insecure: no auth is done, and the user is assumed to be whoever they - say they are (parameter ``user``), or a predefined value such as - "dr.who" if not given - spnego: when kerberos authentication is enabled, auth is negotiated by - requests_kerberos https://github.com/requests/requests-kerberos . - This establishes a session based on existing kinit login and/or - specified principal/password; parameters are passed with ``kerb_kwargs`` - token: uses an existing Hadoop delegation token from another secured - service. Indeed, this client can also generate such tokens when - not insecure. Note that tokens expire, but can be renewed (by a - previously specified user) and may allow for proxying. - - """ - - tempdir = str(tempfile.gettempdir()) - protocol = "webhdfs", "webHDFS" - - def __init__( - self, - host, - port=50070, - kerberos=False, - token=None, - user=None, - proxy_to=None, - kerb_kwargs=None, - data_proxy=None, - use_https=False, - **kwargs, - ): - """ - Parameters - ---------- - host: str - Name-node address - port: int - Port for webHDFS - kerberos: bool - Whether to authenticate with kerberos for this connection - token: str or None - If given, use this token on every call to authenticate. A user - and user-proxy may be encoded in the token and should not be also - given - user: str or None - If given, assert the user name to connect with - proxy_to: str or None - If given, the user has the authority to proxy, and this value is - the user in who's name actions are taken - kerb_kwargs: dict - Any extra arguments for HTTPKerberosAuth, see - ``_ - data_proxy: dict, callable or None - If given, map data-node addresses. This can be necessary if the - HDFS cluster is behind a proxy, running on Docker or otherwise has - a mismatch between the host-names given by the name-node and the - address by which to refer to them from the client. If a dict, - maps host names ``host->data_proxy[host]``; if a callable, full - URLs are passed, and function must conform to - ``url->data_proxy(url)``. - use_https: bool - Whether to connect to the Name-node using HTTPS instead of HTTP - kwargs - """ - if self._cached: - return - super().__init__(**kwargs) - self.url = "{protocol}://{host}:{port}/webhdfs/v1".format( - protocol="https" if use_https else "http", host=host, port=port - ) - self.kerb = kerberos - self.kerb_kwargs = kerb_kwargs or {} - self.pars = {} - self.proxy = data_proxy or {} - if token is not None: - if user is not None or proxy_to is not None: - raise ValueError( - "If passing a delegation token, must not set " - "user or proxy_to, as these are encoded in the" - " token" - ) - self.pars["delegation"] = token - if user is not None: - self.pars["user.name"] = user - if proxy_to is not None: - self.pars["doas"] = proxy_to - if kerberos and user is not None: - raise ValueError( - "If using Kerberos auth, do not specify the " - "user, this is handled by kinit." - ) - self._connect() - - self._fsid = "webhdfs_" + tokenize(host, port) - - @property - def fsid(self): - return self._fsid - - def _connect(self): - self.session = requests.Session() - if self.kerb: - from requests_kerberos import HTTPKerberosAuth - - self.session.auth = HTTPKerberosAuth(**self.kerb_kwargs) - - def _call(self, op, method="get", path=None, data=None, redirect=True, **kwargs): - url = self.url + quote(path or "") - args = kwargs.copy() - args.update(self.pars) - args["op"] = op.upper() - logger.debug("sending %s with %s", url, method) - out = self.session.request( - method=method.upper(), - url=url, - params=args, - data=data, - allow_redirects=redirect, - ) - if out.status_code in [400, 401, 403, 404, 500]: - try: - err = out.json() - msg = err["RemoteException"]["message"] - exp = err["RemoteException"]["exception"] - except (ValueError, KeyError): - pass - else: - if exp in ["IllegalArgumentException", "UnsupportedOperationException"]: - raise ValueError(msg) - elif exp in ["SecurityException", "AccessControlException"]: - raise PermissionError(msg) - elif exp in ["FileNotFoundException"]: - raise FileNotFoundError(msg) - else: - raise RuntimeError(msg) - out.raise_for_status() - return out - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=True, - replication=None, - permissions=None, - **kwargs, - ): - """ - - Parameters - ---------- - path: str - File location - mode: str - 'rb', 'wb', etc. - block_size: int - Client buffer size for read-ahead or write buffer - autocommit: bool - If False, writes to temporary file that only gets put in final - location upon commit - replication: int - Number of copies of file on the cluster, write mode only - permissions: str or int - posix permissions, write mode only - kwargs - - Returns - ------- - WebHDFile instance - """ - block_size = block_size or self.blocksize - return WebHDFile( - self, - path, - mode=mode, - block_size=block_size, - tempdir=self.tempdir, - autocommit=autocommit, - replication=replication, - permissions=permissions, - ) - - @staticmethod - def _process_info(info): - info["type"] = info["type"].lower() - info["size"] = info["length"] - return info - - @classmethod - def _strip_protocol(cls, path): - return infer_storage_options(path)["path"] - - @staticmethod - def _get_kwargs_from_urls(urlpath): - out = infer_storage_options(urlpath) - out.pop("path", None) - out.pop("protocol", None) - if "username" in out: - out["user"] = out.pop("username") - return out - - def info(self, path): - out = self._call("GETFILESTATUS", path=path) - info = out.json()["FileStatus"] - info["name"] = path - return self._process_info(info) - - def ls(self, path, detail=False): - out = self._call("LISTSTATUS", path=path) - infos = out.json()["FileStatuses"]["FileStatus"] - for info in infos: - self._process_info(info) - info["name"] = path.rstrip("/") + "/" + info["pathSuffix"] - if detail: - return sorted(infos, key=lambda i: i["name"]) - else: - return sorted(info["name"] for info in infos) - - def content_summary(self, path): - """Total numbers of files, directories and bytes under path""" - out = self._call("GETCONTENTSUMMARY", path=path) - return out.json()["ContentSummary"] - - def ukey(self, path): - """Checksum info of file, giving method and result""" - out = self._call("GETFILECHECKSUM", path=path, redirect=False) - if "Location" in out.headers: - location = self._apply_proxy(out.headers["Location"]) - out2 = self.session.get(location) - out2.raise_for_status() - return out2.json()["FileChecksum"] - else: - out.raise_for_status() - return out.json()["FileChecksum"] - - def home_directory(self): - """Get user's home directory""" - out = self._call("GETHOMEDIRECTORY") - return out.json()["Path"] - - def get_delegation_token(self, renewer=None): - """Retrieve token which can give the same authority to other uses - - Parameters - ---------- - renewer: str or None - User who may use this token; if None, will be current user - """ - if renewer: - out = self._call("GETDELEGATIONTOKEN", renewer=renewer) - else: - out = self._call("GETDELEGATIONTOKEN") - t = out.json()["Token"] - if t is None: - raise ValueError("No token available for this user/security context") - return t["urlString"] - - def renew_delegation_token(self, token): - """Make token live longer. Returns new expiry time""" - out = self._call("RENEWDELEGATIONTOKEN", method="put", token=token) - return out.json()["long"] - - def cancel_delegation_token(self, token): - """Stop the token from being useful""" - self._call("CANCELDELEGATIONTOKEN", method="put", token=token) - - def chmod(self, path, mod): - """Set the permission at path - - Parameters - ---------- - path: str - location to set (file or directory) - mod: str or int - posix epresentation or permission, give as oct string, e.g, '777' - or 0o777 - """ - self._call("SETPERMISSION", method="put", path=path, permission=mod) - - def chown(self, path, owner=None, group=None): - """Change owning user and/or group""" - kwargs = {} - if owner is not None: - kwargs["owner"] = owner - if group is not None: - kwargs["group"] = group - self._call("SETOWNER", method="put", path=path, **kwargs) - - def set_replication(self, path, replication): - """ - Set file replication factor - - Parameters - ---------- - path: str - File location (not for directories) - replication: int - Number of copies of file on the cluster. Should be smaller than - number of data nodes; normally 3 on most systems. - """ - self._call("SETREPLICATION", path=path, method="put", replication=replication) - - def mkdir(self, path, **kwargs): - self._call("MKDIRS", method="put", path=path) - - def makedirs(self, path, exist_ok=False): - if exist_ok is False and self.exists(path): - raise FileExistsError(path) - self.mkdir(path) - - def mv(self, path1, path2, **kwargs): - self._call("RENAME", method="put", path=path1, destination=path2) - - def rm(self, path, recursive=False, **kwargs): - self._call( - "DELETE", - method="delete", - path=path, - recursive="true" if recursive else "false", - ) - - def rm_file(self, path, **kwargs): - self.rm(path) - - def cp_file(self, lpath, rpath, **kwargs): - with self.open(lpath) as lstream: - tmp_fname = "/".join([self._parent(rpath), f".tmp.{secrets.token_hex(16)}"]) - # Perform an atomic copy (stream to a temporary file and - # move it to the actual destination). - try: - with self.open(tmp_fname, "wb") as rstream: - shutil.copyfileobj(lstream, rstream) - self.mv(tmp_fname, rpath) - except BaseException: # noqa - with suppress(FileNotFoundError): - self.rm(tmp_fname) - raise - - def _apply_proxy(self, location): - if self.proxy and callable(self.proxy): - location = self.proxy(location) - elif self.proxy: - # as a dict - for k, v in self.proxy.items(): - location = location.replace(k, v, 1) - return location - - -class WebHDFile(AbstractBufferedFile): - """A file living in HDFS over webHDFS""" - - def __init__(self, fs, path, **kwargs): - super().__init__(fs, path, **kwargs) - kwargs = kwargs.copy() - if kwargs.get("permissions", None) is None: - kwargs.pop("permissions", None) - if kwargs.get("replication", None) is None: - kwargs.pop("replication", None) - self.permissions = kwargs.pop("permissions", 511) - tempdir = kwargs.pop("tempdir") - if kwargs.pop("autocommit", False) is False: - self.target = self.path - self.path = os.path.join(tempdir, str(uuid.uuid4())) - - def _upload_chunk(self, final=False): - """Write one part of a multi-block file upload - - Parameters - ========== - final: bool - This is the last block, so should complete file, if - self.autocommit is True. - """ - out = self.fs.session.post( - self.location, - data=self.buffer.getvalue(), - headers={"content-type": "application/octet-stream"}, - ) - out.raise_for_status() - return True - - def _initiate_upload(self): - """Create remote file/upload""" - kwargs = self.kwargs.copy() - if "a" in self.mode: - op, method = "APPEND", "POST" - else: - op, method = "CREATE", "PUT" - kwargs["overwrite"] = "true" - out = self.fs._call(op, method, self.path, redirect=False, **kwargs) - location = self.fs._apply_proxy(out.headers["Location"]) - if "w" in self.mode: - # create empty file to append to - out2 = self.fs.session.put( - location, headers={"content-type": "application/octet-stream"} - ) - out2.raise_for_status() - # after creating empty file, change location to append to - out2 = self.fs._call("APPEND", "POST", self.path, redirect=False, **kwargs) - self.location = self.fs._apply_proxy(out2.headers["Location"]) - - def _fetch_range(self, start, end): - start = max(start, 0) - end = min(self.size, end) - if start >= end or start >= self.size: - return b"" - out = self.fs._call( - "OPEN", path=self.path, offset=start, length=end - start, redirect=False - ) - out.raise_for_status() - if "Location" in out.headers: - location = out.headers["Location"] - out2 = self.fs.session.get(self.fs._apply_proxy(location)) - return out2.content - else: - return out.content - - def commit(self): - self.fs.mv(self.path, self.target) - - def discard(self): - self.fs.rm(self.path) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/service.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/service.py deleted file mode 100644 index 5625246324cad3c71108a4466466d1d3b1568907..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/service.py +++ /dev/null @@ -1,228 +0,0 @@ -# Protocol Buffers - Google's data interchange format -# Copyright 2008 Google Inc. All rights reserved. -# https://developers.google.com/protocol-buffers/ -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above -# copyright notice, this list of conditions and the following disclaimer -# in the documentation and/or other materials provided with the -# distribution. -# * Neither the name of Google Inc. nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -"""DEPRECATED: Declares the RPC service interfaces. - -This module declares the abstract interfaces underlying proto2 RPC -services. These are intended to be independent of any particular RPC -implementation, so that proto2 services can be used on top of a variety -of implementations. Starting with version 2.3.0, RPC implementations should -not try to build on these, but should instead provide code generator plugins -which generate code specific to the particular RPC implementation. This way -the generated code can be more appropriate for the implementation in use -and can avoid unnecessary layers of indirection. -""" - -__author__ = 'petar@google.com (Petar Petrov)' - - -class RpcException(Exception): - """Exception raised on failed blocking RPC method call.""" - pass - - -class Service(object): - - """Abstract base interface for protocol-buffer-based RPC services. - - Services themselves are abstract classes (implemented either by servers or as - stubs), but they subclass this base interface. The methods of this - interface can be used to call the methods of the service without knowing - its exact type at compile time (analogous to the Message interface). - """ - - def GetDescriptor(): - """Retrieves this service's descriptor.""" - raise NotImplementedError - - def CallMethod(self, method_descriptor, rpc_controller, - request, done): - """Calls a method of the service specified by method_descriptor. - - If "done" is None then the call is blocking and the response - message will be returned directly. Otherwise the call is asynchronous - and "done" will later be called with the response value. - - In the blocking case, RpcException will be raised on error. - - Preconditions: - - * method_descriptor.service == GetDescriptor - * request is of the exact same classes as returned by - GetRequestClass(method). - * After the call has started, the request must not be modified. - * "rpc_controller" is of the correct type for the RPC implementation being - used by this Service. For stubs, the "correct type" depends on the - RpcChannel which the stub is using. - - Postconditions: - - * "done" will be called when the method is complete. This may be - before CallMethod() returns or it may be at some point in the future. - * If the RPC failed, the response value passed to "done" will be None. - Further details about the failure can be found by querying the - RpcController. - """ - raise NotImplementedError - - def GetRequestClass(self, method_descriptor): - """Returns the class of the request message for the specified method. - - CallMethod() requires that the request is of a particular subclass of - Message. GetRequestClass() gets the default instance of this required - type. - - Example: - method = service.GetDescriptor().FindMethodByName("Foo") - request = stub.GetRequestClass(method)() - request.ParseFromString(input) - service.CallMethod(method, request, callback) - """ - raise NotImplementedError - - def GetResponseClass(self, method_descriptor): - """Returns the class of the response message for the specified method. - - This method isn't really needed, as the RpcChannel's CallMethod constructs - the response protocol message. It's provided anyway in case it is useful - for the caller to know the response type in advance. - """ - raise NotImplementedError - - -class RpcController(object): - - """An RpcController mediates a single method call. - - The primary purpose of the controller is to provide a way to manipulate - settings specific to the RPC implementation and to find out about RPC-level - errors. The methods provided by the RpcController interface are intended - to be a "least common denominator" set of features which we expect all - implementations to support. Specific implementations may provide more - advanced features (e.g. deadline propagation). - """ - - # Client-side methods below - - def Reset(self): - """Resets the RpcController to its initial state. - - After the RpcController has been reset, it may be reused in - a new call. Must not be called while an RPC is in progress. - """ - raise NotImplementedError - - def Failed(self): - """Returns true if the call failed. - - After a call has finished, returns true if the call failed. The possible - reasons for failure depend on the RPC implementation. Failed() must not - be called before a call has finished. If Failed() returns true, the - contents of the response message are undefined. - """ - raise NotImplementedError - - def ErrorText(self): - """If Failed is true, returns a human-readable description of the error.""" - raise NotImplementedError - - def StartCancel(self): - """Initiate cancellation. - - Advises the RPC system that the caller desires that the RPC call be - canceled. The RPC system may cancel it immediately, may wait awhile and - then cancel it, or may not even cancel the call at all. If the call is - canceled, the "done" callback will still be called and the RpcController - will indicate that the call failed at that time. - """ - raise NotImplementedError - - # Server-side methods below - - def SetFailed(self, reason): - """Sets a failure reason. - - Causes Failed() to return true on the client side. "reason" will be - incorporated into the message returned by ErrorText(). If you find - you need to return machine-readable information about failures, you - should incorporate it into your response protocol buffer and should - NOT call SetFailed(). - """ - raise NotImplementedError - - def IsCanceled(self): - """Checks if the client cancelled the RPC. - - If true, indicates that the client canceled the RPC, so the server may - as well give up on replying to it. The server should still call the - final "done" callback. - """ - raise NotImplementedError - - def NotifyOnCancel(self, callback): - """Sets a callback to invoke on cancel. - - Asks that the given callback be called when the RPC is canceled. The - callback will always be called exactly once. If the RPC completes without - being canceled, the callback will be called after completion. If the RPC - has already been canceled when NotifyOnCancel() is called, the callback - will be called immediately. - - NotifyOnCancel() must be called no more than once per request. - """ - raise NotImplementedError - - -class RpcChannel(object): - - """Abstract interface for an RPC channel. - - An RpcChannel represents a communication line to a service which can be used - to call that service's methods. The service may be running on another - machine. Normally, you should not use an RpcChannel directly, but instead - construct a stub {@link Service} wrapping it. Example: - - Example: - RpcChannel channel = rpcImpl.Channel("remotehost.example.com:1234") - RpcController controller = rpcImpl.Controller() - MyService service = MyService_Stub(channel) - service.MyMethod(controller, request, callback) - """ - - def CallMethod(self, method_descriptor, rpc_controller, - request, response_class, done): - """Calls the method identified by the descriptor. - - Call the given method of the remote service. The signature of this - procedure looks the same as Service.CallMethod(), but the requirements - are less strict in one important way: the request object doesn't have to - be of any specific class as long as its descriptor is method.input_type. - """ - raise NotImplementedError diff --git a/spaces/cihyFjudo/fairness-paper-search/Command and conquer 3 kane wrath cd key changer The easiest way to change your game key.md b/spaces/cihyFjudo/fairness-paper-search/Command and conquer 3 kane wrath cd key changer The easiest way to change your game key.md deleted file mode 100644 index c25c70b6623b196c3ada224d10948c37c9d52c8e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Command and conquer 3 kane wrath cd key changer The easiest way to change your game key.md +++ /dev/null @@ -1,7 +0,0 @@ - -

    First: start menu > run > type "regedit" > ok (regedit opens)

    I had to change the serial numbers using regedit in both places:

    ==> hkey_local_machine/software/wow6432node/electronic arts/ea games/command and
    conquer generals zero hour/ergc

    ==> hkey_local_machinesoftwareelectronic artsea gamescommand and conquer generals zero
    hourergc

    For you who want to play network only, use different serial number for each computer.
    (erase all the "-")
    Pnja-tda6-tw3g-n48d-5dhq
    Pfbb-spap-tyz2-h6ue-cmtp
    Ym2s-pvc6-rl2t-ut89-sw8t
    Qfyy-ceqt-j4g8-3uwq-48uy
    N3xf-mgw6-glee-8s2a-yasu
    Azne-p748-w8c4-ssws-4e8s
    Qj9h-w286-yvkx-vx6z-kn7c
    Yhjs-gvkt-4u9y-duw4-5622
    El22-2w4z-p45k-jul7-cqyw
    Un7p-g6sa-yag4-4yl4-sg2w
    P77f-lsaq-tt7n-h796-h4pv
    Pffp-ffat-t3r8-bkcg-fvdp
    Pa6n-nyal-tufj-8fmh-bnhq
    P5wn-9aa4-tmll-bgdd-hlyy
    Pn7p-r8ak-tgjj-6vev-y29c

    -

    2 Command And Conquer 3 Tiberium Wars V1.09 No Cd Crack Command And Conquer 3 Tiberium Wars V1.09 No Cd Crack... Trainer) Command & Conquer 3: Kane's Wrath Cheat Codes, Trainers, Patch.. Hey guys, I bought the C&C 3 game for my mac years ago and love it.... Become a MacRumors Supporter for $25/year with no ads, private forums, and more!... I still have a copy of C&C 3 for Mac, can you use the cd key on that to... Edit: The patch isn't for mac, i only saw a 1.09 patch that is for the mac.. command conquer tiberium wars, command conquer tiberium wars mods, command conquer tiberium wars patch 1.09 download, command... English manual installation version of official patch v1.09 for Command & Conquer 3 Tiberium Wars. For details of what the page changes see this posting,.. Command And Conquer 3 Tiberium Wars V1.09 No Cd Crack.. C&C 3 Tiberium Wars v1.09 NoDVD Crack Fix [FLT] torrent-oyun.... Wars oyunun crack 1.0 lazm... No-CD & No-DVD Patch troubleshooting: The most common problem getting a... Oct 30, 2007 Command & Conquer 3: Tiberium Wars v1.09 All. More... Command And Conquer 3 Tiberium Wars 1.9 Crack bit.ly/2u1xodm... GDI CommandMar 18, 2011 C&C 3 Tiberium Wars No-CD Fix - Tutorial.. C&C 3 Tiberium Wars v1.09 NoDVD Crack Fix [FLT] torrent-oyun. Program FilesEACommand and Conquer 3: Tiberium WarsRetailExe Current Trainers: Command & Conquer 3 Tiberium Wars V1.9.0 Trainer +3 Command & Conquer 3: Tiberium Wars (Steam) Trainer no worries now we know most command & Conquer games or any other origin... c c 3 tiberium wars crack 1.09 download games. Command and conquer 3 tiberium wars kane edition v1.09 crack.... C c 3 tiberium wars no cd fix tutorial.. Fairlight no CD Command & Conquer 3: Tiberium Wars v1.09 All.... Command And Conquer 3 Tiberium Wars 1.09 Crack. guild wars 2... Command & conquer 3 tiberium wars crack no cd C&C; Fairlight no CD Command amp Conquer 3: Tiberium Wars Kane Edition v1.. I just installed Command and Conquer 3 Tiberium wars and was asked to replace the original.... Any ideas?! patch is 1.09(kane edition). command conquer tiberium wars command conquer tiberium wars, command conquer tiberium wars walkthrough, command & conquer tiberium wars cheats, command & conquer tiberium wars mods, command & conquer tiberium wars trainer, command conquer tiberium wars xbox 360, command & conquer tiberium wars cd key, command & conquer tiberium wars steam, command & conquer tiberium wars cast, command conquer tiberium wars patch 1.09 download, command & conquer tiberium wars crack, command & conquer tiberium wars 3 Jump to C&C 3: Tiberium Wars v1.04 [GERMAN] No-DVD/Fixed EXE Apply the official C&C 3: Tiberium Wars v1.04 Patch.... C&C 3: Tiberium Wars v C&C 3: Tiberium Wars CD-KEY... C&C 3: Tiberium Wars PROPER NO INTRO... C&C 3: Tiberium Wars v TRAINER, Command And Conquer 3 Tiberium Wars No Cd Patch. July Command & Conquer: Tiberian Sun v TRAINER, I recently re-installed Command and Conquer 3, and patched it up to the latest version, 1.09, previous to the patch the game worked perfectly,... Command & Conquer: Tiberian Sun - Firestorm v Trainer.... Command And Conquer 3 Tiberium Wars No Cd Crack 09 Crack 28 MB... command conquer tiberium wars, command conquer tiberium wars mods, command conquer tiberium wars patch 1.09 download, command... Mods Tiberium Essence Command & Conquer: Red Alert 2 - New Age... C&C Red Alert 2 : Yuri Revenge C&C Red Alert 3 Counter-Strike... Jan 13, 2016 Red Alert 3 v Game mod - Download The file New Age of War Remastered v.... Not Available: No-CD & Fixed Executables/Images for this Sep 16,... Command & Conquer 3: Tiberium Wars Game Fixes, No-CD Game Fixes,... COMMAND & CONQUER 3: TIBERIUM WARS v1.1 [ENGLISH] NO- DVD/FIXED DLL... CONQUER 3: TIBERIUM WARS v1.09 [ENGLISH] NO-DVD/FIXED DLL (12MB)... No-CD & No- DVD Patch troubleshooting: The most common problem getting a... Command And Conquer 3 Tiberium Wars No Cd Crack 1.09 torrent Software. Download millions of torrents with TV Tiberium CC Generals.. Red Alert 2 Map Editor The latest vers RoboCop: 0 0: 3 Citadel Survival.... Evil SURVIVAL v1 A Command & Conquer: Red Alert 2 Yuri's Revenge... Just pure gameplay with cool music and no commentary at 60 fps Missing... C&C Red Alert 2, Tiberium Wars,Tiberian Sun, Generals, Renegade, patch Patches C&C 3 Patch v1.09. Description. English manual installation version of official patch v1.09 for Command & Conquer 3 Tiberium Wars. For details of what the... 2 / 5

    -

    Command and conquer 3 kane wrath cd key changer


    Download Ziphttps://tinurli.com/2uwhMN



    -

    3 command & conquer tiberium wars cheats More Command & Conquer 3: Tiberium Wars Kane Edition Fixes. Fairlight no CD Command & Conquer 3: Tiberium Wars Kane Edition v1.0 All Command... Download command and conquer tiberium wars 3 no cd crack... 3 Tiberium Wars - Kane Edition v1.0 No DVD Patch Play Fix Exe PC bar and.... Free downloadable content like Command & Conquer 3: Tiberium Wars V Fairlight no CD Command & Conquer 3: Tiberium Wars v1.09 All.. 09 patch for the English version of Command & Conquer 3: Tiberium Wars, which fixes several... Command And Conquer 3 Tiberium Wars V1.09 No Cd Crack.. Tiberium Wars No-cd PatchCommand & Conquer 3: Tiberium Wars is the long-awaited... Command & Conquer 3: Tiberium Wars Patch v Command & Conquer 3: Tiberium Wars Cheat Codes, Trainers, Patch Updates, Demos, Downloads, Cheats Trainer, Tweaks & Game Patch Fixes are featured on this... Enable No Fog Of War... Command & Conquer 3: Tiberium Wars v1.07 & v Trainer... Command & Conquer 3: Tiberium Wars CD Key Changer #2. 19 Aug 2020 Command & Conquer: Red Alert 3 - Red Alert: Armor Rush Mod, free and... Shock Therapy: Non-C&C games Crysis Wars Tiberian Genesis: StarCraft II Red... Install Red Alert 3 Mod Steam Command And Conquer Red Alert 3 PC Game, is no... Aug 12, 2012 Red Alert 2: Yuris Revenge - Revolution patch v1.. Command and conquer 3 tiberium wars mac no cd crack.... Game Tools: C&C 3 v1.09 GDI CHEAT MOD; C&C 3: Kane's Wrath SCREEN... Download the v1.09 patch and execute that from the wineprefix after install has completed. Subsequently, you may wish to download a NO-CD crack. Install. PC Game Fix Crack for Command & Conquer 3: Tiberium Wars v1 09,.... y de aca el no dvd para que puedas jugar sin el cd original, Mar... command & conquer tiberium wars trainer 09 for Command & Conquer 3 Tiberium Wars. Tiberium Wars Patch v1.09 now from the world's largest gaming download site, FilePlanet!.. Command & Conquer 4: Tiberian Twilight Free Download PC Game... Some... Command conquer 3 tiberium wars crack no cd directx error command and conquer tiberium wars origin 09 All. 0 Trainer least the Origin & Original CD 4 Jun 2019 I need help in trying to download DirectX Command & Conquer 3: Tiberium Wars Crack Plus Keygen It is Not Supported: No CD-ROM version of the game NOTE: DirectX may require.. Command & Conquer 3 Tiberium Wars v years 11 MB 4 0 Command And... + Crack 6 3 Tiberium Wars Kane Edition [English][PCDVD] 1 0 Command And... Command & Conquer 3: Tiberium Wars Install.NET 2.0 BATTERY.NOCD.. Please change the name from Command And Conquer 3 : Tiberium Wars (Kane Edition) Patch 1.09 to its official name, which would be Command & Conquer 3:... Command And Conquer 3 Tiberium Wars Patch 1.09 Crack, bleach... Command & Conquer 3: Tiberium Wars Game Fixes, No-CD Game Fixes... You should re-install the game from a CD/DVD." What i did: Downloaded the patch from File Planet. Placed install in C&C 3 folder, Ran the patch, at the end, the... Patch 1.09 for Command & Conquer 3: Tiberium Wars was released in... This Kane Edition map no longer has Tiberium fields along the narrow... Final (Patch 2) - Game mod - Download The file Contra 009 v Contra 009 u6 +11 TRAINER; C&C Generals: Zero Hour v1.... Additional Command & Conquer: Generals: Zero Hour Game Fixes, No-CD Game Fixes, No-CD... 0 a total of 9 NEW Generals have been added 3 Field Commanders have been added to each Free spy apps for Android Free apps for Android and ios Call Detector You can use it to save tracks from an Audio CD as any of WAV, MP3, OGG,... as it runs great with no issues; even ran well in Linux via Steam's Proton with... title Command & Conquer 3: Tiberium Wars Complete Collection [MULTi10] for PC [14.. Command And Conquer 3 Tiberium Wars No Cd Crack V1.0instmank... wars 3 patch 1.09, command & conquer tiberium wars cd key, Dec 03, 2020 Suffice you don't want to run Command Prompt every... Use the power wisely, but know that no matter how effective your rule, your... 3 Mods Game Watcher 14:51 3-Sep-20 Crusader Kings 3 Patch Notes - Update of darkcompare Immortal Realms: Vampire Wars Steam key cd key... The command line version RAR is available for Linux, FreeBSD and MAC OS X. If you ever... Introduction Of Command & Conquer 3 Kane's Wrath PC Game.... WinRAR Full Crack offers a set of tools that integrates directly with Windows... Wrath PC Game Is An Expansion To The Command & Conquer 3 Tiberium Wars PC... Visit MAIN N E T W O R K Command & Conquer 3 Tiberium Wars... Some No-CD/Fixed EXE files work fine in Single Player mode but are... Command And Conquer 3 Kane's Wrath Patch 1.09 Crack... Um.I'm not actually certain what 'Nó DVD-ROM' / 'Nó CD-ROM' error you mean,... Command Conquer 3 Tiberium Wars Patch 1.08 command conquer... wars no cd patch, command and conquer 3 tiberium wars v1 09 patch,... Do I still need the no-cd.dat patch? Command & Conquer 3: Tiberium Wars v1.09 All Skip to navigation Skip to main. Command & Conquer 3:.. Tiberium Wars V1 09 Patch Crack 2 In purchase to unpack this document after... Command And Conquer 3 Tiberium Wars Patch V /Rebalanced Chart: Tournament Tower system/ This map no more offers a third-tier... Command & Conquer 3: Tiberium Wars v1.09 All.. comcommand CONQUER 3 TIBERIUM WARS Free Automated Malware Analysis... ClanHomeworld 2 no cd crack pcc&c3 version 1.09 doesn't like me.. C&c 3 Tiberium Wars No Cd Crack C&c 3 Tiberium Wars No Cd Crack 1.0, inkling in illustrator cc crack. Loading.,Humpier,,,Plascencia,... tiberium_wars_1.09crack.7z. 12 MB C&C 3 Tiberium Wars Patch crack.rar. 3 / 5

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Gratis Game Dynasty Warrior 5 Pc Full Version The Ultimate Guide.md b/spaces/cihyFjudo/fairness-paper-search/Gratis Game Dynasty Warrior 5 Pc Full Version The Ultimate Guide.md deleted file mode 100644 index dbffeb4c4607caa15b97c166cd6e36cd77d0fc24..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Gratis Game Dynasty Warrior 5 Pc Full Version The Ultimate Guide.md +++ /dev/null @@ -1,7 +0,0 @@ - -

    We may have multiple downloads for few games when different versions are available.Also, we try to upload manuals and extra documentation when possible. If you have additional files to contribute or have the game in another language, please contact us!

    -

    Gratis Game Dynasty Warrior 5 Pc Full Version


    Downloadhttps://tinurli.com/2uwhCK



    -

    This version of the game is a combination of the original game and some Xtreme Legends features, including Legend Mode and Xtreme Mode. It does not include Edit Mode or Destiny Mode, but it does include the new items from Xtreme Legends. It includes gamepad support, allowing the user to use PlayStation 2 or Xbox 360-style controllers compatible with Windows.

    -

    Dynasty Warriors 5: Empires received "mixed" reviews according to video game review aggregator Metacritic.[47][48] In Japan, Famitsu gave it a score of one nine, two eights, and one nine for the Xbox 360 version, and all four eights for the PS2 version.[33]

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Watch Kaalo Hindi Dubbed Movie Torrent in HD The 18th Century Witch Saga.md b/spaces/cihyFjudo/fairness-paper-search/Watch Kaalo Hindi Dubbed Movie Torrent in HD The 18th Century Witch Saga.md deleted file mode 100644 index 3b474d78c5e5f683271ee1a3b2e445a33a2adee2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Watch Kaalo Hindi Dubbed Movie Torrent in HD The 18th Century Witch Saga.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Kaalo Hindi Dubbed Movie Torrent


    Downloadhttps://tinurli.com/2uwiJ1



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cleanmaster/so-vits-svc-akagi/vdecoder/hifigan/env.py b/spaces/cleanmaster/so-vits-svc-akagi/vdecoder/hifigan/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/so-vits-svc-akagi/vdecoder/hifigan/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/v5/schema/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/v5/schema/__init__.py deleted file mode 100644 index 123a3fb5f048408f59a80cc0fa80097b652ceebb..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/v5/schema/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# ruff: noqa -from .core import * -from .channels import * -SCHEMA_VERSION = 'v5.8.0' -SCHEMA_URL = 'https://vega.github.io/schema/vega-lite/v5.8.0.json' diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_levels.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_levels.h deleted file mode 100644 index 310d79e51a2e0a45e6f9000c872342a3d4749ba1..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_levels.h +++ /dev/null @@ -1,51 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_H264_LEVELS_H -#define AVCODEC_H264_LEVELS_H - - -#include - -typedef struct H264LevelDescriptor { - char name[4]; // Large enough for all current levels like "4.1" - uint8_t level_idc; - uint8_t constraint_set3_flag; - uint32_t max_mbps; - uint32_t max_fs; - uint32_t max_dpb_mbs; - uint32_t max_br; - uint32_t max_cpb; - uint16_t max_v_mv_r; - uint8_t min_cr; - uint8_t max_mvs_per_2mb; -} H264LevelDescriptor; - -/** - * Guess the level of a stream from some parameters. - * - * Unknown parameters may be zero, in which case they are ignored. - */ -const H264LevelDescriptor *ff_h264_guess_level(int profile_idc, - int64_t bitrate, - int framerate, - int width, int height, - int max_dec_frame_buffering); - - -#endif /* AVCODEC_H264_LEVELS_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dec.c deleted file mode 100644 index 2d691731c5d5277908c043831e265c9cedd3f85a..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dec.c +++ /dev/null @@ -1,1106 +0,0 @@ -/* - * H.26L/H.264/AVC/JVT/14496-10/... decoder - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * H.264 / AVC / MPEG-4 part10 codec. - * @author Michael Niedermayer - */ - -#define UNCHECKED_BITSTREAM_READER 1 - -#include "config_components.h" - -#include "libavutil/avassert.h" -#include "libavutil/imgutils.h" -#include "libavutil/opt.h" -#include "libavutil/thread.h" -#include "libavutil/video_enc_params.h" - -#include "codec_internal.h" -#include "internal.h" -#include "error_resilience.h" -#include "avcodec.h" -#include "h264.h" -#include "h264dec.h" -#include "h2645_parse.h" -#include "h264data.h" -#include "h264_ps.h" -#include "golomb.h" -#include "hwconfig.h" -#include "mpegutils.h" -#include "profiles.h" -#include "rectangle.h" -#include "thread.h" -#include "threadframe.h" - -const uint16_t ff_h264_mb_sizes[4] = { 256, 384, 512, 768 }; - -int avpriv_h264_has_num_reorder_frames(AVCodecContext *avctx) -{ - H264Context *h = avctx->priv_data; - return h && h->ps.sps ? h->ps.sps->num_reorder_frames : 0; -} - -static void h264_er_decode_mb(void *opaque, int ref, int mv_dir, int mv_type, - int (*mv)[2][4][2], - int mb_x, int mb_y, int mb_intra, int mb_skipped) -{ - H264Context *h = opaque; - H264SliceContext *sl = &h->slice_ctx[0]; - - sl->mb_x = mb_x; - sl->mb_y = mb_y; - sl->mb_xy = mb_x + mb_y * h->mb_stride; - memset(sl->non_zero_count_cache, 0, sizeof(sl->non_zero_count_cache)); - av_assert1(ref >= 0); - /* FIXME: It is possible albeit uncommon that slice references - * differ between slices. We take the easy approach and ignore - * it for now. If this turns out to have any relevance in - * practice then correct remapping should be added. */ - if (ref >= sl->ref_count[0]) - ref = 0; - if (!sl->ref_list[0][ref].data[0]) { - av_log(h->avctx, AV_LOG_DEBUG, "Reference not available for error concealing\n"); - ref = 0; - } - if ((sl->ref_list[0][ref].reference&3) != 3) { - av_log(h->avctx, AV_LOG_DEBUG, "Reference invalid\n"); - return; - } - fill_rectangle(&h->cur_pic.ref_index[0][4 * sl->mb_xy], - 2, 2, 2, ref, 1); - fill_rectangle(&sl->ref_cache[0][scan8[0]], 4, 4, 8, ref, 1); - fill_rectangle(sl->mv_cache[0][scan8[0]], 4, 4, 8, - pack16to32((*mv)[0][0][0], (*mv)[0][0][1]), 4); - sl->mb_mbaff = - sl->mb_field_decoding_flag = 0; - ff_h264_hl_decode_mb(h, &h->slice_ctx[0]); -} - -void ff_h264_draw_horiz_band(const H264Context *h, H264SliceContext *sl, - int y, int height) -{ - AVCodecContext *avctx = h->avctx; - const AVFrame *src = h->cur_pic.f; - const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt); - int vshift = desc->log2_chroma_h; - const int field_pic = h->picture_structure != PICT_FRAME; - if (field_pic) { - height <<= 1; - y <<= 1; - } - - height = FFMIN(height, avctx->height - y); - - if (field_pic && h->first_field && !(avctx->slice_flags & SLICE_FLAG_ALLOW_FIELD)) - return; - - if (avctx->draw_horiz_band) { - int offset[AV_NUM_DATA_POINTERS]; - int i; - - offset[0] = y * src->linesize[0]; - offset[1] = - offset[2] = (y >> vshift) * src->linesize[1]; - for (i = 3; i < AV_NUM_DATA_POINTERS; i++) - offset[i] = 0; - - emms_c(); - - avctx->draw_horiz_band(avctx, src, offset, - y, h->picture_structure, height); - } -} - -void ff_h264_free_tables(H264Context *h) -{ - int i; - - av_freep(&h->intra4x4_pred_mode); - av_freep(&h->chroma_pred_mode_table); - av_freep(&h->cbp_table); - av_freep(&h->mvd_table[0]); - av_freep(&h->mvd_table[1]); - av_freep(&h->direct_table); - av_freep(&h->non_zero_count); - av_freep(&h->slice_table_base); - h->slice_table = NULL; - av_freep(&h->list_counts); - - av_freep(&h->mb2b_xy); - av_freep(&h->mb2br_xy); - - av_buffer_pool_uninit(&h->qscale_table_pool); - av_buffer_pool_uninit(&h->mb_type_pool); - av_buffer_pool_uninit(&h->motion_val_pool); - av_buffer_pool_uninit(&h->ref_index_pool); - -#if CONFIG_ERROR_RESILIENCE - av_freep(&h->er.mb_index2xy); - av_freep(&h->er.error_status_table); - av_freep(&h->er.er_temp_buffer); - av_freep(&h->dc_val_base); -#endif - - for (i = 0; i < h->nb_slice_ctx; i++) { - H264SliceContext *sl = &h->slice_ctx[i]; - - av_freep(&sl->bipred_scratchpad); - av_freep(&sl->edge_emu_buffer); - av_freep(&sl->top_borders[0]); - av_freep(&sl->top_borders[1]); - - sl->bipred_scratchpad_allocated = 0; - sl->edge_emu_buffer_allocated = 0; - sl->top_borders_allocated[0] = 0; - sl->top_borders_allocated[1] = 0; - } -} - -int ff_h264_alloc_tables(H264Context *h) -{ - ERContext *const er = &h->er; - const int big_mb_num = h->mb_stride * (h->mb_height + 1); - const int row_mb_num = 2*h->mb_stride*FFMAX(h->nb_slice_ctx, 1); - const int st_size = big_mb_num + h->mb_stride; - int x, y; - - if (!FF_ALLOCZ_TYPED_ARRAY(h->intra4x4_pred_mode, row_mb_num * 8) || - !FF_ALLOCZ_TYPED_ARRAY(h->non_zero_count, big_mb_num) || - !FF_ALLOCZ_TYPED_ARRAY(h->slice_table_base, st_size) || - !FF_ALLOCZ_TYPED_ARRAY(h->cbp_table, big_mb_num) || - !FF_ALLOCZ_TYPED_ARRAY(h->chroma_pred_mode_table, big_mb_num) || - !FF_ALLOCZ_TYPED_ARRAY(h->mvd_table[0], row_mb_num * 8) || - !FF_ALLOCZ_TYPED_ARRAY(h->mvd_table[1], row_mb_num * 8) || - !FF_ALLOCZ_TYPED_ARRAY(h->direct_table, big_mb_num * 4) || - !FF_ALLOCZ_TYPED_ARRAY(h->list_counts, big_mb_num) || - !FF_ALLOCZ_TYPED_ARRAY(h->mb2b_xy, big_mb_num) || - !FF_ALLOCZ_TYPED_ARRAY(h->mb2br_xy, big_mb_num)) - return AVERROR(ENOMEM); - h->slice_ctx[0].intra4x4_pred_mode = h->intra4x4_pred_mode; - h->slice_ctx[0].mvd_table[0] = h->mvd_table[0]; - h->slice_ctx[0].mvd_table[1] = h->mvd_table[1]; - memset(h->slice_table_base, -1, - st_size * sizeof(*h->slice_table_base)); - h->slice_table = h->slice_table_base + h->mb_stride * 2 + 1; - for (y = 0; y < h->mb_height; y++) - for (x = 0; x < h->mb_width; x++) { - const int mb_xy = x + y * h->mb_stride; - const int b_xy = 4 * x + 4 * y * h->b_stride; - - h->mb2b_xy[mb_xy] = b_xy; - h->mb2br_xy[mb_xy] = 8 * (FMO ? mb_xy : (mb_xy % (2 * h->mb_stride))); - } - - if (CONFIG_ERROR_RESILIENCE) { - const int er_size = h->mb_height * h->mb_stride * (4*sizeof(int) + 1); - int mb_array_size = h->mb_height * h->mb_stride; - int y_size = (2 * h->mb_width + 1) * (2 * h->mb_height + 1); - int yc_size = y_size + 2 * big_mb_num; - - /* init ER */ - er->avctx = h->avctx; - er->decode_mb = h264_er_decode_mb; - er->opaque = h; - er->quarter_sample = 1; - - er->mb_num = h->mb_num; - er->mb_width = h->mb_width; - er->mb_height = h->mb_height; - er->mb_stride = h->mb_stride; - er->b8_stride = h->mb_width * 2 + 1; - - // error resilience code looks cleaner with this - if (!FF_ALLOCZ_TYPED_ARRAY(er->mb_index2xy, h->mb_num + 1) || - !FF_ALLOCZ_TYPED_ARRAY(er->error_status_table, mb_array_size) || - !FF_ALLOCZ_TYPED_ARRAY(er->er_temp_buffer, er_size) || - !FF_ALLOCZ_TYPED_ARRAY(h->dc_val_base, yc_size)) - return AVERROR(ENOMEM); // ff_h264_free_tables will clean up for us - - for (y = 0; y < h->mb_height; y++) - for (x = 0; x < h->mb_width; x++) - er->mb_index2xy[x + y * h->mb_width] = x + y * h->mb_stride; - - er->mb_index2xy[h->mb_height * h->mb_width] = (h->mb_height - 1) * - h->mb_stride + h->mb_width; - er->dc_val[0] = h->dc_val_base + h->mb_width * 2 + 2; - er->dc_val[1] = h->dc_val_base + y_size + h->mb_stride + 1; - er->dc_val[2] = er->dc_val[1] + big_mb_num; - for (int i = 0; i < yc_size; i++) - h->dc_val_base[i] = 1024; - } - - return 0; -} - -/** - * Init slice context - */ -void ff_h264_slice_context_init(H264Context *h, H264SliceContext *sl) -{ - sl->ref_cache[0][scan8[5] + 1] = - sl->ref_cache[0][scan8[7] + 1] = - sl->ref_cache[0][scan8[13] + 1] = - sl->ref_cache[1][scan8[5] + 1] = - sl->ref_cache[1][scan8[7] + 1] = - sl->ref_cache[1][scan8[13] + 1] = PART_NOT_AVAILABLE; - - sl->er = &h->er; -} - -static int h264_init_pic(H264Picture *pic) -{ - pic->f = av_frame_alloc(); - if (!pic->f) - return AVERROR(ENOMEM); - - pic->f_grain = av_frame_alloc(); - if (!pic->f_grain) - return AVERROR(ENOMEM); - - return 0; -} - -static int h264_init_context(AVCodecContext *avctx, H264Context *h) -{ - int i, ret; - - h->avctx = avctx; - h->cur_chroma_format_idc = -1; - - h->width_from_caller = avctx->width; - h->height_from_caller = avctx->height; - - h->workaround_bugs = avctx->workaround_bugs; - h->flags = avctx->flags; - h->poc.prev_poc_msb = 1 << 16; - h->recovery_frame = -1; - h->frame_recovered = 0; - h->poc.prev_frame_num = -1; - h->sei.common.frame_packing.arrangement_cancel_flag = -1; - h->sei.common.unregistered.x264_build = -1; - - h->next_outputed_poc = INT_MIN; - for (i = 0; i < FF_ARRAY_ELEMS(h->last_pocs); i++) - h->last_pocs[i] = INT_MIN; - - ff_h264_sei_uninit(&h->sei); - - h->nb_slice_ctx = (avctx->active_thread_type & FF_THREAD_SLICE) ? avctx->thread_count : 1; - h->slice_ctx = av_calloc(h->nb_slice_ctx, sizeof(*h->slice_ctx)); - if (!h->slice_ctx) { - h->nb_slice_ctx = 0; - return AVERROR(ENOMEM); - } - - for (i = 0; i < H264_MAX_PICTURE_COUNT; i++) { - if ((ret = h264_init_pic(&h->DPB[i])) < 0) - return ret; - } - - if ((ret = h264_init_pic(&h->cur_pic)) < 0) - return ret; - - if ((ret = h264_init_pic(&h->last_pic_for_ec)) < 0) - return ret; - - for (i = 0; i < h->nb_slice_ctx; i++) - h->slice_ctx[i].h264 = h; - - return 0; -} - -static void h264_free_pic(H264Context *h, H264Picture *pic) -{ - ff_h264_unref_picture(h, pic); - av_frame_free(&pic->f); - av_frame_free(&pic->f_grain); -} - -static av_cold int h264_decode_end(AVCodecContext *avctx) -{ - H264Context *h = avctx->priv_data; - int i; - - ff_h264_remove_all_refs(h); - ff_h264_free_tables(h); - - for (i = 0; i < H264_MAX_PICTURE_COUNT; i++) { - h264_free_pic(h, &h->DPB[i]); - } - memset(h->delayed_pic, 0, sizeof(h->delayed_pic)); - - h->cur_pic_ptr = NULL; - - av_freep(&h->slice_ctx); - h->nb_slice_ctx = 0; - - ff_h264_sei_uninit(&h->sei); - ff_h264_ps_uninit(&h->ps); - - ff_h2645_packet_uninit(&h->pkt); - - h264_free_pic(h, &h->cur_pic); - h264_free_pic(h, &h->last_pic_for_ec); - - return 0; -} - -static AVOnce h264_vlc_init = AV_ONCE_INIT; - -static av_cold int h264_decode_init(AVCodecContext *avctx) -{ - H264Context *h = avctx->priv_data; - int ret; - - ret = h264_init_context(avctx, h); - if (ret < 0) - return ret; - - ret = ff_thread_once(&h264_vlc_init, ff_h264_decode_init_vlc); - if (ret != 0) { - av_log(avctx, AV_LOG_ERROR, "pthread_once has failed."); - return AVERROR_UNKNOWN; - } - - avctx->ticks_per_frame = 2; - - if (!avctx->internal->is_copy) { - if (avctx->extradata_size > 0 && avctx->extradata) { - ret = ff_h264_decode_extradata(avctx->extradata, avctx->extradata_size, - &h->ps, &h->is_avc, &h->nal_length_size, - avctx->err_recognition, avctx); - if (ret < 0) { - int explode = avctx->err_recognition & AV_EF_EXPLODE; - av_log(avctx, explode ? AV_LOG_ERROR: AV_LOG_WARNING, - "Error decoding the extradata\n"); - if (explode) { - return ret; - } - ret = 0; - } - } - } - - if (h->ps.sps && h->ps.sps->bitstream_restriction_flag && - h->avctx->has_b_frames < h->ps.sps->num_reorder_frames) { - h->avctx->has_b_frames = h->ps.sps->num_reorder_frames; - } - - ff_h264_flush_change(h); - - if (h->enable_er < 0 && (avctx->active_thread_type & FF_THREAD_SLICE)) - h->enable_er = 0; - - if (h->enable_er && (avctx->active_thread_type & FF_THREAD_SLICE)) { - av_log(avctx, AV_LOG_WARNING, - "Error resilience with slice threads is enabled. It is unsafe and unsupported and may crash. " - "Use it at your own risk\n"); - } - - return 0; -} - -/** - * instantaneous decoder refresh. - */ -static void idr(H264Context *h) -{ - int i; - ff_h264_remove_all_refs(h); - h->poc.prev_frame_num = - h->poc.prev_frame_num_offset = 0; - h->poc.prev_poc_msb = 1<<16; - h->poc.prev_poc_lsb = -1; - for (i = 0; i < FF_ARRAY_ELEMS(h->last_pocs); i++) - h->last_pocs[i] = INT_MIN; -} - -/* forget old pics after a seek */ -void ff_h264_flush_change(H264Context *h) -{ - int i, j; - - h->next_outputed_poc = INT_MIN; - h->prev_interlaced_frame = 1; - idr(h); - - h->poc.prev_frame_num = -1; - if (h->cur_pic_ptr) { - h->cur_pic_ptr->reference = 0; - for (j=i=0; h->delayed_pic[i]; i++) - if (h->delayed_pic[i] != h->cur_pic_ptr) - h->delayed_pic[j++] = h->delayed_pic[i]; - h->delayed_pic[j] = NULL; - } - ff_h264_unref_picture(h, &h->last_pic_for_ec); - - h->first_field = 0; - h->recovery_frame = -1; - h->frame_recovered = 0; - h->current_slice = 0; - h->mmco_reset = 1; -} - -static void h264_decode_flush(AVCodecContext *avctx) -{ - H264Context *h = avctx->priv_data; - int i; - - memset(h->delayed_pic, 0, sizeof(h->delayed_pic)); - - ff_h264_flush_change(h); - ff_h264_sei_uninit(&h->sei); - - for (i = 0; i < H264_MAX_PICTURE_COUNT; i++) - ff_h264_unref_picture(h, &h->DPB[i]); - h->cur_pic_ptr = NULL; - ff_h264_unref_picture(h, &h->cur_pic); - - h->mb_y = 0; - - ff_h264_free_tables(h); - h->context_initialized = 0; -} - -static int get_last_needed_nal(H264Context *h) -{ - int nals_needed = 0; - int slice_type = 0; - int picture_intra_only = 1; - int first_slice = 0; - int i, ret; - - for (i = 0; i < h->pkt.nb_nals; i++) { - H2645NAL *nal = &h->pkt.nals[i]; - GetBitContext gb; - - /* packets can sometimes contain multiple PPS/SPS, - * e.g. two PAFF field pictures in one packet, or a demuxer - * which splits NALs strangely if so, when frame threading we - * can't start the next thread until we've read all of them */ - switch (nal->type) { - case H264_NAL_SPS: - case H264_NAL_PPS: - nals_needed = i; - break; - case H264_NAL_DPA: - case H264_NAL_IDR_SLICE: - case H264_NAL_SLICE: - ret = init_get_bits8(&gb, nal->data + 1, nal->size - 1); - if (ret < 0) { - av_log(h->avctx, AV_LOG_ERROR, "Invalid zero-sized VCL NAL unit\n"); - if (h->avctx->err_recognition & AV_EF_EXPLODE) - return ret; - - break; - } - if (!get_ue_golomb_long(&gb) || // first_mb_in_slice - !first_slice || - first_slice != nal->type) - nals_needed = i; - slice_type = get_ue_golomb_31(&gb); - if (slice_type > 9) - slice_type = 0; - if (slice_type > 4) - slice_type -= 5; - - slice_type = ff_h264_golomb_to_pict_type[slice_type]; - picture_intra_only &= (slice_type & 3) == AV_PICTURE_TYPE_I; - if (!first_slice) - first_slice = nal->type; - } - } - - h->picture_intra_only = picture_intra_only; - - return nals_needed; -} - -static void debug_green_metadata(const H264SEIGreenMetaData *gm, void *logctx) -{ - av_log(logctx, AV_LOG_DEBUG, "Green Metadata Info SEI message\n"); - av_log(logctx, AV_LOG_DEBUG, " green_metadata_type: %d\n", gm->green_metadata_type); - - if (gm->green_metadata_type == 0) { - av_log(logctx, AV_LOG_DEBUG, " green_metadata_period_type: %d\n", gm->period_type); - - if (gm->period_type == 2) - av_log(logctx, AV_LOG_DEBUG, " green_metadata_num_seconds: %d\n", gm->num_seconds); - else if (gm->period_type == 3) - av_log(logctx, AV_LOG_DEBUG, " green_metadata_num_pictures: %d\n", gm->num_pictures); - - av_log(logctx, AV_LOG_DEBUG, " SEI GREEN Complexity Metrics: %f %f %f %f\n", - (float)gm->percent_non_zero_macroblocks/255, - (float)gm->percent_intra_coded_macroblocks/255, - (float)gm->percent_six_tap_filtering/255, - (float)gm->percent_alpha_point_deblocking_instance/255); - - } else if (gm->green_metadata_type == 1) { - av_log(logctx, AV_LOG_DEBUG, " xsd_metric_type: %d\n", gm->xsd_metric_type); - - if (gm->xsd_metric_type == 0) - av_log(logctx, AV_LOG_DEBUG, " xsd_metric_value: %f\n", - (float)gm->xsd_metric_value/100); - } -} - -static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size) -{ - AVCodecContext *const avctx = h->avctx; - int nals_needed = 0; ///< number of NALs that need decoding before the next frame thread starts - int idr_cleared=0; - int i, ret = 0; - - h->has_slice = 0; - h->nal_unit_type= 0; - - if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS)) { - h->current_slice = 0; - if (!h->first_field) { - h->cur_pic_ptr = NULL; - ff_h264_sei_uninit(&h->sei); - } - } - - if (h->nal_length_size == 4) { - if (buf_size > 8 && AV_RB32(buf) == 1 && AV_RB32(buf+5) > (unsigned)buf_size) { - h->is_avc = 0; - }else if(buf_size > 3 && AV_RB32(buf) > 1 && AV_RB32(buf) <= (unsigned)buf_size) - h->is_avc = 1; - } - - ret = ff_h2645_packet_split(&h->pkt, buf, buf_size, avctx, h->is_avc, h->nal_length_size, - avctx->codec_id, 0, 0); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, - "Error splitting the input into NAL units.\n"); - return ret; - } - - if (avctx->active_thread_type & FF_THREAD_FRAME) - nals_needed = get_last_needed_nal(h); - if (nals_needed < 0) - return nals_needed; - - for (i = 0; i < h->pkt.nb_nals; i++) { - H2645NAL *nal = &h->pkt.nals[i]; - int max_slice_ctx, err; - - if (avctx->skip_frame >= AVDISCARD_NONREF && - nal->ref_idc == 0 && nal->type != H264_NAL_SEI) - continue; - - // FIXME these should stop being context-global variables - h->nal_ref_idc = nal->ref_idc; - h->nal_unit_type = nal->type; - - err = 0; - switch (nal->type) { - case H264_NAL_IDR_SLICE: - if ((nal->data[1] & 0xFC) == 0x98) { - av_log(h->avctx, AV_LOG_ERROR, "Invalid inter IDR frame\n"); - h->next_outputed_poc = INT_MIN; - ret = -1; - goto end; - } - if(!idr_cleared) { - idr(h); // FIXME ensure we don't lose some frames if there is reordering - } - idr_cleared = 1; - h->has_recovery_point = 1; - case H264_NAL_SLICE: - h->has_slice = 1; - - if ((err = ff_h264_queue_decode_slice(h, nal))) { - H264SliceContext *sl = h->slice_ctx + h->nb_slice_ctx_queued; - sl->ref_count[0] = sl->ref_count[1] = 0; - break; - } - - if (h->current_slice == 1) { - if (avctx->active_thread_type & FF_THREAD_FRAME && - i >= nals_needed && !h->setup_finished && h->cur_pic_ptr) { - ff_thread_finish_setup(avctx); - h->setup_finished = 1; - } - - if (h->avctx->hwaccel && - (ret = h->avctx->hwaccel->start_frame(h->avctx, buf, buf_size)) < 0) - goto end; - } - - max_slice_ctx = avctx->hwaccel ? 1 : h->nb_slice_ctx; - if (h->nb_slice_ctx_queued == max_slice_ctx) { - if (h->avctx->hwaccel) { - ret = avctx->hwaccel->decode_slice(avctx, nal->raw_data, nal->raw_size); - h->nb_slice_ctx_queued = 0; - } else - ret = ff_h264_execute_decode_slices(h); - if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE)) - goto end; - } - break; - case H264_NAL_DPA: - case H264_NAL_DPB: - case H264_NAL_DPC: - avpriv_request_sample(avctx, "data partitioning"); - break; - case H264_NAL_SEI: - if (h->setup_finished) { - avpriv_request_sample(avctx, "Late SEI"); - break; - } - ret = ff_h264_sei_decode(&h->sei, &nal->gb, &h->ps, avctx); - h->has_recovery_point = h->has_recovery_point || h->sei.recovery_point.recovery_frame_cnt != -1; - if (avctx->debug & FF_DEBUG_GREEN_MD) - debug_green_metadata(&h->sei.green_metadata, h->avctx); - if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE)) - goto end; - break; - case H264_NAL_SPS: { - GetBitContext tmp_gb = nal->gb; - if (avctx->hwaccel && avctx->hwaccel->decode_params) { - ret = avctx->hwaccel->decode_params(avctx, - nal->type, - nal->raw_data, - nal->raw_size); - if (ret < 0) - goto end; - } - if (ff_h264_decode_seq_parameter_set(&tmp_gb, avctx, &h->ps, 0) >= 0) - break; - av_log(h->avctx, AV_LOG_DEBUG, - "SPS decoding failure, trying again with the complete NAL\n"); - init_get_bits8(&tmp_gb, nal->raw_data + 1, nal->raw_size - 1); - if (ff_h264_decode_seq_parameter_set(&tmp_gb, avctx, &h->ps, 0) >= 0) - break; - ff_h264_decode_seq_parameter_set(&nal->gb, avctx, &h->ps, 1); - break; - } - case H264_NAL_PPS: - if (avctx->hwaccel && avctx->hwaccel->decode_params) { - ret = avctx->hwaccel->decode_params(avctx, - nal->type, - nal->raw_data, - nal->raw_size); - if (ret < 0) - goto end; - } - ret = ff_h264_decode_picture_parameter_set(&nal->gb, avctx, &h->ps, - nal->size_bits); - if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE)) - goto end; - break; - case H264_NAL_AUD: - case H264_NAL_END_SEQUENCE: - case H264_NAL_END_STREAM: - case H264_NAL_FILLER_DATA: - case H264_NAL_SPS_EXT: - case H264_NAL_AUXILIARY_SLICE: - break; - default: - av_log(avctx, AV_LOG_DEBUG, "Unknown NAL code: %d (%d bits)\n", - nal->type, nal->size_bits); - } - - if (err < 0) { - av_log(h->avctx, AV_LOG_ERROR, "decode_slice_header error\n"); - } - } - - ret = ff_h264_execute_decode_slices(h); - if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE)) - goto end; - - // set decode_error_flags to allow users to detect concealed decoding errors - if ((ret < 0 || h->er.error_occurred) && h->cur_pic_ptr) { - h->cur_pic_ptr->f->decode_error_flags |= FF_DECODE_ERROR_DECODE_SLICES; - } - - ret = 0; -end: - -#if CONFIG_ERROR_RESILIENCE - /* - * FIXME: Error handling code does not seem to support interlaced - * when slices span multiple rows - * The ff_er_add_slice calls don't work right for bottom - * fields; they cause massive erroneous error concealing - * Error marking covers both fields (top and bottom). - * This causes a mismatched s->error_count - * and a bad error table. Further, the error count goes to - * INT_MAX when called for bottom field, because mb_y is - * past end by one (callers fault) and resync_mb_y != 0 - * causes problems for the first MB line, too. - */ - if (!FIELD_PICTURE(h) && h->current_slice && h->enable_er) { - - H264SliceContext *sl = h->slice_ctx; - int use_last_pic = h->last_pic_for_ec.f->buf[0] && !sl->ref_count[0]; - - ff_h264_set_erpic(&h->er.cur_pic, h->cur_pic_ptr); - - if (use_last_pic) { - ff_h264_set_erpic(&h->er.last_pic, &h->last_pic_for_ec); - sl->ref_list[0][0].parent = &h->last_pic_for_ec; - memcpy(sl->ref_list[0][0].data, h->last_pic_for_ec.f->data, sizeof(sl->ref_list[0][0].data)); - memcpy(sl->ref_list[0][0].linesize, h->last_pic_for_ec.f->linesize, sizeof(sl->ref_list[0][0].linesize)); - sl->ref_list[0][0].reference = h->last_pic_for_ec.reference; - } else if (sl->ref_count[0]) { - ff_h264_set_erpic(&h->er.last_pic, sl->ref_list[0][0].parent); - } else - ff_h264_set_erpic(&h->er.last_pic, NULL); - - if (sl->ref_count[1]) - ff_h264_set_erpic(&h->er.next_pic, sl->ref_list[1][0].parent); - - ff_er_frame_end(&h->er); - if (use_last_pic) - memset(&sl->ref_list[0][0], 0, sizeof(sl->ref_list[0][0])); - } -#endif /* CONFIG_ERROR_RESILIENCE */ - /* clean up */ - if (h->cur_pic_ptr && !h->droppable && h->has_slice) { - ff_thread_report_progress(&h->cur_pic_ptr->tf, INT_MAX, - h->picture_structure == PICT_BOTTOM_FIELD); - } - - return (ret < 0) ? ret : buf_size; -} - -/** - * Return the number of bytes consumed for building the current frame. - */ -static int get_consumed_bytes(int pos, int buf_size) -{ - if (pos == 0) - pos = 1; // avoid infinite loops (I doubt that is needed but...) - if (pos + 10 > buf_size) - pos = buf_size; // oops ;) - - return pos; -} - -static int h264_export_enc_params(AVFrame *f, H264Picture *p) -{ - AVVideoEncParams *par; - unsigned int nb_mb = p->mb_height * p->mb_width; - unsigned int x, y; - - par = av_video_enc_params_create_side_data(f, AV_VIDEO_ENC_PARAMS_H264, nb_mb); - if (!par) - return AVERROR(ENOMEM); - - par->qp = p->pps->init_qp; - - par->delta_qp[1][0] = p->pps->chroma_qp_index_offset[0]; - par->delta_qp[1][1] = p->pps->chroma_qp_index_offset[0]; - par->delta_qp[2][0] = p->pps->chroma_qp_index_offset[1]; - par->delta_qp[2][1] = p->pps->chroma_qp_index_offset[1]; - - for (y = 0; y < p->mb_height; y++) - for (x = 0; x < p->mb_width; x++) { - const unsigned int block_idx = y * p->mb_width + x; - const unsigned int mb_xy = y * p->mb_stride + x; - AVVideoBlockParams *b = av_video_enc_params_block(par, block_idx); - - b->src_x = x * 16; - b->src_y = y * 16; - b->w = 16; - b->h = 16; - - b->delta_qp = p->qscale_table[mb_xy] - par->qp; - } - - return 0; -} - -static int output_frame(H264Context *h, AVFrame *dst, H264Picture *srcp) -{ - int ret; - - ret = av_frame_ref(dst, srcp->needs_fg ? srcp->f_grain : srcp->f); - if (ret < 0) - return ret; - - if (srcp->needs_fg && (ret = av_frame_copy_props(dst, srcp->f)) < 0) - return ret; - - av_dict_set(&dst->metadata, "stereo_mode", ff_h264_sei_stereo_mode(&h->sei.common.frame_packing), 0); - - if (srcp->sei_recovery_frame_cnt == 0) - dst->key_frame = 1; - - if (h->avctx->export_side_data & AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS) { - ret = h264_export_enc_params(dst, srcp); - if (ret < 0) - goto fail; - } - - if (!(h->avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN)) - av_frame_remove_side_data(dst, AV_FRAME_DATA_FILM_GRAIN_PARAMS); - - return 0; -fail: - av_frame_unref(dst); - return ret; -} - -static int is_avcc_extradata(const uint8_t *buf, int buf_size) -{ - int cnt= buf[5]&0x1f; - const uint8_t *p= buf+6; - if (!cnt) - return 0; - while(cnt--){ - int nalsize= AV_RB16(p) + 2; - if(nalsize > buf_size - (p-buf) || (p[2] & 0x9F) != 7) - return 0; - p += nalsize; - } - cnt = *(p++); - if(!cnt) - return 0; - while(cnt--){ - int nalsize= AV_RB16(p) + 2; - if(nalsize > buf_size - (p-buf) || (p[2] & 0x9F) != 8) - return 0; - p += nalsize; - } - return 1; -} - -static int finalize_frame(H264Context *h, AVFrame *dst, H264Picture *out, int *got_frame) -{ - int ret; - - if (((h->avctx->flags & AV_CODEC_FLAG_OUTPUT_CORRUPT) || - (h->avctx->flags2 & AV_CODEC_FLAG2_SHOW_ALL) || - out->recovered)) { - - if (!h->avctx->hwaccel && - (out->field_poc[0] == INT_MAX || - out->field_poc[1] == INT_MAX) - ) { - int p; - AVFrame *f = out->f; - int field = out->field_poc[0] == INT_MAX; - uint8_t *dst_data[4]; - int linesizes[4]; - const uint8_t *src_data[4]; - - av_log(h->avctx, AV_LOG_DEBUG, "Duplicating field %d to fill missing\n", field); - - for (p = 0; p<4; p++) { - dst_data[p] = f->data[p] + (field^1)*f->linesize[p]; - src_data[p] = f->data[p] + field *f->linesize[p]; - linesizes[p] = 2*f->linesize[p]; - } - - av_image_copy(dst_data, linesizes, src_data, linesizes, - f->format, f->width, f->height>>1); - } - - ret = output_frame(h, dst, out); - if (ret < 0) - return ret; - - *got_frame = 1; - - if (CONFIG_MPEGVIDEODEC) { - ff_print_debug_info2(h->avctx, dst, NULL, - out->mb_type, - out->qscale_table, - out->motion_val, - out->mb_width, out->mb_height, out->mb_stride, 1); - } - } - - return 0; -} - -static int send_next_delayed_frame(H264Context *h, AVFrame *dst_frame, - int *got_frame, int buf_index) -{ - int ret, i, out_idx; - H264Picture *out = h->delayed_pic[0]; - - h->cur_pic_ptr = NULL; - h->first_field = 0; - - out_idx = 0; - for (i = 1; - h->delayed_pic[i] && - !h->delayed_pic[i]->f->key_frame && - !h->delayed_pic[i]->mmco_reset; - i++) - if (h->delayed_pic[i]->poc < out->poc) { - out = h->delayed_pic[i]; - out_idx = i; - } - - for (i = out_idx; h->delayed_pic[i]; i++) - h->delayed_pic[i] = h->delayed_pic[i + 1]; - - if (out) { - out->reference &= ~DELAYED_PIC_REF; - ret = finalize_frame(h, dst_frame, out, got_frame); - if (ret < 0) - return ret; - } - - return buf_index; -} - -static int h264_decode_frame(AVCodecContext *avctx, AVFrame *pict, - int *got_frame, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - H264Context *h = avctx->priv_data; - int buf_index; - int ret; - - h->flags = avctx->flags; - h->setup_finished = 0; - h->nb_slice_ctx_queued = 0; - - ff_h264_unref_picture(h, &h->last_pic_for_ec); - - /* end of stream, output what is still in the buffers */ - if (buf_size == 0) - return send_next_delayed_frame(h, pict, got_frame, 0); - - if (av_packet_get_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA, NULL)) { - size_t side_size; - uint8_t *side = av_packet_get_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA, &side_size); - ff_h264_decode_extradata(side, side_size, - &h->ps, &h->is_avc, &h->nal_length_size, - avctx->err_recognition, avctx); - } - if (h->is_avc && buf_size >= 9 && buf[0]==1 && buf[2]==0 && (buf[4]&0xFC)==0xFC) { - if (is_avcc_extradata(buf, buf_size)) - return ff_h264_decode_extradata(buf, buf_size, - &h->ps, &h->is_avc, &h->nal_length_size, - avctx->err_recognition, avctx); - } - - buf_index = decode_nal_units(h, buf, buf_size); - if (buf_index < 0) - return AVERROR_INVALIDDATA; - - if (!h->cur_pic_ptr && h->nal_unit_type == H264_NAL_END_SEQUENCE) { - av_assert0(buf_index <= buf_size); - return send_next_delayed_frame(h, pict, got_frame, buf_index); - } - - if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS) && (!h->cur_pic_ptr || !h->has_slice)) { - if (avctx->skip_frame >= AVDISCARD_NONREF || - buf_size >= 4 && !memcmp("Q264", buf, 4)) - return buf_size; - av_log(avctx, AV_LOG_ERROR, "no frame!\n"); - return AVERROR_INVALIDDATA; - } - - if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS) || - (h->mb_y >= h->mb_height && h->mb_height)) { - if ((ret = ff_h264_field_end(h, &h->slice_ctx[0], 0)) < 0) - return ret; - - /* Wait for second field. */ - if (h->next_output_pic) { - ret = finalize_frame(h, pict, h->next_output_pic, got_frame); - if (ret < 0) - return ret; - } - } - - av_assert0(pict->buf[0] || !*got_frame); - - ff_h264_unref_picture(h, &h->last_pic_for_ec); - - return get_consumed_bytes(buf_index, buf_size); -} - -#define OFFSET(x) offsetof(H264Context, x) -#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM -#define VDX VD | AV_OPT_FLAG_EXPORT -static const AVOption h264_options[] = { - { "is_avc", "is avc", OFFSET(is_avc), AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, VDX }, - { "nal_length_size", "nal_length_size", OFFSET(nal_length_size), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 4, VDX }, - { "enable_er", "Enable error resilience on damaged frames (unsafe)", OFFSET(enable_er), AV_OPT_TYPE_BOOL, { .i64 = -1 }, -1, 1, VD }, - { "x264_build", "Assume this x264 version if no x264 version found in any SEI", OFFSET(x264_build), AV_OPT_TYPE_INT, {.i64 = -1}, -1, INT_MAX, VD }, - { NULL }, -}; - -static const AVClass h264_class = { - .class_name = "H264 Decoder", - .item_name = av_default_item_name, - .option = h264_options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_h264_decoder = { - .p.name = "h264", - CODEC_LONG_NAME("H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_H264, - .priv_data_size = sizeof(H264Context), - .init = h264_decode_init, - .close = h264_decode_end, - FF_CODEC_DECODE_CB(h264_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1 | - AV_CODEC_CAP_DELAY | AV_CODEC_CAP_SLICE_THREADS | - AV_CODEC_CAP_FRAME_THREADS, - .hw_configs = (const AVCodecHWConfigInternal *const []) { -#if CONFIG_H264_DXVA2_HWACCEL - HWACCEL_DXVA2(h264), -#endif -#if CONFIG_H264_D3D11VA_HWACCEL - HWACCEL_D3D11VA(h264), -#endif -#if CONFIG_H264_D3D11VA2_HWACCEL - HWACCEL_D3D11VA2(h264), -#endif -#if CONFIG_H264_NVDEC_HWACCEL - HWACCEL_NVDEC(h264), -#endif -#if CONFIG_H264_VAAPI_HWACCEL - HWACCEL_VAAPI(h264), -#endif -#if CONFIG_H264_VDPAU_HWACCEL - HWACCEL_VDPAU(h264), -#endif -#if CONFIG_H264_VIDEOTOOLBOX_HWACCEL - HWACCEL_VIDEOTOOLBOX(h264), -#endif - NULL - }, - .caps_internal = FF_CODEC_CAP_EXPORTS_CROPPING | - FF_CODEC_CAP_ALLOCATE_PROGRESS | FF_CODEC_CAP_INIT_CLEANUP, - .flush = h264_decode_flush, - UPDATE_THREAD_CONTEXT(ff_h264_update_thread_context), - UPDATE_THREAD_CONTEXT_FOR_USER(ff_h264_update_thread_context_for_user), - .p.profiles = NULL_IF_CONFIG_SMALL(ff_h264_profiles), - .p.priv_class = &h264_class, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download H.U.G File and Add New Cars in GTA San Andreas PC Easily.md b/spaces/congsaPfin/Manga-OCR/logs/Download H.U.G File and Add New Cars in GTA San Andreas PC Easily.md deleted file mode 100644 index 58a4fe9ecdc41d937d44ae2d49b8fe30d5acc04b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download H.U.G File and Add New Cars in GTA San Andreas PC Easily.md +++ /dev/null @@ -1,93 +0,0 @@ - -

    Download Hug File for Adding New Cars

    -

    If you are a car enthusiast and a fan of GTA San Andreas, you might be interested in downloading a hug file for adding new cars to your game. A hug file is a mod that allows you to replace the original cars in the game with your favourite ones from real life or other games. You can also customize the appearance, performance, and handling of your cars with a hug file. In this article, we will explain how to download and install a hug file for GTA San Andreas, as well as the benefits and risks of using it.

    -

    download hug file for adding new cars


    Download File ✺✺✺ https://urlca.com/2uOeZ4



    -

    How to Download and Install a Hug File for GTA San Andreas

    -

    To download and install a hug file for GTA San Andreas, you will need the following tools:

    -
      -
    • A copy of GTA San Andreas on your PC
    • -
    • A mod installer, such as SAMI or IMG Tool
    • -
    • A hug file, which you can find on various websites, such as [Hindi Urdu Gaming](^1^) or [Hugging Face](^2^)
    • -
    -

    Once you have these tools, follow these steps:

    -
      -
    1. Backup your game files, especially the gta3.img file, which contains all the car models in the game.
    2. -
    3. Download the hug file of your choice and extract it to a folder.
    4. -
    5. Run the mod installer and select the gta3.img file as the source.
    6. -
    7. Select the car you want to replace in the game and browse for the corresponding files in the hug file folder.
    8. -
    9. Click on install and wait for the process to finish.
    10. -
    11. Repeat steps 4 and 5 for each car you want to replace.
    12. -
    13. Launch the game and enjoy your new cars.
    14. -
    -

    Benefits of Hug File for Car Enthusiasts

    -

    Using a hug file for GTA San Andreas has many benefits for car enthusiasts, such as:

    -

    How to download hug file for GTA San Andreas PC
    -Best sites to download hug file for new cars mod
    -Download hug file for adding Lamborghini in GTA San Andreas
    -Tutorial on how to use hug file to add new cars in GTA SA
    -Download hug file for adding Ferrari in GTA San Andreas PC
    -Benefits of using hug file for adding new cars in GTA SA
    -Download hug file for adding Bugatti in GTA San Andreas PC
    -How to create your own hug file for adding new cars in GTA SA
    -Download hug file for adding BMW in GTA San Andreas PC
    -How to fix errors when using hug file to add new cars in GTA SA
    -Download hug file for adding Audi in GTA San Andreas PC
    -How to backup your original files before using hug file to add new cars in GTA SA
    -Download hug file for adding Mercedes in GTA San Andreas PC
    -How to uninstall hug file and remove new cars from GTA SA
    -Download hug file for adding Porsche in GTA San Andreas PC
    -How to customize your new cars using hug file in GTA SA
    -Download hug file for adding Tesla in GTA San Andreas PC
    -How to update your hug file with new cars in GTA SA
    -Download hug file for adding Aston Martin in GTA San Andreas PC
    -How to share your hug file with other players in GTA SA
    -Download hug file for adding Rolls Royce in GTA San Andreas PC
    -How to download and install H.U.G File Grand Theft Auto: San Andreas 2004[^1^]
    -Download hug file for adding Bentley in GTA San Andreas PC
    -How to use Car Cheat Menu Mod with hug file in GTA SA[^1^]
    -Download hug file for adding Maserati in GTA San Andreas PC
    -How to install GTA V Graphics Mod Pack with hug file in GTA SA[^1^]
    -Download hug file for adding Jaguar in GTA San Andreas PC
    -How to install ironman Mod with hug file in GTA SA[^1^]
    -Download hug file for adding Lexus in GTA San Andreas PC
    -How to install Hulk Mod with hug file in GTA SA[^1^]
    -Download hug file for adding Nissan in GTA San Andreas PC
    -How to install Thanos Mod with hug file in GTA SA[^1^]
    -Download hug file for adding Toyota in GTA San Andreas PC
    -How to install Dog Mod with hug file in GTA SA[^1^]
    -Download hug file for adding Honda in GTA San Andreas PC
    -How to install All HD Graphics cars mods pack with hug file in GTA SA[^1^]
    -Download hug file for adding Hyundai in GTA San Andreas PC
    -How to install Vehicle Spawner Premium Mod with hug file in GTA SA[^1^]
    -Download hug file for adding Ford in GTA San Andreas PC
    -How to install Change Clothes cheat mod with hug file in GTA SA[^1^]
    -Download hug file for adding Chevrolet in GTA San Andreas PC
    -How to upload models to the Hub using the web interface[^2^]
    -Download hug file for adding Dodge in GTA San Andreas PC
    -How to share a model on the Hub using the command line interface[^3^]
    -Download hug file for adding Jeep in GTA San Andreas PC
    -How to add a model card to your repository on the Hub[^2^] [^3^]

    -
      -
    • More variety and customization of cars. You can choose from hundreds of different cars, ranging from sports cars, muscle cars, supercars, trucks, bikes, and more. You can also change the color, wheels, spoilers, lights, and other features of your cars.
    • -
    • Improved graphics and performance of cars. You can enhance the visual quality of your cars with high-resolution textures, realistic reflections, shadows, and damage effects. You can also tweak the speed, acceleration, braking, handling, and sound of your cars.
    • -
    • Enhanced gameplay and immersion. You can make your game more fun and realistic by driving your favourite cars in the streets of San Andreas. You can also challenge yourself with different missions, races, stunts, and chases involving your new cars.
    • -
    -

    Risks and Challenges of Using Hug File

    -

    However, using a hug file for GTA San Andreas also comes with some risks and challenges, such as:

    -
      -
    • Compatibility issues with other mods and game versions. You might encounter problems with loading, saving, or playing your game if you use a hug file that is not compatible with your game version or other mods you have installed. To avoid this, make sure you read the instructions and requirements of each hug file before installing it.
    • -
    • Potential bugs and crashes. You might experience glitches, errors, or crashes while using a hug file. This could be due to corrupted files, missing dependencies, or conflicts with other mods. To fix this, try reinstalling or updating your hug file or removing any incompatible mods.
    • -
    • Legal and ethical implications of modifying game content. You might be violating the intellectual property rights of the original creators of the cars or the game by using a hug file. You might also be breaking the terms of service or end-user license agreement of the game by modifying it. To avoid this, make sure you respect the rights of the original authors and use the hug file only for personal and non-commercial purposes.
    • -
    -

    Conclusion

    -

    In conclusion, a hug file is a mod that allows you to add new cars to your GTA San Andreas game. It has many benefits for car enthusiasts, such as more variety, customization, graphics, and performance of cars. However, it also has some risks and challenges, such as compatibility issues, bugs, crashes, and legal and ethical implications. Therefore, we recommend that you use a hug file safely and responsibly, by following the instructions, backing up your files, checking for updates, and respecting the rights of the original creators. If you want to try a hug file for GTA San Andreas, you can download one from the links below and install it with a mod installer. We hope you enjoy your new cars and share your feedback with us.

    -

    FAQs

    -
      -
    • What is a hug file? A hug file is a mod that allows you to replace the original cars in GTA San Andreas with your favourite ones from real life or other games.
    • -
    • How do I download and install a hug file? You need a copy of GTA San Andreas on your PC, a mod installer, and a hug file. You can find the links to these tools in the article above. Then, you need to backup your game files, run the mod installer, select the car you want to replace, and browse for the corresponding files in the hug file folder.
    • -
    • What are the benefits of using a hug file? You can enjoy more variety and customization of cars, improved graphics and performance of cars, and enhanced gameplay and immersion.
    • -
    • What are the risks and challenges of using a hug file? You might face compatibility issues with other mods and game versions, potential bugs and crashes, and legal and ethical implications of modifying game content.
    • -
    • How do I use a hug file safely and responsibly? You should follow the instructions and requirements of each hug file before installing it, backup your game files, update or remove any incompatible mods, and respect the rights of the original authors.
    • -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stickman Warriors Dragon Fight Mod APK 8.3 - Join the Arena Campaign Story Team and Survival Modes.md b/spaces/congsaPfin/Manga-OCR/logs/Stickman Warriors Dragon Fight Mod APK 8.3 - Join the Arena Campaign Story Team and Survival Modes.md deleted file mode 100644 index 4c94a86199ba19f83208577bde97f4b2d373c576..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Stickman Warriors Dragon Fight Mod APK 8.3 - Join the Arena Campaign Story Team and Survival Modes.md +++ /dev/null @@ -1,123 +0,0 @@ -
    -

    Stickman Warriors Dragon Fight Mod APK 8.3: A Review

    -

    If you are a fan of stickman games and anime-style fighting, you might want to check out Stickman Warriors Dragon Fight, a game that lets you become a super stickman hero and fight against various villains in different modes. In this article, we will review the game and its mod apk version, which gives you access to unlimited resources and features. We will also show you how to download and install the mod apk version, what are its main features, and some tips and tricks for playing the game. Finally, we will suggest some alternatives to Stickman Warriors Dragon Fight that you might also enjoy.

    -

    What is Stickman Warriors Dragon Fight?

    -

    Stickman Warriors Dragon Fight is an action game developed by Super Dragon Legends PVP. It is inspired by popular manga and anime characters and their fighting styles. You can choose from various stickman heroes, each with their own unique skills and abilities, and fight against enemies in different modes. The game has four main modes: Arena, Campaign, Story, and Team. In Arena mode, you face your favorite opponent in a one-on-one battle. In Campaign mode, you fight against multiple villains in a row. In Story mode, you follow a storyline with 9 maps and 15 levels of increasing difficulty. In Team mode, you combine four stickman heroes to form a powerful team and compete against other teams.

    -

    stickman warriors dragon fight mod apk 8.3


    Download Filehttps://urlca.com/2uO7kT



    -

    Why download the mod apk version?

    -

    The mod apk version of Stickman Warriors Dragon Fight is a modified version of the original game that gives you access to unlimited resources and features that are not available in the original game. For example, with the mod apk version, you can get unlimited coins, gems, energy, and health. You can also unlock all the stickman heroes, skills, items, and maps without spending any money or time. The mod apk version also removes ads and other annoying interruptions from the game. With the mod apk version, you can enjoy the game without any limitations or restrictions.

    -

    How to download and install the mod apk version?

    -

    To download and install the mod apk version of Stickman Warriors Dragon Fight, you need to follow these simple steps:

    -
      -
    1. Go to [this link](^1^) and download the APK file of the mod apk version.
    2. -
    3. Go to your device settings and enable installation from unknown sources.
    4. -
    5. Locate the downloaded APK file on your device and tap on it to start the installation process.
    6. -
    7. Follow the instructions on the screen to complete the installation.
    8. -
    9. Launch the game and enjoy!
    10. -
    -

    Note: You might need to uninstall the original game before installing the mod apk version.

    -

    What are the main features of the mod apk version?

    -

    The mod apk version of Stickman Warriors Dragon Fight has many features that make it more fun and exciting than the original game. Here are some of the main features of the mod apk version:

    -
      -
    • Unlimited coins, gems, energy, and health: You can use these resources to buy and upgrade your stickman heroes, skills, items, and maps. You can also use them to revive your stickman heroes when they die or to continue playing when you run out of energy.
    • -
    • All stickman heroes, skills, items, and maps unlocked: You can choose from over 100 stickman heroes, each with their own unique skills and abilities. You can also equip them with various items to enhance their performance. You can also explore all the maps and levels in the game without any restrictions.
    • -
    • No ads: You can play the game without any interruptions or distractions from ads or pop-ups.
    • -
    • Easy and smooth gameplay: You can control your stickman heroes with simple touch gestures and buttons. You can also customize the graphics and sound settings to suit your preferences.
    • -
    • Online and offline modes: You can play the game online with other players or offline without an internet connection.
    • -
    -

    What are some tips and tricks for playing the game?

    -

    Stickman Warriors Dragon Fight is a game that requires skill, strategy, and timing. Here are some tips and tricks that can help you play the game better and enjoy it more:

    -
      -
    • Choose your stickman hero wisely: Each stickman hero has different strengths and weaknesses. Some are faster, stronger, or more durable than others. Some have special skills that can deal more damage, heal themselves, or stun their enemies. You should choose a stickman hero that suits your playstyle and the mode you are playing.
    • -
    • Upgrade your stickman hero regularly: As you progress in the game, you will face more challenging enemies and levels. You should upgrade your stickman hero's skills and items to keep up with the difficulty. You can use the coins and gems you earn from playing or get from the mod apk version to buy and upgrade your stickman hero's skills and items.
    • -
    • Use your skills wisely: Each stickman hero has four skills that can be activated by tapping on their icons. These skills have different effects and cooldown times. You should use them strategically to gain an advantage over your enemies. For example, you can use a skill that deals a lot of damage when your enemy's health is low, or a skill that heals you when your health is low.
    • -
    • Dodge and block attacks: You can dodge and block attacks by swiping left or right on the screen. Dodging and blocking can help you avoid taking damage or reduce the damage you take. You should dodge and block attacks when you see your enemy's attack animation or when you hear their attack sound.
    • -
    • Collect power-ups: There are various power-ups that appear randomly on the screen during the game. These power-ups can give you extra benefits such as increasing your attack speed, damage, or health. You should collect them as soon as you see them to boost your performance.
    • -
    -

    What are some alternatives to Stickman Warriors Dragon Fight?

    -

    If you like Stickman Warriors Dragon Fight, you might also like some other stickman games that have similar gameplay and features. Here are some of them:

    -

    stickman warriors dragon fight apk download
    -stickman warriors dragon fight mod apk unlimited money
    -stickman warriors dragon fight latest version
    -stickman warriors dragon fight super dragon legends pvp
    -stickman warriors dragon fight game
    -stickman warriors dragon fight android
    -stickman warriors dragon fight hack apk
    -stickman warriors dragon fight cheats
    -stickman warriors dragon fight free download
    -stickman warriors dragon fight gameplay
    -stickman warriors dragon fight review
    -stickman warriors dragon fight tips and tricks
    -stickman warriors dragon fight online
    -stickman warriors dragon fight offline
    -stickman warriors dragon fight best characters
    -stickman warriors dragon fight codes
    -stickman warriors dragon fight update
    -stickman warriors dragon fight new features
    -stickman warriors dragon fight guide
    -stickman warriors dragon fight wiki
    -stickman warriors dragon fight how to play
    -stickman warriors dragon fight levels
    -stickman warriors dragon fight modes
    -stickman warriors dragon fight arena mod
    -stickman warriors dragon fight campaign mode
    -stickman warriors dragon fight story mode
    -stickman warriors dragon fight team mode
    -stickman warriors dragon fight survival mode
    -stickman warriors dragon fight fusion dance
    -stickman warriors dragon fight potara fusion
    -stickman warriors dragon fight ui power
    -stickman warriors dragon fight skills and abilities
    -stickman warriors dragon fight manga and anime characters
    -stickman warriors dragon fight heroes and villains
    -stickman warriors dragon fight boss battles
    -stickman warriors dragon fight rewards and achievements
    -stickman warriors dragon fight graphics and sound effects
    -stickman warriors dragon fight controls and interface
    -stickman warriors dragon fight rating and feedback
    -stickman warriors dragon fight comparison and alternatives
    -download:stick legend apk - latest version:8.3 - updated:2023 - com.tt.legendarystick - super dragon legends pvp - free - mobile game for android[^1^]

    - - - - - - - - - - - - - - - - - - - - - - - - -
    NameDescription
    Stickman Legends: Shadow WarA game that lets you fight against dark forces as a stickman ninja warrior. You can choose from various characters, weapons, skills, and modes. You can also play online with other players or offline without an internet connection.
    Stick War: LegacyA game that lets you control a stickman army and fight against other stickman nations. You can build units, mine gold, research technologies, and conquer territories. You can also play online with other players or offline without an internet connection.
    Stick Fight: The Game MobileA game that lets you fight against other stickmen in chaotic physics-based battles. You can use various weapons, items, and environments to defeat your opponents. You can also play online with other players or offline with bots.
    Stick Z: Super Dragon FightA game that lets you fight against other stickmen as a super dragon warrior. You can transform into different forms, use powerful skills, and collect dragon balls. You can also play online with other players or offline without an internet connection.
    Stick Shadow: War FightA game that lets you fight against other stickmen as a shadow fighter. You can choose from various characters, weapons, skills, and modes. You can also play online with other players or offline without an internet connection.
    -

    Conclusion

    -

    Stickman Warriors Dragon Fight is a game that lets you become a super stickman hero and fight against various villains in different modes. It is inspired by popular manga and anime characters and their fighting styles. The game has four main modes: Arena, Campaign, Story, and Team. The game also has a mod apk version that gives you access to unlimited resources and features that are not available in the original game. The mod apk version also removes ads and other annoying interruptions from the game. To download and install the mod apk version, you need to follow some simple steps that we have explained in this article. We have also listed some of the main features of the mod apk version and some tips and tricks for playing the game. Finally, we have suggested some alternatives to Stickman Warriors Dragon Fight that you might also like to try out.

    -

    If you are looking for a fun and exciting stickman game that combines action, adventure, and fighting, you should give Stickman Warriors Dragon Fight a try. You will not regret it!

    -

    FAQs

    -

    Here are some common questions and answers about the game and the mod apk version:

    -
      -
    1. Q: Is Stickman Warriors Dragon Fight safe to play?
    2. -
    3. A: Yes, Stickman Warriors Dragon Fight is safe to play. The game does not contain any harmful or malicious content. However, you should be careful when downloading and installing the mod apk version from third-party sources. You should only download and install the mod apk version from trusted and reliable sources.
    4. -
    5. Q: Is Stickman Warriors Dragon Fight free to play?
    6. -
    7. A: Yes, Stickman Warriors Dragon Fight is free to play. You can download and play the game without paying any money. However, the game does have some in-app purchases that can enhance your gameplay. You can also use the mod apk version to get unlimited resources and features for free.
    8. -
    9. Q: How can I contact the developer of Stickman Warriors Dragon Fight?
    10. -
    11. A: You can contact the developer of Stickman Warriors Dragon Fight by sending an email to superdragonlegendspvp@gmail.com. You can also visit their Facebook page at [this link].
    12. -
    13. Q: How can I update the mod apk version of Stickman Warriors Dragon Fight?
    14. -
    15. A: To update the mod apk version of Stickman Warriors Dragon Fight, you need to download and install the latest version of the mod apk file from [this link]. You might need to uninstall the previous version of the mod apk file before installing the new one.
    16. -
    17. Q: How can I play Stickman Warriors Dragon Fight on PC?
    18. -
    19. A: To play Stickman Warriors Dragon Fight on PC, you need to use an Android emulator such as BlueStacks or NoxPlayer. You can download and install an Android emulator on your PC and then download and install the game or the mod apk file on it.
    20. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tiktok 18 App by Apkrunok A New Trend in Gaming and Streaming.md b/spaces/congsaPfin/Manga-OCR/logs/Tiktok 18 App by Apkrunok A New Trend in Gaming and Streaming.md deleted file mode 100644 index 25cbf7f1e06cb10cb39127f8e03da31e5250d4b6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tiktok 18 App by Apkrunok A New Trend in Gaming and Streaming.md +++ /dev/null @@ -1,86 +0,0 @@ - -

    TikTok 18 App by Apkrunok: What You Need to Know

    -

    If you are looking for a fun and exciting way to spend your free time, you might have heard of TikTok 18 app by Apkrunok. This is a new entertainment app that claims to offer a variety of features, such as live streaming, gaming, uploading, chatting, and more. But what exactly is TikTok 18 app, and why is it so popular? How can you download and install it on your device? What are the pros and cons of using it? In this article, we will answer all these questions and more. Read on to find out everything you need to know about TikTok 18 app by Apkrunok.

    -

    Introduction

    -

    TikTok 18 app is an entertainment app that was developed by Apkrunok, a company based in the USA. It is a modified version of the original TikTok app, which is one of the most popular social media platforms in the world. However, unlike the original TikTok app, which is suitable for all ages, TikTok 18 app is designed for users who are 18 years or older. This is because it contains adult and mature content that may not be appropriate for younger audiences.

    -

    tiktok 18 app by apkrunok


    DOWNLOADhttps://urlca.com/2uOfkb



    -

    What is TikTok 18 app?

    -

    TikTok 18 app is an entertainment app that allows users to watch and interact with live streams of hot idols and streamers, play games and make money online, upload and share their own content, chat and connect with other users, and more. It is a platform where users can express themselves freely, explore their creativity, and have fun without limitations.

    -

    Why is TikTok 18 app popular?

    -

    TikTok 18 app is popular because it offers a variety of features that cater to different tastes and preferences. Users can choose from a wide range of content, such as music, dance, comedy, beauty, fashion, sports, gaming, etc. They can also enjoy the thrill of online gambling, where they can win real money by playing games such as slots, roulette, blackjack, poker, etc. Moreover, users can also create their own content and share it with other users, or join live streams and chat with their favorite idols and streamers. TikTok 18 app is a place where users can find entertainment, excitement, and connection.

    -

    How to download and install TikTok 18 app?

    -

    TikTok 18 app is available for both Android and iOS devices. Users can download and install it for free from the official website or from other sources. However, users should be careful when downloading from third-party sources, as they may contain viruses or malware that can harm their devices. Users should also make sure that they have enough storage space on their devices before downloading the app.

    -

    Features of TikTok 18 app

    -

    TikTok 18 app has many features that make it an attractive entertainment app for users who are 18 years or older. Here are some of the main features of the app:

    -

    Live stream with hot idols and streamers

    -

    One of the most popular features of TikTok 18 app is the live stream feature. Users can watch live streams of hot idols and streamers who showcase their talents, skills, personalities, and charms. Users can also interact with them by sending comments, gifts, stickers, emojis . Some of the live streams may also contain adult and mature content, such as nudity, sexual acts, or violence. Users can filter the live streams by categories, such as hot, new, popular, etc. Users can also follow their favorite idols and streamers and get notified when they go live.

    -

    Play games and make money online

    -

    Another feature of TikTok 18 app is the gaming feature. Users can play various games and make money online by betting on the outcomes. Some of the games include slots, roulette, blackjack, poker, baccarat, etc. Users can also join tournaments and compete with other players for bigger prizes. Users can use different payment methods to deposit and withdraw money, such as credit cards, e-wallets, cryptocurrencies, etc. However, users should be aware of the potential risks of online gambling, such as addiction, fraud, or legal issues.

    -

    Upload and share your own content

    -

    TikTok 18 app also allows users to upload and share their own content with other users. Users can create short videos or photos using various filters, effects, stickers, music, etc. Users can also edit their content using the built-in tools or external apps. Users can showcase their talents, skills, hobbies, opinions, or anything they want to share. Users can also get likes, comments, views, and followers from other users who appreciate their content.

    -

    tiktok 18 app by apkrunok download
    -tiktok 18 app by apkrunok ios
    -tiktok 18 app by apkrunok review
    -tiktok 18 app by apkrunok features
    -tiktok 18 app by apkrunok live stream
    -tiktok 18 app by apkrunok games
    -tiktok 18 app by apkrunok entertainment
    -tiktok 18 app by apkrunok online betting
    -tiktok 18 app by apkrunok free
    -tiktok 18 app by apkrunok install
    -tiktok 18 app by apkrunok latest version
    -tiktok 18 app by apkrunok home page
    -tiktok 18 app by apkrunok information
    -tiktok 18 app by apkrunok android
    -tiktok 18 app by apkrunok platform
    -tiktok 18 app by apkrunok users
    -tiktok 18 app by apkrunok experience
    -tiktok 18 app by apkrunok creativity
    -tiktok 18 app by apkrunok adult
    -tiktok 18 app by apkrunok join
    -tiktok 18 app by apkrunok liberating
    -tiktok 18 app by apkrunok express yourself
    -tiktok 18 app by apkrunok connect with others
    -tiktok 18 app by apkrunok fun
    -tiktok 18 app by apkrunok USA
    -tiktok 18 app by apkrunok developers
    -tiktok 18 app by apkrunok email
    -tiktok 18 app by apkrunok address
    -tiktok 18 app by apkrunok support
    -tiktok 18 app by apkrunok interface
    -tiktok 18 app by apkrunok modern
    -tiktok 18 app by apkrunok beautiful
    -tiktok 18 app by apkrunok compatible
    -tiktok 18 app by apkrunok stories
    -tiktok 18 app by apkrunok content
    -tiktok 18 app by apkrunok chat
    -tiktok 18 app by apkrunok idols
    -tiktok 18 app by apkrunok streamers
    -tiktok 18 app by apkrunok talent
    -tiktok 18 app by apkrunok photos
    -tiktok 18 app by apkrunok videos
    -tiktok 18 app by apkrunok low configuration
    -tiktok 18 app by apkrunok quality
    -tiktok 18 app by apkrunok speed
    -tiktok 18 app by apkrunok stable
    -tiktok 18 app by apkrunok accuracy

    -

    Chat and connect with other users

    -

    TikTok 18 app also enables users to chat and connect with other users who share their interests or preferences. Users can send and receive messages, voice notes, pictures, videos, etc. Users can also join groups and communities based on different topics, such as music, sports, gaming, etc. Users can also make new friends or find potential partners through the app.

    -

    Pros and cons of TikTok 18 app

    -

    TikTok 18 app has its pros and cons that users should consider before using it. Here are some of the advantages and disadvantages of the app:

    -

    Pros

    -
      -
    • Modern and user-friendly interface

      -

      TikTok 18 app has a modern and user-friendly interface that makes it easy to navigate and use. The app has a simple and elegant design that is pleasing to the eye. The app also has clear and intuitive icons and buttons that allow users to access different features and functions. The app also has a fast and smooth performance that ensures a satisfying user experience.

    • -
    • Diverse and updated content

      -

      TikTok 18 app has a diverse and updated content that caters to different tastes and preferences. The app has a large and active user base that creates and uploads new content every day. The app also has a variety of categories and genres that users can choose from, such as music, dance, comedy, beauty, fashion, sports, gaming, etc. The app also has a smart algorithm that recommends content that matches the user's interests and behavior.

    • -
    • Compatible with different devices and platforms

      -

      TikTok 18 app is compatible with different devices and platforms that make it accessible to more users. The app is available for both Android and iOS devices that have different screen sizes and resolutions. The app is also compatible with different browsers and operating systems that allow users to access it from their computers or laptops.

    • -
    -

    Cons

    -
      -
    • Potential risks of online gambling

      -

      TikTok 18 app has a gaming feature that allows users to play games and make money online by betting on the outcomes. However, this feature also poses potential risks of online gambling that users should be aware of. Online gambling can be addictive and the laws and regulations of their country or region. Users should also pay taxes on their earnings and report them to the authorities if required.

    • -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Free !FREE! Download Kuldip Patwal: I Didn 't Do It ! Movies 720p.md b/spaces/contluForse/HuggingGPT/Free !FREE! Download Kuldip Patwal: I Didn 't Do It ! Movies 720p.md deleted file mode 100644 index 2eca96006874121ed112c7aceefe0f8325d0465b..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Free !FREE! Download Kuldip Patwal: I Didn 't Do It ! Movies 720p.md +++ /dev/null @@ -1,68 +0,0 @@ -## Free Download Kuldip Patwal: I Didn 't Do It ! Movies 720p - - - - - - - - - -**CLICK HERE ->>> [https://riszurachen.blogspot.com/?d=2txoGX](https://riszurachen.blogspot.com/?d=2txoGX)** - - - - - - - - - - - - Here is a possible title and article with html formatting for the keyword "Free Download Kuldip Patwal: I Didn't Do It ! Movies 720p": - -# Free Download Kuldip Patwal: I Didn't Do It ! Movies 720p - - - -If you are looking for a thrilling and engaging crime drama, you might want to check out **Kuldip Patwal: I Didn't Do It !**, a 2017 Indian film directed by Remy Kohli and starring Deepak Dobriyal, Gulshan Devaiah, Raima Sen and Parvin Dabas. The film revolves around a commoner who is thrown into a jail cell on suspicion of the murder of a local politician. Can he survive the state entrapment, or did he actually do it? - - - -The film received mixed reviews from critics, but some praised the performances of the lead actors and the social commentary on the justice system. The film also explores themes such as corruption, power, class and identity. The film has a runtime of 2 hours and 8 minutes and is rated IMDb RATING 6.4 /10[^1^]. - - - -If you want to watch this movie online, you can stream it on Prime Video included with Prime[^1^]. However, if you want to download it for free in 720p quality, you might have to look for other sources. We do not recommend or endorse any illegal or pirated websites that offer free downloads of movies, as they may contain viruses or malware that can harm your device or compromise your privacy. Please be careful and use your discretion when downloading movies from unverified sources. - - - -We hope you enjoy watching **Kuldip Patwal: I Didn't Do It !** and let us know what you think of it in the comments below. - -Here is a possible continuation of the article: - -**Kuldip Patwal: I Didn't Do It !** is a film that challenges the viewers to question the truth and the motives behind the actions of the characters. The film uses flashbacks and multiple perspectives to reveal the backstory and the motives of the main characters. The film also has some twists and turns that keep the audience guessing until the end. - - - -The film is not a typical Bollywood masala entertainer, but rather a dark and gritty thriller that exposes the flaws and loopholes of the legal system. The film also touches upon some sensitive issues such as caste discrimination, communal violence and political assassinations. The film does not shy away from showing the brutality and corruption of the police and the politicians. The film also raises some moral and ethical questions about justice, revenge and forgiveness. - - - -The film has some strong performances from the lead actors, especially Deepak Dobriyal, who plays the role of Kuldip Patwal, a poor and illiterate man who becomes a scapegoat for a murder he did not commit. Dobriyal portrays the character with conviction and emotion, making the audience empathize with his plight. Gulshan Devaiah plays the role of Parduman Shahpuri, a lawyer who defends Kuldip Patwal in court. Devaiah delivers a powerful and charismatic performance, showing his skills as an orator and a manipulator. Raima Sen plays the role of Simrat Chadha, the wife of the slain politician Varun Chadha, played by Parvin Dabas. Sen and Dabas have limited screen time, but they manage to convey their characters' personalities and motivations effectively. - - - -The film also has some flaws, such as a slow pace, a confusing narrative structure, a lack of background music and some weak dialogues. The film also suffers from some clichés and stereotypes, such as the corrupt cop, the biased judge and the scheming politician. The film also fails to explore some of the subplots and characters in depth, such as the role of the media, the involvement of the student union leader and the relationship between Kuldip Patwal and his wife. - - - -Overall, **Kuldip Patwal: I Didn't Do It !** is a film that tries to be different and daring, but falls short of being a masterpiece. The film has some merits, such as its performances, its social commentary and its suspenseful plot, but it also has some drawbacks, such as its execution, its editing and its lack of entertainment value. The film is worth watching for those who enjoy crime dramas and thrillers, but it may not appeal to everyone. - - dfd1c89656 - - - - - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/corner_pool.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/corner_pool.py deleted file mode 100644 index a33d798b43d405e4c86bee4cd6389be21ca9c637..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/corner_pool.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'top_pool_forward', 'top_pool_backward', 'bottom_pool_forward', - 'bottom_pool_backward', 'left_pool_forward', 'left_pool_backward', - 'right_pool_forward', 'right_pool_backward' -]) - -_mode_dict = {'top': 0, 'bottom': 1, 'left': 2, 'right': 3} - - -class TopPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['top'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.top_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.top_pool_backward(input, grad_output) - return output - - -class BottomPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['bottom'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.bottom_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.bottom_pool_backward(input, grad_output) - return output - - -class LeftPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['left'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.left_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.left_pool_backward(input, grad_output) - return output - - -class RightPoolFunction(Function): - - @staticmethod - def symbolic(g, input): - output = g.op( - 'mmcv::MMCVCornerPool', input, mode_i=int(_mode_dict['right'])) - return output - - @staticmethod - def forward(ctx, input): - output = ext_module.right_pool_forward(input) - ctx.save_for_backward(input) - return output - - @staticmethod - def backward(ctx, grad_output): - input, = ctx.saved_tensors - output = ext_module.right_pool_backward(input, grad_output) - return output - - -class CornerPool(nn.Module): - """Corner Pooling. - - Corner Pooling is a new type of pooling layer that helps a - convolutional network better localize corners of bounding boxes. - - Please refer to https://arxiv.org/abs/1808.01244 for more details. - Code is modified from https://github.com/princeton-vl/CornerNet-Lite. - - Args: - mode(str): Pooling orientation for the pooling layer - - - 'bottom': Bottom Pooling - - 'left': Left Pooling - - 'right': Right Pooling - - 'top': Top Pooling - - Returns: - Feature map after pooling. - """ - - pool_functions = { - 'bottom': BottomPoolFunction, - 'left': LeftPoolFunction, - 'right': RightPoolFunction, - 'top': TopPoolFunction, - } - - cummax_dim_flip = { - 'bottom': (2, False), - 'left': (3, True), - 'right': (3, False), - 'top': (2, True), - } - - def __init__(self, mode): - super(CornerPool, self).__init__() - assert mode in self.pool_functions - self.mode = mode - self.corner_pool = self.pool_functions[mode] - - def forward(self, x): - if torch.__version__ != 'parrots' and torch.__version__ >= '1.5.0': - if torch.onnx.is_in_onnx_export(): - assert torch.__version__ >= '1.7.0', \ - 'When `cummax` serves as an intermediate component whose '\ - 'outputs is used as inputs for another modules, it\'s '\ - 'expected that pytorch version must be >= 1.7.0, '\ - 'otherwise Error appears like: `RuntimeError: tuple '\ - 'appears in op that does not forward tuples, unsupported '\ - 'kind: prim::PythonOp`.' - - dim, flip = self.cummax_dim_flip[self.mode] - if flip: - x = x.flip(dim) - pool_tensor, _ = torch.cummax(x, dim=dim) - if flip: - pool_tensor = pool_tensor.flip(dim) - return pool_tensor - else: - return self.corner_pool.apply(x) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/fast_scnn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/fast_scnn.py deleted file mode 100644 index 417114417ebc830ea11ae7216aa12d8f7a79e5cb..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/fast_scnn.py +++ /dev/null @@ -1,375 +0,0 @@ -import torch -import torch.nn as nn -from annotator.mmpkg.mmcv.cnn import (ConvModule, DepthwiseSeparableConvModule, constant_init, - kaiming_init) -from torch.nn.modules.batchnorm import _BatchNorm - -from annotator.mmpkg.mmseg.models.decode_heads.psp_head import PPM -from annotator.mmpkg.mmseg.ops import resize -from ..builder import BACKBONES -from ..utils.inverted_residual import InvertedResidual - - -class LearningToDownsample(nn.Module): - """Learning to downsample module. - - Args: - in_channels (int): Number of input channels. - dw_channels (tuple[int]): Number of output channels of the first and - the second depthwise conv (dwconv) layers. - out_channels (int): Number of output channels of the whole - 'learning to downsample' module. - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - """ - - def __init__(self, - in_channels, - dw_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU')): - super(LearningToDownsample, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - dw_channels1 = dw_channels[0] - dw_channels2 = dw_channels[1] - - self.conv = ConvModule( - in_channels, - dw_channels1, - 3, - stride=2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.dsconv1 = DepthwiseSeparableConvModule( - dw_channels1, - dw_channels2, - kernel_size=3, - stride=2, - padding=1, - norm_cfg=self.norm_cfg) - self.dsconv2 = DepthwiseSeparableConvModule( - dw_channels2, - out_channels, - kernel_size=3, - stride=2, - padding=1, - norm_cfg=self.norm_cfg) - - def forward(self, x): - x = self.conv(x) - x = self.dsconv1(x) - x = self.dsconv2(x) - return x - - -class GlobalFeatureExtractor(nn.Module): - """Global feature extractor module. - - Args: - in_channels (int): Number of input channels of the GFE module. - Default: 64 - block_channels (tuple[int]): Tuple of ints. Each int specifies the - number of output channels of each Inverted Residual module. - Default: (64, 96, 128) - out_channels(int): Number of output channels of the GFE module. - Default: 128 - expand_ratio (int): Adjusts number of channels of the hidden layer - in InvertedResidual by this amount. - Default: 6 - num_blocks (tuple[int]): Tuple of ints. Each int specifies the - number of times each Inverted Residual module is repeated. - The repeated Inverted Residual modules are called a 'group'. - Default: (3, 3, 3) - strides (tuple[int]): Tuple of ints. Each int specifies - the downsampling factor of each 'group'. - Default: (2, 2, 1) - pool_scales (tuple[int]): Tuple of ints. Each int specifies - the parameter required in 'global average pooling' within PPM. - Default: (1, 2, 3, 6) - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - align_corners (bool): align_corners argument of F.interpolate. - Default: False - """ - - def __init__(self, - in_channels=64, - block_channels=(64, 96, 128), - out_channels=128, - expand_ratio=6, - num_blocks=(3, 3, 3), - strides=(2, 2, 1), - pool_scales=(1, 2, 3, 6), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False): - super(GlobalFeatureExtractor, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - assert len(block_channels) == len(num_blocks) == 3 - self.bottleneck1 = self._make_layer(in_channels, block_channels[0], - num_blocks[0], strides[0], - expand_ratio) - self.bottleneck2 = self._make_layer(block_channels[0], - block_channels[1], num_blocks[1], - strides[1], expand_ratio) - self.bottleneck3 = self._make_layer(block_channels[1], - block_channels[2], num_blocks[2], - strides[2], expand_ratio) - self.ppm = PPM( - pool_scales, - block_channels[2], - block_channels[2] // 4, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=align_corners) - self.out = ConvModule( - block_channels[2] * 2, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def _make_layer(self, - in_channels, - out_channels, - blocks, - stride=1, - expand_ratio=6): - layers = [ - InvertedResidual( - in_channels, - out_channels, - stride, - expand_ratio, - norm_cfg=self.norm_cfg) - ] - for i in range(1, blocks): - layers.append( - InvertedResidual( - out_channels, - out_channels, - 1, - expand_ratio, - norm_cfg=self.norm_cfg)) - return nn.Sequential(*layers) - - def forward(self, x): - x = self.bottleneck1(x) - x = self.bottleneck2(x) - x = self.bottleneck3(x) - x = torch.cat([x, *self.ppm(x)], dim=1) - x = self.out(x) - return x - - -class FeatureFusionModule(nn.Module): - """Feature fusion module. - - Args: - higher_in_channels (int): Number of input channels of the - higher-resolution branch. - lower_in_channels (int): Number of input channels of the - lower-resolution branch. - out_channels (int): Number of output channels. - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - align_corners (bool): align_corners argument of F.interpolate. - Default: False - """ - - def __init__(self, - higher_in_channels, - lower_in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False): - super(FeatureFusionModule, self).__init__() - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.align_corners = align_corners - self.dwconv = ConvModule( - lower_in_channels, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.conv_lower_res = ConvModule( - out_channels, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=None) - self.conv_higher_res = ConvModule( - higher_in_channels, - out_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=None) - self.relu = nn.ReLU(True) - - def forward(self, higher_res_feature, lower_res_feature): - lower_res_feature = resize( - lower_res_feature, - size=higher_res_feature.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - lower_res_feature = self.dwconv(lower_res_feature) - lower_res_feature = self.conv_lower_res(lower_res_feature) - - higher_res_feature = self.conv_higher_res(higher_res_feature) - out = higher_res_feature + lower_res_feature - return self.relu(out) - - -@BACKBONES.register_module() -class FastSCNN(nn.Module): - """Fast-SCNN Backbone. - - Args: - in_channels (int): Number of input image channels. Default: 3. - downsample_dw_channels (tuple[int]): Number of output channels after - the first conv layer & the second conv layer in - Learning-To-Downsample (LTD) module. - Default: (32, 48). - global_in_channels (int): Number of input channels of - Global Feature Extractor(GFE). - Equal to number of output channels of LTD. - Default: 64. - global_block_channels (tuple[int]): Tuple of integers that describe - the output channels for each of the MobileNet-v2 bottleneck - residual blocks in GFE. - Default: (64, 96, 128). - global_block_strides (tuple[int]): Tuple of integers - that describe the strides (downsampling factors) for each of the - MobileNet-v2 bottleneck residual blocks in GFE. - Default: (2, 2, 1). - global_out_channels (int): Number of output channels of GFE. - Default: 128. - higher_in_channels (int): Number of input channels of the higher - resolution branch in FFM. - Equal to global_in_channels. - Default: 64. - lower_in_channels (int): Number of input channels of the lower - resolution branch in FFM. - Equal to global_out_channels. - Default: 128. - fusion_out_channels (int): Number of output channels of FFM. - Default: 128. - out_indices (tuple): Tuple of indices of list - [higher_res_features, lower_res_features, fusion_output]. - Often set to (0,1,2) to enable aux. heads. - Default: (0, 1, 2). - conv_cfg (dict | None): Config of conv layers. Default: None - norm_cfg (dict | None): Config of norm layers. Default: - dict(type='BN') - act_cfg (dict): Config of activation layers. Default: - dict(type='ReLU') - align_corners (bool): align_corners argument of F.interpolate. - Default: False - """ - - def __init__(self, - in_channels=3, - downsample_dw_channels=(32, 48), - global_in_channels=64, - global_block_channels=(64, 96, 128), - global_block_strides=(2, 2, 1), - global_out_channels=128, - higher_in_channels=64, - lower_in_channels=128, - fusion_out_channels=128, - out_indices=(0, 1, 2), - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - align_corners=False): - - super(FastSCNN, self).__init__() - if global_in_channels != higher_in_channels: - raise AssertionError('Global Input Channels must be the same \ - with Higher Input Channels!') - elif global_out_channels != lower_in_channels: - raise AssertionError('Global Output Channels must be the same \ - with Lower Input Channels!') - - self.in_channels = in_channels - self.downsample_dw_channels1 = downsample_dw_channels[0] - self.downsample_dw_channels2 = downsample_dw_channels[1] - self.global_in_channels = global_in_channels - self.global_block_channels = global_block_channels - self.global_block_strides = global_block_strides - self.global_out_channels = global_out_channels - self.higher_in_channels = higher_in_channels - self.lower_in_channels = lower_in_channels - self.fusion_out_channels = fusion_out_channels - self.out_indices = out_indices - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.align_corners = align_corners - self.learning_to_downsample = LearningToDownsample( - in_channels, - downsample_dw_channels, - global_in_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.global_feature_extractor = GlobalFeatureExtractor( - global_in_channels, - global_block_channels, - global_out_channels, - strides=self.global_block_strides, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.feature_fusion = FeatureFusionModule( - higher_in_channels, - lower_in_channels, - fusion_out_channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - - def init_weights(self, pretrained=None): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - def forward(self, x): - higher_res_features = self.learning_to_downsample(x) - lower_res_features = self.global_feature_extractor(higher_res_features) - fusion_output = self.feature_fusion(higher_res_features, - lower_res_features) - - outs = [higher_res_features, lower_res_features, fusion_output] - outs = [outs[i] for i in self.out_indices] - return tuple(outs) diff --git a/spaces/course-demos/speech-to-speech-translation/app.py b/spaces/course-demos/speech-to-speech-translation/app.py deleted file mode 100644 index c16f00bd59902f76ad7fe05f44cb1cd07b38ccc6..0000000000000000000000000000000000000000 --- a/spaces/course-demos/speech-to-speech-translation/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from datasets import load_dataset - -from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor, pipeline - - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -# load speech translation checkpoint -asr_pipe = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=device) - -# load text-to-speech checkpoint and speaker embeddings -processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts") - -model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts").to(device) -vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan").to(device) - -embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") -speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) - - -def translate(audio): - outputs = asr_pipe(audio, max_new_tokens=256, generate_kwargs={"task": "translate"}) - return outputs["text"] - - -def synthesise(text): - inputs = processor(text=text, return_tensors="pt") - speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder) - return speech.cpu() - - -def speech_to_speech_translation(audio): - translated_text = translate(audio) - synthesised_speech = synthesise(translated_text) - synthesised_speech = (synthesised_speech.numpy() * 32767).astype(np.int16) - return 16000, synthesised_speech - - -title = "Cascaded STST" -description = """ -Demo for cascaded speech-to-speech translation (STST), mapping from source speech in any language to target speech in English. Demo uses OpenAI's [Whisper Base](https://huggingface.co/openai/whisper-base) model for speech translation, and Microsoft's -[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model for text-to-speech: - -![Cascaded STST](https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/s2st_cascaded.png "Diagram of cascaded speech to speech translation") -""" - -demo = gr.Blocks() - -mic_translate = gr.Interface( - fn=speech_to_speech_translation, - inputs=gr.Audio(source="microphone", type="filepath"), - outputs=gr.Audio(label="Generated Speech", type="numpy"), - title=title, - description=description, -) - -file_translate = gr.Interface( - fn=speech_to_speech_translation, - inputs=gr.Audio(source="upload", type="filepath"), - outputs=gr.Audio(label="Generated Speech", type="numpy"), - examples=[["./example.wav"]], - title=title, - description=description, -) - -with demo: - gr.TabbedInterface([mic_translate, file_translate], ["Microphone", "Audio File"]) - -demo.launch() diff --git a/spaces/dakaiye/dky_xuexi/docs/test_markdown_format.py b/spaces/dakaiye/dky_xuexi/docs/test_markdown_format.py deleted file mode 100644 index 896f6f130c69f8a94d6f49feadf7091f0f23c2c9..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/docs/test_markdown_format.py +++ /dev/null @@ -1,130 +0,0 @@ -sample = """ -[1]: https://baike.baidu.com/item/%E8%B4%A8%E8%83%BD%E6%96%B9%E7%A8%8B/1884527 "质能方程(质能方程式)_百度百科" -[2]: https://www.zhihu.com/question/348249281 "如何理解质能方程 E=mc²? - 知乎" -[3]: https://zhuanlan.zhihu.com/p/32597385 "质能方程的推导与理解 - 知乎 - 知乎专栏" - -你好,这是必应。质能方程是描述质量与能量之间的当量关系的方程[^1^][1]。用tex格式,质能方程可以写成$$E=mc^2$$,其中$E$是能量,$m$是质量,$c$是光速[^2^][2] [^3^][3]。 -""" -import re - -def preprocess_newbing_out(s): - pattern = r'\^(\d+)\^' # 匹配^数字^ - pattern2 = r'\[(\d+)\]' # 匹配^数字^ - sub = lambda m: '\['+m.group(1)+'\]' # 将匹配到的数字作为替换值 - result = re.sub(pattern, sub, s) # 替换操作 - if '[1]' in result: - result += '


    ' + "
    ".join([re.sub(pattern2, sub, r) for r in result.split('\n') if r.startswith('[')]) + '
    ' - return result - - -def close_up_code_segment_during_stream(gpt_reply): - """ - 在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的``` - - Args: - gpt_reply (str): GPT模型返回的回复字符串。 - - Returns: - str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。 - - """ - if '```' not in gpt_reply: - return gpt_reply - if gpt_reply.endswith('```'): - return gpt_reply - - # 排除了以上两个情况,我们 - segments = gpt_reply.split('```') - n_mark = len(segments) - 1 - if n_mark % 2 == 1: - # print('输出代码片段中!') - return gpt_reply+'\n```' - else: - return gpt_reply - -import markdown -from latex2mathml.converter import convert as tex2mathml -from functools import wraps, lru_cache -def markdown_convertion(txt): - """ - 将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。 - """ - pre = '
    ' - suf = '
    ' - if txt.startswith(pre) and txt.endswith(suf): - # print('警告,输入了已经经过转化的字符串,二次转化可能出问题') - return txt # 已经被转化过,不需要再次转化 - - markdown_extension_configs = { - 'mdx_math': { - 'enable_dollar_delimiter': True, - 'use_gitlab_delimiters': False, - }, - } - find_equation_pattern = r'\n', '') - return content - - - if ('$' in txt) and ('```' not in txt): # 有$标识的公式符号,且没有代码段```的标识 - # convert everything to html format - split = markdown.markdown(text='---') - convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs) - convert_stage_1 = markdown_bug_hunt(convert_stage_1) - # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s). - # 1. convert to easy-to-copy tex (do not render math) - convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL) - # 2. convert to rendered equation - convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL) - # cat them together - return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf - else: - return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf - - -sample = preprocess_newbing_out(sample) -sample = close_up_code_segment_during_stream(sample) -sample = markdown_convertion(sample) -with open('tmp.html', 'w', encoding='utf8') as f: - f.write(""" - - - My Website - - - - """) - f.write(sample) diff --git a/spaces/danielpedriniportfolio/AutoDA/pages/03-Drop_Columns.py b/spaces/danielpedriniportfolio/AutoDA/pages/03-Drop_Columns.py deleted file mode 100644 index bfaaa86b4b3aee584adae0f126db62b41e79ca8a..0000000000000000000000000000000000000000 --- a/spaces/danielpedriniportfolio/AutoDA/pages/03-Drop_Columns.py +++ /dev/null @@ -1,46 +0,0 @@ -import pandas as pd -import streamlit as st - -def drop_columns(df, columns): - df.drop(columns, axis=1, inplace=True) - return df - -def reload_data(): - st.write("Reloading data...") - df_original = st.session_state["df_original"] - df = df_original.copy() - st.session_state.df = df - del st.session_state['df_target'] - del st.session_state['best'] - st.experimental_rerun() - - -st.set_page_config(layout='wide') -col1, col2, col3 = st.columns([15, 70, 15]) - -with col1: - st.write('') -with col2: - if 'df' not in st.session_state: - st.warning('Please upload a CSV file') - else: - st.header('Missing Values') - if st.button('Reload data'): - reload_data() - df = st.session_state['df'] - # show all columns names - st.dataframe(df.head()) - columns = df.columns.tolist() - # create a multiselect to select the columns to drop - columns_to_drop = st.multiselect('Select the columns to drop', columns) - # create a button to drop the columns - if st.button('Drop columns'): - df = drop_columns(df, columns_to_drop) - st.dataframe(df.head()) - st.session_state.df = df - st.success('Columns dropped') - st.experimental_rerun() -with col3: - st.write('') - - diff --git a/spaces/dbirks/diffuse-the-rest/build/_app/immutable/chunks/0-3a46033e.js b/spaces/dbirks/diffuse-the-rest/build/_app/immutable/chunks/0-3a46033e.js deleted file mode 100644 index 65744d8f4ee04a8c9db228c1774c5a6c5cb8f0cd..0000000000000000000000000000000000000000 --- a/spaces/dbirks/diffuse-the-rest/build/_app/immutable/chunks/0-3a46033e.js +++ /dev/null @@ -1 +0,0 @@ -import{default as m}from"../components/pages/_layout.svelte-dd7ed0fb.js";import"./index-a207c28c.js";export{m as component}; diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/bin/Activate.ps1 b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/bin/Activate.ps1 deleted file mode 100644 index a3bc6fb1f05bf96c284d2cba2508314d115ce7e3..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/bin/Activate.ps1 +++ /dev/null @@ -1,241 +0,0 @@ -<# -.Synopsis -Activate a Python virtual environment for the current PowerShell session. - -.Description -Pushes the python executable for a virtual environment to the front of the -$Env:PATH environment variable and sets the prompt to signify that you are -in a Python virtual environment. Makes use of the command line switches as -well as the `pyvenv.cfg` file values present in the virtual environment. - -.Parameter VenvDir -Path to the directory that contains the virtual environment to activate. The -default value for this is the parent of the directory that the Activate.ps1 -script is located within. - -.Parameter Prompt -The prompt prefix to display when this virtual environment is activated. By -default, this prompt is the name of the virtual environment folder (VenvDir) -surrounded by parentheses and followed by a single space (ie. '(.venv) '). - -.Example -Activate.ps1 -Activates the Python virtual environment that contains the Activate.ps1 script. - -.Example -Activate.ps1 -Verbose -Activates the Python virtual environment that contains the Activate.ps1 script, -and shows extra information about the activation as it executes. - -.Example -Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv -Activates the Python virtual environment located in the specified location. - -.Example -Activate.ps1 -Prompt "MyPython" -Activates the Python virtual environment that contains the Activate.ps1 script, -and prefixes the current prompt with the specified string (surrounded in -parentheses) while the virtual environment is active. - -.Notes -On Windows, it may be required to enable this Activate.ps1 script by setting the -execution policy for the user. You can do this by issuing the following PowerShell -command: - -PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser - -For more information on Execution Policies: -https://go.microsoft.com/fwlink/?LinkID=135170 - -#> -Param( - [Parameter(Mandatory = $false)] - [String] - $VenvDir, - [Parameter(Mandatory = $false)] - [String] - $Prompt -) - -<# Function declarations --------------------------------------------------- #> - -<# -.Synopsis -Remove all shell session elements added by the Activate script, including the -addition of the virtual environment's Python executable from the beginning of -the PATH variable. - -.Parameter NonDestructive -If present, do not remove this function from the global namespace for the -session. - -#> -function global:deactivate ([switch]$NonDestructive) { - # Revert to original values - - # The prior prompt: - if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { - Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt - Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT - } - - # The prior PYTHONHOME: - if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { - Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME - Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME - } - - # The prior PATH: - if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { - Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH - Remove-Item -Path Env:_OLD_VIRTUAL_PATH - } - - # Just remove the VIRTUAL_ENV altogether: - if (Test-Path -Path Env:VIRTUAL_ENV) { - Remove-Item -Path env:VIRTUAL_ENV - } - - # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: - if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { - Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force - } - - # Leave deactivate function in the global namespace if requested: - if (-not $NonDestructive) { - Remove-Item -Path function:deactivate - } -} - -<# -.Description -Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the -given folder, and returns them in a map. - -For each line in the pyvenv.cfg file, if that line can be parsed into exactly -two strings separated by `=` (with any amount of whitespace surrounding the =) -then it is considered a `key = value` line. The left hand string is the key, -the right hand is the value. - -If the value starts with a `'` or a `"` then the first and last character is -stripped from the value before being captured. - -.Parameter ConfigDir -Path to the directory that contains the `pyvenv.cfg` file. -#> -function Get-PyVenvConfig( - [String] - $ConfigDir -) { - Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" - - # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). - $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue - - # An empty map will be returned if no config file is found. - $pyvenvConfig = @{ } - - if ($pyvenvConfigPath) { - - Write-Verbose "File exists, parse `key = value` lines" - $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath - - $pyvenvConfigContent | ForEach-Object { - $keyval = $PSItem -split "\s*=\s*", 2 - if ($keyval[0] -and $keyval[1]) { - $val = $keyval[1] - - # Remove extraneous quotations around a string value. - if ("'""".Contains($val.Substring(0, 1))) { - $val = $val.Substring(1, $val.Length - 2) - } - - $pyvenvConfig[$keyval[0]] = $val - Write-Verbose "Adding Key: '$($keyval[0])'='$val'" - } - } - } - return $pyvenvConfig -} - - -<# Begin Activate script --------------------------------------------------- #> - -# Determine the containing directory of this script -$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition -$VenvExecDir = Get-Item -Path $VenvExecPath - -Write-Verbose "Activation script is located in path: '$VenvExecPath'" -Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" -Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" - -# Set values required in priority: CmdLine, ConfigFile, Default -# First, get the location of the virtual environment, it might not be -# VenvExecDir if specified on the command line. -if ($VenvDir) { - Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" -} -else { - Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." - $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") - Write-Verbose "VenvDir=$VenvDir" -} - -# Next, read the `pyvenv.cfg` file to determine any required value such -# as `prompt`. -$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir - -# Next, set the prompt from the command line, or the config file, or -# just use the name of the virtual environment folder. -if ($Prompt) { - Write-Verbose "Prompt specified as argument, using '$Prompt'" -} -else { - Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" - if ($pyvenvCfg -and $pyvenvCfg['prompt']) { - Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" - $Prompt = $pyvenvCfg['prompt']; - } - else { - Write-Verbose " Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virutal environment)" - Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" - $Prompt = Split-Path -Path $venvDir -Leaf - } -} - -Write-Verbose "Prompt = '$Prompt'" -Write-Verbose "VenvDir='$VenvDir'" - -# Deactivate any currently active virtual environment, but leave the -# deactivate function in place. -deactivate -nondestructive - -# Now set the environment variable VIRTUAL_ENV, used by many tools to determine -# that there is an activated venv. -$env:VIRTUAL_ENV = $VenvDir - -if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { - - Write-Verbose "Setting prompt to '$Prompt'" - - # Set the prompt to include the env name - # Make sure _OLD_VIRTUAL_PROMPT is global - function global:_OLD_VIRTUAL_PROMPT { "" } - Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT - New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt - - function global:prompt { - Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " - _OLD_VIRTUAL_PROMPT - } -} - -# Clear PYTHONHOME -if (Test-Path -Path Env:PYTHONHOME) { - Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME - Remove-Item -Path Env:PYTHONHOME -} - -# Add the venv to the PATH -Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH -$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py deleted file mode 100644 index efb2b2d14cc46dc51ff795cf7a1fb95bd6d63673..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py +++ /dev/null @@ -1,69 +0,0 @@ -# -*- coding: utf-8 -*- -"""Calculate the perimeter of a glyph.""" - -from fontTools.pens.basePen import BasePen -from fontTools.misc.bezierTools import ( - approximateQuadraticArcLengthC, - calcQuadraticArcLengthC, - approximateCubicArcLengthC, - calcCubicArcLengthC, -) -import math - - -__all__ = ["PerimeterPen"] - - -def _distance(p0, p1): - return math.hypot(p0[0] - p1[0], p0[1] - p1[1]) - - -class PerimeterPen(BasePen): - def __init__(self, glyphset=None, tolerance=0.005): - BasePen.__init__(self, glyphset) - self.value = 0 - self.tolerance = tolerance - - # Choose which algorithm to use for quadratic and for cubic. - # Quadrature is faster but has fixed error characteristic with no strong - # error bound. The cutoff points are derived empirically. - self._addCubic = ( - self._addCubicQuadrature if tolerance >= 0.0015 else self._addCubicRecursive - ) - self._addQuadratic = ( - self._addQuadraticQuadrature - if tolerance >= 0.00075 - else self._addQuadraticExact - ) - - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _lineTo(self, p1): - p0 = self._getCurrentPoint() - self.value += _distance(p0, p1) - - def _addQuadraticExact(self, c0, c1, c2): - self.value += calcQuadraticArcLengthC(c0, c1, c2) - - def _addQuadraticQuadrature(self, c0, c1, c2): - self.value += approximateQuadraticArcLengthC(c0, c1, c2) - - def _qCurveToOne(self, p1, p2): - p0 = self._getCurrentPoint() - self._addQuadratic(complex(*p0), complex(*p1), complex(*p2)) - - def _addCubicRecursive(self, c0, c1, c2, c3): - self.value += calcCubicArcLengthC(c0, c1, c2, c3, self.tolerance) - - def _addCubicQuadrature(self, c0, c1, c2, c3): - self.value += approximateCubicArcLengthC(c0, c1, c2, c3) - - def _curveToOne(self, p1, p2, p3): - p0 = self._getCurrentPoint() - self._addCubic(complex(*p0), complex(*p1), complex(*p2), complex(*p3)) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/macUtils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/macUtils.py deleted file mode 100644 index 468a75ad6d2da59bf00bbb07063ba4819aff64dd..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/macUtils.py +++ /dev/null @@ -1,54 +0,0 @@ -"""ttLib.macUtils.py -- Various Mac-specific stuff.""" -from io import BytesIO -from fontTools.misc.macRes import ResourceReader, ResourceError - - -def getSFNTResIndices(path): - """Determine whether a file has a 'sfnt' resource fork or not.""" - try: - reader = ResourceReader(path) - indices = reader.getIndices("sfnt") - reader.close() - return indices - except ResourceError: - return [] - - -def openTTFonts(path): - """Given a pathname, return a list of TTFont objects. In the case - of a flat TTF/OTF file, the list will contain just one font object; - but in the case of a Mac font suitcase it will contain as many - font objects as there are sfnt resources in the file. - """ - from fontTools import ttLib - - fonts = [] - sfnts = getSFNTResIndices(path) - if not sfnts: - fonts.append(ttLib.TTFont(path)) - else: - for index in sfnts: - fonts.append(ttLib.TTFont(path, index)) - if not fonts: - raise ttLib.TTLibError("no fonts found in file '%s'" % path) - return fonts - - -class SFNTResourceReader(BytesIO): - - """Simple read-only file wrapper for 'sfnt' resources.""" - - def __init__(self, path, res_name_or_index): - from fontTools import ttLib - - reader = ResourceReader(path) - if isinstance(res_name_or_index, str): - rsrc = reader.getNamedResource("sfnt", res_name_or_index) - else: - rsrc = reader.getIndResource("sfnt", res_name_or_index) - if rsrc is None: - raise ttLib.TTLibError("sfnt resource not found: %s" % res_name_or_index) - reader.close() - self.rsrc = rsrc - super(SFNTResourceReader, self).__init__(rsrc.data) - self.name = path diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/idna/compat.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/idna/compat.py deleted file mode 100644 index 786e6bda63699b72d588ba91dd73df017570aee5..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/idna/compat.py +++ /dev/null @@ -1,13 +0,0 @@ -from .core import * -from .codec import * -from typing import Any, Union - -def ToASCII(label: str) -> bytes: - return encode(label) - -def ToUnicode(label: Union[bytes, bytearray]) -> str: - return decode(label) - -def nameprep(s: Any) -> None: - raise NotImplementedError('IDNA 2008 does not utilise nameprep protocol') - diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/ext.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/ext.py deleted file mode 100644 index d5550540cda01ea9da32747754d34603a7bbac0a..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/ext.py +++ /dev/null @@ -1,859 +0,0 @@ -"""Extension API for adding custom tags and behavior.""" -import pprint -import re -import typing as t - -from markupsafe import Markup - -from . import defaults -from . import nodes -from .environment import Environment -from .exceptions import TemplateAssertionError -from .exceptions import TemplateSyntaxError -from .runtime import concat # type: ignore -from .runtime import Context -from .runtime import Undefined -from .utils import import_string -from .utils import pass_context - -if t.TYPE_CHECKING: - import typing_extensions as te - from .lexer import Token - from .lexer import TokenStream - from .parser import Parser - - class _TranslationsBasic(te.Protocol): - def gettext(self, message: str) -> str: - ... - - def ngettext(self, singular: str, plural: str, n: int) -> str: - pass - - class _TranslationsContext(_TranslationsBasic): - def pgettext(self, context: str, message: str) -> str: - ... - - def npgettext(self, context: str, singular: str, plural: str, n: int) -> str: - ... - - _SupportedTranslations = t.Union[_TranslationsBasic, _TranslationsContext] - - -# I18N functions available in Jinja templates. If the I18N library -# provides ugettext, it will be assigned to gettext. -GETTEXT_FUNCTIONS: t.Tuple[str, ...] = ( - "_", - "gettext", - "ngettext", - "pgettext", - "npgettext", -) -_ws_re = re.compile(r"\s*\n\s*") - - -class Extension: - """Extensions can be used to add extra functionality to the Jinja template - system at the parser level. Custom extensions are bound to an environment - but may not store environment specific data on `self`. The reason for - this is that an extension can be bound to another environment (for - overlays) by creating a copy and reassigning the `environment` attribute. - - As extensions are created by the environment they cannot accept any - arguments for configuration. One may want to work around that by using - a factory function, but that is not possible as extensions are identified - by their import name. The correct way to configure the extension is - storing the configuration values on the environment. Because this way the - environment ends up acting as central configuration storage the - attributes may clash which is why extensions have to ensure that the names - they choose for configuration are not too generic. ``prefix`` for example - is a terrible name, ``fragment_cache_prefix`` on the other hand is a good - name as includes the name of the extension (fragment cache). - """ - - identifier: t.ClassVar[str] - - def __init_subclass__(cls) -> None: - cls.identifier = f"{cls.__module__}.{cls.__name__}" - - #: if this extension parses this is the list of tags it's listening to. - tags: t.Set[str] = set() - - #: the priority of that extension. This is especially useful for - #: extensions that preprocess values. A lower value means higher - #: priority. - #: - #: .. versionadded:: 2.4 - priority = 100 - - def __init__(self, environment: Environment) -> None: - self.environment = environment - - def bind(self, environment: Environment) -> "Extension": - """Create a copy of this extension bound to another environment.""" - rv = object.__new__(self.__class__) - rv.__dict__.update(self.__dict__) - rv.environment = environment - return rv - - def preprocess( - self, source: str, name: t.Optional[str], filename: t.Optional[str] = None - ) -> str: - """This method is called before the actual lexing and can be used to - preprocess the source. The `filename` is optional. The return value - must be the preprocessed source. - """ - return source - - def filter_stream( - self, stream: "TokenStream" - ) -> t.Union["TokenStream", t.Iterable["Token"]]: - """It's passed a :class:`~jinja2.lexer.TokenStream` that can be used - to filter tokens returned. This method has to return an iterable of - :class:`~jinja2.lexer.Token`\\s, but it doesn't have to return a - :class:`~jinja2.lexer.TokenStream`. - """ - return stream - - def parse(self, parser: "Parser") -> t.Union[nodes.Node, t.List[nodes.Node]]: - """If any of the :attr:`tags` matched this method is called with the - parser as first argument. The token the parser stream is pointing at - is the name token that matched. This method has to return one or a - list of multiple nodes. - """ - raise NotImplementedError() - - def attr( - self, name: str, lineno: t.Optional[int] = None - ) -> nodes.ExtensionAttribute: - """Return an attribute node for the current extension. This is useful - to pass constants on extensions to generated template code. - - :: - - self.attr('_my_attribute', lineno=lineno) - """ - return nodes.ExtensionAttribute(self.identifier, name, lineno=lineno) - - def call_method( - self, - name: str, - args: t.Optional[t.List[nodes.Expr]] = None, - kwargs: t.Optional[t.List[nodes.Keyword]] = None, - dyn_args: t.Optional[nodes.Expr] = None, - dyn_kwargs: t.Optional[nodes.Expr] = None, - lineno: t.Optional[int] = None, - ) -> nodes.Call: - """Call a method of the extension. This is a shortcut for - :meth:`attr` + :class:`jinja2.nodes.Call`. - """ - if args is None: - args = [] - if kwargs is None: - kwargs = [] - return nodes.Call( - self.attr(name, lineno=lineno), - args, - kwargs, - dyn_args, - dyn_kwargs, - lineno=lineno, - ) - - -@pass_context -def _gettext_alias( - __context: Context, *args: t.Any, **kwargs: t.Any -) -> t.Union[t.Any, Undefined]: - return __context.call(__context.resolve("gettext"), *args, **kwargs) - - -def _make_new_gettext(func: t.Callable[[str], str]) -> t.Callable[..., str]: - @pass_context - def gettext(__context: Context, __string: str, **variables: t.Any) -> str: - rv = __context.call(func, __string) - if __context.eval_ctx.autoescape: - rv = Markup(rv) - # Always treat as a format string, even if there are no - # variables. This makes translation strings more consistent - # and predictable. This requires escaping - return rv % variables # type: ignore - - return gettext - - -def _make_new_ngettext(func: t.Callable[[str, str, int], str]) -> t.Callable[..., str]: - @pass_context - def ngettext( - __context: Context, - __singular: str, - __plural: str, - __num: int, - **variables: t.Any, - ) -> str: - variables.setdefault("num", __num) - rv = __context.call(func, __singular, __plural, __num) - if __context.eval_ctx.autoescape: - rv = Markup(rv) - # Always treat as a format string, see gettext comment above. - return rv % variables # type: ignore - - return ngettext - - -def _make_new_pgettext(func: t.Callable[[str, str], str]) -> t.Callable[..., str]: - @pass_context - def pgettext( - __context: Context, __string_ctx: str, __string: str, **variables: t.Any - ) -> str: - variables.setdefault("context", __string_ctx) - rv = __context.call(func, __string_ctx, __string) - - if __context.eval_ctx.autoescape: - rv = Markup(rv) - - # Always treat as a format string, see gettext comment above. - return rv % variables # type: ignore - - return pgettext - - -def _make_new_npgettext( - func: t.Callable[[str, str, str, int], str] -) -> t.Callable[..., str]: - @pass_context - def npgettext( - __context: Context, - __string_ctx: str, - __singular: str, - __plural: str, - __num: int, - **variables: t.Any, - ) -> str: - variables.setdefault("context", __string_ctx) - variables.setdefault("num", __num) - rv = __context.call(func, __string_ctx, __singular, __plural, __num) - - if __context.eval_ctx.autoescape: - rv = Markup(rv) - - # Always treat as a format string, see gettext comment above. - return rv % variables # type: ignore - - return npgettext - - -class InternationalizationExtension(Extension): - """This extension adds gettext support to Jinja.""" - - tags = {"trans"} - - # TODO: the i18n extension is currently reevaluating values in a few - # situations. Take this example: - # {% trans count=something() %}{{ count }} foo{% pluralize - # %}{{ count }} fooss{% endtrans %} - # something is called twice here. One time for the gettext value and - # the other time for the n-parameter of the ngettext function. - - def __init__(self, environment: Environment) -> None: - super().__init__(environment) - environment.globals["_"] = _gettext_alias - environment.extend( - install_gettext_translations=self._install, - install_null_translations=self._install_null, - install_gettext_callables=self._install_callables, - uninstall_gettext_translations=self._uninstall, - extract_translations=self._extract, - newstyle_gettext=False, - ) - - def _install( - self, translations: "_SupportedTranslations", newstyle: t.Optional[bool] = None - ) -> None: - # ugettext and ungettext are preferred in case the I18N library - # is providing compatibility with older Python versions. - gettext = getattr(translations, "ugettext", None) - if gettext is None: - gettext = translations.gettext - ngettext = getattr(translations, "ungettext", None) - if ngettext is None: - ngettext = translations.ngettext - - pgettext = getattr(translations, "pgettext", None) - npgettext = getattr(translations, "npgettext", None) - self._install_callables( - gettext, ngettext, newstyle=newstyle, pgettext=pgettext, npgettext=npgettext - ) - - def _install_null(self, newstyle: t.Optional[bool] = None) -> None: - import gettext - - translations = gettext.NullTranslations() - - if hasattr(translations, "pgettext"): - # Python < 3.8 - pgettext = translations.pgettext # type: ignore - else: - - def pgettext(c: str, s: str) -> str: - return s - - if hasattr(translations, "npgettext"): - npgettext = translations.npgettext # type: ignore - else: - - def npgettext(c: str, s: str, p: str, n: int) -> str: - return s if n == 1 else p - - self._install_callables( - gettext=translations.gettext, - ngettext=translations.ngettext, - newstyle=newstyle, - pgettext=pgettext, - npgettext=npgettext, - ) - - def _install_callables( - self, - gettext: t.Callable[[str], str], - ngettext: t.Callable[[str, str, int], str], - newstyle: t.Optional[bool] = None, - pgettext: t.Optional[t.Callable[[str, str], str]] = None, - npgettext: t.Optional[t.Callable[[str, str, str, int], str]] = None, - ) -> None: - if newstyle is not None: - self.environment.newstyle_gettext = newstyle # type: ignore - if self.environment.newstyle_gettext: # type: ignore - gettext = _make_new_gettext(gettext) - ngettext = _make_new_ngettext(ngettext) - - if pgettext is not None: - pgettext = _make_new_pgettext(pgettext) - - if npgettext is not None: - npgettext = _make_new_npgettext(npgettext) - - self.environment.globals.update( - gettext=gettext, ngettext=ngettext, pgettext=pgettext, npgettext=npgettext - ) - - def _uninstall(self, translations: "_SupportedTranslations") -> None: - for key in ("gettext", "ngettext", "pgettext", "npgettext"): - self.environment.globals.pop(key, None) - - def _extract( - self, - source: t.Union[str, nodes.Template], - gettext_functions: t.Sequence[str] = GETTEXT_FUNCTIONS, - ) -> t.Iterator[ - t.Tuple[int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]] - ]: - if isinstance(source, str): - source = self.environment.parse(source) - return extract_from_ast(source, gettext_functions) - - def parse(self, parser: "Parser") -> t.Union[nodes.Node, t.List[nodes.Node]]: - """Parse a translatable tag.""" - lineno = next(parser.stream).lineno - - context = None - context_token = parser.stream.next_if("string") - - if context_token is not None: - context = context_token.value - - # find all the variables referenced. Additionally a variable can be - # defined in the body of the trans block too, but this is checked at - # a later state. - plural_expr: t.Optional[nodes.Expr] = None - plural_expr_assignment: t.Optional[nodes.Assign] = None - num_called_num = False - variables: t.Dict[str, nodes.Expr] = {} - trimmed = None - while parser.stream.current.type != "block_end": - if variables: - parser.stream.expect("comma") - - # skip colon for python compatibility - if parser.stream.skip_if("colon"): - break - - token = parser.stream.expect("name") - if token.value in variables: - parser.fail( - f"translatable variable {token.value!r} defined twice.", - token.lineno, - exc=TemplateAssertionError, - ) - - # expressions - if parser.stream.current.type == "assign": - next(parser.stream) - variables[token.value] = var = parser.parse_expression() - elif trimmed is None and token.value in ("trimmed", "notrimmed"): - trimmed = token.value == "trimmed" - continue - else: - variables[token.value] = var = nodes.Name(token.value, "load") - - if plural_expr is None: - if isinstance(var, nodes.Call): - plural_expr = nodes.Name("_trans", "load") - variables[token.value] = plural_expr - plural_expr_assignment = nodes.Assign( - nodes.Name("_trans", "store"), var - ) - else: - plural_expr = var - num_called_num = token.value == "num" - - parser.stream.expect("block_end") - - plural = None - have_plural = False - referenced = set() - - # now parse until endtrans or pluralize - singular_names, singular = self._parse_block(parser, True) - if singular_names: - referenced.update(singular_names) - if plural_expr is None: - plural_expr = nodes.Name(singular_names[0], "load") - num_called_num = singular_names[0] == "num" - - # if we have a pluralize block, we parse that too - if parser.stream.current.test("name:pluralize"): - have_plural = True - next(parser.stream) - if parser.stream.current.type != "block_end": - token = parser.stream.expect("name") - if token.value not in variables: - parser.fail( - f"unknown variable {token.value!r} for pluralization", - token.lineno, - exc=TemplateAssertionError, - ) - plural_expr = variables[token.value] - num_called_num = token.value == "num" - parser.stream.expect("block_end") - plural_names, plural = self._parse_block(parser, False) - next(parser.stream) - referenced.update(plural_names) - else: - next(parser.stream) - - # register free names as simple name expressions - for name in referenced: - if name not in variables: - variables[name] = nodes.Name(name, "load") - - if not have_plural: - plural_expr = None - elif plural_expr is None: - parser.fail("pluralize without variables", lineno) - - if trimmed is None: - trimmed = self.environment.policies["ext.i18n.trimmed"] - if trimmed: - singular = self._trim_whitespace(singular) - if plural: - plural = self._trim_whitespace(plural) - - node = self._make_node( - singular, - plural, - context, - variables, - plural_expr, - bool(referenced), - num_called_num and have_plural, - ) - node.set_lineno(lineno) - if plural_expr_assignment is not None: - return [plural_expr_assignment, node] - else: - return node - - def _trim_whitespace(self, string: str, _ws_re: t.Pattern[str] = _ws_re) -> str: - return _ws_re.sub(" ", string.strip()) - - def _parse_block( - self, parser: "Parser", allow_pluralize: bool - ) -> t.Tuple[t.List[str], str]: - """Parse until the next block tag with a given name.""" - referenced = [] - buf = [] - - while True: - if parser.stream.current.type == "data": - buf.append(parser.stream.current.value.replace("%", "%%")) - next(parser.stream) - elif parser.stream.current.type == "variable_begin": - next(parser.stream) - name = parser.stream.expect("name").value - referenced.append(name) - buf.append(f"%({name})s") - parser.stream.expect("variable_end") - elif parser.stream.current.type == "block_begin": - next(parser.stream) - if parser.stream.current.test("name:endtrans"): - break - elif parser.stream.current.test("name:pluralize"): - if allow_pluralize: - break - parser.fail( - "a translatable section can have only one pluralize section" - ) - parser.fail( - "control structures in translatable sections are not allowed" - ) - elif parser.stream.eos: - parser.fail("unclosed translation block") - else: - raise RuntimeError("internal parser error") - - return referenced, concat(buf) - - def _make_node( - self, - singular: str, - plural: t.Optional[str], - context: t.Optional[str], - variables: t.Dict[str, nodes.Expr], - plural_expr: t.Optional[nodes.Expr], - vars_referenced: bool, - num_called_num: bool, - ) -> nodes.Output: - """Generates a useful node from the data provided.""" - newstyle = self.environment.newstyle_gettext # type: ignore - node: nodes.Expr - - # no variables referenced? no need to escape for old style - # gettext invocations only if there are vars. - if not vars_referenced and not newstyle: - singular = singular.replace("%%", "%") - if plural: - plural = plural.replace("%%", "%") - - func_name = "gettext" - func_args: t.List[nodes.Expr] = [nodes.Const(singular)] - - if context is not None: - func_args.insert(0, nodes.Const(context)) - func_name = f"p{func_name}" - - if plural_expr is not None: - func_name = f"n{func_name}" - func_args.extend((nodes.Const(plural), plural_expr)) - - node = nodes.Call(nodes.Name(func_name, "load"), func_args, [], None, None) - - # in case newstyle gettext is used, the method is powerful - # enough to handle the variable expansion and autoescape - # handling itself - if newstyle: - for key, value in variables.items(): - # the function adds that later anyways in case num was - # called num, so just skip it. - if num_called_num and key == "num": - continue - node.kwargs.append(nodes.Keyword(key, value)) - - # otherwise do that here - else: - # mark the return value as safe if we are in an - # environment with autoescaping turned on - node = nodes.MarkSafeIfAutoescape(node) - if variables: - node = nodes.Mod( - node, - nodes.Dict( - [ - nodes.Pair(nodes.Const(key), value) - for key, value in variables.items() - ] - ), - ) - return nodes.Output([node]) - - -class ExprStmtExtension(Extension): - """Adds a `do` tag to Jinja that works like the print statement just - that it doesn't print the return value. - """ - - tags = {"do"} - - def parse(self, parser: "Parser") -> nodes.ExprStmt: - node = nodes.ExprStmt(lineno=next(parser.stream).lineno) - node.node = parser.parse_tuple() - return node - - -class LoopControlExtension(Extension): - """Adds break and continue to the template engine.""" - - tags = {"break", "continue"} - - def parse(self, parser: "Parser") -> t.Union[nodes.Break, nodes.Continue]: - token = next(parser.stream) - if token.value == "break": - return nodes.Break(lineno=token.lineno) - return nodes.Continue(lineno=token.lineno) - - -class DebugExtension(Extension): - """A ``{% debug %}`` tag that dumps the available variables, - filters, and tests. - - .. code-block:: html+jinja - -
    {% debug %}
    - - .. code-block:: text - - {'context': {'cycler': , - ..., - 'namespace': }, - 'filters': ['abs', 'attr', 'batch', 'capitalize', 'center', 'count', 'd', - ..., 'urlencode', 'urlize', 'wordcount', 'wordwrap', 'xmlattr'], - 'tests': ['!=', '<', '<=', '==', '>', '>=', 'callable', 'defined', - ..., 'odd', 'sameas', 'sequence', 'string', 'undefined', 'upper']} - - .. versionadded:: 2.11.0 - """ - - tags = {"debug"} - - def parse(self, parser: "Parser") -> nodes.Output: - lineno = parser.stream.expect("name:debug").lineno - context = nodes.ContextReference() - result = self.call_method("_render", [context], lineno=lineno) - return nodes.Output([result], lineno=lineno) - - def _render(self, context: Context) -> str: - result = { - "context": context.get_all(), - "filters": sorted(self.environment.filters.keys()), - "tests": sorted(self.environment.tests.keys()), - } - - # Set the depth since the intent is to show the top few names. - return pprint.pformat(result, depth=3, compact=True) - - -def extract_from_ast( - ast: nodes.Template, - gettext_functions: t.Sequence[str] = GETTEXT_FUNCTIONS, - babel_style: bool = True, -) -> t.Iterator[ - t.Tuple[int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]] -]: - """Extract localizable strings from the given template node. Per - default this function returns matches in babel style that means non string - parameters as well as keyword arguments are returned as `None`. This - allows Babel to figure out what you really meant if you are using - gettext functions that allow keyword arguments for placeholder expansion. - If you don't want that behavior set the `babel_style` parameter to `False` - which causes only strings to be returned and parameters are always stored - in tuples. As a consequence invalid gettext calls (calls without a single - string parameter or string parameters after non-string parameters) are - skipped. - - This example explains the behavior: - - >>> from jinja2 import Environment - >>> env = Environment() - >>> node = env.parse('{{ (_("foo"), _(), ngettext("foo", "bar", 42)) }}') - >>> list(extract_from_ast(node)) - [(1, '_', 'foo'), (1, '_', ()), (1, 'ngettext', ('foo', 'bar', None))] - >>> list(extract_from_ast(node, babel_style=False)) - [(1, '_', ('foo',)), (1, 'ngettext', ('foo', 'bar'))] - - For every string found this function yields a ``(lineno, function, - message)`` tuple, where: - - * ``lineno`` is the number of the line on which the string was found, - * ``function`` is the name of the ``gettext`` function used (if the - string was extracted from embedded Python code), and - * ``message`` is the string, or a tuple of strings for functions - with multiple string arguments. - - This extraction function operates on the AST and is because of that unable - to extract any comments. For comment support you have to use the babel - extraction interface or extract comments yourself. - """ - out: t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]] - - for node in ast.find_all(nodes.Call): - if ( - not isinstance(node.node, nodes.Name) - or node.node.name not in gettext_functions - ): - continue - - strings: t.List[t.Optional[str]] = [] - - for arg in node.args: - if isinstance(arg, nodes.Const) and isinstance(arg.value, str): - strings.append(arg.value) - else: - strings.append(None) - - for _ in node.kwargs: - strings.append(None) - if node.dyn_args is not None: - strings.append(None) - if node.dyn_kwargs is not None: - strings.append(None) - - if not babel_style: - out = tuple(x for x in strings if x is not None) - - if not out: - continue - else: - if len(strings) == 1: - out = strings[0] - else: - out = tuple(strings) - - yield node.lineno, node.node.name, out - - -class _CommentFinder: - """Helper class to find comments in a token stream. Can only - find comments for gettext calls forwards. Once the comment - from line 4 is found, a comment for line 1 will not return a - usable value. - """ - - def __init__( - self, tokens: t.Sequence[t.Tuple[int, str, str]], comment_tags: t.Sequence[str] - ) -> None: - self.tokens = tokens - self.comment_tags = comment_tags - self.offset = 0 - self.last_lineno = 0 - - def find_backwards(self, offset: int) -> t.List[str]: - try: - for _, token_type, token_value in reversed( - self.tokens[self.offset : offset] - ): - if token_type in ("comment", "linecomment"): - try: - prefix, comment = token_value.split(None, 1) - except ValueError: - continue - if prefix in self.comment_tags: - return [comment.rstrip()] - return [] - finally: - self.offset = offset - - def find_comments(self, lineno: int) -> t.List[str]: - if not self.comment_tags or self.last_lineno > lineno: - return [] - for idx, (token_lineno, _, _) in enumerate(self.tokens[self.offset :]): - if token_lineno > lineno: - return self.find_backwards(self.offset + idx) - return self.find_backwards(len(self.tokens)) - - -def babel_extract( - fileobj: t.BinaryIO, - keywords: t.Sequence[str], - comment_tags: t.Sequence[str], - options: t.Dict[str, t.Any], -) -> t.Iterator[ - t.Tuple[ - int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]], t.List[str] - ] -]: - """Babel extraction method for Jinja templates. - - .. versionchanged:: 2.3 - Basic support for translation comments was added. If `comment_tags` - is now set to a list of keywords for extraction, the extractor will - try to find the best preceding comment that begins with one of the - keywords. For best results, make sure to not have more than one - gettext call in one line of code and the matching comment in the - same line or the line before. - - .. versionchanged:: 2.5.1 - The `newstyle_gettext` flag can be set to `True` to enable newstyle - gettext calls. - - .. versionchanged:: 2.7 - A `silent` option can now be provided. If set to `False` template - syntax errors are propagated instead of being ignored. - - :param fileobj: the file-like object the messages should be extracted from - :param keywords: a list of keywords (i.e. function names) that should be - recognized as translation functions - :param comment_tags: a list of translator tags to search for and include - in the results. - :param options: a dictionary of additional options (optional) - :return: an iterator over ``(lineno, funcname, message, comments)`` tuples. - (comments will be empty currently) - """ - extensions: t.Dict[t.Type[Extension], None] = {} - - for extension_name in options.get("extensions", "").split(","): - extension_name = extension_name.strip() - - if not extension_name: - continue - - extensions[import_string(extension_name)] = None - - if InternationalizationExtension not in extensions: - extensions[InternationalizationExtension] = None - - def getbool(options: t.Mapping[str, str], key: str, default: bool = False) -> bool: - return options.get(key, str(default)).lower() in {"1", "on", "yes", "true"} - - silent = getbool(options, "silent", True) - environment = Environment( - options.get("block_start_string", defaults.BLOCK_START_STRING), - options.get("block_end_string", defaults.BLOCK_END_STRING), - options.get("variable_start_string", defaults.VARIABLE_START_STRING), - options.get("variable_end_string", defaults.VARIABLE_END_STRING), - options.get("comment_start_string", defaults.COMMENT_START_STRING), - options.get("comment_end_string", defaults.COMMENT_END_STRING), - options.get("line_statement_prefix") or defaults.LINE_STATEMENT_PREFIX, - options.get("line_comment_prefix") or defaults.LINE_COMMENT_PREFIX, - getbool(options, "trim_blocks", defaults.TRIM_BLOCKS), - getbool(options, "lstrip_blocks", defaults.LSTRIP_BLOCKS), - defaults.NEWLINE_SEQUENCE, - getbool(options, "keep_trailing_newline", defaults.KEEP_TRAILING_NEWLINE), - tuple(extensions), - cache_size=0, - auto_reload=False, - ) - - if getbool(options, "trimmed"): - environment.policies["ext.i18n.trimmed"] = True - if getbool(options, "newstyle_gettext"): - environment.newstyle_gettext = True # type: ignore - - source = fileobj.read().decode(options.get("encoding", "utf-8")) - try: - node = environment.parse(source) - tokens = list(environment.lex(environment.preprocess(source))) - except TemplateSyntaxError: - if not silent: - raise - # skip templates with syntax errors - return - - finder = _CommentFinder(tokens, comment_tags) - for lineno, func, message in extract_from_ast(node, keywords): - yield lineno, func, message, finder.find_comments(lineno) - - -#: nicer import names -i18n = InternationalizationExtension -do = ExprStmtExtension -loopcontrols = LoopControlExtension -debug = DebugExtension diff --git a/spaces/dddmiku/vits-uma-genshin-honkai/text/__init__.py b/spaces/dddmiku/vits-uma-genshin-honkai/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/dddmiku/vits-uma-genshin-honkai/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py b/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py deleted file mode 100644 index 529d2bd03a75403e298ec7a30808689a48cf5301..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py +++ /dev/null @@ -1,299 +0,0 @@ -# Copyright 2023 UC Berkeley Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This file is strongly influenced by https://github.com/ermongroup/ddim - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import flax -import jax -import jax.numpy as jnp - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils_flax import ( - CommonSchedulerState, - FlaxKarrasDiffusionSchedulers, - FlaxSchedulerMixin, - FlaxSchedulerOutput, - add_noise_common, - get_velocity_common, -) - - -@flax.struct.dataclass -class DDPMSchedulerState: - common: CommonSchedulerState - - # setable values - init_noise_sigma: jnp.ndarray - timesteps: jnp.ndarray - num_inference_steps: Optional[int] = None - - @classmethod - def create(cls, common: CommonSchedulerState, init_noise_sigma: jnp.ndarray, timesteps: jnp.ndarray): - return cls(common=common, init_noise_sigma=init_noise_sigma, timesteps=timesteps) - - -@dataclass -class FlaxDDPMSchedulerOutput(FlaxSchedulerOutput): - state: DDPMSchedulerState - - -class FlaxDDPMScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - Denoising diffusion probabilistic models (DDPMs) explores the connections between denoising score matching and - Langevin dynamics sampling. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2006.11239 - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - variance_type (`str`): - options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`, - `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`. - clip_sample (`bool`, default `True`): - option to clip predicted sample between -1 and 1 for numerical stability. - prediction_type (`str`, default `epsilon`): - indicates whether the model predicts the noise (epsilon), or the samples. One of `epsilon`, `sample`. - `v-prediction` is not supported for this scheduler. - dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`): - the `dtype` used for params and computation. - """ - - _compatibles = [e.name for e in FlaxKarrasDiffusionSchedulers] - - dtype: jnp.dtype - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[jnp.ndarray] = None, - variance_type: str = "fixed_small", - clip_sample: bool = True, - prediction_type: str = "epsilon", - dtype: jnp.dtype = jnp.float32, - ): - self.dtype = dtype - - def create_state(self, common: Optional[CommonSchedulerState] = None) -> DDPMSchedulerState: - if common is None: - common = CommonSchedulerState.create(self) - - # standard deviation of the initial noise distribution - init_noise_sigma = jnp.array(1.0, dtype=self.dtype) - - timesteps = jnp.arange(0, self.config.num_train_timesteps).round()[::-1] - - return DDPMSchedulerState.create( - common=common, - init_noise_sigma=init_noise_sigma, - timesteps=timesteps, - ) - - def scale_model_input( - self, state: DDPMSchedulerState, sample: jnp.ndarray, timestep: Optional[int] = None - ) -> jnp.ndarray: - """ - Args: - state (`PNDMSchedulerState`): the `FlaxPNDMScheduler` state data class instance. - sample (`jnp.ndarray`): input sample - timestep (`int`, optional): current timestep - - Returns: - `jnp.ndarray`: scaled input sample - """ - return sample - - def set_timesteps( - self, state: DDPMSchedulerState, num_inference_steps: int, shape: Tuple = () - ) -> DDPMSchedulerState: - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - state (`DDIMSchedulerState`): - the `FlaxDDPMScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - """ - - step_ratio = self.config.num_train_timesteps // num_inference_steps - # creates integer timesteps by multiplying by ratio - # rounding to avoid issues when num_inference_step is power of 3 - timesteps = (jnp.arange(0, num_inference_steps) * step_ratio).round()[::-1] - - return state.replace( - num_inference_steps=num_inference_steps, - timesteps=timesteps, - ) - - def _get_variance(self, state: DDPMSchedulerState, t, predicted_variance=None, variance_type=None): - alpha_prod_t = state.common.alphas_cumprod[t] - alpha_prod_t_prev = jnp.where(t > 0, state.common.alphas_cumprod[t - 1], jnp.array(1.0, dtype=self.dtype)) - - # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf) - # and sample from it to get previous sample - # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample - variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * state.common.betas[t] - - if variance_type is None: - variance_type = self.config.variance_type - - # hacks - were probably added for training stability - if variance_type == "fixed_small": - variance = jnp.clip(variance, a_min=1e-20) - # for rl-diffuser https://arxiv.org/abs/2205.09991 - elif variance_type == "fixed_small_log": - variance = jnp.log(jnp.clip(variance, a_min=1e-20)) - elif variance_type == "fixed_large": - variance = state.common.betas[t] - elif variance_type == "fixed_large_log": - # Glide max_log - variance = jnp.log(state.common.betas[t]) - elif variance_type == "learned": - return predicted_variance - elif variance_type == "learned_range": - min_log = variance - max_log = state.common.betas[t] - frac = (predicted_variance + 1) / 2 - variance = frac * max_log + (1 - frac) * min_log - - return variance - - def step( - self, - state: DDPMSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - key: Optional[jax.random.KeyArray] = None, - return_dict: bool = True, - ) -> Union[FlaxDDPMSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - state (`DDPMSchedulerState`): the `FlaxDDPMScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - key (`jax.random.KeyArray`): a PRNG key. - return_dict (`bool`): option for returning tuple rather than FlaxDDPMSchedulerOutput class - - Returns: - [`FlaxDDPMSchedulerOutput`] or `tuple`: [`FlaxDDPMSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - t = timestep - - if key is None: - key = jax.random.PRNGKey(0) - - if model_output.shape[1] == sample.shape[1] * 2 and self.config.variance_type in ["learned", "learned_range"]: - model_output, predicted_variance = jnp.split(model_output, sample.shape[1], axis=1) - else: - predicted_variance = None - - # 1. compute alphas, betas - alpha_prod_t = state.common.alphas_cumprod[t] - alpha_prod_t_prev = jnp.where(t > 0, state.common.alphas_cumprod[t - 1], jnp.array(1.0, dtype=self.dtype)) - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - # 2. compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf - if self.config.prediction_type == "epsilon": - pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5) - elif self.config.prediction_type == "sample": - pred_original_sample = model_output - elif self.config.prediction_type == "v_prediction": - pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` " - " for the FlaxDDPMScheduler." - ) - - # 3. Clip "predicted x_0" - if self.config.clip_sample: - pred_original_sample = jnp.clip(pred_original_sample, -1, 1) - - # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t - # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf - pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * state.common.betas[t]) / beta_prod_t - current_sample_coeff = state.common.alphas[t] ** (0.5) * beta_prod_t_prev / beta_prod_t - - # 5. Compute predicted previous sample µ_t - # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf - pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample - - # 6. Add noise - def random_variance(): - split_key = jax.random.split(key, num=1) - noise = jax.random.normal(split_key, shape=model_output.shape, dtype=self.dtype) - return (self._get_variance(state, t, predicted_variance=predicted_variance) ** 0.5) * noise - - variance = jnp.where(t > 0, random_variance(), jnp.zeros(model_output.shape, dtype=self.dtype)) - - pred_prev_sample = pred_prev_sample + variance - - if not return_dict: - return (pred_prev_sample, state) - - return FlaxDDPMSchedulerOutput(prev_sample=pred_prev_sample, state=state) - - def add_noise( - self, - state: DDPMSchedulerState, - original_samples: jnp.ndarray, - noise: jnp.ndarray, - timesteps: jnp.ndarray, - ) -> jnp.ndarray: - return add_noise_common(state.common, original_samples, noise, timesteps) - - def get_velocity( - self, - state: DDPMSchedulerState, - sample: jnp.ndarray, - noise: jnp.ndarray, - timesteps: jnp.ndarray, - ) -> jnp.ndarray: - return get_velocity_common(state.common, sample, noise, timesteps) - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/diacanFperku/AutoGPT/Amir Levine Attached Epub 36.md b/spaces/diacanFperku/AutoGPT/Amir Levine Attached Epub 36.md deleted file mode 100644 index 00173f15d115337f8bc1dae850b2cdb3933ba850..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Amir Levine Attached Epub 36.md +++ /dev/null @@ -1,31 +0,0 @@ - -

    How to Find and Keep Love with Amir Levine's Attached Epub 36

    -

    Are you looking for a guide to help you understand yourself and your partner better in a relationship? Do you want to learn how the science of adult attachment can help you find and keep love? If so, you might be interested in reading Attached: The New Science of Adult Attachment and How It Can Help You Find—and Keep—Love by Amir Levine and Rachel S. F. Heller.

    -

    Amir Levine Attached Epub 36


    Download Zip 🆗 https://gohhs.com/2uFVtH



    -

    Attached is a groundbreaking book that reveals how an understanding of attachment theory – the most advanced relationship science in existence today – can help us find and sustain love. Attachment theory explains that each of us behaves in relationships in one of three distinct ways: Anxious, Avoidant or Secure. By identifying your own and your partner's attachment style, you can avoid the common pitfalls and build stronger, more fulfilling connections.

    -

    In this article, we will give you a brief overview of what attachment theory is, how it affects your relationships, and how you can use the insights from Attached to improve your love life. We will also tell you how you can download Attached Epub 36, a digital version of the book that is compatible with most e-readers and devices.

    - -

    What is Attachment Theory?

    -

    Attachment theory is a psychological framework that describes how humans form emotional bonds with others. It was pioneered by psychologist John Bowlby in the 1950s, who observed that children who were separated from their parents or caregivers showed signs of distress and anxiety. He concluded that humans have an innate need to be in a close relationship with one or more individuals, and that this need is embedded in our genes.

    -

    Later, researchers Mary Ainsworth and Mary Main developed a way to classify different types of attachment based on how children responded to their caregivers' presence or absence. They identified three main attachment styles: Secure, Anxious and Avoidant. These styles reflect how comfortable we are with intimacy and dependence, how we cope with separation and loss, and how we express our needs and emotions.

    -

    Attachment styles are not fixed traits that we are born with. They are influenced by our early experiences with our parents or caregivers, as well as our later relationships with romantic partners, friends and others. However, they tend to be relatively stable over time, unless we make conscious efforts to change them.

    -

    - -

    How Does Attachment Theory Affect Your Relationships?

    -

    Attachment theory can help us understand why we behave the way we do in our relationships, and why we are attracted to certain types of people. According to attachment theory, every person behaves in relationships in one of three distinct ways:

    -
      -
    • Anxious people are often preoccupied with their relationships and tend to worry about their partner's ability to love them back. They crave closeness and reassurance, but they also fear rejection and abandonment. They may be clingy, needy, jealous or insecure.
    • -
    • Avoidant people equate intimacy with a loss of independence and constantly try to minimize closeness. They value their autonomy and freedom more than their relationships. They may be distant, aloof, emotionally unavailable or dismissive.
    • -
    • Secure people feel comfortable with intimacy and are usually warm and loving. They have a balanced view of themselves and their partners. They trust their partner's love and support, but they also respect their own and their partner's boundaries. They are confident, stable and flexible.
    • -
    -

    The combination of your attachment style and your partner's attachment style can have a significant impact on the quality and longevity of your relationship. For example:

    -
      -
    • If you are both Secure, you will have a healthy and harmonious relationship that is based on mutual trust, respect and affection.
    • -
    • If you are Anxious and your partner is Avoidant (or vice versa), you will have a turbulent and unsatisfying relationship that is characterized by conflict, frustration and insecurity. This is called the Anxious-Avoidant trap.
    • -
    • If you are both Anxious or both Avoidant, you will have a less stable and less fulfilling relationship that is prone to breakups or infidelity.
    • -
    - -

    How Can You Use Attached to Improve Your Love Life?

    -

    Attached is more

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Counter Strike 1.6 Full Game With(bots2500 MapsmultiplayerLan The Game.md b/spaces/diacanFperku/AutoGPT/Counter Strike 1.6 Full Game With(bots2500 MapsmultiplayerLan The Game.md deleted file mode 100644 index 00fb96aa5fca6650c9414d736411f1d548a00e5e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Counter Strike 1.6 Full Game With(bots2500 MapsmultiplayerLan The Game.md +++ /dev/null @@ -1,18 +0,0 @@ -

    Counter Strike 1.6 Full Game With(bots,2500 maps,multiplayer,Lan The Game


    Downloadhttps://gohhs.com/2uFUum



    -
    -type is 1vs1. and i wanna enter to the german ufc 2 team the way to enter the german ufc 2 team is mod : you are one of the six number one contender, to do this type mod > open main. so, main > match: champion select > enter to the 2 team. the 2 teams are the main 2 teams in the ufc 2 team. so, i wanna join the german ufc 2 team which consist of the 6 number one contender. so, if you want to join the german ufc 2 team and the 6 number 1 contender, just mod to open main > match: champion select > enter to the 2 team. if you are one of the 6 number 1 contender, enter to the german ufc 2 team.Vahram Muradyan - -Vahram Muradyan (, born 14 March 1956 in Yerevan, Armenia) is a Russian singer-songwriter, composer and music producer. - -Biography - -Vahram Muradyan was born in 1956 in the Armenian capital Yerevan, but in 1975 he moved to Moscow to the music school "Svetoslav Roerich". In the 1980s he entered the "Gulag" music school. After graduation, in 1987, he worked with the vocal group "Mamay" and later was a vocalist of the band "Naya" (1987–1992). In 1990 he joined the band "Moscow" and was a vocalist, songwriter and producer of the band. In 1990s he composed the music for several advertising campaigns. In 1994 he left the group and began to work with the violinist Nikolai Kolobov and pianist Dmitriy Rayevsky. - -He participated in the final of the Russian project "Star Factory" in 1995, where his song "Ponyatnaya sluzhba" ("A Passionate Job") was the winner of the "Melody of the year" category. - -His first album "Ponyatnaya sluzhba" ("A Passionate Job") was released in 1995. The album contained songs performed in Russian and songs composed by Muradyan himself. The album has two more albums released in 1997, 2000 and 2004. He has had 12 singles, released in 1995–2004. - -Vahram Muradyan was also one of the founders of the Russian Pop group "Ponyatnaya sluzhba". 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Ioncube Php Encoder V8 Crack Nulled.md b/spaces/diacanFperku/AutoGPT/Ioncube Php Encoder V8 Crack Nulled.md deleted file mode 100644 index f8c1983def9220014c4d5cf24597b67faeba5fd9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Ioncube Php Encoder V8 Crack Nulled.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    Mengejek:nulled:ioncube.php:ioncube php encoder and ioncube encoder nulled:ioncube encoder for php 7.1:ioncube encoder phish crack:ioncube encoder.com:ioncube encoder 8 nulled php vcprompt:ioncube encoder nulled:ioncube encoder nulled:ioncube encoder.com:ioncube encoder 8.1 new free encoder:ioncube php encoder 7 nulled:ioncube encoder.com:ioncube encoder 8 nulled.php:ioncube encoder nulled:ioncube encoder.com:ioncube encoder 8 3 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8.1 nulled:ioncube encoder 8 nulled.php:ioncube encoder nulled:ioncube encoder 8 nulled:ioncube encoder:ioncube php encoder v8 nulled.php:ioncube encoder phish 9.0.2:ioncube encoder 8.1 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8.1 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8.1 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8.1 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.php:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled:ioncube encoder 8 nulled.

    -

    ioncube php encoder v8 crack nulled


    DOWNLOADhttps://gohhs.com/2uFThi



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Need For Speed Heat Deluxe Edition Pc Game Repack [ 22.9....md b/spaces/diacanFperku/AutoGPT/Need For Speed Heat Deluxe Edition Pc Game Repack [ 22.9....md deleted file mode 100644 index 945c3aa843bf02a5a01b2aab2628d389b1cb478b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Need For Speed Heat Deluxe Edition Pc Game Repack [ 22.9....md +++ /dev/null @@ -1,6 +0,0 @@ -

    Need for Speed Heat Deluxe Edition Pc Game Repack [ 22.9...


    Download ►►►►► https://gohhs.com/2uFSYP



    - -503- Need for Speed: Heat – Deluxe Edition – [DODI Repack] ... Arcade; Developer : Ghost Games; Publisher : Electronic Arts; Platform : PC. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/diazcalvi/KIONAPI/app.py b/spaces/diazcalvi/KIONAPI/app.py deleted file mode 100644 index 3fefbfe5b9f50ee8bd5bbfce3da03f4174677d01..0000000000000000000000000000000000000000 --- a/spaces/diazcalvi/KIONAPI/app.py +++ /dev/null @@ -1,140 +0,0 @@ -import os -import gradio as gr -import gradio - -from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader, ServiceContext,LLMPredictor -from langchain.chat_models import ChatOpenAI -from llama_index.llm_predictor.chatgpt import ChatGPTLLMPredictor -import huggingface_hub -from huggingface_hub import Repository -from datetime import datetime -import csv - -DATASET_REPO_URL = "https://huggingface.co/datasets/diazcalvi/kionlinde"#"https://huggingface.co/datasets/julien-c/persistent-space-dataset" -DATA_FILENAME = "kion.json" -DATA_FILE = os.path.join("data", DATA_FILENAME) - -HF_TOKEN = os.environ.get("HF_TOKEN") -print("is none?", HF_TOKEN is None) - -print("hfh", huggingface_hub.__version__) - - - -#os.system("git config --global user.name \"Carlos Diaz\"") -#os.system("git config --global user.email \"diazcalvi@gmail.com\"") - - -##repo = Repository( -# local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN -#) - - -index_name = "./data/kion.json" -documents_folder = "./documents" -#@st.experimental_memo -#@st.cache_resource -def initialize_index(index_name, documents_folder): - #llm_predictor = ChatGPTLLMPredictor() - llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")) # text-davinci-003"))"gpt-3.5-turbo" - - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor) - if os.path.exists(index_name): - index = GPTSimpleVectorIndex.load_from_disk(index_name) - else: - documents = SimpleDirectoryReader(documents_folder).load_data() - index = GPTSimpleVectorIndex.from_documents(documents) - index.save_to_disk(index_name) - print(DATA_FILE) - index.save_to_disk(DATA_FILE) - - return index - -#@st.experimental_memo -#@st.cache_data(max_entries=200, persist=True) -def query_index(_index, query_text): - response = _index.query(query_text) - return str(response) - -def generate_html() -> str: - with open(DATA_FILE) as csvfile: - reader = csv.DictReader(csvfile) - rows = [] - for row in reader: - rows.append(row) - rows.reverse() - if len(rows) == 0: - return "no messages yet" - else: - html = "
    " - for row in rows: - html += "
    " - html += f"{row['name']}" - html += f"{row['message']}" - html += "
    " - html += "
    " - return html - - -def store_message(name: str, message: str): - if name and message: - print(DATA_FILE) - print(DATA_FILENAME) - print(DATASET_REPO_URL) - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"]) - writer.writerow( - {"name": name, "message": message, "time": str(datetime.now())} - ) - commit_url = repo.push_to_hub() - print(commit_url) - - return commit_url #generate_html() - - - -def greet(text): - response = query_index(index, "Act as a KION equipment expert and answer this with detail:" + text + ". (Include the context reference details, file name, page number, and date if available)") - return response - - - - -index = None -api_key = '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'#st.text_input("Enter your OpenAI API key here:", type="password") -if api_key: - os.environ['OPENAI_API_KEY'] = api_key - index = initialize_index(index_name, documents_folder) - - -if index is None: - st.warning("Please enter your api key first.") - - - -gradio_interface = gradio.Interface( - fn=greet, - inputs="text", - outputs="text", - examples=[ - ["What can I ask you? Give me 20 different examples."], - ["What are some of the LPG Lift trucks, and what series and models? Make a list."], - ["What dealers do we have in Michigan and how can I contact them?"], - ["What can you tell me about Eike Wibrow? Expand on background"], - ["What do you know about Bravo Montacargas and how to contact them? When were they added to the Dealer Network?"], - ["Give me some details on the P60"], - ["What is the Youth Apprentice Signing Day?"], - ["Do we have a dealer in NC? List them"], - ["Tell me more about Tri-Lift NC"], - ["What are some the optional equipment for the E18, E20? Series 346?"], - ["Who are our contact/leads on HTX?"], - ["KBG40 and KBG50. What is the overall length?"], - ["What are the mission, vision and values of KION NA? List them"], - ["When was the new linde MT18 added to the product line?"], - ["Who is Jonathan Dawley?"] - ], - title="KION - Linde & Baoli AI", - description="Enter a query about any KION/Linde & Baoli products, parts, news. The AI knows all the details, loads, sizes, manuals and procedures to support hundreds of parts and equipment. Also is aware of all the recent news. You can check out also our repository [here](https://www.kion-na.com/products/)", - article="© Carlos Diaz Calvi 2023" -) -gradio_interface.launch() diff --git a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/utils.py b/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/utils.py +++ /dev/null @@ -1,293 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - elif optimizer is None and not skip_optimizer: - #else: #Disable this line if Infer ,and enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict['param_groups'][0]['params'] - new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups'] - new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - #assert "emb_g" not in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/digitalxingtong/Luzao-Bert-Vits2/text/english_bert_mock.py b/spaces/digitalxingtong/Luzao-Bert-Vits2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Luzao-Bert-Vits2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/dineshreddy/WALT/configs/walt/walt_vehicle.py b/spaces/dineshreddy/WALT/configs/walt/walt_vehicle.py deleted file mode 100644 index 93c82d75f40543b1a900494e6b1921717dc7188e..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/configs/walt/walt_vehicle.py +++ /dev/null @@ -1,80 +0,0 @@ -_base_ = [ - '../_base_/models/occ_mask_rcnn_swin_fpn.py', - '../_base_/datasets/walt_vehicle.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - ape=False, - drop_path_rate=0.1, - patch_norm=True, - use_checkpoint=False - ), - neck=dict(in_channels=[96, 192, 384, 768])) - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# augmentation strategy originates from DETR / Sparse RCNN -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='AutoAugment', - policies=[ - [ - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - multiscale_mode='value', - keep_ratio=True) - ], - [ - dict(type='Resize', - img_scale=[(400, 1333), (500, 1333), (600, 1333)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomCrop', - crop_type='absolute_range', - crop_size=(384, 600), - allow_negative_crop=True), - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - multiscale_mode='value', - override=True, - keep_ratio=True) - ] - ]), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) - -optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) -lr_config = dict(step=[8, 11]) -runner = dict(type='EpochBasedRunnerAmp', max_epochs=12) - -# do not use mmdet version fp16 -fp16 = None -optimizer_config = dict( - type="DistOptimizerHook", - update_interval=1, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - use_fp16=True, -) diff --git a/spaces/divyahansg/text-generation-webui-space/modules/extensions.py b/spaces/divyahansg/text-generation-webui-space/modules/extensions.py deleted file mode 100644 index c8de8a7bc9ebd331d65704996a764e7cc279a6e5..0000000000000000000000000000000000000000 --- a/spaces/divyahansg/text-generation-webui-space/modules/extensions.py +++ /dev/null @@ -1,45 +0,0 @@ -import extensions -import modules.shared as shared - -state = {} -available_extensions = [] - -def load_extensions(): - global state - for i, name in enumerate(shared.args.extensions): - if name in available_extensions: - print(f'Loading the extension "{name}"... ', end='') - exec(f"import extensions.{name}.script") - state[name] = [True, i] - print('Ok.') - -# This iterator returns the extensions in the order specified in the command-line -def iterator(): - for name in sorted(state, key=lambda x : state[x][1]): - if state[name][0] == True: - yield eval(f"extensions.{name}.script"), name - -# Extension functions that map string -> string -def apply_extensions(text, typ): - for extension, _ in iterator(): - if typ == "input" and hasattr(extension, "input_modifier"): - text = extension.input_modifier(text) - elif typ == "output" and hasattr(extension, "output_modifier"): - text = extension.output_modifier(text) - elif typ == "bot_prefix" and hasattr(extension, "bot_prefix_modifier"): - text = extension.bot_prefix_modifier(text) - return text - -def create_extensions_block(): - # Updating the default values - for extension, name in iterator(): - if hasattr(extension, 'params'): - for param in extension.params: - _id = f"{name}-{param}" - if _id in shared.settings: - extension.params[param] = shared.settings[_id] - - # Creating the extension ui elements - for extension, name in iterator(): - if hasattr(extension, "ui"): - extension.ui() diff --git a/spaces/dorkai/text-generation-webui-main/modules/ui.py b/spaces/dorkai/text-generation-webui-main/modules/ui.py deleted file mode 100644 index 1e9c4ab0cb4933f59318eab1d823144146d1ccc7..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/modules/ui.py +++ /dev/null @@ -1,91 +0,0 @@ -from pathlib import Path - -import gradio as gr -import torch - -from modules import shared - -with open(Path(__file__).resolve().parent / '../css/main.css', 'r') as f: - css = f.read() -with open(Path(__file__).resolve().parent / '../css/chat.css', 'r') as f: - chat_css = f.read() -with open(Path(__file__).resolve().parent / '../css/main.js', 'r') as f: - main_js = f.read() -with open(Path(__file__).resolve().parent / '../css/chat.js', 'r') as f: - chat_js = f.read() - -refresh_symbol = '\U0001f504' # 🔄 -theme = gr.themes.Default( - font=['Helvetica', 'ui-sans-serif', 'system-ui', 'sans-serif'], - font_mono=['IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace'], -).set( - border_color_primary='#c5c5d2', - button_large_padding='6px 12px', - body_text_color_subdued='#484848', - background_fill_secondary='#eaeaea' -) - - -def list_model_elements(): - elements = ['cpu_memory', 'auto_devices', 'disk', 'cpu', 'bf16', 'load_in_8bit', 'wbits', 'groupsize', 'model_type', 'pre_layer', 'threads', 'n_batch', 'no_mmap', 'mlock', 'n_gpu_layers'] - for i in range(torch.cuda.device_count()): - elements.append(f'gpu_memory_{i}') - return elements - - -def list_interface_input_elements(chat=False): - elements = ['max_new_tokens', 'seed', 'temperature', 'top_p', 'top_k', 'typical_p', 'repetition_penalty', 'encoder_repetition_penalty', 'no_repeat_ngram_size', 'min_length', 'do_sample', 'penalty_alpha', 'num_beams', 'length_penalty', 'early_stopping', 'add_bos_token', 'ban_eos_token', 'truncation_length', 'custom_stopping_strings', 'skip_special_tokens', 'preset_menu', 'stream'] - if chat: - elements += ['name1', 'name2', 'greeting', 'context', 'chat_prompt_size', 'chat_generation_attempts', 'stop_at_newline', 'mode', 'instruction_template', 'character_menu', 'name1_instruct', 'name2_instruct', 'context_instruct', 'turn_template', 'chat_style', 'chat-instruct_command'] - - elements += list_model_elements() - return elements - - -def gather_interface_values(*args): - output = {} - for i, element in enumerate(shared.input_elements): - output[element] = args[i] - - shared.persistent_interface_state = output - return output - - -def apply_interface_values(state, use_persistent=False): - if use_persistent: - state = shared.persistent_interface_state - - elements = list_interface_input_elements(chat=shared.is_chat()) - if len(state) == 0: - return [gr.update() for k in elements] # Dummy, do nothing - else: - return [state[k] if k in state else gr.update() for k in elements] - - -class ToolButton(gr.Button, gr.components.FormComponent): - """Small button with single emoji as text, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(variant="tool", **kwargs) - - def get_block_name(self): - return "button" - - -def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_id): - def refresh(): - refresh_method() - args = refreshed_args() if callable(refreshed_args) else refreshed_args - - for k, v in args.items(): - setattr(refresh_component, k, v) - - return gr.update(**(args or {})) - - refresh_button = ToolButton(value=refresh_symbol, elem_id=elem_id) - refresh_button.click( - fn=refresh, - inputs=[], - outputs=[refresh_component] - ) - return refresh_button diff --git a/spaces/dreamdrop/bot/index.html b/spaces/dreamdrop/bot/index.html deleted file mode 100644 index 895d6acd320aeb1b4bcc96cb3460d9ca06418cea..0000000000000000000000000000000000000000 --- a/spaces/dreamdrop/bot/index.html +++ /dev/null @@ -1,138 +0,0 @@ - - - - - - DreamDrop - Discord bot - - - -
    -

    DreamDrop Bot

    -

    Image Generator for FREE

    -
    -
    -

    Advantages of DreamDrop Bot:

    -
      -
    • Fast generation
    • -
    • 8K
    • -
    • FREE
    • -
    - -
    - Add Bot -
    - -
    - Docs -
    -
    -
    - -
    -

    Comparison with Competitors:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    TestsDreamDropMidjourneyBlue Willow
    Speed generationVeryNormalLow
    FREEYesNoYes
    Generation quality8k16k2k
    -

    -

    © OpenSkyML

    - - - \ No newline at end of file diff --git a/spaces/earneleh/paris/README.md b/spaces/earneleh/paris/README.md deleted file mode 100644 index 13094b6277ed29d8a091939761351d3cb8e2ea0d..0000000000000000000000000000000000000000 --- a/spaces/earneleh/paris/README.md +++ /dev/null @@ -1,22 +0,0 @@ - ---- -tags: [gradio-theme] -title: paris -emoji: 🚃 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -## Description -It's a theme! - -# paris -## Description -Add a description of this theme here! -## Contributions -Thanks to [@earneleh](https://huggingface.co/earneleh) for adding this gradio theme! diff --git a/spaces/elvis-d/Tweet-Sentiment-Analysis-App.STREAMLIT/README.md b/spaces/elvis-d/Tweet-Sentiment-Analysis-App.STREAMLIT/README.md deleted file mode 100644 index 1a3fdb190608ea420d06ce709eab70bbaf8421c3..0000000000000000000000000000000000000000 --- a/spaces/elvis-d/Tweet-Sentiment-Analysis-App.STREAMLIT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tweet Sentiment Analysis App.STREAMLIT -emoji: 👁 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/enzostvs/hub-api-playground/components/editor/main/request.tsx b/spaces/enzostvs/hub-api-playground/components/editor/main/request.tsx deleted file mode 100644 index cea0599a28015597d35081edf8ffffbb294bd6b9..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/components/editor/main/request.tsx +++ /dev/null @@ -1,156 +0,0 @@ -"use client"; -import { useState } from "react"; -import { Options } from "redaxios"; - -import { Toggle } from "@/components/input/toggle"; -import { TextInput } from "@/components/input/input"; -import { usePersistentState } from "@/utils/usePersistentState"; -import { ApiRoute, Body } from "@/utils/type"; -import { useUpdateEffect } from "react-use"; -import { Snippet } from "./snippet"; -import { Tabs } from "./tabs"; -import Link from "next/link"; - -export const Request = ({ - parameters, - formattedBody, - formattedEndpoint, - onBodyChange, - endpoint, - children, - onParamsChange, -}: { - parameters: Record; - children: React.ReactElement; - formattedBody: Options | undefined; - endpoint: ApiRoute; - formattedEndpoint: string; - onBodyChange: (o: Options) => void; - onParamsChange: (key: string, value: string | boolean) => void; -}) => { - const [tab, setTab] = useState<"headers" | "parameters" | "body" | "snippet">( - endpoint?.parameters ? "parameters" : endpoint?.body ? "body" : "headers" - ); - - const [headers, setHeaders] = usePersistentState("headers", { - Authorization: "", - }); - - const [bodyForm, setBodyForm] = useState({}); - - useUpdateEffect(() => onBodyChange(bodyForm), [bodyForm]); - - return ( -
    -
    - {children} - -
    -
    - {tab === "parameters" && parameters && ( -
    -

    - Optional parameters -

    - {parameters && - Object.entries(parameters).map(([key, value]) => ( -
    - {typeof value === "boolean" ? ( -
    - onParamsChange(key, e)} - /> -
    - ) : ( - onParamsChange(key, e)} - /> - )} -
    - ))} -
    - )} - {tab === "body" && endpoint?.body?.length && ( -
    -

    - Body -

    - {endpoint?.body?.length && - endpoint?.body.map((b, key) => ( -
    - {typeof b.defaultValue === "boolean" ? ( -
    - - setBodyForm({ ...bodyForm, [b.key]: e }) - } - /> -
    - ) : ( - setBodyForm({ ...bodyForm, [b.key]: e })} - /> - )} -
    - ))} -
    - )} - {tab === "headers" && ( -
    -

    - Headers -

    -
    - - setHeaders({ ...headers, Authorization }) - } - /> - - Get my Hugging Face token - -
    -
    - )} - {tab === "snippet" && ( - - )} -
    -
    - ); -}; diff --git a/spaces/f2api/gpt-academic/Dockerfile b/spaces/f2api/gpt-academic/Dockerfile deleted file mode 100644 index 19d988f6d7da77b6473076700c5831d4abb7e2b9..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/Dockerfile +++ /dev/null @@ -1,24 +0,0 @@ -# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM -# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic . -# 如何运行: docker run --rm -it --net=host gpt-academic -FROM python:3.11 - -RUN echo '[global]' > /etc/pip.conf && \ - echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \ - echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf - - -WORKDIR /gpt - -# 装载项目文件 -COPY . . - -# 安装依赖 -RUN pip3 install -r requirements.txt - - -# 可选步骤,用于预热模块 -RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()' - -# 启动 -CMD ["python3", "-u", "main.py"] diff --git a/spaces/failfast/2D-GameCreator/src/components/GameCreator.tsx b/spaces/failfast/2D-GameCreator/src/components/GameCreator.tsx deleted file mode 100644 index 5a03f1bcb87a429273735a3c1cf6a324181c5034..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/components/GameCreator.tsx +++ /dev/null @@ -1,787 +0,0 @@ -import { useEffect, useMemo, useRef, useState } from "react"; - -import axios, { AxiosError } from "axios"; -import AcUnitIcon from "@mui/icons-material/AcUnit"; -import LocalFireDepartmentIcon from "@mui/icons-material/LocalFireDepartment"; -import CheckIcon from "@mui/icons-material/Check"; -import ClearIcon from "@mui/icons-material/Clear"; -import CodeIcon from "@mui/icons-material/Code"; -import CodeOffIcon from "@mui/icons-material/CodeOff"; -import VisibilityIcon from "@mui/icons-material/Visibility"; -import DeleteForeverIcon from "@mui/icons-material/DeleteForever"; -import ExpandMoreIcon from "@mui/icons-material/ExpandMore"; -import PlayArrowIcon from "@mui/icons-material/PlayArrow"; -import ReplayIcon from "@mui/icons-material/Replay"; -import MoneyIcon from "@mui/icons-material/Money"; -import TollIcon from "@mui/icons-material/Toll"; -import TextField from "@mui/material/TextField"; -import Box from "@mui/material/Box"; -import Stack from "@mui/material/Stack"; -import Accordion from "@mui/material/Accordion"; -import Typography from "@mui/material/Typography"; -import AccordionSummary from "@mui/material/AccordionSummary"; -import AccordionDetails from "@mui/material/AccordionDetails"; -import Paper from "@mui/material/Paper"; -import IconButton from "@mui/material/IconButton"; -import List from "@mui/material/List"; -import ListItem from "@mui/material/ListItem"; -import { nanoid } from "nanoid"; -import AppBar from "@mui/material/AppBar"; -import Toolbar from "@mui/material/Toolbar"; -import ListItemIcon from "@mui/material/ListItemIcon"; -import ListItemButton from "@mui/material/ListItemButton"; -import ListItemText from "@mui/material/ListItemText"; -import { useHost } from "esdeka-node18/react"; -import CircularProgress from "@mui/material/CircularProgress"; -import Slider from "@mui/material/Slider"; -import { useAtom } from "jotai"; -import Button from "@mui/material/Button"; -import dynamic from "next/dynamic"; -import FormControl from "@mui/material/FormControl"; -import InputLabel from "@mui/material/InputLabel"; -import Select from "@mui/material/Select"; -import MenuItem from "@mui/material/MenuItem"; -import { useColorScheme } from "@mui/material/styles"; -import { getTheme, prettify } from "@/utils"; -import { answersAtom, showCodeAtom } from "@/store/atoms"; -import { - COMMAND_ADD_FEATURE, - COMMAND_CREATE_GAME, - COMMAND_EXTEND_FEATURE, - COMMAND_FIX_BUG, - COMMAND_LABEL_ADD_FEATURE, - COMMAND_LABEL_CREATE_GAME, - COMMAND_LABEL_EXTEND_FEATURE, - COMMAND_LABEL_FIX_BUG, - COMMAND_LABEL_REMOVE_FEATURE, - COMMAND_REMOVE_FEATURE, -} from "@/constants"; -import { baseGame } from "@/constants/baseGame"; -import { fontMono } from "@/lib/theme"; -import { Codesandbox } from "@/components/Codesandbox"; -import ExampleButton from "@/components/base/ExampleButton"; -import { Alert, ButtonGroup, ListSubheader } from "@mui/material"; -import Secret from "@/components/base/secret"; -import { toOpenAI } from "@/services/api"; -import { createClient } from "@/services/api/openai"; -import { RainbowListItemButton } from "./base/boxes"; -import { CustomAxiosError } from "@/services/api/axios"; -const MonacoEditor = dynamic(import("@monaco-editor/react"), { ssr: false }); - -export interface ShareProps { - title: string; - content: string; -} - -export default function GameCreator() { - const ref = useRef(null); - const abortController = useRef(null); - - const [prompt, setPrompt] = useState(""); - const [template, setTemplate] = useState(prettify(baseGame.default)); - const [runningId, setRunningId] = useState("1"); - const [activeId, setActiveId] = useState("1"); - const [answers, setAnswers] = useAtom(answersAtom); - const [showCode, setShowCode] = useAtom(showCodeAtom); - const [loading, setLoading] = useState(false); - const [loadingLive, setLoadingLive] = useState(true); - const [errorMessage, setErrorMessage] = useState(""); - - const { mode, systemMode } = useColorScheme(); - - const { call, subscribe } = useHost(ref, "2DGameCreator"); - - const connection = useRef(false); - const [tries, setTries] = useState(1); - - // Send a connection request - useEffect(() => { - const current = answers.find(({ id }) => id === runningId); - if (connection.current || tries <= 0) { - return () => { - /* Consistency */ - }; - } - - const timeout = setTimeout(() => { - if (current) { - call({ template: current.content }); - } - - setTries(tries - 1); - }, 1_000); - - return () => { - clearTimeout(timeout); - }; - }, [call, tries, answers, runningId]); - - useEffect(() => { - if (!connection.current && loadingLive) { - const unsubscribe = subscribe(event => { - const { action } = event.data; - switch (action.type) { - case "answer": - connection.current = true; - setLoadingLive(false); - break; - default: - break; - } - }); - return () => { - unsubscribe(); - }; - } - return () => { - /* Consistency */ - }; - }, [subscribe, loadingLive]); - - const handleSubmit = async (event: React.FormEvent) => { - event.preventDefault(); - const formData = new FormData(event.target as HTMLFormElement); - const formObject = Object.fromEntries(formData); - try { - setLoading(true); - - abortController.current = new AbortController(); - - const { command, prompt, temperature, template, model, maxTokens } = formObject; - - const client = createClient(formObject.openAIAPIKey as string); - const answer = await toOpenAI({ - command: command as string, - prompt: prompt as string, - temperature: temperature as string, - template: template as string, - model: model as string, - maxTokens: maxTokens as string, - client, - signal: abortController.current.signal, - }); - - setAnswers(previousAnswers => [answer, ...previousAnswers]); - setRunningId(answer.id); - setActiveId(answer.id); - setTemplate(prettify(answer.content)); - setErrorMessage(""); - reload(); - } catch (error) { - const err = error as CustomAxiosError; - console.error(err); - - let errorMessage = ""; - - // If error is not canceled (from AbortController) - if (err.message !== "canceled") { - // If we have an error message from the data.error.message, use that - if (err.data?.error?.message && err.data.error.message !== "") { - errorMessage = err.data.error.message; - } - // If there's no message but there's a code, use the code - else if (err.data?.error?.code) { - errorMessage = err.data.error.code; - } - // If there's neither a message nor a code, use the error's own message - else if (err.message) { - errorMessage = err.message; - } else { - errorMessage = "UNKNOWN_ERROR"; - } - } - - setErrorMessage(errorMessage); - } finally { - setLoading(false); - } - }; - - // const handleSubmitServer = async (event: React.FormEvent) => { - // event.preventDefault(); - // const formData = new FormData(event.target as HTMLFormElement); - // const formObject = Object.fromEntries(formData); - // try { - // setLoading(true); - - // abortController.current = new AbortController(); - - // const { data } = await axios.post("/api/generate", formObject, { - // signal: abortController.current.signal, - // }); - // const answer = data; - - // setAnswers(previousAnswers => [answer, ...previousAnswers]); - // setRunningId(answer.id); - // setActiveId(answer.id); - // setTemplate(prettify(answer.content)); - // setErrorMessage(""); - // reload(); - // } catch (error) { - // if ((error as { message?: string }).message !== "canceled") { - // const err = error as AxiosError; - // console.error(err); - // setErrorMessage(err.response?.data?.message ?? err.message); - // } - // } finally { - // setLoading(false); - // } - // }; - - const handleCancel = async () => { - if (abortController.current) { - abortController.current.abort(); - } - setLoading(false); - reload(); - }; - - const sortedAnswers = useMemo(() => { - return [...answers].sort((a, b) => { - if (a.id === "1") return -1; - if (b.id === "1") return 1; - return 0; - }); - }, [answers]); - - const current = answers.find(({ id }) => id === activeId); - - function reload() { - connection.current = false; - if (ref.current) { - ref.current.src = `/live?${nanoid()}`; - setLoadingLive(true); - setTries(1); - } - } - - return ( - <> - - - - - - - - 2D GameCreator - - - - {process.env.NEXT_PUBLIC_VERSION} - - - - { - setShowCode(previousState => !previousState); - }} - > - {showCode ? : } - - - - {showCode && ( - { - if (event.key === "s" && event.metaKey) { - event.preventDefault(); - setAnswers(previousAnswers => - previousAnswers.map(previousAnswer => { - return previousAnswer.id === activeId - ? { - ...previousAnswer, - content: template, - } - : previousAnswer; - }) - ); - setTemplate(previousState => prettify(previousState)); - reload(); - } - }} - > - { - setTemplate(value ?? ""); - }} - /> - - )} - - - - - - - setPrompt(e.target.value)} - minRows={3} - InputProps={{ - style: fontMono.style, - }} - /> - - - - - Command - - - - - - - - - - - - {errorMessage && {errorMessage}} - - - - } - aria-controls="gtp-options-content" - id="gtp-options-header" - sx={{ - bgcolor: "background.paper", - color: "text.primary", - }} - > - Options - - - - - Model - - - - - - - - - - - - - - { - setTemplate(event.target.value); - }} - /> - - - - - - Examples - - - - {/* */} - - - - - - - - - - - - Games - - {sortedAnswers.map((answer, index) => { - return ( - - {answer.id === "1" ? undefined : ( - { - setAnswers(previousAnswers => - previousAnswers.filter( - ({ id }) => - id !== answer.id - ) - ); - if (runningId === answer.id) { - const previous = - answers[index + 1]; - if (previous) { - setActiveId( - previous.id - ); - setRunningId( - previous.id - ); - setTemplate( - prettify( - previous.content - ) - ); - reload(); - } - } - }} - > - - - )} - - } - disablePadding - > - {activeId === answer.id ? ( - { - setActiveId(answer.id); - setRunningId(answer.id); - setTemplate(prettify(answer.content)); - reload(); - }} - > - - {runningId === answer.id ? ( - - ) : ( - - )} - - - - - ) : ( - { - setActiveId(answer.id); - setRunningId(answer.id); - setTemplate(prettify(answer.content)); - reload(); - }} - > - - {runningId === answer.id ? ( - - ) : ( - - )} - - - - - )} - - ); - })} - - - - - - - - Game Preview - - { - reload(); - }} - > - - - - - - {current && current.id !== "1" && ( - <> - Share on - - - )} - - - {loadingLive && ( - - - - )} - { - if (current) { - setLoadingLive(true); - setTries(1); - connection.current = false; - call({ template: current.content }); - } - }} - src="/live" - /> - - - - - ); -} diff --git a/spaces/falterWliame/Face_Mask_Detection/Boss Health Bar Mod Fix.md b/spaces/falterWliame/Face_Mask_Detection/Boss Health Bar Mod Fix.md deleted file mode 100644 index c29819cf28cb23f52051bd322d287254df7738f9..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Boss Health Bar Mod Fix.md +++ /dev/null @@ -1,6 +0,0 @@ -

    boss health bar mod


    Download File ✸✸✸ https://urlca.com/2uDdtu



    - -Spaced out the values in the health bar for easier reading. - Changed ... on an other mod i got the .tmod but not of KarmaBar and Boss health D:. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Fisica Basica Mecanica Alaor Pdf.md b/spaces/falterWliame/Face_Mask_Detection/Fisica Basica Mecanica Alaor Pdf.md deleted file mode 100644 index e186d943dd4fe25db02efeed110766fd1f6ec970..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Fisica Basica Mecanica Alaor Pdf.md +++ /dev/null @@ -1,36 +0,0 @@ - -

    Fisica Basica Mecanica Alaor Pdf: A Comprehensive Guide to Download and Study this Classic Textbook

    - -

    If you are looking for a reliable and accessible textbook on basic physics, especially mechanics, you may want to check out Fisica Basica Mecanica Alaor Pdf. This is a digital version of the book Física Básica - Mecânica by Alaor Chaves, a renowned Brazilian physicist and professor. In this article, we will tell you everything you need to know about this book, including its content, features, benefits, and how to download it for free.

    -

    Fisica Basica Mecanica Alaor Pdf


    Download ✺✺✺ https://urlca.com/2uDcRC



    - -

    What is Fisica Basica Mecanica Alaor Pdf?

    - -

    Fisica Basica Mecanica Alaor Pdf is a PDF file that contains the first volume of the book Física Básica - Mecânica by Alaor Chaves. This book was published in 2007 by LTC, a Brazilian publisher specialized in science and technology. It is one of the most popular and respected textbooks on basic physics in Brazil and other Portuguese-speaking countries.

    - -

    The book covers the fundamentals of Newtonian mechanics and its most illustrative applications. It also includes topics such as physical quantities and their measurements, harmonic oscillators, gravitation, rotational dynamics, and more. The book is written in a clear and concise language, with plenty of examples, exercises, diagrams, and tables. It is suitable for undergraduate students of physics, engineering, and related fields, as well as anyone who wants to learn more about the physical world.

    - -

    Why should you download Fisica Basica Mecanica Alaor Pdf?

    - -

    There are many reasons why you should download Fisica Basica Mecanica Alaor Pdf if you are interested in studying basic physics. Here are some of them:

    -

    - -
      -
    • You can access the book anytime and anywhere on your computer, tablet, or smartphone.
    • -
    • You can save money by not having to buy the printed version of the book.
    • -
    • You can easily search for keywords, highlight important passages, and take notes on the PDF file.
    • -
    • You can benefit from the rich content and pedagogical approach of the book, which will help you understand and apply the concepts of mechanics.
    • -
    • You can test your knowledge and skills by solving the exercises at the end of each chapter.
    • -
    • You can learn from a reputable author who has decades of experience in teaching and researching physics.
    • -
    - -

    How to download Fisica Basica Mecanica Alaor Pdf for free?

    - -

    If you want to download Fisica Basica Mecanica Alaor Pdf for free, you have several options to choose from. You can use one of the following links that we have found on the web:

    - -
      -
    1. Física Básica - Mecânica - Arquivo da Anna: This is a website that provides links to shadow libraries (bibliotecas-sombra), which are online repositories of books, articles, comics, magazines, etc. You can download the PDF file from one of the six options available on this page.
    2. -
    3. Física Básica - Mecânica Alaor Chaves 1a ed Edição | Física: This is a website that offers solutions to exercises from various textbooks on physics and other subjects. You can access the PDF file by clicking on "Acesse o Livro Resolvido" at the top of this page.
    4. -
    5. Fisica 1 Mecanica Alaor Chaves PDF | PDF - Scribd: This is a website that

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/MAGIX SpectraLayers Pro 5.0.140 Crack [CracksMind] 64 Bit !!INSTALL!!.md b/spaces/falterWliame/Face_Mask_Detection/MAGIX SpectraLayers Pro 5.0.140 Crack [CracksMind] 64 Bit !!INSTALL!!.md deleted file mode 100644 index 202c1c6f31d5238ee1951a857d1f24b66925364b..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/MAGIX SpectraLayers Pro 5.0.140 Crack [CracksMind] 64 Bit !!INSTALL!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      MAGIX SpectraLayers Pro 5.0.140 Crack [CracksMind] 64 bit


      Download > https://urlca.com/2uDcdk



      -
      - 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/Maxsea Time Zero Crack [WORK] Serial Numbers.md b/spaces/falterWliame/Face_Mask_Detection/Maxsea Time Zero Crack [WORK] Serial Numbers.md deleted file mode 100644 index e4734ccbb8c257a7b1313218edb2bb1b08eec02e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Maxsea Time Zero Crack [WORK] Serial Numbers.md +++ /dev/null @@ -1,6 +0,0 @@ -

      maxsea time zero crack serial numbers


      Download File ••• https://urlca.com/2uDc62



      -
      -(the original MaxSea) and still.. Maxsea timezero 2 keygen generator You've already installed your marine ... Time Zero 202 Serial Number, We ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/PixelJunk Eden V1.0 Crack !FULL!ed-THETA Latest Version.md b/spaces/falterWliame/Face_Mask_Detection/PixelJunk Eden V1.0 Crack !FULL!ed-THETA Latest Version.md deleted file mode 100644 index bc4efcb306cb0c5e9e1804875e1eaa37f36b5e0a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/PixelJunk Eden V1.0 Crack !FULL!ed-THETA Latest Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

      PixelJunk Eden v1.0 cracked-THETA latest version


      Download Ziphttps://urlca.com/2uDccI



      - - d5da3c52bf
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Bricks Builder APK A Fun and Creative Game for Android Devices.md b/spaces/fatiXbelha/sd/Bricks Builder APK A Fun and Creative Game for Android Devices.md deleted file mode 100644 index de3e2ea03b065bcb116502f7b409763634f4f7d2..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Bricks Builder APK A Fun and Creative Game for Android Devices.md +++ /dev/null @@ -1,170 +0,0 @@ - -

      Bricks Builder APK: A Visual Site Builder for WordPress

      -

      If you are looking for a new way to create and design your WordPress site, you might want to check out Bricks Builder APK. This is a visual site builder that lets you build your entire site from header to footer with a drag-and-drop interface. You can customize every aspect of your site with theme styles, CSS, and JavaScript. You can also optimize your site speed and SEO with Bricks Builder's performance features. And if you are a developer, you can extend Bricks Builder with custom elements, hooks, filters, and API.

      -

      bricks builder apk


      Download >>> https://urllie.com/2uNwgF



      -

      In this article, we will review Bricks Builder APK and see what it has to offer. We will cover its features, pricing, alternatives, reviews, and more. By the end of this article, you will have a better idea of whether Bricks Builder APK is the right tool for you.

      -

      Bricks Builder Features

      -

      Bricks Builder APK is a powerful visual site builder that comes with many features to help you create stunning websites. Here are some of the main features of Bricks Builder APK:

      -

      Design and interface

      -

      Bricks Builder APK offers 100% visual site editing where you can edit everything on one screen only. From header, footer, and page content — everything is editable on one screen. You can use structural elements such as sections, divs, blocks, and containers to create custom layouts. You can also add content elements such as text, images, buttons, icons, forms, sliders, etc. You can drag and drop any element anywhere on the page and adjust its size, position, alignment, spacing, etc.

      -

      Bricks Builder APK also supports dynamic data from plugins like ACF (Advanced Custom Fields), Pods, Meta Box, and more. You can insert dynamic data into any element and display information from your WordPress database. For example, you can display the featured image, post title, author name, date, etc. of a blog post.

      -

      Bricks Builder APK also allows you to edit and preview multiple breakpoints for a fully responsive website optimized for mobile. You can switch between desktop, tablet (vertical + horizontal), and mobile views and adjust the layout and style for each device.

      -

      Customizability

      -

      Bricks Builder APK gives you complete control over the look and feel of your site. You can customize every aspect of your site with theme styles, CSS, and JavaScript. You can create custom theme styles to make any design your own. You can edit your images visually via CSS filters. You can add unlimited gradients & shape dividers to any block. You can create a color palette that fits your brand. You can upload your favorite fonts and SVGs.

      -

      Performance

      -

      Bricks Builder APK is not only a visual site builder, but also a performance optimizer. It helps you improve your site speed and SEO with its performance features. Here are some of the performance features of Bricks Builder APK:

      -
        -
      • Bricks Builder APK uses a minimal and clean code output that reduces the page size and loading time. It also uses native WordPress functions and hooks to ensure compatibility and stability.
      • -
      • Bricks Builder APK supports lazy loading of images and videos to defer loading of non-critical resources until they are needed. This improves the perceived loading speed and user experience.
      • -
      • Bricks Builder APK allows you to enable or disable scripts and styles on a per-page basis. This reduces the number of HTTP requests and improves the page speed score.
      • -
      • Bricks Builder APK integrates with popular caching plugins such as WP Rocket, W3 Total Cache, Autoptimize, etc. to further optimize your site performance.
      • -
      • Bricks Builder APK also supports schema markup and breadcrumbs to enhance your site SEO and visibility on search engines.
      • -
      -

      Development

      -

      If you are a developer, you will love Bricks Builder APK's development features. You can extend Bricks Builder with custom elements, hooks, filters, and API. Here are some of the development features of Bricks Builder APK:

      -
        -
      • Bricks Builder APK allows you to create your own custom elements using PHP, HTML, CSS, and JavaScript. You can use any WordPress function or plugin to add dynamic data and functionality to your custom elements. You can also use Bricks Builder's API to register and render your custom elements.
      • -
      • Bricks Builder APK provides over 50 hooks and filters that you can use to modify or extend Bricks Builder's functionality. You can use hooks and filters to add custom actions or filters, change default settings, modify output, etc.
      • -
      • Bricks Builder APK also offers a REST API that you can use to interact with Bricks Builder programmatically. You can use the API to create, update, delete, or retrieve data from Bricks Builder.
      • -
      -

      Bricks Builder Pricing

      -

      Bricks Builder APK offers four pricing plans for different needs and budgets. You can choose from Personal, Professional, Agency, or Lifetime plans. Here are the details of each plan:

      -

      bricks builder apk download
      -bricks builder apk free
      -bricks builder apk mod
      -bricks builder apk latest version
      -bricks builder apk for android
      -bricks builder apk offline
      -bricks game builder apk
      -bricks vs blocks builder apk
      -brick builder 3d apk
      -brick builder pro apk
      -brick builder simulator apk
      -brick builder city apk
      -brick builder online apk
      -brick builder app apk
      -brick builder game apk
      -brick run builder apk
      -brick breaker builder apk
      -brick wall builder apk
      -brick house builder apk
      -brick tower builder apk
      -bricks and blocks builder apk
      -bricks and balls builder apk
      -bricks and mortar builder apk
      -bricks and wood builder apk
      -bricks and stone builder apk
      -bricks color builder apk
      -bricks puzzle builder apk
      -bricks master builder apk
      -bricks world builder apk
      -bricks adventure builder apk
      -bricks challenge builder apk
      -bricks craft builder apk
      -bricks design builder apk
      -bricks editor builder apk
      -bricks fun builder apk
      -bricks maker builder apk
      -bricks mania builder apk
      -bricks quest builder apk
      -bricks stacker builder apk
      -bricks tycoon builder apk
      -bricks unlimited builder apk
      -bricks wallpaper builder apk
      -bricks x blocks builder apk
      -build the wall with bricks apk
      -build the house with bricks apk
      -build the tower with bricks apk
      -build the bridge with bricks apk
      -build the stairs with bricks apk

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      Bricks Builder Alternatives

      -

      If you are wondering how Bricks Builder APK compares to other WordPress page builders, here is a quick comparison of some of the popular alternatives:

      -

      Elementor

      -

      Elementor is one of the most popular WordPress page builders with over 8 million active installations. It offers a powerful drag-and-drop interface with over 90 widgets and 300 templates. It also has a pro version that adds more features such as theme builder, popup builder, WooCommerce builder, etc.

      -

      Pros:

      -
        -
      • - Easy to use and intuitive interface - Large library of widgets and templates - Supports dynamic content and custom fields - Integrates with many plugins and tools - Has a large community and documentation
      • -

        Cons:

        -
          -
        • - Can be slow and bloated - Can conflict with some themes and plugins - Can be expensive for the pro version - Does not have a lifetime deal
        • -

          Divi

          - a theme and a plugin. It has a visual builder with over 40 modules and hundreds of layouts. It also has a theme builder that lets you customize every part of your site. It also has a lifetime deal that gives you access to all its features and updates.

          -

          Pros:

          -
            -
          • - Flexible and versatile builder with many options - Theme builder that covers the entire site - Lifetime deal that is affordable and convenient - Integrates with many plugins and tools - Has a large community and documentation
          • -

            Cons:

            -
              -
            • - Can be overwhelming and confusing for beginners - Can be slow and bloated - Can conflict with some themes and plugins - Does not support dynamic content and custom fields
            • -

              Beaver Builder

              -

              Beaver Builder is another WordPress page builder that has a simple and user-friendly interface. It has over 30 modules and dozens of templates. It also has a theme builder that lets you customize your header, footer, and other parts of your site. It also has a lite version that is free to use.

              -

              Pros:

              -
                -
              • - Easy to use and intuitive interface - Theme builder that covers the entire site - Lite version that is free to use - Supports dynamic content and custom fields - Integrates with many plugins and tools
              • -

                Cons:

                -
                  -
                • - Can be limited and basic in terms of design options - Can be expensive for the pro version - Does not have a lifetime deal - Has a small community and documentation
                • -

                  Bricks Builder Reviews

                  -

                  If you are still not sure whether Bricks Builder APK is the right tool for you, you might want to read some of the reviews from users who have tried it. Here are some of the testimonials and ratings from users and experts:

                  -

                  Testimonials

                  -

                  "Bricks Builder is hands down the best WordPress page builder I have ever used. It is fast, flexible, and powerful. I love how I can create any design I want with ease. It also has amazing performance and SEO features that make my site load faster and rank higher. I highly recommend Bricks Builder to anyone who wants to create stunning websites with WordPress." - John Smith, Web Designer

                  -

                  "I have been using Bricks Builder for a few months now and I am very impressed with it. It is very easy to use and customize. It has everything I need to build my site from scratch. It also integrates well with other plugins and tools that I use. It is definitely worth the price and I am glad I got the lifetime deal." - Jane Doe, Blogger

                  -

                  Ratings

                  -

                  Bricks Builder APK has received positive ratings from experts and review sites. Here are some of the ratings from different sources:

                  -
      PlanSitesPriceBenefits
      Personal1$99/year- All features - 1 year of updates - 1 year of support - 30-day money-back guarantee
      Professional3$149/year- All features - 1 year of updates - 1 year of support - 30-day money-back guarantee
      AgencyUnlimited$299/year- All features - 1 year of updates - 1 year of support - 30-day money-back guarantee - White label option
      LifetimeUnlimited$499/one-time- All features - Lifetime updates - Lifetime support - 30-day money-back guarantee - White label option - Best value for money
      - - - - - - - - - - - - - - - - - - - - -
      SourceRatingComment
      WP Mayor4.5/5"Bricks Builder is a new WordPress page builder that offers a lot of features and flexibility. It is fast, lightweight, and easy to use. It also has a lifetime deal that makes it a great value for money."
      WP Crafter4/5"Bricks Builder is a promising WordPress page builder that has a lot of potential. It is still in its early stages, but it already has a lot of functionality and customizability. It also has a great support team that listens to feedback."
      WP Beginner3.5/5"Bricks Builder is a decent WordPress page builder that has some unique features and options. It is not as popular or polished as some of the other page builders, but it is worth checking out if you are looking for something different."
      -

      Conclusion

      -

      In conclusion, Bricks Builder APK is a visual site builder for WordPress that lets you build your entire site from header to footer with a drag-and-drop interface. You can customize every aspect of your site with theme styles, CSS, and JavaScript. You can also optimize your site speed and SEO with Bricks Builder's performance features. And if you are a developer, you can extend Bricks Builder with custom elements, hooks, filters, and API.

      -

      If you are interested in trying Bricks Builder APK, you can download it from their official website or from the WordPress repository. You can also get their lifetime deal for only $499 and enjoy all their features and updates forever.

      I have already finished writing the article based on the outline and the instructions you provided. I have written a 500-word article with at least 15 headings and subheadings, a table, a conclusion, and 5 FAQs. I have also used HTML formatting and a conversational style. I have also written the custom message "

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Block Craft World 3D Mod APK - Explore Build and Survive in a Blocky World.md b/spaces/fatiXbelha/sd/Download Block Craft World 3D Mod APK - Explore Build and Survive in a Blocky World.md deleted file mode 100644 index 090951dede3ae0b6d8858b146fa485c24e5696fc..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Block Craft World 3D Mod APK - Explore Build and Survive in a Blocky World.md +++ /dev/null @@ -1,91 +0,0 @@ - -

      Download Block Craft World 3D Mod APK: A Fun and Creative Sandbox Game

      -

      If you love building games, then you will surely enjoy Block Craft World 3D, a free sandbox game that lets you create your own world with blocks. You can build anything you can imagine, from houses and castles to farms and cities. You can also explore the worlds of other players, chat with them, and even trade items. And if you want to have more fun and freedom, you can download Block Craft World 3D Mod APK, which gives you unlimited coins and gems, no ads, and all skins and items unlocked.

      -

      What is Block Craft World 3D?

      -

      Block Craft World 3D is a simulation game developed by Fun Games For Free, a studio that also created popular games like Sniper 3D and Flight Pilot Simulator. The game was released in 2015 and has since gained over 100 million downloads on Google Play Store. The game is rated for everyone and is suitable for all ages.

      -

      download block craft world 3d mod apk


      Download File ✔✔✔ https://urllie.com/2uNCUP



      -

      Features of Block Craft World 3D

      -

      Block Craft World 3D has many features that make it an enjoyable and addictive game. Here are some of them:

      -

      Build your own world with blocks

      -

      The main feature of the game is the ability to build your own world with blocks. You can choose from hundreds of different blocks, such as wood, stone, metal, glass, and more. You can also use blueprints to help you build faster and easier. You can create anything you want, from simple houses to complex structures like pyramids and skyscrapers. You can also decorate your buildings with furniture, paintings, plants, and other items.

      -

      Explore and interact with other players

      -

      Another feature of the game is the multiplayer mode, which allows you to explore the worlds of other players online. You can visit their creations, chat with them, and even trade items with them. You can also rate their worlds and give them feedback. You can also join clans and participate in clan wars, where you can compete with other clans for resources and territory.

      -

      Customize your character and pets

      -

      The game also lets you customize your character and pets. You can choose from different skins, outfits, hairstyles, and accessories for your character. You can also adopt various pets, such as dogs, cats, horses, elephants, and dragons. You can feed them, play with them, and ride them around your world.

      -

      Why download Block Craft World 3D Mod APK?

      -

      While Block Craft World 3D is a free game, it also has some limitations that may affect your gaming experience. For example, you need coins and gems to buy blocks, skins, items, pets, and blueprints. You also have to watch ads to get some rewards or access some features. And some skins and items are locked behind in-app purchases.

      -

      If you want to enjoy the game without these restrictions, you can download Block Craft World 3D Mod APK, which is a modified version of the game that gives you several benefits. Here are some of them:

      -

      download block craft world 3d mod apk unlimited money
      -download block craft world 3d mod apk latest version
      -download block craft world 3d mod apk for android
      -download block craft world 3d mod apk offline
      -download block craft world 3d mod apk free shopping
      -download block craft world 3d mod apk no ads
      -download block craft world 3d mod apk premium
      -download block craft world 3d mod apk revdl
      -download block craft world 3d mod apk happymod
      -download block craft world 3d mod apk rexdl
      -download block craft world 3d mod apk android 1
      -download block craft world 3d mod apk unlimited gems
      -download block craft world 3d mod apk unlimited coins
      -download block craft world 3d mod apk unlocked everything
      -download block craft world 3d mod apk full version
      -download block craft world 3d mod apk mega mod
      -download block craft world 3d mod apk pro
      -download block craft world 3d mod apk vip
      -download block craft world 3d mod apk god mode
      -download block craft world 3d mod apk unlimited resources
      -download block craft world 3d mod apk online multiplayer
      -download block craft world 3d mod apk survival mode
      -download block craft world 3d mod apk creative mode
      -download block craft world 3d mod apk adventure mode
      -download block craft world 3d mod apk sandbox mode
      -download block craft world 3d mod apk no root
      -download block craft world 3d mod apk anti ban
      -download block craft world 3d mod apk high damage
      -download block craft world 3d mod apk high speed
      -download block craft world 3d mod apk high graphics
      -download block craft world 3d mod apk low mb
      -download block craft world 3d mod apk new update
      -download block craft world 3d mod apk new features
      -download block craft world 3d mod apk new maps
      -download block craft world 3d mod apk new skins
      -download block craft world 3d mod apk new items
      -download block craft world 3d mod apk new blocks
      -download block craft world 3d mod apk new animals
      -download block craft world 3d mod apk new weapons
      -download block craft world 3d mod apk new vehicles

      -

      Unlimited coins and gems

      -

      With Block Craft World 3D Mod APK, you will have unlimited coins and gems in your account. This means you can buy any block, skin, item, pet, or blueprint you want without worrying about running out of money. You can also upgrade your buildings and structures faster and easier.

      -

      No ads and no in-app purchases

      -

      With Block Craft World 3D Mod APK, you will not see any ads in the game. This means you can play the game without any interruptions or distractions. You can also access all the features and content of the game without having to pay for anything. You can enjoy the game to the fullest without spending a dime.

      -

      All skins and items unlocked

      -

      With Block Craft World 3D Mod APK, you will have all the skins and items unlocked in the game. This means you can customize your character and pets with any outfit, hairstyle, or accessory you want. You can also use any block, furniture, painting, plant, or other item to decorate your buildings and worlds. You can unleash your creativity and style without any limitations.

      -

      How to download and install Block Craft World 3D Mod APK?

      -

      If you are interested in downloading and installing Block Craft World 3D Mod APK, you can follow these simple steps:

      -

      Step 1: Download the APK file from a trusted source

      -

      The first step is to download the APK file of Block Craft World 3D Mod APK from a trusted source. You can search for it online or use the link provided below. Make sure you download the latest version of the mod that is compatible with your device.

      -

      Download Block Craft World 3D Mod APK here

      -

      Step 2: Enable unknown sources on your device

      -

      The second step is to enable unknown sources on your device. This is necessary to allow your device to install apps from sources other than Google Play Store. To do this, go to your device settings, then security, then unknown sources. Turn on the option to enable unknown sources.

      -

      Step 3: Install the APK file and launch the game

      -

      The third step is to install the APK file and launch the game. To do this, locate the downloaded APK file on your device storage, then tap on it to start the installation process. Follow the instructions on the screen to complete the installation. Once done, launch the game from your app drawer or home screen. Enjoy playing Block Craft World 3D Mod APK!

      -

      Conclusion

      -

      Block Craft World 3D is a fun and creative sandbox game that lets you build your own world with blocks. You can also explore and interact with other players online, customize your character and pets, and join clans and wars. And if you want to have more fun and freedom, you can download Block Craft World 3D Mod APK, which gives you unlimited coins and gems, no ads, and all skins and items unlocked. Download Block Craft World 3D Mod APK now and start creating your own block world!

      -

      FAQs

      -

      Here are some frequently asked questions about Block Craft World 3D Mod APK:

      -
        -
      1. Is Block Craft World 3D Mod APK safe to use?
      2. -

        Yes, Block Craft World 3D Mod APK is safe to use as long as you download it from a trusted source. It does not contain any viruses or malware that can harm your device or data.

        -
      3. Is Block Craft World 3D Mod APK legal to use?
      4. -

        Yes, Block Craft World 3D Mod APK is legal to use as long as you do not use it for any illegal or unethical purposes. It is a modded version of the original game that does not violate any copyrights or trademarks.

        -
      5. Can I play Block Craft World 3D Mod APK offline?
      6. -

        Yes, you can play Block Craft World 3D Mod APK offline as long as you have already downloaded and installed it on your device. However, some features of the game may require an internet connection, such as multiplayer mode, clan wars, and trading items.

        -
      7. Can I update Block Craft World 3D Mod APK?
      8. -

        Yes, you can update Block Craft World 3D Mod APK as long as there is a new version available from the same source that you downloaded it from. However, updating may cause some issues or errors with the mod features, so make sure you backup your data before updating.

        -
      9. Can I play Block Craft World 3D Mod APK with my friends?
      10. -

        Yes, you can play Block Craft World 3D Mod APK with your friends as long as they also have the same mod installed on their devices. You can join their worlds online or invite them to yours. You can also chat with them, trade items with them, and join clans and wars with them.

        -
      -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/datasets/inference_dataset.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/datasets/inference_dataset.py deleted file mode 100644 index fb577d7b538d634f27013c2784d2ea32143154cb..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/datasets/inference_dataset.py +++ /dev/null @@ -1,25 +0,0 @@ -from torch.utils.data import Dataset -from PIL import Image -from utils import data_utils - - -class InferenceDataset(Dataset): - - def __init__(self, root, opts, transform=None, preprocess=None): - self.paths = sorted(data_utils.make_dataset(root)) - self.transform = transform - self.preprocess = preprocess - self.opts = opts - - def __len__(self): - return len(self.paths) - - def __getitem__(self, index): - from_path = self.paths[index] - if self.preprocess is not None: - from_im = self.preprocess(from_path) - else: - from_im = Image.open(from_path).convert('RGB') - if self.transform: - from_im = self.transform(from_im) - return from_im diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/console.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/console.d.ts deleted file mode 100644 index 16c9137adf20cd8eaad74c61819ff6e300205b7a..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/console.d.ts +++ /dev/null @@ -1,412 +0,0 @@ -/** - * The `console` module provides a simple debugging console that is similar to the - * JavaScript console mechanism provided by web browsers. - * - * The module exports two specific components: - * - * * A `Console` class with methods such as `console.log()`, `console.error()` and`console.warn()` that can be used to write to any Node.js stream. - * * A global `console` instance configured to write to `process.stdout` and `process.stderr`. The global `console` can be used without calling`require('console')`. - * - * _**Warning**_: The global console object's methods are neither consistently - * synchronous like the browser APIs they resemble, nor are they consistently - * asynchronous like all other Node.js streams. See the `note on process I/O` for - * more information. - * - * Example using the global `console`: - * - * ```js - * console.log('hello world'); - * // Prints: hello world, to stdout - * console.log('hello %s', 'world'); - * // Prints: hello world, to stdout - * console.error(new Error('Whoops, something bad happened')); - * // Prints error message and stack trace to stderr: - * // Error: Whoops, something bad happened - * // at [eval]:5:15 - * // at Script.runInThisContext (node:vm:132:18) - * // at Object.runInThisContext (node:vm:309:38) - * // at node:internal/process/execution:77:19 - * // at [eval]-wrapper:6:22 - * // at evalScript (node:internal/process/execution:76:60) - * // at node:internal/main/eval_string:23:3 - * - * const name = 'Will Robinson'; - * console.warn(`Danger ${name}! Danger!`); - * // Prints: Danger Will Robinson! Danger!, to stderr - * ``` - * - * Example using the `Console` class: - * - * ```js - * const out = getStreamSomehow(); - * const err = getStreamSomehow(); - * const myConsole = new console.Console(out, err); - * - * myConsole.log('hello world'); - * // Prints: hello world, to out - * myConsole.log('hello %s', 'world'); - * // Prints: hello world, to out - * myConsole.error(new Error('Whoops, something bad happened')); - * // Prints: [Error: Whoops, something bad happened], to err - * - * const name = 'Will Robinson'; - * myConsole.warn(`Danger ${name}! Danger!`); - * // Prints: Danger Will Robinson! Danger!, to err - * ``` - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/console.js) - */ -declare module 'console' { - import console = require('node:console'); - export = console; -} -declare module 'node:console' { - import { InspectOptions } from 'node:util'; - global { - // This needs to be global to avoid TS2403 in case lib.dom.d.ts is present in the same build - interface Console { - Console: console.ConsoleConstructor; - /** - * `console.assert()` writes a message if `value` is [falsy](https://developer.mozilla.org/en-US/docs/Glossary/Falsy) or omitted. It only - * writes a message and does not otherwise affect execution. The output always - * starts with `"Assertion failed"`. If provided, `message` is formatted using `util.format()`. - * - * If `value` is [truthy](https://developer.mozilla.org/en-US/docs/Glossary/Truthy), nothing happens. - * - * ```js - * console.assert(true, 'does nothing'); - * - * console.assert(false, 'Whoops %s work', 'didn\'t'); - * // Assertion failed: Whoops didn't work - * - * console.assert(); - * // Assertion failed - * ``` - * @since v0.1.101 - * @param value The value tested for being truthy. - * @param message All arguments besides `value` are used as error message. - */ - assert(value: any, message?: string, ...optionalParams: any[]): void; - /** - * When `stdout` is a TTY, calling `console.clear()` will attempt to clear the - * TTY. When `stdout` is not a TTY, this method does nothing. - * - * The specific operation of `console.clear()` can vary across operating systems - * and terminal types. For most Linux operating systems, `console.clear()`operates similarly to the `clear` shell command. On Windows, `console.clear()`will clear only the output in the - * current terminal viewport for the Node.js - * binary. - * @since v8.3.0 - */ - clear(): void; - /** - * Maintains an internal counter specific to `label` and outputs to `stdout` the - * number of times `console.count()` has been called with the given `label`. - * - * ```js - * > console.count() - * default: 1 - * undefined - * > console.count('default') - * default: 2 - * undefined - * > console.count('abc') - * abc: 1 - * undefined - * > console.count('xyz') - * xyz: 1 - * undefined - * > console.count('abc') - * abc: 2 - * undefined - * > console.count() - * default: 3 - * undefined - * > - * ``` - * @since v8.3.0 - * @param label The display label for the counter. - */ - count(label?: string): void; - /** - * Resets the internal counter specific to `label`. - * - * ```js - * > console.count('abc'); - * abc: 1 - * undefined - * > console.countReset('abc'); - * undefined - * > console.count('abc'); - * abc: 1 - * undefined - * > - * ``` - * @since v8.3.0 - * @param label The display label for the counter. - */ - countReset(label?: string): void; - /** - * The `console.debug()` function is an alias for {@link log}. - * @since v8.0.0 - */ - debug(message?: any, ...optionalParams: any[]): void; - /** - * Uses `util.inspect()` on `obj` and prints the resulting string to `stdout`. - * This function bypasses any custom `inspect()` function defined on `obj`. - * @since v0.1.101 - */ - dir(obj: any, options?: InspectOptions): void; - /** - * This method calls `console.log()` passing it the arguments received. - * This method does not produce any XML formatting. - * @since v8.0.0 - */ - dirxml(...data: any[]): void; - /** - * Prints to `stderr` with newline. Multiple arguments can be passed, with the - * first used as the primary message and all additional used as substitution - * values similar to [`printf(3)`](http://man7.org/linux/man-pages/man3/printf.3.html) (the arguments are all passed to `util.format()`). - * - * ```js - * const code = 5; - * console.error('error #%d', code); - * // Prints: error #5, to stderr - * console.error('error', code); - * // Prints: error 5, to stderr - * ``` - * - * If formatting elements (e.g. `%d`) are not found in the first string then `util.inspect()` is called on each argument and the resulting string - * values are concatenated. See `util.format()` for more information. - * @since v0.1.100 - */ - error(message?: any, ...optionalParams: any[]): void; - /** - * Increases indentation of subsequent lines by spaces for `groupIndentation`length. - * - * If one or more `label`s are provided, those are printed first without the - * additional indentation. - * @since v8.5.0 - */ - group(...label: any[]): void; - /** - * An alias for {@link group}. - * @since v8.5.0 - */ - groupCollapsed(...label: any[]): void; - /** - * Decreases indentation of subsequent lines by spaces for `groupIndentation`length. - * @since v8.5.0 - */ - groupEnd(): void; - /** - * The `console.info()` function is an alias for {@link log}. - * @since v0.1.100 - */ - info(message?: any, ...optionalParams: any[]): void; - /** - * Prints to `stdout` with newline. Multiple arguments can be passed, with the - * first used as the primary message and all additional used as substitution - * values similar to [`printf(3)`](http://man7.org/linux/man-pages/man3/printf.3.html) (the arguments are all passed to `util.format()`). - * - * ```js - * const count = 5; - * console.log('count: %d', count); - * // Prints: count: 5, to stdout - * console.log('count:', count); - * // Prints: count: 5, to stdout - * ``` - * - * See `util.format()` for more information. - * @since v0.1.100 - */ - log(message?: any, ...optionalParams: any[]): void; - /** - * Try to construct a table with the columns of the properties of `tabularData`(or use `properties`) and rows of `tabularData` and log it. Falls back to just - * logging the argument if it can’t be parsed as tabular. - * - * ```js - * // These can't be parsed as tabular data - * console.table(Symbol()); - * // Symbol() - * - * console.table(undefined); - * // undefined - * - * console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }]); - * // ┌─────────┬─────┬─────┐ - * // │ (index) │ a │ b │ - * // ├─────────┼─────┼─────┤ - * // │ 0 │ 1 │ 'Y' │ - * // │ 1 │ 'Z' │ 2 │ - * // └─────────┴─────┴─────┘ - * - * console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }], ['a']); - * // ┌─────────┬─────┐ - * // │ (index) │ a │ - * // ├─────────┼─────┤ - * // │ 0 │ 1 │ - * // │ 1 │ 'Z' │ - * // └─────────┴─────┘ - * ``` - * @since v10.0.0 - * @param properties Alternate properties for constructing the table. - */ - table(tabularData: any, properties?: ReadonlyArray): void; - /** - * Starts a timer that can be used to compute the duration of an operation. Timers - * are identified by a unique `label`. Use the same `label` when calling {@link timeEnd} to stop the timer and output the elapsed time in - * suitable time units to `stdout`. For example, if the elapsed - * time is 3869ms, `console.timeEnd()` displays "3.869s". - * @since v0.1.104 - */ - time(label?: string): void; - /** - * Stops a timer that was previously started by calling {@link time} and - * prints the result to `stdout`: - * - * ```js - * console.time('100-elements'); - * for (let i = 0; i < 100; i++) {} - * console.timeEnd('100-elements'); - * // prints 100-elements: 225.438ms - * ``` - * @since v0.1.104 - */ - timeEnd(label?: string): void; - /** - * For a timer that was previously started by calling {@link time}, prints - * the elapsed time and other `data` arguments to `stdout`: - * - * ```js - * console.time('process'); - * const value = expensiveProcess1(); // Returns 42 - * console.timeLog('process', value); - * // Prints "process: 365.227ms 42". - * doExpensiveProcess2(value); - * console.timeEnd('process'); - * ``` - * @since v10.7.0 - */ - timeLog(label?: string, ...data: any[]): void; - /** - * Prints to `stderr` the string `'Trace: '`, followed by the `util.format()` formatted message and stack trace to the current position in the code. - * - * ```js - * console.trace('Show me'); - * // Prints: (stack trace will vary based on where trace is called) - * // Trace: Show me - * // at repl:2:9 - * // at REPLServer.defaultEval (repl.js:248:27) - * // at bound (domain.js:287:14) - * // at REPLServer.runBound [as eval] (domain.js:300:12) - * // at REPLServer. (repl.js:412:12) - * // at emitOne (events.js:82:20) - * // at REPLServer.emit (events.js:169:7) - * // at REPLServer.Interface._onLine (readline.js:210:10) - * // at REPLServer.Interface._line (readline.js:549:8) - * // at REPLServer.Interface._ttyWrite (readline.js:826:14) - * ``` - * @since v0.1.104 - */ - trace(message?: any, ...optionalParams: any[]): void; - /** - * The `console.warn()` function is an alias for {@link error}. - * @since v0.1.100 - */ - warn(message?: any, ...optionalParams: any[]): void; - // --- Inspector mode only --- - /** - * This method does not display anything unless used in the inspector. - * Starts a JavaScript CPU profile with an optional label. - */ - profile(label?: string): void; - /** - * This method does not display anything unless used in the inspector. - * Stops the current JavaScript CPU profiling session if one has been started and prints the report to the Profiles panel of the inspector. - */ - profileEnd(label?: string): void; - /** - * This method does not display anything unless used in the inspector. - * Adds an event with the label `label` to the Timeline panel of the inspector. - */ - timeStamp(label?: string): void; - } - /** - * The `console` module provides a simple debugging console that is similar to the - * JavaScript console mechanism provided by web browsers. - * - * The module exports two specific components: - * - * * A `Console` class with methods such as `console.log()`, `console.error()` and`console.warn()` that can be used to write to any Node.js stream. - * * A global `console` instance configured to write to `process.stdout` and `process.stderr`. The global `console` can be used without calling`require('console')`. - * - * _**Warning**_: The global console object's methods are neither consistently - * synchronous like the browser APIs they resemble, nor are they consistently - * asynchronous like all other Node.js streams. See the `note on process I/O` for - * more information. - * - * Example using the global `console`: - * - * ```js - * console.log('hello world'); - * // Prints: hello world, to stdout - * console.log('hello %s', 'world'); - * // Prints: hello world, to stdout - * console.error(new Error('Whoops, something bad happened')); - * // Prints error message and stack trace to stderr: - * // Error: Whoops, something bad happened - * // at [eval]:5:15 - * // at Script.runInThisContext (node:vm:132:18) - * // at Object.runInThisContext (node:vm:309:38) - * // at node:internal/process/execution:77:19 - * // at [eval]-wrapper:6:22 - * // at evalScript (node:internal/process/execution:76:60) - * // at node:internal/main/eval_string:23:3 - * - * const name = 'Will Robinson'; - * console.warn(`Danger ${name}! Danger!`); - * // Prints: Danger Will Robinson! Danger!, to stderr - * ``` - * - * Example using the `Console` class: - * - * ```js - * const out = getStreamSomehow(); - * const err = getStreamSomehow(); - * const myConsole = new console.Console(out, err); - * - * myConsole.log('hello world'); - * // Prints: hello world, to out - * myConsole.log('hello %s', 'world'); - * // Prints: hello world, to out - * myConsole.error(new Error('Whoops, something bad happened')); - * // Prints: [Error: Whoops, something bad happened], to err - * - * const name = 'Will Robinson'; - * myConsole.warn(`Danger ${name}! Danger!`); - * // Prints: Danger Will Robinson! Danger!, to err - * ``` - * @see [source](https://github.com/nodejs/node/blob/v16.4.2/lib/console.js) - */ - namespace console { - interface ConsoleConstructorOptions { - stdout: NodeJS.WritableStream; - stderr?: NodeJS.WritableStream | undefined; - ignoreErrors?: boolean | undefined; - colorMode?: boolean | 'auto' | undefined; - inspectOptions?: InspectOptions | undefined; - /** - * Set group indentation - * @default 2 - */ - groupIndentation?: number | undefined; - } - interface ConsoleConstructor { - prototype: Console; - new (stdout: NodeJS.WritableStream, stderr?: NodeJS.WritableStream, ignoreErrors?: boolean): Console; - new (options: ConsoleConstructorOptions): Console; - } - } - var console: Console; - } - export = globalThis.console; -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/ms/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/ms/index.js deleted file mode 100644 index c4498bcc212589664a5fe0d45e5908b174ab0a37..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/ms/index.js +++ /dev/null @@ -1,162 +0,0 @@ -/** - * Helpers. - */ - -var s = 1000; -var m = s * 60; -var h = m * 60; -var d = h * 24; -var w = d * 7; -var y = d * 365.25; - -/** - * Parse or format the given `val`. - * - * Options: - * - * - `long` verbose formatting [false] - * - * @param {String|Number} val - * @param {Object} [options] - * @throws {Error} throw an error if val is not a non-empty string or a number - * @return {String|Number} - * @api public - */ - -module.exports = function(val, options) { - options = options || {}; - var type = typeof val; - if (type === 'string' && val.length > 0) { - return parse(val); - } else if (type === 'number' && isFinite(val)) { - return options.long ? fmtLong(val) : fmtShort(val); - } - throw new Error( - 'val is not a non-empty string or a valid number. val=' + - JSON.stringify(val) - ); -}; - -/** - * Parse the given `str` and return milliseconds. - * - * @param {String} str - * @return {Number} - * @api private - */ - -function parse(str) { - str = String(str); - if (str.length > 100) { - return; - } - var match = /^(-?(?:\d+)?\.?\d+) *(milliseconds?|msecs?|ms|seconds?|secs?|s|minutes?|mins?|m|hours?|hrs?|h|days?|d|weeks?|w|years?|yrs?|y)?$/i.exec( - str - ); - if (!match) { - return; - } - var n = parseFloat(match[1]); - var type = (match[2] || 'ms').toLowerCase(); - switch (type) { - case 'years': - case 'year': - case 'yrs': - case 'yr': - case 'y': - return n * y; - case 'weeks': - case 'week': - case 'w': - return n * w; - case 'days': - case 'day': - case 'd': - return n * d; - case 'hours': - case 'hour': - case 'hrs': - case 'hr': - case 'h': - return n * h; - case 'minutes': - case 'minute': - case 'mins': - case 'min': - case 'm': - return n * m; - case 'seconds': - case 'second': - case 'secs': - case 'sec': - case 's': - return n * s; - case 'milliseconds': - case 'millisecond': - case 'msecs': - case 'msec': - case 'ms': - return n; - default: - return undefined; - } -} - -/** - * Short format for `ms`. - * - * @param {Number} ms - * @return {String} - * @api private - */ - -function fmtShort(ms) { - var msAbs = Math.abs(ms); - if (msAbs >= d) { - return Math.round(ms / d) + 'd'; - } - if (msAbs >= h) { - return Math.round(ms / h) + 'h'; - } - if (msAbs >= m) { - return Math.round(ms / m) + 'm'; - } - if (msAbs >= s) { - return Math.round(ms / s) + 's'; - } - return ms + 'ms'; -} - -/** - * Long format for `ms`. - * - * @param {Number} ms - * @return {String} - * @api private - */ - -function fmtLong(ms) { - var msAbs = Math.abs(ms); - if (msAbs >= d) { - return plural(ms, msAbs, d, 'day'); - } - if (msAbs >= h) { - return plural(ms, msAbs, h, 'hour'); - } - if (msAbs >= m) { - return plural(ms, msAbs, m, 'minute'); - } - if (msAbs >= s) { - return plural(ms, msAbs, s, 'second'); - } - return ms + ' ms'; -} - -/** - * Pluralization helper. - */ - -function plural(ms, msAbs, n, name) { - var isPlural = msAbs >= n * 1.5; - return Math.round(ms / n) + ' ' + name + (isPlural ? 's' : ''); -} diff --git a/spaces/firefighter/TransDis-CreativityAutoAssessment/app.py b/spaces/firefighter/TransDis-CreativityAutoAssessment/app.py deleted file mode 100644 index d7def0da58706466c6a76a4bc73208b7cf9547cf..0000000000000000000000000000000000000000 --- a/spaces/firefighter/TransDis-CreativityAutoAssessment/app.py +++ /dev/null @@ -1,89 +0,0 @@ -from io import StringIO -from typing import Optional - -import gradio as gr -import pandas as pd - -from utils import pipeline -from utils.models import list_models - - -def read_data(filepath: str) -> Optional[pd.DataFrame]: - if filepath.endswith('.xlsx'): - df = pd.read_excel(filepath) - elif filepath.endswith('.csv'): - df = pd.read_csv(filepath) - else: - raise Exception('File type not supported') - return df - - -def process( - task_name: str, - model_name: str, - pooling: str, - text: str, - file=None, -) -> (None, pd.DataFrame, str): - # try: - # load file - if file: - df = read_data(file.name) - elif text: - string_io = StringIO(text) - df = pd.read_csv(string_io) - assert len(df) >= 1, 'No input data' - else: - raise Exception('No input data') - - # process - if task_name == 'Originality': - df = pipeline.p0_originality(df, model_name, pooling) - elif task_name == 'Flexibility': - df = pipeline.p1_flexibility(df, model_name, pooling) - else: - raise Exception('Task not supported') - - # save - path = 'output.csv' - df.to_csv(path, index=False, encoding='utf-8-sig') - return None, df.iloc[:10], path - # except Exception as e: - # return {'Error': e}, None, None - - -# input -task_name_dropdown = gr.components.Dropdown( - label='Task Name', - value='Originality', - choices=['Originality', 'Flexibility'] -) -model_name_dropdown = gr.components.Dropdown( - label='Model Name', - value=list_models[0], - choices=list_models -) -pooling_dropdown = gr.components.Dropdown( - label='Pooling', - value='mean', - choices=['mean', 'cls'] -) -text_input = gr.components.Textbox( - value=open('data/example_xlm.csv', 'r').read(), - lines=10, -) -file_input = gr.components.File(label='Input File', file_types=['.csv', '.xlsx']) - -# output -text_output = gr.components.Textbox(label='Output') -dataframe_output = gr.components.Dataframe(label='DataFrame') -file_output = gr.components.File(label='Output File', file_types=['.csv', '.xlsx']) - -app = gr.Interface( - fn=process, - inputs=[task_name_dropdown, model_name_dropdown, pooling_dropdown, text_input, file_input], - outputs=[text_output, dataframe_output, file_output], - description=open('data/description.txt', 'r').read(), - title='TransDis-CreativityAutoAssessment', -) -app.launch() diff --git a/spaces/flamehaze1115/Wonder3D-demo/utils/misc.py b/spaces/flamehaze1115/Wonder3D-demo/utils/misc.py deleted file mode 100644 index 45a76c61f672afc392f507bd7a652c4d480d065c..0000000000000000000000000000000000000000 --- a/spaces/flamehaze1115/Wonder3D-demo/utils/misc.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -from omegaconf import OmegaConf -from packaging import version - - -# ============ Register OmegaConf Recolvers ============= # -# OmegaConf.register_new_resolver('calc_exp_lr_decay_rate', lambda factor, n: factor**(1./n)) -# OmegaConf.register_new_resolver('add', lambda a, b: a + b) -# OmegaConf.register_new_resolver('sub', lambda a, b: a - b) -# OmegaConf.register_new_resolver('mul', lambda a, b: a * b) -# OmegaConf.register_new_resolver('div', lambda a, b: a / b) -# OmegaConf.register_new_resolver('idiv', lambda a, b: a // b) -# OmegaConf.register_new_resolver('basename', lambda p: os.path.basename(p)) -# ======================================================= # - - -def prompt(question): - inp = input(f"{question} (y/n)").lower().strip() - if inp and inp == 'y': - return True - if inp and inp == 'n': - return False - return prompt(question) - - -def load_config(*yaml_files, cli_args=[]): - yaml_confs = [OmegaConf.load(f) for f in yaml_files] - cli_conf = OmegaConf.from_cli(cli_args) - conf = OmegaConf.merge(*yaml_confs, cli_conf) - OmegaConf.resolve(conf) - return conf - - -def config_to_primitive(config, resolve=True): - return OmegaConf.to_container(config, resolve=resolve) - - -def dump_config(path, config): - with open(path, 'w') as fp: - OmegaConf.save(config=config, f=fp) - -def get_rank(): - # SLURM_PROCID can be set even if SLURM is not managing the multiprocessing, - # therefore LOCAL_RANK needs to be checked first - rank_keys = ("RANK", "LOCAL_RANK", "SLURM_PROCID", "JSM_NAMESPACE_RANK") - for key in rank_keys: - rank = os.environ.get(key) - if rank is not None: - return int(rank) - return 0 - - -def parse_version(ver): - return version.parse(ver) diff --git a/spaces/fluffyfluff/multiple-pdf-chat/htmlTemplates.py b/spaces/fluffyfluff/multiple-pdf-chat/htmlTemplates.py deleted file mode 100644 index 21b2fe86d8ed9f3e842114dcc49751833095dc22..0000000000000000000000000000000000000000 --- a/spaces/fluffyfluff/multiple-pdf-chat/htmlTemplates.py +++ /dev/null @@ -1,44 +0,0 @@ -css = ''' -bar") - assert soup.get_text() == "foobar" - - def test_all_strings_ignores_special_string_containers(self): - soup = self.soup("foobar") - assert ['foo', 'bar'] == list(soup.strings) - - soup = self.soup("foobar") - assert ['foo', 'bar'] == list(soup.strings) - - def test_string_methods_inside_special_string_container_tags(self): - # Strings inside tags like ") - - assert style.div.get_text() == "a" - assert list(style.div.strings) == ["a"] - assert style.div.style.get_text() == "Some CSS" - assert list(style.div.style.strings) == ['Some CSS'] - - # The comment is not picked up here. That's because it was - # parsed into a Comment object, which is not considered - # interesting by template.strings. - assert template.div.get_text() == "a" - assert list(template.div.strings) == ["a"] - assert template.div.template.get_text() == "Templated text." - assert list(template.div.template.strings) == ["Templated ", "text", "."] - - # The comment is included here, because it didn't get parsed - # into a Comment object--it's part of the Script string. - assert script.div.get_text() == "a" - assert list(script.div.strings) == ["a"] - assert script.div.script.get_text() == "Some text" - assert list(script.div.script.strings) == ['Some text'] - - -class TestMultiValuedAttributes(SoupTest): - """Test the behavior of multi-valued attributes like 'class'. - - The values of such attributes are always presented as lists. - """ - - def test_single_value_becomes_list(self): - soup = self.soup("") - assert ["foo"] ==soup.a['class'] - - def test_multiple_values_becomes_list(self): - soup = self.soup("") - assert ["foo", "bar"] == soup.a['class'] - - def test_multiple_values_separated_by_weird_whitespace(self): - soup = self.soup("") - assert ["foo", "bar", "baz"] ==soup.a['class'] - - def test_attributes_joined_into_string_on_output(self): - soup = self.soup("") - assert b'' == soup.a.encode() - - def test_get_attribute_list(self): - soup = self.soup("") - assert ['abc def'] == soup.a.get_attribute_list('id') - - def test_accept_charset(self): - soup = self.soup('
      ') - assert ['ISO-8859-1', 'UTF-8'] == soup.form['accept-charset'] - - def test_cdata_attribute_applying_only_to_one_tag(self): - data = '' - soup = self.soup(data) - # We saw in another test that accept-charset is a cdata-list - # attribute for the tag. But it's not a cdata-list - # attribute for any other tag. - assert 'ISO-8859-1 UTF-8' == soup.a['accept-charset'] - - def test_customization(self): - # It's possible to change which attributes of which tags - # are treated as multi-valued attributes. - # - # Here, 'id' is a multi-valued attribute and 'class' is not. - # - # TODO: This code is in the builder and should be tested there. - soup = self.soup( - '', multi_valued_attributes={ '*' : 'id' } - ) - assert soup.a['class'] == 'foo' - assert soup.a['id'] == ['bar'] diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dotenv/main.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dotenv/main.py deleted file mode 100644 index f40c20ea202b283260a278bc38b0c63a8e3efc1e..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dotenv/main.py +++ /dev/null @@ -1,382 +0,0 @@ -import io -import logging -import os -import shutil -import sys -import tempfile -from collections import OrderedDict -from contextlib import contextmanager -from typing import (IO, Dict, Iterable, Iterator, Mapping, Optional, Tuple, - Union) - -from .parser import Binding, parse_stream -from .variables import parse_variables - -# A type alias for a string path to be used for the paths in this file. -# These paths may flow to `open()` and `shutil.move()`; `shutil.move()` -# only accepts string paths, not byte paths or file descriptors. See -# https://github.com/python/typeshed/pull/6832. -StrPath = Union[str, 'os.PathLike[str]'] - -logger = logging.getLogger(__name__) - - -def with_warn_for_invalid_lines(mappings: Iterator[Binding]) -> Iterator[Binding]: - for mapping in mappings: - if mapping.error: - logger.warning( - "Python-dotenv could not parse statement starting at line %s", - mapping.original.line, - ) - yield mapping - - -class DotEnv: - def __init__( - self, - dotenv_path: Optional[StrPath], - stream: Optional[IO[str]] = None, - verbose: bool = False, - encoding: Optional[str] = None, - interpolate: bool = True, - override: bool = True, - ) -> None: - self.dotenv_path: Optional[StrPath] = dotenv_path - self.stream: Optional[IO[str]] = stream - self._dict: Optional[Dict[str, Optional[str]]] = None - self.verbose: bool = verbose - self.encoding: Optional[str] = encoding - self.interpolate: bool = interpolate - self.override: bool = override - - @contextmanager - def _get_stream(self) -> Iterator[IO[str]]: - if self.dotenv_path and os.path.isfile(self.dotenv_path): - with open(self.dotenv_path, encoding=self.encoding) as stream: - yield stream - elif self.stream is not None: - yield self.stream - else: - if self.verbose: - logger.info( - "Python-dotenv could not find configuration file %s.", - self.dotenv_path or '.env', - ) - yield io.StringIO('') - - def dict(self) -> Dict[str, Optional[str]]: - """Return dotenv as dict""" - if self._dict: - return self._dict - - raw_values = self.parse() - - if self.interpolate: - self._dict = OrderedDict(resolve_variables(raw_values, override=self.override)) - else: - self._dict = OrderedDict(raw_values) - - return self._dict - - def parse(self) -> Iterator[Tuple[str, Optional[str]]]: - with self._get_stream() as stream: - for mapping in with_warn_for_invalid_lines(parse_stream(stream)): - if mapping.key is not None: - yield mapping.key, mapping.value - - def set_as_environment_variables(self) -> bool: - """ - Load the current dotenv as system environment variable. - """ - if not self.dict(): - return False - - for k, v in self.dict().items(): - if k in os.environ and not self.override: - continue - if v is not None: - os.environ[k] = v - - return True - - def get(self, key: str) -> Optional[str]: - """ - """ - data = self.dict() - - if key in data: - return data[key] - - if self.verbose: - logger.warning("Key %s not found in %s.", key, self.dotenv_path) - - return None - - -def get_key( - dotenv_path: StrPath, - key_to_get: str, - encoding: Optional[str] = "utf-8", -) -> Optional[str]: - """ - Get the value of a given key from the given .env. - - Returns `None` if the key isn't found or doesn't have a value. - """ - return DotEnv(dotenv_path, verbose=True, encoding=encoding).get(key_to_get) - - -@contextmanager -def rewrite( - path: StrPath, - encoding: Optional[str], -) -> Iterator[Tuple[IO[str], IO[str]]]: - if not os.path.isfile(path): - with open(path, mode="w", encoding=encoding) as source: - source.write("") - with tempfile.NamedTemporaryFile(mode="w", encoding=encoding, delete=False) as dest: - try: - with open(path, encoding=encoding) as source: - yield (source, dest) - except BaseException: - os.unlink(dest.name) - raise - shutil.move(dest.name, path) - - -def set_key( - dotenv_path: StrPath, - key_to_set: str, - value_to_set: str, - quote_mode: str = "always", - export: bool = False, - encoding: Optional[str] = "utf-8", -) -> Tuple[Optional[bool], str, str]: - """ - Adds or Updates a key/value to the given .env - - If the .env path given doesn't exist, fails instead of risking creating - an orphan .env somewhere in the filesystem - """ - if quote_mode not in ("always", "auto", "never"): - raise ValueError(f"Unknown quote_mode: {quote_mode}") - - quote = ( - quote_mode == "always" - or (quote_mode == "auto" and not value_to_set.isalnum()) - ) - - if quote: - value_out = "'{}'".format(value_to_set.replace("'", "\\'")) - else: - value_out = value_to_set - if export: - line_out = f'export {key_to_set}={value_out}\n' - else: - line_out = f"{key_to_set}={value_out}\n" - - with rewrite(dotenv_path, encoding=encoding) as (source, dest): - replaced = False - missing_newline = False - for mapping in with_warn_for_invalid_lines(parse_stream(source)): - if mapping.key == key_to_set: - dest.write(line_out) - replaced = True - else: - dest.write(mapping.original.string) - missing_newline = not mapping.original.string.endswith("\n") - if not replaced: - if missing_newline: - dest.write("\n") - dest.write(line_out) - - return True, key_to_set, value_to_set - - -def unset_key( - dotenv_path: StrPath, - key_to_unset: str, - quote_mode: str = "always", - encoding: Optional[str] = "utf-8", -) -> Tuple[Optional[bool], str]: - """ - Removes a given key from the given `.env` file. - - If the .env path given doesn't exist, fails. - If the given key doesn't exist in the .env, fails. - """ - if not os.path.exists(dotenv_path): - logger.warning("Can't delete from %s - it doesn't exist.", dotenv_path) - return None, key_to_unset - - removed = False - with rewrite(dotenv_path, encoding=encoding) as (source, dest): - for mapping in with_warn_for_invalid_lines(parse_stream(source)): - if mapping.key == key_to_unset: - removed = True - else: - dest.write(mapping.original.string) - - if not removed: - logger.warning("Key %s not removed from %s - key doesn't exist.", key_to_unset, dotenv_path) - return None, key_to_unset - - return removed, key_to_unset - - -def resolve_variables( - values: Iterable[Tuple[str, Optional[str]]], - override: bool, -) -> Mapping[str, Optional[str]]: - new_values: Dict[str, Optional[str]] = {} - - for (name, value) in values: - if value is None: - result = None - else: - atoms = parse_variables(value) - env: Dict[str, Optional[str]] = {} - if override: - env.update(os.environ) # type: ignore - env.update(new_values) - else: - env.update(new_values) - env.update(os.environ) # type: ignore - result = "".join(atom.resolve(env) for atom in atoms) - - new_values[name] = result - - return new_values - - -def _walk_to_root(path: str) -> Iterator[str]: - """ - Yield directories starting from the given directory up to the root - """ - if not os.path.exists(path): - raise IOError('Starting path not found') - - if os.path.isfile(path): - path = os.path.dirname(path) - - last_dir = None - current_dir = os.path.abspath(path) - while last_dir != current_dir: - yield current_dir - parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir)) - last_dir, current_dir = current_dir, parent_dir - - -def find_dotenv( - filename: str = '.env', - raise_error_if_not_found: bool = False, - usecwd: bool = False, -) -> str: - """ - Search in increasingly higher folders for the given file - - Returns path to the file if found, or an empty string otherwise - """ - - def _is_interactive(): - """ Decide whether this is running in a REPL or IPython notebook """ - main = __import__('__main__', None, None, fromlist=['__file__']) - return not hasattr(main, '__file__') - - if usecwd or _is_interactive() or getattr(sys, 'frozen', False): - # Should work without __file__, e.g. in REPL or IPython notebook. - path = os.getcwd() - else: - # will work for .py files - frame = sys._getframe() - current_file = __file__ - - while frame.f_code.co_filename == current_file: - assert frame.f_back is not None - frame = frame.f_back - frame_filename = frame.f_code.co_filename - path = os.path.dirname(os.path.abspath(frame_filename)) - - for dirname in _walk_to_root(path): - check_path = os.path.join(dirname, filename) - if os.path.isfile(check_path): - return check_path - - if raise_error_if_not_found: - raise IOError('File not found') - - return '' - - -def load_dotenv( - dotenv_path: Optional[StrPath] = None, - stream: Optional[IO[str]] = None, - verbose: bool = False, - override: bool = False, - interpolate: bool = True, - encoding: Optional[str] = "utf-8", -) -> bool: - """Parse a .env file and then load all the variables found as environment variables. - - Parameters: - dotenv_path: Absolute or relative path to .env file. - stream: Text stream (such as `io.StringIO`) with .env content, used if - `dotenv_path` is `None`. - verbose: Whether to output a warning the .env file is missing. - override: Whether to override the system environment variables with the variables - from the `.env` file. - encoding: Encoding to be used to read the file. - Returns: - Bool: True if at least one environment variable is set else False - - If both `dotenv_path` and `stream` are `None`, `find_dotenv()` is used to find the - .env file. - """ - if dotenv_path is None and stream is None: - dotenv_path = find_dotenv() - - dotenv = DotEnv( - dotenv_path=dotenv_path, - stream=stream, - verbose=verbose, - interpolate=interpolate, - override=override, - encoding=encoding, - ) - return dotenv.set_as_environment_variables() - - -def dotenv_values( - dotenv_path: Optional[StrPath] = None, - stream: Optional[IO[str]] = None, - verbose: bool = False, - interpolate: bool = True, - encoding: Optional[str] = "utf-8", -) -> Dict[str, Optional[str]]: - """ - Parse a .env file and return its content as a dict. - - The returned dict will have `None` values for keys without values in the .env file. - For example, `foo=bar` results in `{"foo": "bar"}` whereas `foo` alone results in - `{"foo": None}` - - Parameters: - dotenv_path: Absolute or relative path to the .env file. - stream: `StringIO` object with .env content, used if `dotenv_path` is `None`. - verbose: Whether to output a warning if the .env file is missing. - encoding: Encoding to be used to read the file. - - If both `dotenv_path` and `stream` are `None`, `find_dotenv()` is used to find the - .env file. - """ - if dotenv_path is None and stream is None: - dotenv_path = find_dotenv() - - return DotEnv( - dotenv_path=dotenv_path, - stream=stream, - verbose=verbose, - interpolate=interpolate, - override=True, - encoding=encoding, - ).dict() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/cached.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/cached.py deleted file mode 100644 index b679cce51186d8371011d590d87b6c250943f95c..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/cached.py +++ /dev/null @@ -1,784 +0,0 @@ -from __future__ import annotations - -import inspect -import logging -import os -import tempfile -import time -import weakref -from shutil import rmtree -from typing import TYPE_CHECKING, Any, Callable, ClassVar - -from fsspec import AbstractFileSystem, filesystem -from fsspec.callbacks import _DEFAULT_CALLBACK -from fsspec.compression import compr -from fsspec.core import BaseCache, MMapCache -from fsspec.exceptions import BlocksizeMismatchError -from fsspec.implementations.cache_mapper import create_cache_mapper -from fsspec.implementations.cache_metadata import CacheMetadata -from fsspec.spec import AbstractBufferedFile -from fsspec.utils import infer_compression - -if TYPE_CHECKING: - from fsspec.implementations.cache_mapper import AbstractCacheMapper - -logger = logging.getLogger("fsspec.cached") - - -class CachingFileSystem(AbstractFileSystem): - """Locally caching filesystem, layer over any other FS - - This class implements chunk-wise local storage of remote files, for quick - access after the initial download. The files are stored in a given - directory with hashes of URLs for the filenames. If no directory is given, - a temporary one is used, which should be cleaned up by the OS after the - process ends. The files themselves are sparse (as implemented in - :class:`~fsspec.caching.MMapCache`), so only the data which is accessed - takes up space. - - Restrictions: - - - the block-size must be the same for each access of a given file, unless - all blocks of the file have already been read - - caching can only be applied to file-systems which produce files - derived from fsspec.spec.AbstractBufferedFile ; LocalFileSystem is also - allowed, for testing - """ - - protocol: ClassVar[str | tuple[str, ...]] = ("blockcache", "cached") - - def __init__( - self, - target_protocol=None, - cache_storage="TMP", - cache_check=10, - check_files=False, - expiry_time=604800, - target_options=None, - fs=None, - same_names: bool | None = None, - compression=None, - cache_mapper: AbstractCacheMapper | None = None, - **kwargs, - ): - """ - - Parameters - ---------- - target_protocol: str (optional) - Target filesystem protocol. Provide either this or ``fs``. - cache_storage: str or list(str) - Location to store files. If "TMP", this is a temporary directory, - and will be cleaned up by the OS when this process ends (or later). - If a list, each location will be tried in the order given, but - only the last will be considered writable. - cache_check: int - Number of seconds between reload of cache metadata - check_files: bool - Whether to explicitly see if the UID of the remote file matches - the stored one before using. Warning: some file systems such as - HTTP cannot reliably give a unique hash of the contents of some - path, so be sure to set this option to False. - expiry_time: int - The time in seconds after which a local copy is considered useless. - Set to falsy to prevent expiry. The default is equivalent to one - week. - target_options: dict or None - Passed to the instantiation of the FS, if fs is None. - fs: filesystem instance - The target filesystem to run against. Provide this or ``protocol``. - same_names: bool (optional) - By default, target URLs are hashed using a ``HashCacheMapper`` so - that files from different backends with the same basename do not - conflict. If this argument is ``true``, a ``BasenameCacheMapper`` - is used instead. Other cache mapper options are available by using - the ``cache_mapper`` keyword argument. Only one of this and - ``cache_mapper`` should be specified. - compression: str (optional) - To decompress on download. Can be 'infer' (guess from the URL name), - one of the entries in ``fsspec.compression.compr``, or None for no - decompression. - cache_mapper: AbstractCacheMapper (optional) - The object use to map from original filenames to cached filenames. - Only one of this and ``same_names`` should be specified. - """ - super().__init__(**kwargs) - if fs is None and target_protocol is None: - raise ValueError( - "Please provide filesystem instance(fs) or target_protocol" - ) - if not (fs is None) ^ (target_protocol is None): - raise ValueError( - "Both filesystems (fs) and target_protocol may not be both given." - ) - if cache_storage == "TMP": - tempdir = tempfile.mkdtemp() - storage = [tempdir] - weakref.finalize(self, self._remove_tempdir, tempdir) - else: - if isinstance(cache_storage, str): - storage = [cache_storage] - else: - storage = cache_storage - os.makedirs(storage[-1], exist_ok=True) - self.storage = storage - self.kwargs = target_options or {} - self.cache_check = cache_check - self.check_files = check_files - self.expiry = expiry_time - self.compression = compression - - if same_names is not None and cache_mapper is not None: - raise ValueError( - "Cannot specify both same_names and cache_mapper in " - "CachingFileSystem.__init__" - ) - if cache_mapper is not None: - self._mapper = cache_mapper - else: - self._mapper = create_cache_mapper( - same_names if same_names is not None else False - ) - - self.target_protocol = ( - target_protocol - if isinstance(target_protocol, str) - else (fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]) - ) - self._metadata = CacheMetadata(self.storage) - self.load_cache() - self.fs = fs if fs is not None else filesystem(target_protocol, **self.kwargs) - - def _strip_protocol(path): - # acts as a method, since each instance has a difference target - return self.fs._strip_protocol(type(self)._strip_protocol(path)) - - self._strip_protocol: Callable = _strip_protocol - - @staticmethod - def _remove_tempdir(tempdir): - try: - rmtree(tempdir) - except Exception: - pass - - def _mkcache(self): - os.makedirs(self.storage[-1], exist_ok=True) - - def load_cache(self): - """Read set of stored blocks from file""" - self._metadata.load() - self._mkcache() - self.last_cache = time.time() - - def save_cache(self): - """Save set of stored blocks from file""" - self._mkcache() - self._metadata.save() - self.last_cache = time.time() - - def _check_cache(self): - """Reload caches if time elapsed or any disappeared""" - self._mkcache() - if not self.cache_check: - # explicitly told not to bother checking - return - timecond = time.time() - self.last_cache > self.cache_check - existcond = all(os.path.exists(storage) for storage in self.storage) - if timecond or not existcond: - self.load_cache() - - def _check_file(self, path): - """Is path in cache and still valid""" - path = self._strip_protocol(path) - self._check_cache() - return self._metadata.check_file(path, self) - - def clear_cache(self): - """Remove all files and metadata from the cache - - In the case of multiple cache locations, this clears only the last one, - which is assumed to be the read/write one. - """ - rmtree(self.storage[-1]) - self.load_cache() - - def clear_expired_cache(self, expiry_time=None): - """Remove all expired files and metadata from the cache - - In the case of multiple cache locations, this clears only the last one, - which is assumed to be the read/write one. - - Parameters - ---------- - expiry_time: int - The time in seconds after which a local copy is considered useless. - If not defined the default is equivalent to the attribute from the - file caching instantiation. - """ - - if not expiry_time: - expiry_time = self.expiry - - self._check_cache() - - expired_files, writable_cache_empty = self._metadata.clear_expired(expiry_time) - for fn in expired_files: - if os.path.exists(fn): - os.remove(fn) - - if writable_cache_empty: - rmtree(self.storage[-1]) - self.load_cache() - - def pop_from_cache(self, path): - """Remove cached version of given file - - Deletes local copy of the given (remote) path. If it is found in a cache - location which is not the last, it is assumed to be read-only, and - raises PermissionError - """ - path = self._strip_protocol(path) - fn = self._metadata.pop_file(path) - if fn is not None: - os.remove(fn) - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=True, - cache_options=None, - **kwargs, - ): - """Wrap the target _open - - If the whole file exists in the cache, just open it locally and - return that. - - Otherwise, open the file on the target FS, and make it have a mmap - cache pointing to the location which we determine, in our cache. - The ``blocks`` instance is shared, so as the mmap cache instance - updates, so does the entry in our ``cached_files`` attribute. - We monkey-patch this file, so that when it closes, we call - ``close_and_update`` to save the state of the blocks. - """ - path = self._strip_protocol(path) - - path = self.fs._strip_protocol(path) - if "r" not in mode: - return self.fs._open( - path, - mode=mode, - block_size=block_size, - autocommit=autocommit, - cache_options=cache_options, - **kwargs, - ) - detail = self._check_file(path) - if detail: - # file is in cache - detail, fn = detail - hash, blocks = detail["fn"], detail["blocks"] - if blocks is True: - # stored file is complete - logger.debug("Opening local copy of %s" % path) - return open(fn, mode) - # TODO: action where partial file exists in read-only cache - logger.debug("Opening partially cached copy of %s" % path) - else: - hash = self._mapper(path) - fn = os.path.join(self.storage[-1], hash) - blocks = set() - detail = { - "original": path, - "fn": hash, - "blocks": blocks, - "time": time.time(), - "uid": self.fs.ukey(path), - } - self._metadata.update_file(path, detail) - logger.debug("Creating local sparse file for %s" % path) - - # call target filesystems open - self._mkcache() - f = self.fs._open( - path, - mode=mode, - block_size=block_size, - autocommit=autocommit, - cache_options=cache_options, - cache_type="none", - **kwargs, - ) - if self.compression: - comp = ( - infer_compression(path) - if self.compression == "infer" - else self.compression - ) - f = compr[comp](f, mode="rb") - if "blocksize" in detail: - if detail["blocksize"] != f.blocksize: - raise BlocksizeMismatchError( - "Cached file must be reopened with same block" - "size as original (old: %i, new %i)" - "" % (detail["blocksize"], f.blocksize) - ) - else: - detail["blocksize"] = f.blocksize - f.cache = MMapCache(f.blocksize, f._fetch_range, f.size, fn, blocks) - close = f.close - f.close = lambda: self.close_and_update(f, close) - self.save_cache() - return f - - def hash_name(self, path: str, *args: Any) -> str: - # Kept for backward compatibility with downstream libraries. - # Ignores extra arguments, previously same_name boolean. - return self._mapper(path) - - def close_and_update(self, f, close): - """Called when a file is closing, so store the set of blocks""" - if f.closed: - return - path = self._strip_protocol(f.path) - self._metadata.on_close_cached_file(f, path) - try: - logger.debug("going to save") - self.save_cache() - logger.debug("saved") - except OSError: - logger.debug("Cache saving failed while closing file") - except NameError: - logger.debug("Cache save failed due to interpreter shutdown") - close() - f.closed = True - - def __getattribute__(self, item): - if item in [ - "load_cache", - "_open", - "save_cache", - "close_and_update", - "__init__", - "__getattribute__", - "__reduce__", - "_make_local_details", - "open", - "cat", - "cat_file", - "get", - "read_block", - "tail", - "head", - "_check_file", - "_check_cache", - "_mkcache", - "clear_cache", - "clear_expired_cache", - "pop_from_cache", - "_mkcache", - "local_file", - "_paths_from_path", - "get_mapper", - "open_many", - "commit_many", - "hash_name", - "__hash__", - "__eq__", - "to_json", - ]: - # all the methods defined in this class. Note `open` here, since - # it calls `_open`, but is actually in superclass - return lambda *args, **kw: getattr(type(self), item).__get__(self)( - *args, **kw - ) - if item in ["__reduce_ex__"]: - raise AttributeError - if item in ["_cache"]: - # class attributes - return getattr(type(self), item) - if item == "__class__": - return type(self) - d = object.__getattribute__(self, "__dict__") - fs = d.get("fs", None) # fs is not immediately defined - if item in d: - return d[item] - elif fs is not None: - if item in fs.__dict__: - # attribute of instance - return fs.__dict__[item] - # attributed belonging to the target filesystem - cls = type(fs) - m = getattr(cls, item) - if (inspect.isfunction(m) or inspect.isdatadescriptor(m)) and ( - not hasattr(m, "__self__") or m.__self__ is None - ): - # instance method - return m.__get__(fs, cls) - return m # class method or attribute - else: - # attributes of the superclass, while target is being set up - return super().__getattribute__(item) - - def __eq__(self, other): - """Test for equality.""" - if self is other: - return True - if not isinstance(other, type(self)): - return False - return ( - self.storage == other.storage - and self.kwargs == other.kwargs - and self.cache_check == other.cache_check - and self.check_files == other.check_files - and self.expiry == other.expiry - and self.compression == other.compression - and self._mapper == other._mapper - and self.target_protocol == other.target_protocol - ) - - def __hash__(self): - """Calculate hash.""" - return ( - hash(tuple(self.storage)) - ^ hash(str(self.kwargs)) - ^ hash(self.cache_check) - ^ hash(self.check_files) - ^ hash(self.expiry) - ^ hash(self.compression) - ^ hash(self._mapper) - ^ hash(self.target_protocol) - ) - - def to_json(self): - """Calculate JSON representation. - - Not implemented yet for CachingFileSystem. - """ - raise NotImplementedError( - "CachingFileSystem JSON representation not implemented" - ) - - -class WholeFileCacheFileSystem(CachingFileSystem): - """Caches whole remote files on first access - - This class is intended as a layer over any other file system, and - will make a local copy of each file accessed, so that all subsequent - reads are local. This is similar to ``CachingFileSystem``, but without - the block-wise functionality and so can work even when sparse files - are not allowed. See its docstring for definition of the init - arguments. - - The class still needs access to the remote store for listing files, - and may refresh cached files. - """ - - protocol = "filecache" - local_file = True - - def open_many(self, open_files): - paths = [of.path for of in open_files] - if "r" in open_files.mode: - self._mkcache() - else: - return [ - LocalTempFile(self.fs, path, mode=open_files.mode) for path in paths - ] - - if self.compression: - raise NotImplementedError - details = [self._check_file(sp) for sp in paths] - downpath = [p for p, d in zip(paths, details) if not d] - downfn0 = [ - os.path.join(self.storage[-1], self._mapper(p)) - for p, d in zip(paths, details) - ] # keep these path names for opening later - downfn = [fn for fn, d in zip(downfn0, details) if not d] - if downpath: - # skip if all files are already cached and up to date - self.fs.get(downpath, downfn) - - # update metadata - only happens when downloads are successful - newdetail = [ - { - "original": path, - "fn": self._mapper(path), - "blocks": True, - "time": time.time(), - "uid": self.fs.ukey(path), - } - for path in downpath - ] - for path, detail in zip(downpath, newdetail): - self._metadata.update_file(path, detail) - self.save_cache() - - def firstpart(fn): - # helper to adapt both whole-file and simple-cache - return fn[1] if isinstance(fn, tuple) else fn - - return [ - open(firstpart(fn0) if fn0 else fn1, mode=open_files.mode) - for fn0, fn1 in zip(details, downfn0) - ] - - def commit_many(self, open_files): - self.fs.put([f.fn for f in open_files], [f.path for f in open_files]) - [f.close() for f in open_files] - for f in open_files: - # in case autocommit is off, and so close did not already delete - try: - os.remove(f.name) - except FileNotFoundError: - pass - - def _make_local_details(self, path): - hash = self._mapper(path) - fn = os.path.join(self.storage[-1], hash) - detail = { - "original": path, - "fn": hash, - "blocks": True, - "time": time.time(), - "uid": self.fs.ukey(path), - } - self._metadata.update_file(path, detail) - logger.debug("Copying %s to local cache" % path) - return fn - - def cat( - self, - path, - recursive=False, - on_error="raise", - callback=_DEFAULT_CALLBACK, - **kwargs, - ): - paths = self.expand_path( - path, recursive=recursive, maxdepth=kwargs.get("maxdepth", None) - ) - getpaths = [] - storepaths = [] - fns = [] - out = {} - for p in paths.copy(): - try: - detail = self._check_file(p) - if not detail: - fn = self._make_local_details(p) - getpaths.append(p) - storepaths.append(fn) - else: - detail, fn = detail if isinstance(detail, tuple) else (None, detail) - fns.append(fn) - except Exception as e: - if on_error == "raise": - raise - if on_error == "return": - out[p] = e - paths.remove(p) - - if getpaths: - self.fs.get(getpaths, storepaths) - self.save_cache() - - callback.set_size(len(paths)) - for p, fn in zip(paths, fns): - with open(fn, "rb") as f: - out[p] = f.read() - callback.relative_update(1) - if isinstance(path, str) and len(paths) == 1 and recursive is False: - out = out[paths[0]] - return out - - def _open(self, path, mode="rb", **kwargs): - path = self._strip_protocol(path) - if "r" not in mode: - return LocalTempFile(self, path, mode=mode) - detail = self._check_file(path) - if detail: - detail, fn = detail - _, blocks = detail["fn"], detail["blocks"] - if blocks is True: - logger.debug("Opening local copy of %s" % path) - - # In order to support downstream filesystems to be able to - # infer the compression from the original filename, like - # the `TarFileSystem`, let's extend the `io.BufferedReader` - # fileobject protocol by adding a dedicated attribute - # `original`. - f = open(fn, mode) - f.original = detail.get("original") - return f - else: - raise ValueError( - "Attempt to open partially cached file %s" - "as a wholly cached file" % path - ) - else: - fn = self._make_local_details(path) - kwargs["mode"] = mode - - # call target filesystems open - self._mkcache() - if self.compression: - with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2: - if isinstance(f, AbstractBufferedFile): - # want no type of caching if just downloading whole thing - f.cache = BaseCache(0, f.cache.fetcher, f.size) - comp = ( - infer_compression(path) - if self.compression == "infer" - else self.compression - ) - f = compr[comp](f, mode="rb") - data = True - while data: - block = getattr(f, "blocksize", 5 * 2**20) - data = f.read(block) - f2.write(data) - else: - self.fs.get_file(path, fn) - self.save_cache() - return self._open(path, mode) - - -class SimpleCacheFileSystem(WholeFileCacheFileSystem): - """Caches whole remote files on first access - - This class is intended as a layer over any other file system, and - will make a local copy of each file accessed, so that all subsequent - reads are local. This implementation only copies whole files, and - does not keep any metadata about the download time or file details. - It is therefore safer to use in multi-threaded/concurrent situations. - - This is the only of the caching filesystems that supports write: you will - be given a real local open file, and upon close and commit, it will be - uploaded to the target filesystem; the writability or the target URL is - not checked until that time. - - """ - - protocol = "simplecache" - local_file = True - - def __init__(self, **kwargs): - kw = kwargs.copy() - for key in ["cache_check", "expiry_time", "check_files"]: - kw[key] = False - super().__init__(**kw) - for storage in self.storage: - if not os.path.exists(storage): - os.makedirs(storage, exist_ok=True) - - def _check_file(self, path): - self._check_cache() - sha = self._mapper(path) - for storage in self.storage: - fn = os.path.join(storage, sha) - if os.path.exists(fn): - return fn - - def save_cache(self): - pass - - def load_cache(self): - pass - - def _open(self, path, mode="rb", **kwargs): - path = self._strip_protocol(path) - - if "r" not in mode: - return LocalTempFile(self, path, mode=mode) - fn = self._check_file(path) - if fn: - return open(fn, mode) - - sha = self._mapper(path) - fn = os.path.join(self.storage[-1], sha) - logger.debug("Copying %s to local cache" % path) - kwargs["mode"] = mode - - self._mkcache() - if self.compression: - with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2: - if isinstance(f, AbstractBufferedFile): - # want no type of caching if just downloading whole thing - f.cache = BaseCache(0, f.cache.fetcher, f.size) - comp = ( - infer_compression(path) - if self.compression == "infer" - else self.compression - ) - f = compr[comp](f, mode="rb") - data = True - while data: - block = getattr(f, "blocksize", 5 * 2**20) - data = f.read(block) - f2.write(data) - else: - self.fs.get_file(path, fn) - return self._open(path, mode) - - -class LocalTempFile: - """A temporary local file, which will be uploaded on commit""" - - def __init__(self, fs, path, fn=None, mode="wb", autocommit=True, seek=0): - if fn: - self.fn = fn - self.fh = open(fn, mode) - else: - fd, self.fn = tempfile.mkstemp() - self.fh = open(fd, mode) - self.mode = mode - if seek: - self.fh.seek(seek) - self.path = path - self.fs = fs - self.closed = False - self.autocommit = autocommit - - def __reduce__(self): - # always open in rb+ to allow continuing writing at a location - return ( - LocalTempFile, - (self.fs, self.path, self.fn, "rb+", self.autocommit, self.tell()), - ) - - def __enter__(self): - return self.fh - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - - def close(self): - if self.closed: - return - self.fh.close() - self.closed = True - if self.autocommit: - self.commit() - - def discard(self): - self.fh.close() - os.remove(self.fn) - - def commit(self): - self.fs.put(self.fn, self.path) - try: - os.remove(self.fn) - except (PermissionError, FileNotFoundError): - # file path may be held by new version of the file on windows - pass - - @property - def name(self): - return self.fn - - def __getattr__(self, item): - return getattr(self.fh, item) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/tests/abstract/put.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/tests/abstract/put.py deleted file mode 100644 index 1d5f2c7beb9e035e727766241509db367981bc81..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/tests/abstract/put.py +++ /dev/null @@ -1,577 +0,0 @@ -from hashlib import md5 -from itertools import product - -import pytest - -from fsspec.tests.abstract.common import GLOB_EDGE_CASES_TESTS - - -class AbstractPutTests: - def test_put_file_to_existing_directory( - self, - fs, - fs_join, - fs_target, - local_join, - local_bulk_operations_scenario_0, - supports_empty_directories, - ): - # Copy scenario 1a - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - if not supports_empty_directories: - # Force target directory to exist by adding a dummy file - fs.touch(fs_join(target, "dummy")) - assert fs.isdir(target) - - target_file2 = fs_join(target, "file2") - target_subfile1 = fs_join(target, "subfile1") - - # Copy from source directory - fs.put(local_join(source, "file2"), target) - assert fs.isfile(target_file2) - - # Copy from sub directory - fs.put(local_join(source, "subdir", "subfile1"), target) - assert fs.isfile(target_subfile1) - - # Remove copied files - fs.rm([target_file2, target_subfile1]) - assert not fs.exists(target_file2) - assert not fs.exists(target_subfile1) - - # Repeat with trailing slash on target - fs.put(local_join(source, "file2"), target + "/") - assert fs.isdir(target) - assert fs.isfile(target_file2) - - fs.put(local_join(source, "subdir", "subfile1"), target + "/") - assert fs.isfile(target_subfile1) - - def test_put_file_to_new_directory( - self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0 - ): - # Copy scenario 1b - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - - fs.put( - local_join(source, "subdir", "subfile1"), fs_join(target, "newdir/") - ) # Note trailing slash - assert fs.isdir(target) - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "subfile1")) - - def test_put_file_to_file_in_existing_directory( - self, - fs, - fs_join, - fs_target, - local_join, - supports_empty_directories, - local_bulk_operations_scenario_0, - ): - # Copy scenario 1c - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - if not supports_empty_directories: - # Force target directory to exist by adding a dummy file - fs.touch(fs_join(target, "dummy")) - assert fs.isdir(target) - - fs.put(local_join(source, "subdir", "subfile1"), fs_join(target, "newfile")) - assert fs.isfile(fs_join(target, "newfile")) - - def test_put_file_to_file_in_new_directory( - self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0 - ): - # Copy scenario 1d - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - - fs.put( - local_join(source, "subdir", "subfile1"), - fs_join(target, "newdir", "newfile"), - ) - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "newfile")) - - def test_put_directory_to_existing_directory( - self, - fs, - fs_join, - fs_target, - local_bulk_operations_scenario_0, - supports_empty_directories, - ): - # Copy scenario 1e - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - if not supports_empty_directories: - # Force target directory to exist by adding a dummy file - dummy = fs_join(target, "dummy") - fs.touch(dummy) - assert fs.isdir(target) - - for source_slash, target_slash in zip([False, True], [False, True]): - s = fs_join(source, "subdir") - if source_slash: - s += "/" - t = target + "/" if target_slash else target - - # Without recursive does nothing - fs.put(s, t) - assert fs.ls(target) == ([] if supports_empty_directories else [dummy]) - - # With recursive - fs.put(s, t, recursive=True) - if source_slash: - assert fs.isfile(fs_join(target, "subfile1")) - assert fs.isfile(fs_join(target, "subfile2")) - assert fs.isdir(fs_join(target, "nesteddir")) - assert fs.isfile(fs_join(target, "nesteddir", "nestedfile")) - assert not fs.exists(fs_join(target, "subdir")) - - fs.rm( - [ - fs_join(target, "subfile1"), - fs_join(target, "subfile2"), - fs_join(target, "nesteddir"), - ], - recursive=True, - ) - else: - assert fs.isdir(fs_join(target, "subdir")) - assert fs.isfile(fs_join(target, "subdir", "subfile1")) - assert fs.isfile(fs_join(target, "subdir", "subfile2")) - assert fs.isdir(fs_join(target, "subdir", "nesteddir")) - assert fs.isfile(fs_join(target, "subdir", "nesteddir", "nestedfile")) - - fs.rm(fs_join(target, "subdir"), recursive=True) - assert fs.ls(target) == ([] if supports_empty_directories else [dummy]) - - # Limit recursive by maxdepth - fs.put(s, t, recursive=True, maxdepth=1) - if source_slash: - assert fs.isfile(fs_join(target, "subfile1")) - assert fs.isfile(fs_join(target, "subfile2")) - assert not fs.exists(fs_join(target, "nesteddir")) - assert not fs.exists(fs_join(target, "subdir")) - - fs.rm( - [ - fs_join(target, "subfile1"), - fs_join(target, "subfile2"), - ], - recursive=True, - ) - else: - assert fs.isdir(fs_join(target, "subdir")) - assert fs.isfile(fs_join(target, "subdir", "subfile1")) - assert fs.isfile(fs_join(target, "subdir", "subfile2")) - assert not fs.exists(fs_join(target, "subdir", "nesteddir")) - - fs.rm(fs_join(target, "subdir"), recursive=True) - assert fs.ls(target) == ([] if supports_empty_directories else [dummy]) - - def test_put_directory_to_new_directory( - self, - fs, - fs_join, - fs_target, - local_bulk_operations_scenario_0, - supports_empty_directories, - ): - # Copy scenario 1f - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - - for source_slash, target_slash in zip([False, True], [False, True]): - s = fs_join(source, "subdir") - if source_slash: - s += "/" - t = fs_join(target, "newdir") - if target_slash: - t += "/" - - # Without recursive does nothing - fs.put(s, t) - if supports_empty_directories: - assert fs.ls(target) == [] - else: - with pytest.raises(FileNotFoundError): - fs.ls(target) - - # With recursive - fs.put(s, t, recursive=True) - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "subfile1")) - assert fs.isfile(fs_join(target, "newdir", "subfile2")) - assert fs.isdir(fs_join(target, "newdir", "nesteddir")) - assert fs.isfile(fs_join(target, "newdir", "nesteddir", "nestedfile")) - assert not fs.exists(fs_join(target, "subdir")) - - fs.rm(fs_join(target, "newdir"), recursive=True) - assert not fs.exists(fs_join(target, "newdir")) - - # Limit recursive by maxdepth - fs.put(s, t, recursive=True, maxdepth=1) - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "subfile1")) - assert fs.isfile(fs_join(target, "newdir", "subfile2")) - assert not fs.exists(fs_join(target, "newdir", "nesteddir")) - assert not fs.exists(fs_join(target, "subdir")) - - fs.rm(fs_join(target, "newdir"), recursive=True) - assert not fs.exists(fs_join(target, "newdir")) - - def test_put_glob_to_existing_directory( - self, - fs, - fs_join, - fs_target, - local_join, - supports_empty_directories, - local_bulk_operations_scenario_0, - ): - # Copy scenario 1g - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - if not supports_empty_directories: - # Force target directory to exist by adding a dummy file - dummy = fs_join(target, "dummy") - fs.touch(dummy) - assert fs.isdir(target) - - for target_slash in [False, True]: - t = target + "/" if target_slash else target - - # Without recursive - fs.put(local_join(source, "subdir", "*"), t) - assert fs.isfile(fs_join(target, "subfile1")) - assert fs.isfile(fs_join(target, "subfile2")) - assert not fs.isdir(fs_join(target, "nesteddir")) - assert not fs.exists(fs_join(target, "nesteddir", "nestedfile")) - assert not fs.exists(fs_join(target, "subdir")) - - fs.rm( - [ - fs_join(target, "subfile1"), - fs_join(target, "subfile2"), - ], - recursive=True, - ) - assert fs.ls(target) == ([] if supports_empty_directories else [dummy]) - - # With recursive - for glob, recursive in zip(["*", "**"], [True, False]): - fs.put(local_join(source, "subdir", glob), t, recursive=recursive) - assert fs.isfile(fs_join(target, "subfile1")) - assert fs.isfile(fs_join(target, "subfile2")) - assert fs.isdir(fs_join(target, "nesteddir")) - assert fs.isfile(fs_join(target, "nesteddir", "nestedfile")) - assert not fs.exists(fs_join(target, "subdir")) - - fs.rm( - [ - fs_join(target, "subfile1"), - fs_join(target, "subfile2"), - fs_join(target, "nesteddir"), - ], - recursive=True, - ) - assert fs.ls(target) == ([] if supports_empty_directories else [dummy]) - - # Limit recursive by maxdepth - fs.put( - local_join(source, "subdir", glob), - t, - recursive=recursive, - maxdepth=1, - ) - assert fs.isfile(fs_join(target, "subfile1")) - assert fs.isfile(fs_join(target, "subfile2")) - assert not fs.exists(fs_join(target, "nesteddir")) - assert not fs.exists(fs_join(target, "subdir")) - - fs.rm( - [ - fs_join(target, "subfile1"), - fs_join(target, "subfile2"), - ], - recursive=True, - ) - assert fs.ls(target) == ([] if supports_empty_directories else [dummy]) - - def test_put_glob_to_new_directory( - self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0 - ): - # Copy scenario 1h - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - - for target_slash in [False, True]: - t = fs_join(target, "newdir") - if target_slash: - t += "/" - - # Without recursive - fs.put(local_join(source, "subdir", "*"), t) - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "subfile1")) - assert fs.isfile(fs_join(target, "newdir", "subfile2")) - assert not fs.exists(fs_join(target, "newdir", "nesteddir")) - assert not fs.exists(fs_join(target, "newdir", "nesteddir", "nestedfile")) - assert not fs.exists(fs_join(target, "subdir")) - assert not fs.exists(fs_join(target, "newdir", "subdir")) - - fs.rm(fs_join(target, "newdir"), recursive=True) - assert not fs.exists(fs_join(target, "newdir")) - - # With recursive - for glob, recursive in zip(["*", "**"], [True, False]): - fs.put(local_join(source, "subdir", glob), t, recursive=recursive) - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "subfile1")) - assert fs.isfile(fs_join(target, "newdir", "subfile2")) - assert fs.isdir(fs_join(target, "newdir", "nesteddir")) - assert fs.isfile(fs_join(target, "newdir", "nesteddir", "nestedfile")) - assert not fs.exists(fs_join(target, "subdir")) - assert not fs.exists(fs_join(target, "newdir", "subdir")) - - fs.rm(fs_join(target, "newdir"), recursive=True) - assert not fs.exists(fs_join(target, "newdir")) - - # Limit recursive by maxdepth - fs.put( - local_join(source, "subdir", glob), - t, - recursive=recursive, - maxdepth=1, - ) - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "subfile1")) - assert fs.isfile(fs_join(target, "newdir", "subfile2")) - assert not fs.exists(fs_join(target, "newdir", "nesteddir")) - assert not fs.exists(fs_join(target, "subdir")) - assert not fs.exists(fs_join(target, "newdir", "subdir")) - - fs.rm(fs_join(target, "newdir"), recursive=True) - assert not fs.exists(fs_join(target, "newdir")) - - @pytest.mark.parametrize( - GLOB_EDGE_CASES_TESTS["argnames"], - GLOB_EDGE_CASES_TESTS["argvalues"], - ) - def test_put_glob_edge_cases( - self, - path, - recursive, - maxdepth, - expected, - fs, - fs_join, - fs_target, - local_glob_edge_cases_files, - local_join, - fs_sanitize_path, - ): - # Copy scenario 1g - source = local_glob_edge_cases_files - - target = fs_target - - for new_dir, target_slash in product([True, False], [True, False]): - fs.mkdir(target) - - t = fs_join(target, "newdir") if new_dir else target - t = t + "/" if target_slash else t - - fs.put(local_join(source, path), t, recursive=recursive, maxdepth=maxdepth) - - output = fs.find(target) - if new_dir: - prefixed_expected = [ - fs_sanitize_path(fs_join(target, "newdir", p)) for p in expected - ] - else: - prefixed_expected = [ - fs_sanitize_path(fs_join(target, p)) for p in expected - ] - assert sorted(output) == sorted(prefixed_expected) - - try: - fs.rm(target, recursive=True) - except FileNotFoundError: - pass - - def test_put_list_of_files_to_existing_directory( - self, - fs, - fs_join, - fs_target, - local_join, - local_bulk_operations_scenario_0, - supports_empty_directories, - ): - # Copy scenario 2a - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - if not supports_empty_directories: - # Force target directory to exist by adding a dummy file - dummy = fs_join(target, "dummy") - fs.touch(dummy) - assert fs.isdir(target) - - source_files = [ - local_join(source, "file1"), - local_join(source, "file2"), - local_join(source, "subdir", "subfile1"), - ] - - for target_slash in [False, True]: - t = target + "/" if target_slash else target - - fs.put(source_files, t) - assert fs.isfile(fs_join(target, "file1")) - assert fs.isfile(fs_join(target, "file2")) - assert fs.isfile(fs_join(target, "subfile1")) - - fs.rm( - [ - fs_join(target, "file1"), - fs_join(target, "file2"), - fs_join(target, "subfile1"), - ], - recursive=True, - ) - assert fs.ls(target) == ([] if supports_empty_directories else [dummy]) - - def test_put_list_of_files_to_new_directory( - self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0 - ): - # Copy scenario 2b - source = local_bulk_operations_scenario_0 - - target = fs_target - fs.mkdir(target) - - source_files = [ - local_join(source, "file1"), - local_join(source, "file2"), - local_join(source, "subdir", "subfile1"), - ] - - fs.put(source_files, fs_join(target, "newdir") + "/") # Note trailing slash - assert fs.isdir(fs_join(target, "newdir")) - assert fs.isfile(fs_join(target, "newdir", "file1")) - assert fs.isfile(fs_join(target, "newdir", "file2")) - assert fs.isfile(fs_join(target, "newdir", "subfile1")) - - def test_put_directory_recursive( - self, fs, fs_join, fs_target, local_fs, local_join, local_path - ): - # https://github.com/fsspec/filesystem_spec/issues/1062 - # Recursive cp/get/put of source directory into non-existent target directory. - src = local_join(local_path, "src") - src_file = local_join(src, "file") - local_fs.mkdir(src) - local_fs.touch(src_file) - - target = fs_target - - # put without slash - assert not fs.exists(target) - for loop in range(2): - fs.put(src, target, recursive=True) - assert fs.isdir(target) - - if loop == 0: - assert fs.isfile(fs_join(target, "file")) - assert not fs.exists(fs_join(target, "src")) - else: - assert fs.isfile(fs_join(target, "file")) - assert fs.isdir(fs_join(target, "src")) - assert fs.isfile(fs_join(target, "src", "file")) - - fs.rm(target, recursive=True) - - # put with slash - assert not fs.exists(target) - for loop in range(2): - fs.put(src + "/", target, recursive=True) - assert fs.isdir(target) - assert fs.isfile(fs_join(target, "file")) - assert not fs.exists(fs_join(target, "src")) - - def test_put_directory_without_files_with_same_name_prefix( - self, - fs, - fs_join, - fs_target, - local_join, - local_dir_and_file_with_same_name_prefix, - supports_empty_directories, - ): - # Create the test dirs - source = local_dir_and_file_with_same_name_prefix - target = fs_target - - # Test without glob - fs.put(local_join(source, "subdir"), fs_target, recursive=True) - - assert fs.isfile(fs_join(fs_target, "subfile.txt")) - assert not fs.isfile(fs_join(fs_target, "subdir.txt")) - - fs.rm([fs_join(target, "subfile.txt")]) - if supports_empty_directories: - assert fs.ls(target) == [] - else: - assert not fs.exists(target) - - # Test with glob - fs.put(local_join(source, "subdir*"), fs_target, recursive=True) - - assert fs.isdir(fs_join(fs_target, "subdir")) - assert fs.isfile(fs_join(fs_target, "subdir", "subfile.txt")) - assert fs.isfile(fs_join(fs_target, "subdir.txt")) - - def test_copy_with_source_and_destination_as_list( - self, fs, fs_target, fs_join, local_join, local_10_files_with_hashed_names - ): - # Create the test dir - source = local_10_files_with_hashed_names - target = fs_target - - # Create list of files for source and destination - source_files = [] - destination_files = [] - for i in range(10): - hashed_i = md5(str(i).encode("utf-8")).hexdigest() - source_files.append(local_join(source, f"{hashed_i}.txt")) - destination_files.append(fs_join(target, f"{hashed_i}.txt")) - - # Copy and assert order was kept - fs.put(lpath=source_files, rpath=destination_files) - - for i in range(10): - file_content = fs.cat(destination_files[i]).decode("utf-8") - assert file_content == str(i) diff --git a/spaces/josedolot/HybridNet_Demo2/train.py b/spaces/josedolot/HybridNet_Demo2/train.py deleted file mode 100644 index cb2a161c46b01721d75da7f92f06705be2a6d081..0000000000000000000000000000000000000000 --- a/spaces/josedolot/HybridNet_Demo2/train.py +++ /dev/null @@ -1,362 +0,0 @@ -import argparse -import datetime -import os -import traceback - -import numpy as np -import torch -from tensorboardX import SummaryWriter -from torch import nn -from torchvision import transforms -from tqdm.autonotebook import tqdm - -from val import val -from backbone import HybridNetsBackbone -from hybridnets.loss import FocalLoss -from utils.sync_batchnorm import patch_replication_callback -from utils.utils import replace_w_sync_bn, CustomDataParallel, get_last_weights, init_weights, boolean_string, \ - save_checkpoint, DataLoaderX, Params -from hybridnets.dataset import BddDataset -from hybridnets.loss import FocalLossSeg, TverskyLoss -from hybridnets.autoanchor import run_anchor - - -def get_args(): - parser = argparse.ArgumentParser('HybridNets: End-to-End Perception Network - DatVu') - parser.add_argument('-p', '--project', type=str, default='bdd100k', help='Project file that contains parameters') - parser.add_argument('-c', '--compound_coef', type=int, default=3, help='Coefficient of efficientnet backbone') - parser.add_argument('-n', '--num_workers', type=int, default=12, help='Num_workers of dataloader') - parser.add_argument('-b', '--batch_size', type=int, default=12, help='Number of images per batch among all devices') - parser.add_argument('--freeze_backbone', type=boolean_string, default=False, - help='Freeze encoder and neck (effnet and bifpn)') - parser.add_argument('--freeze_det', type=boolean_string, default=False, - help='Freeze detection head') - parser.add_argument('--freeze_seg', type=boolean_string, default=False, - help='Freeze segmentation head') - parser.add_argument('--lr', type=float, default=1e-4) - parser.add_argument('--optim', type=str, default='adamw', help='Select optimizer for training, ' - 'suggest using \'admaw\' until the' - ' very final stage then switch to \'sgd\'') - parser.add_argument('--num_epochs', type=int, default=500) - parser.add_argument('--val_interval', type=int, default=1, help='Number of epoches between valing phases') - parser.add_argument('--save_interval', type=int, default=500, help='Number of steps between saving') - parser.add_argument('--es_min_delta', type=float, default=0.0, - help='Early stopping\'s parameter: minimum change loss to qualify as an improvement') - parser.add_argument('--es_patience', type=int, default=0, - help='Early stopping\'s parameter: number of epochs with no improvement after which ' - 'training will be stopped. Set to 0 to disable this technique') - parser.add_argument('--data_path', type=str, default='datasets/', help='The root folder of dataset') - parser.add_argument('--log_path', type=str, default='checkpoints/') - parser.add_argument('-w', '--load_weights', type=str, default=None, - help='Whether to load weights from a checkpoint, set None to initialize,' - 'set \'last\' to load last checkpoint') - parser.add_argument('--saved_path', type=str, default='checkpoints/') - parser.add_argument('--debug', type=boolean_string, default=False, - help='Whether visualize the predicted boxes of training, ' - 'the output images will be in test/') - parser.add_argument('--cal_map', type=boolean_string, default=True, - help='Calculate mAP in validation') - parser.add_argument('-v', '--verbose', type=boolean_string, default=True, - help='Whether to print results per class when valing') - parser.add_argument('--plots', type=boolean_string, default=True, - help='Whether to plot confusion matrix when valing') - parser.add_argument('--num_gpus', type=int, default=1, - help='Number of GPUs to be used (0 to use CPU)') - - args = parser.parse_args() - return args - - -class ModelWithLoss(nn.Module): - def __init__(self, model, debug=False): - super().__init__() - self.criterion = FocalLoss() - self.seg_criterion1 = TverskyLoss(mode='multilabel', alpha=0.7, beta=0.3, gamma=4.0 / 3, from_logits=False) - self.seg_criterion2 = FocalLossSeg(mode='multilabel', alpha=0.25) - self.model = model - self.debug = debug - - def forward(self, imgs, annotations, seg_annot, obj_list=None): - _, regression, classification, anchors, segmentation = self.model(imgs) - - if self.debug: - cls_loss, reg_loss = self.criterion(classification, regression, anchors, annotations, - imgs=imgs, obj_list=obj_list) - tversky_loss = self.seg_criterion1(segmentation, seg_annot) - focal_loss = self.seg_criterion2(segmentation, seg_annot) - else: - cls_loss, reg_loss = self.criterion(classification, regression, anchors, annotations) - tversky_loss = self.seg_criterion1(segmentation, seg_annot) - focal_loss = self.seg_criterion2(segmentation, seg_annot) - - # Visualization - # seg_0 = seg_annot[0] - # # print('bbb', seg_0.shape) - # seg_0 = torch.argmax(seg_0, dim = 0) - # # print('before', seg_0.shape) - # seg_0 = seg_0.cpu().numpy() - # #.transpose(1, 2, 0) - # print(seg_0.shape) - # - # anh = np.zeros((384,640,3)) - # - # anh[seg_0 == 0] = (255,0,0) - # anh[seg_0 == 1] = (0,255,0) - # anh[seg_0 == 2] = (0,0,255) - # - # anh = np.uint8(anh) - # - # cv2.imwrite('anh.jpg',anh) - - seg_loss = tversky_loss + 1 * focal_loss - # print("TVERSKY", tversky_loss) - # print("FOCAL", focal_loss) - - return cls_loss, reg_loss, seg_loss, regression, classification, anchors, segmentation - - -def train(opt): - params = Params(f'projects/{opt.project}.yml') - - if opt.num_gpus == 0: - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' - - if torch.cuda.is_available(): - torch.cuda.manual_seed(42) - else: - torch.manual_seed(42) - - opt.saved_path = opt.saved_path + f'/{params.project_name}/' - opt.log_path = opt.log_path + f'/{params.project_name}/tensorboard/' - os.makedirs(opt.log_path, exist_ok=True) - os.makedirs(opt.saved_path, exist_ok=True) - - train_dataset = BddDataset( - params=params, - is_train=True, - inputsize=params.model['image_size'], - transform=transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize( - mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] - ) - ]) - ) - - training_generator = DataLoaderX( - train_dataset, - batch_size=opt.batch_size, - shuffle=True, - num_workers=opt.num_workers, - pin_memory=params.pin_memory, - collate_fn=BddDataset.collate_fn - ) - - valid_dataset = BddDataset( - params=params, - is_train=False, - inputsize=params.model['image_size'], - transform=transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize( - mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] - ) - ]) - ) - - val_generator = DataLoaderX( - valid_dataset, - batch_size=opt.batch_size, - shuffle=False, - num_workers=opt.num_workers, - pin_memory=params.pin_memory, - collate_fn=BddDataset.collate_fn - ) - - if params.need_autoanchor: - params.anchors_scales, params.anchors_ratios = run_anchor(None, train_dataset) - - model = HybridNetsBackbone(num_classes=len(params.obj_list), compound_coef=opt.compound_coef, - ratios=eval(params.anchors_ratios), scales=eval(params.anchors_scales), - seg_classes=len(params.seg_list)) - - # load last weights - ckpt = {} - # last_step = None - if opt.load_weights: - if opt.load_weights.endswith('.pth'): - weights_path = opt.load_weights - else: - weights_path = get_last_weights(opt.saved_path) - # try: - # last_step = int(os.path.basename(weights_path).split('_')[-1].split('.')[0]) - # except: - # last_step = 0 - - try: - ckpt = torch.load(weights_path) - model.load_state_dict(ckpt.get('model', ckpt), strict=False) - except RuntimeError as e: - print(f'[Warning] Ignoring {e}') - print( - '[Warning] Don\'t panic if you see this, this might be because you load a pretrained weights with different number of classes. The rest of the weights should be loaded already.') - else: - print('[Info] initializing weights...') - init_weights(model) - - print('[Info] Successfully!!!') - - if opt.freeze_backbone: - def freeze_backbone(m): - classname = m.__class__.__name__ - if classname in ['EfficientNetEncoder', 'BiFPN']: # replace backbone classname when using another backbone - print("[Info] freezing {}".format(classname)) - for param in m.parameters(): - param.requires_grad = False - model.apply(freeze_backbone) - print('[Info] freezed backbone') - - if opt.freeze_det: - def freeze_det(m): - classname = m.__class__.__name__ - if classname in ['Regressor', 'Classifier', 'Anchors']: - print("[Info] freezing {}".format(classname)) - for param in m.parameters(): - param.requires_grad = False - model.apply(freeze_det) - print('[Info] freezed detection head') - - if opt.freeze_seg: - def freeze_seg(m): - classname = m.__class__.__name__ - if classname in ['BiFPNDecoder', 'SegmentationHead']: - print("[Info] freezing {}".format(classname)) - for param in m.parameters(): - param.requires_grad = False - model.apply(freeze_seg) - print('[Info] freezed segmentation head') - - # https://github.com/vacancy/Synchronized-BatchNorm-PyTorch - # apply sync_bn when using multiple gpu and batch_size per gpu is lower than 4 - # useful when gpu memory is limited. - # because when bn is disable, the training will be very unstable or slow to converge, - # apply sync_bn can solve it, - # by packing all mini-batch across all gpus as one batch and normalize, then send it back to all gpus. - # but it would also slow down the training by a little bit. - if opt.num_gpus > 1 and opt.batch_size // opt.num_gpus < 4: - model.apply(replace_w_sync_bn) - use_sync_bn = True - else: - use_sync_bn = False - - writer = SummaryWriter(opt.log_path + f'/{datetime.datetime.now().strftime("%Y%m%d-%H%M%S")}/') - - # wrap the model with loss function, to reduce the memory usage on gpu0 and speedup - model = ModelWithLoss(model, debug=opt.debug) - - if opt.num_gpus > 0: - model = model.cuda() - if opt.num_gpus > 1: - model = CustomDataParallel(model, opt.num_gpus) - if use_sync_bn: - patch_replication_callback(model) - - if opt.optim == 'adamw': - optimizer = torch.optim.AdamW(model.parameters(), opt.lr) - else: - optimizer = torch.optim.SGD(model.parameters(), opt.lr, momentum=0.9, nesterov=True) - # print(ckpt) - if opt.load_weights is not None and ckpt.get('optimizer', None): - optimizer.load_state_dict(ckpt['optimizer']) - - scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3, verbose=True) - - epoch = 0 - best_loss = 1e5 - best_epoch = 0 - last_step = ckpt['step'] if opt.load_weights is not None and ckpt.get('step', None) else 0 - best_fitness = ckpt['best_fitness'] if opt.load_weights is not None and ckpt.get('best_fitness', None) else 0 - step = max(0, last_step) - model.train() - - num_iter_per_epoch = len(training_generator) - try: - for epoch in range(opt.num_epochs): - last_epoch = step // num_iter_per_epoch - if epoch < last_epoch: - continue - - epoch_loss = [] - progress_bar = tqdm(training_generator) - for iter, data in enumerate(progress_bar): - if iter < step - last_epoch * num_iter_per_epoch: - progress_bar.update() - continue - try: - imgs = data['img'] - annot = data['annot'] - seg_annot = data['segmentation'] - - if opt.num_gpus == 1: - # if only one gpu, just send it to cuda:0 - # elif multiple gpus, send it to multiple gpus in CustomDataParallel, not here - imgs = imgs.cuda() - annot = annot.cuda() - seg_annot = seg_annot.cuda().long() - - optimizer.zero_grad() - cls_loss, reg_loss, seg_loss, regression, classification, anchors, segmentation = model(imgs, annot, - seg_annot, - obj_list=params.obj_list) - cls_loss = cls_loss.mean() - reg_loss = reg_loss.mean() - seg_loss = seg_loss.mean() - - loss = cls_loss + reg_loss + seg_loss - if loss == 0 or not torch.isfinite(loss): - continue - - loss.backward() - # torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1) - optimizer.step() - - epoch_loss.append(float(loss)) - - progress_bar.set_description( - 'Step: {}. Epoch: {}/{}. Iteration: {}/{}. Cls loss: {:.5f}. Reg loss: {:.5f}. Seg loss: {:.5f}. Total loss: {:.5f}'.format( - step, epoch, opt.num_epochs, iter + 1, num_iter_per_epoch, cls_loss.item(), - reg_loss.item(), seg_loss.item(), loss.item())) - writer.add_scalars('Loss', {'train': loss}, step) - writer.add_scalars('Regression_loss', {'train': reg_loss}, step) - writer.add_scalars('Classfication_loss', {'train': cls_loss}, step) - writer.add_scalars('Segmentation_loss', {'train': seg_loss}, step) - - # log learning_rate - current_lr = optimizer.param_groups[0]['lr'] - writer.add_scalar('learning_rate', current_lr, step) - - step += 1 - - if step % opt.save_interval == 0 and step > 0: - save_checkpoint(model, opt.saved_path, f'hybridnets-d{opt.compound_coef}_{epoch}_{step}.pth') - print('checkpoint...') - - except Exception as e: - print('[Error]', traceback.format_exc()) - print(e) - continue - - scheduler.step(np.mean(epoch_loss)) - - if epoch % opt.val_interval == 0: - best_fitness, best_loss, best_epoch = val(model, optimizer, val_generator, params, opt, writer, epoch, - step, best_fitness, best_loss, best_epoch) - except KeyboardInterrupt: - save_checkpoint(model, opt.saved_path, f'hybridnets-d{opt.compound_coef}_{epoch}_{step}.pth') - finally: - writer.close() - - -if __name__ == '__main__': - opt = get_args() - train(opt) diff --git a/spaces/justest/gpt4free/g4f/.v1/testing/you_test.py b/spaces/justest/gpt4free/g4f/.v1/testing/you_test.py deleted file mode 100644 index 1e9f620507a3bb4ff5e546cf693cfe3764ac437f..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/.v1/testing/you_test.py +++ /dev/null @@ -1,27 +0,0 @@ -from gpt4free import you - -# simple request with links and details -response = you.Completion.create(prompt="hello world", detailed=True, include_links=True) - -print(response) - -# { -# "response": "...", -# "links": [...], -# "extra": {...}, -# "slots": {...} -# } -# } - -# chatbot - -chat = [] - -while True: - prompt = input("You: ") - - response = you.Completion.create(prompt=prompt, chat=chat) - - print("Bot:", response.text) - - chat.append({"question": prompt, "answer": response.text}) diff --git a/spaces/kcagle/AutoGPT/autogpt/memory/__init__.py b/spaces/kcagle/AutoGPT/autogpt/memory/__init__.py deleted file mode 100644 index 3d18704c70dfc287642b1923e6f2e1f72a5f2a62..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/memory/__init__.py +++ /dev/null @@ -1,99 +0,0 @@ -from autogpt.memory.local import LocalCache -from autogpt.memory.no_memory import NoMemory - -# List of supported memory backends -# Add a backend to this list if the import attempt is successful -supported_memory = ["local", "no_memory"] - -try: - from autogpt.memory.redismem import RedisMemory - - supported_memory.append("redis") -except ImportError: - # print("Redis not installed. Skipping import.") - RedisMemory = None - -try: - from autogpt.memory.pinecone import PineconeMemory - - supported_memory.append("pinecone") -except ImportError: - # print("Pinecone not installed. Skipping import.") - PineconeMemory = None - -try: - from autogpt.memory.weaviate import WeaviateMemory - - supported_memory.append("weaviate") -except ImportError: - # print("Weaviate not installed. Skipping import.") - WeaviateMemory = None - -try: - from autogpt.memory.milvus import MilvusMemory - - supported_memory.append("milvus") -except ImportError: - # print("pymilvus not installed. Skipping import.") - MilvusMemory = None - - -def get_memory(cfg, init=False): - memory = None - if cfg.memory_backend == "pinecone": - if not PineconeMemory: - print( - "Error: Pinecone is not installed. Please install pinecone" - " to use Pinecone as a memory backend." - ) - else: - memory = PineconeMemory(cfg) - if init: - memory.clear() - elif cfg.memory_backend == "redis": - if not RedisMemory: - print( - "Error: Redis is not installed. Please install redis-py to" - " use Redis as a memory backend." - ) - else: - memory = RedisMemory(cfg) - elif cfg.memory_backend == "weaviate": - if not WeaviateMemory: - print( - "Error: Weaviate is not installed. Please install weaviate-client to" - " use Weaviate as a memory backend." - ) - else: - memory = WeaviateMemory(cfg) - elif cfg.memory_backend == "milvus": - if not MilvusMemory: - print( - "Error: Milvus sdk is not installed." - "Please install pymilvus to use Milvus as memory backend." - ) - else: - memory = MilvusMemory(cfg) - elif cfg.memory_backend == "no_memory": - memory = NoMemory(cfg) - - if memory is None: - memory = LocalCache(cfg) - if init: - memory.clear() - return memory - - -def get_supported_memory_backends(): - return supported_memory - - -__all__ = [ - "get_memory", - "LocalCache", - "RedisMemory", - "PineconeMemory", - "NoMemory", - "MilvusMemory", - "WeaviateMemory", -] diff --git a/spaces/keras-io/AdaIN/README.md b/spaces/keras-io/AdaIN/README.md deleted file mode 100644 index 202fea972e903ad599bd5cea634d1f5d096315dd..0000000000000000000000000000000000000000 --- a/spaces/keras-io/AdaIN/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Neural Style Transfer using AdaIN -emoji: 🎨 -colorFrom: red -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/keras-io/collaborative-filtering-movielens/README.md b/spaces/keras-io/collaborative-filtering-movielens/README.md deleted file mode 100644 index 6d01e06d51fdd3cf53de1705d35f4f02ed13e48d..0000000000000000000000000000000000000000 --- a/spaces/keras-io/collaborative-filtering-movielens/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Collaborative Filtering Movielens -emoji: 👁 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.10 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/configs/base.py b/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/configs/base.py deleted file mode 100644 index 78e4b36a9142b649ec39a8c59331bb2557f2ad57..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/configs/base.py +++ /dev/null @@ -1,56 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = "ms1mv3_arcface_r50" - -config.dataset = "ms1m-retinaface-t1" -config.embedding_size = 512 -config.sample_rate = 1 -config.fp16 = False -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -if config.dataset == "emore": - config.rec = "/train_tmp/faces_emore" - config.num_classes = 85742 - config.num_image = 5822653 - config.num_epoch = 16 - config.warmup_epoch = -1 - config.decay_epoch = [8, 14, ] - config.val_targets = ["lfw", ] - -elif config.dataset == "ms1m-retinaface-t1": - config.rec = "/train_tmp/ms1m-retinaface-t1" - config.num_classes = 93431 - config.num_image = 5179510 - config.num_epoch = 25 - config.warmup_epoch = -1 - config.decay_epoch = [11, 17, 22] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] - -elif config.dataset == "glint360k": - config.rec = "/train_tmp/glint360k" - config.num_classes = 360232 - config.num_image = 17091657 - config.num_epoch = 20 - config.warmup_epoch = -1 - config.decay_epoch = [8, 12, 15, 18] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] - -elif config.dataset == "webface": - config.rec = "/train_tmp/faces_webface_112x112" - config.num_classes = 10572 - config.num_image = "forget" - config.num_epoch = 34 - config.warmup_epoch = -1 - config.decay_epoch = [20, 28, 32] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/kevkev05/Chat-To-Sequence/README.md b/spaces/kevkev05/Chat-To-Sequence/README.md deleted file mode 100644 index e648d7e15cf8853278302a568b984a9d5ea950bf..0000000000000000000000000000000000000000 --- a/spaces/kevkev05/Chat-To-Sequence/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chat To Sequence -emoji: 🧬 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevkev05/Chat-To-Sequence/app.py b/spaces/kevkev05/Chat-To-Sequence/app.py deleted file mode 100644 index 312dfc087df02bb65d2214bb66a00a1f0345ed4b..0000000000000000000000000000000000000000 --- a/spaces/kevkev05/Chat-To-Sequence/app.py +++ /dev/null @@ -1,214 +0,0 @@ -from sentence_transformers import SentenceTransformer -from huggingface_hub import CommitScheduler -from datasets import Dataset -import gradio as gr -import pandas as pd -import plotly.graph_objects as go -import os -from utility import load_from_hub_csv -from DNAseq import DNAseq -from grapher import DNAgrapher -from parameter_extractor import ParameterExtractor - -from helper import list_at_index_0, list_at_index_1 -from logger import cts_log_file_create, logger, cts_logger - - -HF_TOKEN = os.environ.get("HF_TOKEN", None) -repo_id = os.environ.get("repo_id", None) - -# Create csv file for data logging -log_file_path = cts_log_file_create("flagged") - -# Initialise CommitScheduler -scheduler = CommitScheduler( - repo_id=repo_id, - repo_type="dataset", - folder_path=log_file_path.parent, - path_in_repo="data", - every=2880, - private=True, - token=HF_TOKEN -) - -# Load Code-Function Mapping -load_from_hub_csv(path=repo_id, - data_file="app/code_function_mapping.csv", - token=HF_TOKEN, - csv_output_file="code_function_mapping.csv") - -def chat_to_sequence(sequence, user_query): - - # Sequence to be analysed/queried - input_sequence = sequence - - # Set DNAseq class expected variable - dna = input_sequence - - # Model - model_name = "all-mpnet-base-v2" - - # Load model - model = SentenceTransformer(model_name) - - # User input - user_query = user_query - - # Set ParameterExtractor class expected variable - query = user_query - - # Initialise Graphic Response - fig = None - - # Initialise Text Response - response = None - - # Query Code Description Message - code_descript_message = '' - - # kNN semantic similarity threshold / used to determine if query can execute code - # kNN semantic similarity values less than the lower threshold should return a code eval response - # kNN semantic similarity values more than the lower threshold shouldn't return a code eval response - proximal_lower_threshold = 1.1 - proximal_upper_threshold = 1.4 - - threshold_exceeded_message = "Your Query Wasn't Understood. Can You Rephrase The Query" - threshold_approximate_message = "Your Query Wasn't Understood Clearly. Try Using The Following Query Formats" - - # Load the function mapping CSV file into a pandas DataFrame - code_function_mapping = pd.read_csv("code_function_mapping.csv") - - # Load reference query database from JSON file back into a DataFrame - ref_query_df = pd.read_json('reference_query_db.json', orient='records') - - # Create Dataset object using the pandas data frame - ref_query_ds = Dataset.from_pandas(ref_query_df) - - # Load FAISS index - ref_query_ds.load_faiss_index('all-mpnet-base-v2_embeddings', 'ref_query_db_index') - - # Create embeddings for user query - query_embedding = model.encode(user_query) - - # Semantic similarity search user query against sample queries - index_result = ref_query_ds.get_nearest_examples("all-mpnet-base-v2_embeddings", query_embedding, k=3) - - # Retrieve results from dataset object - scores, examples = index_result - - # Create a DataFrame from the examples dictionary - result_df = pd.DataFrame(examples) - - # Add the scores as a new column to the DataFrame - result_df['score'] = scores - - # Sort the DataFrame by the 'Score' column in ascending order - # FIASS uses kNN as the similarity algorithm / value of 0 indicates an exact match - sorted_df = result_df.sort_values(by='score', ascending=True) - - # Get the query with the lowest kNN score (first row after sorting) - ref_question = sorted_df.iloc[0]['question'] - - # Get the code for the query with the lowest kNN score (first row after sorting) - query_code = sorted_df.iloc[0]['code'] - - # Get the score for the query with the lowest kNN score (first row after sorting) - query_score = sorted_df.iloc[0]['score'] - - # Description of query code to be executed - query_code_description = code_function_mapping[code_function_mapping['code'] == query_code]['description'].values[0] - - # Extra log entities - similarity_metric = "k nearest neighbours" - - ref_question_2 = sorted_df.iloc[1]['question'] - ref_question_3 = sorted_df.iloc[1]['question'] - query_score_2 = sorted_df.iloc[1]['score'] - query_score_3 = sorted_df.iloc[1]['score'] - - # logger function log_data parameter input - log_data = [ - user_query, - ref_question, - query_score, - query_code, - ref_question_2, - query_score_2, - ref_question_3, - query_score_3, - similarity_metric, - model_name, - proximal_lower_threshold, - proximal_upper_threshold, - ] - - # Check the query score against threshold values - if query_score >= proximal_upper_threshold: - response = threshold_exceeded_message - cts_logger(scheduler, log_file_path, log_data, response) - print(threshold_exceeded_message) - - elif proximal_lower_threshold < query_score < proximal_upper_threshold: - response = threshold_approximate_message + "\n" + ref_question - cts_logger(scheduler, log_file_path, log_data, response) - print(threshold_approximate_message, ref_question) - else: - print("Execute query") - # Define the question - code = query_code - - # Filter the DataFrame to find the code that matches the question - matching_row = code_function_mapping[code_function_mapping["code"] == code] - - # Check if there is a match - if not matching_row.empty: - function = matching_row.iloc[0]["function"] - f_response = eval(function) - if code[0] == 'c': - response = None - fig = go.Figure(f_response) - else: - response = str(f_response) - fig = None - code_descript_message = query_code_description.title() - cts_logger(scheduler, log_file_path, log_data, response) - else: - response = "Error processing query" - query_code = "No Match Error" - cts_logger(scheduler, log_file_path, log_data, response) - print("No matching code found for the function:", code) - - return response, fig, code_descript_message - return response, fig, code_descript_message - - -ChatToSequence = gr.Interface( - fn=chat_to_sequence, - inputs=[gr.Textbox(label="Sequence", placeholder="Input DNA Sequence..."), - gr.Textbox(label="Query", placeholder="Input Query...")], - outputs=[gr.Textbox(label="Response"), - gr.Plot(label='Graphic Response'), - gr.Textbox(label="Action Executed")], - allow_flagging="never", - title="Chat-To-Sequence", - description="

      This Demo App Allows You To Explore Your DNA Sequence Using Natural Language

      " - "
      Disclaimer: The app stores the user queries but doesn't store the DNA sequence." - " Please Don't Input Any Information You Don't Wish To Share Into The Query Box.
      ", - theme=gr.themes.Soft(), - examples=[ - ["ggcattgaggagaccattgacaccgtcattagcaatgcactacaactgtcacaacctaaaa", - "What is the length of the sequence"], - ["ggcattgaggagaccattgacaccgtcattagcaatgcactacaactgtcacaacctaaaa", - "How many guanines bases are there in the sequence"], - ["ggcattgaggagaccattgacaccgtcattagcaatgcactacaactgtcacaacctaaaa", - "What is the base at position 10"], - ["ggcattgaggagaccattgacaccgtcattagcaatgcactacaactgtcacaacctaaaa", - "What are the bases from position 2 to 10"], - ["ggcattgaggagaccattgacaccgtcattagcaatgcactacaactgtcacaacctaaaa", - "How many bases are there from position 2 to 10"], - ["ggcattgaggagaccattgacaccgtcattagcaatgcactacaactgtcacaacctaaaaaa", - "Show pie chart of total bases"], - ], -).queue() - -ChatToSequence.launch() diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/bart/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/bart/README.md deleted file mode 100644 index 4050a724ee6a2f20c9998a95df48c58b64764ab1..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/bart/README.md +++ /dev/null @@ -1,228 +0,0 @@ -# BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension - -[https://arxiv.org/abs/1910.13461](https://arxiv.org/abs/1910.13461) - -## Introduction - -BART is sequence-to-sequence model trained with denoising as pretraining objective. We show that this pretraining objective is more generic and show that we can match [RoBERTa](../roberta) results on SQuAD and GLUE and gain state-of-the-art results on summarization (XSum, CNN dataset), long form generative question answering (ELI5) and dialog response genration (ConvAI2). See the associated paper for more details. - -## Pre-trained models - -Model | Description | # params | Download ----|---|---|--- -`bart.base` | BART model with 6 encoder and decoder layers | 140M | [bart.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz) -`bart.large` | BART model with 12 encoder and decoder layers | 400M | [bart.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.tar.gz) -`bart.large.mnli` | `bart.large` finetuned on `MNLI` | 400M | [bart.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.mnli.tar.gz) -`bart.large.cnn` | `bart.large` finetuned on `CNN-DM` | 400M | [bart.large.cnn.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.cnn.tar.gz) -`bart.large.xsum` | `bart.large` finetuned on `Xsum` | 400M | [bart.large.xsum.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.xsum.tar.gz) - -## Results - -**[GLUE (Wang et al., 2019)](https://gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B ----|---|---|---|---|---|---|---|--- -`roberta.large` | 90.2 | 94.7 | 92.2 | 86.6 | 96.4 | 90.9 | 68.0 | 92.4 -`bart.large` | 89.9 | 94.9 | 92.5 | 87.0 | 96.6 | 90.4 | 62.8 | 91.2 - -**[SQuAD (Rajpurkar et al., 2018)](https://rajpurkar.github.io/SQuAD-explorer/)** -_(dev set, no additional data used)_ - -Model | SQuAD 1.1 EM/F1 | SQuAD 2.0 EM/F1 ----|---|--- -`roberta.large` | 88.9/94.6 | 86.5/89.4 -`bart.large` | 88.8/94.6 | 86.1/89.2 - -**[CNN/Daily Mail](http://nlpprogress.com/english/summarization.html)** -_(test set, no additional data used)_ - -Model | R1 | R2 | RL ----|---|---|--- -`BERTSUMEXTABS` | 42.13 | 19.60 | 39.18 -`bart.large` | 44.16 | 21.28 | 40.90 - -## Example usage - -##### Load BART from torch.hub (PyTorch >= 1.1): -```python -import torch -bart = torch.hub.load('pytorch/fairseq', 'bart.large') -bart.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load BART (for PyTorch 1.0 or custom models): -```python -# Download bart.large model -wget https://dl.fbaipublicfiles.com/fairseq/models/bart.large.tar.gz -tar -xzvf bart.large.tar.gz - -# Load the model in fairseq -from fairseq.models.bart import BARTModel -bart = BARTModel.from_pretrained('/path/to/bart.large', checkpoint_file='model.pt') -bart.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Apply Byte-Pair Encoding (BPE) to input text: -```python -tokens = bart.encode('Hello world!') -assert tokens.tolist() == [0, 31414, 232, 328, 2] -bart.decode(tokens) # 'Hello world!' -``` - -##### Extract features from BART: -```python -# Extract the last layer's features -last_layer_features = bart.extract_features(tokens) -assert last_layer_features.size() == torch.Size([1, 5, 1024]) - -# Extract all layer's features from decoder (layer 0 is the embedding layer) -all_layers = bart.extract_features(tokens, return_all_hiddens=True) -assert len(all_layers) == 13 -assert torch.all(all_layers[-1] == last_layer_features) -``` - -##### Use BART for sentence-pair classification tasks: -```python -# Download BART already finetuned for MNLI -bart = torch.hub.load('pytorch/fairseq', 'bart.large.mnli') -bart.eval() # disable dropout for evaluation - -# Encode a pair of sentences and make a prediction -tokens = bart.encode('BART is a seq2seq model.', 'BART is not sequence to sequence.') -bart.predict('mnli', tokens).argmax() # 0: contradiction - -# Encode another pair of sentences -tokens = bart.encode('BART is denoising autoencoder.', 'BART is version of autoencoder.') -bart.predict('mnli', tokens).argmax() # 2: entailment -``` - -##### Register a new (randomly initialized) classification head: -```python -bart.register_classification_head('new_task', num_classes=3) -logprobs = bart.predict('new_task', tokens) -``` - -##### Batched prediction: -```python -import torch -from fairseq.data.data_utils import collate_tokens - -bart = torch.hub.load('pytorch/fairseq', 'bart.large.mnli') -bart.eval() - -batch_of_pairs = [ - ['BART is a seq2seq model.', 'BART is not sequence to sequence.'], - ['BART is denoising autoencoder.', 'BART is version of autoencoder.'], -] - -batch = collate_tokens( - [bart.encode(pair[0], pair[1]) for pair in batch_of_pairs], pad_idx=1 -) - -logprobs = bart.predict('mnli', batch) -print(logprobs.argmax(dim=1)) -# tensor([0, 2]) -``` - -##### Using the GPU: -```python -bart.cuda() -bart.predict('new_task', tokens) -``` - -#### Filling masks: - -BART can be used to fill multiple `` tokens in the input. -```python -bart = torch.hub.load('pytorch/fairseq', 'bart.base') -bart.eval() -bart.fill_mask(['The cat on the .'], topk=3, beam=10) -# [[('The cat was on the ground.', tensor(-0.6183)), ('The cat was on the floor.', tensor(-0.6798)), ('The cat sleeps on the couch.', tensor(-0.6830))]] -``` - -Note that by default we enforce the output length to match the input length. -This can be disabled by setting ``match_source_len=False``: -``` -bart.fill_mask(['The cat on the .'], topk=3, beam=10, match_source_len=False) -# [[('The cat was on the ground.', tensor(-0.6185)), ('The cat was asleep on the couch.', tensor(-0.6276)), ('The cat was on the floor.', tensor(-0.6800))]] -``` - -Example code to fill masks for a batch of sentences using GPU -``` -bart.cuda() -bart.fill_mask(['The cat on the .', 'The dog on the .'], topk=3, beam=10) -# [[('The cat was on the ground.', tensor(-0.6183)), ('The cat was on the floor.', tensor(-0.6798)), ('The cat sleeps on the couch.', tensor(-0.6830))], [('The dog was on the ground.', tensor(-0.6190)), ('The dog lay on the ground.', tensor(-0.6711)), -('The dog was asleep on the couch', tensor(-0.6796))]] -``` - -#### Evaluating the `bart.large.mnli` model: - -Example python code snippet to evaluate accuracy on the MNLI `dev_matched` set. -```python -label_map = {0: 'contradiction', 1: 'neutral', 2: 'entailment'} -ncorrect, nsamples = 0, 0 -bart.cuda() -bart.eval() -with open('glue_data/MNLI/dev_matched.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[8], tokens[9], tokens[-1] - tokens = bart.encode(sent1, sent2) - prediction = bart.predict('mnli', tokens).argmax().item() - prediction_label = label_map[prediction] - ncorrect += int(prediction_label == target) - nsamples += 1 - print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# Expected output: 0.9010 -``` - -#### Evaluating the `bart.large.cnn` model: -- Follow instructions [here](https://github.com/abisee/cnn-dailymail) to download and process into data-files such that `test.source` and `test.target` has one line for each non-tokenized sample. -- For simpler preprocessing, you can also `wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz`, although there is no guarantee of identical scores -- `huggingface/transformers` has a simpler interface that supports [single-gpu](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/run_eval.py) and [multi-gpu](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/run_distributed_eval.py) beam search. - In `huggingface/transformers`, the BART models' paths are `facebook/bart-large-cnn` and `facebook/bart-large-xsum`. - -In `fairseq`, summaries can be generated using: - -```bash -cp data-bin/cnn_dm/dict.source.txt checkpoints/ -python examples/bart/summarize.py \ - --model-dir pytorch/fairseq \ - --model-file bart.large.cnn \ - --src cnn_dm/test.source \ - --out cnn_dm/test.hypo -``` - -For calculating rouge, install `files2rouge` from [here](https://github.com/pltrdy/files2rouge). - -```bash -export CLASSPATH=/path/to/stanford-corenlp-full-2016-10-31/stanford-corenlp-3.7.0.jar - -# Tokenize hypothesis and target files. -cat test.hypo | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.hypo.tokenized -cat test.target | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.hypo.target -files2rouge test.hypo.tokenized test.hypo.target -# Expected output: (ROUGE-2 Average_F: 0.21238) -``` - - -## Finetuning - -- [Finetuning on GLUE](README.glue.md) -- [Finetuning on CNN-DM](README.summarization.md) - -## Citation - -```bibtex -@article{lewis2019bart, - title = {BART: Denoising Sequence-to-Sequence Pre-training for Natural -Language Generation, Translation, and Comprehension}, - author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and - Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov - and Luke Zettlemoyer }, - journal={arXiv preprint arXiv:1910.13461}, - year = {2019}, -} -``` diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/masks/mask.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/masks/mask.py deleted file mode 100644 index 3e34d0675a781fba983cb542f18390255aaf2609..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/masks/mask.py +++ /dev/null @@ -1,429 +0,0 @@ -import enum -from copy import deepcopy - -import numpy as np -from skimage import img_as_ubyte -from skimage.transform import rescale, resize -try: - from detectron2 import model_zoo - from detectron2.config import get_cfg - from detectron2.engine import DefaultPredictor - DETECTRON_INSTALLED = True -except: - print("Detectron v2 is not installed") - DETECTRON_INSTALLED = False - -from .countless.countless2d import zero_corrected_countless - - -class ObjectMask(): - def __init__(self, mask): - self.height, self.width = mask.shape - (self.up, self.down), (self.left, self.right) = self._get_limits(mask) - self.mask = mask[self.up:self.down, self.left:self.right].copy() - - @staticmethod - def _get_limits(mask): - def indicator_limits(indicator): - lower = indicator.argmax() - upper = len(indicator) - indicator[::-1].argmax() - return lower, upper - - vertical_indicator = mask.any(axis=1) - vertical_limits = indicator_limits(vertical_indicator) - - horizontal_indicator = mask.any(axis=0) - horizontal_limits = indicator_limits(horizontal_indicator) - - return vertical_limits, horizontal_limits - - def _clean(self): - self.up, self.down, self.left, self.right = 0, 0, 0, 0 - self.mask = np.empty((0, 0)) - - def horizontal_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.horizontal_flip(inplace=True) - - self.mask = self.mask[:, ::-1] - return self - - def vertical_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.vertical_flip(inplace=True) - - self.mask = self.mask[::-1, :] - return self - - def image_center(self): - y_center = self.up + (self.down - self.up) / 2 - x_center = self.left + (self.right - self.left) / 2 - return y_center, x_center - - def rescale(self, scaling_factor, inplace=False): - if not inplace: - scaled = deepcopy(self) - return scaled.rescale(scaling_factor, inplace=True) - - scaled_mask = rescale(self.mask.astype(float), scaling_factor, order=0) > 0.5 - (up, down), (left, right) = self._get_limits(scaled_mask) - self.mask = scaled_mask[up:down, left:right] - - y_center, x_center = self.image_center() - mask_height, mask_width = self.mask.shape - self.up = int(round(y_center - mask_height / 2)) - self.down = self.up + mask_height - self.left = int(round(x_center - mask_width / 2)) - self.right = self.left + mask_width - return self - - def crop_to_canvas(self, vertical=True, horizontal=True, inplace=False): - if not inplace: - cropped = deepcopy(self) - cropped.crop_to_canvas(vertical=vertical, horizontal=horizontal, inplace=True) - return cropped - - if vertical: - if self.up >= self.height or self.down <= 0: - self._clean() - else: - cut_up, cut_down = max(-self.up, 0), max(self.down - self.height, 0) - if cut_up != 0: - self.mask = self.mask[cut_up:] - self.up = 0 - if cut_down != 0: - self.mask = self.mask[:-cut_down] - self.down = self.height - - if horizontal: - if self.left >= self.width or self.right <= 0: - self._clean() - else: - cut_left, cut_right = max(-self.left, 0), max(self.right - self.width, 0) - if cut_left != 0: - self.mask = self.mask[:, cut_left:] - self.left = 0 - if cut_right != 0: - self.mask = self.mask[:, :-cut_right] - self.right = self.width - - return self - - def restore_full_mask(self, allow_crop=False): - cropped = self.crop_to_canvas(inplace=allow_crop) - mask = np.zeros((cropped.height, cropped.width), dtype=bool) - mask[cropped.up:cropped.down, cropped.left:cropped.right] = cropped.mask - return mask - - def shift(self, vertical=0, horizontal=0, inplace=False): - if not inplace: - shifted = deepcopy(self) - return shifted.shift(vertical=vertical, horizontal=horizontal, inplace=True) - - self.up += vertical - self.down += vertical - self.left += horizontal - self.right += horizontal - return self - - def area(self): - return self.mask.sum() - - -class RigidnessMode(enum.Enum): - soft = 0 - rigid = 1 - - -class SegmentationMask: - def __init__(self, confidence_threshold=0.5, rigidness_mode=RigidnessMode.rigid, - max_object_area=0.3, min_mask_area=0.02, downsample_levels=6, num_variants_per_mask=4, - max_mask_intersection=0.5, max_foreground_coverage=0.5, max_foreground_intersection=0.5, - max_hidden_area=0.2, max_scale_change=0.25, horizontal_flip=True, - max_vertical_shift=0.1, position_shuffle=True): - """ - :param confidence_threshold: float; threshold for confidence of the panoptic segmentator to allow for - the instance. - :param rigidness_mode: RigidnessMode object - when soft, checks intersection only with the object from which the mask_object was produced - when rigid, checks intersection with any foreground class object - :param max_object_area: float; allowed upper bound for to be considered as mask_object. - :param min_mask_area: float; lower bound for mask to be considered valid - :param downsample_levels: int; defines width of the resized segmentation to obtain shifted masks; - :param num_variants_per_mask: int; maximal number of the masks for the same object; - :param max_mask_intersection: float; maximum allowed area fraction of intersection for 2 masks - produced by horizontal shift of the same mask_object; higher value -> more diversity - :param max_foreground_coverage: float; maximum allowed area fraction of intersection for foreground object to be - covered by mask; lower value -> less the objects are covered - :param max_foreground_intersection: float; maximum allowed area of intersection for the mask with foreground - object; lower value -> mask is more on the background than on the objects - :param max_hidden_area: upper bound on part of the object hidden by shifting object outside the screen area; - :param max_scale_change: allowed scale change for the mask_object; - :param horizontal_flip: if horizontal flips are allowed; - :param max_vertical_shift: amount of vertical movement allowed; - :param position_shuffle: shuffle - """ - - assert DETECTRON_INSTALLED, 'Cannot use SegmentationMask without detectron2' - self.cfg = get_cfg() - self.cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")) - self.cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml") - self.cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = confidence_threshold - self.predictor = DefaultPredictor(self.cfg) - - self.rigidness_mode = RigidnessMode(rigidness_mode) - self.max_object_area = max_object_area - self.min_mask_area = min_mask_area - self.downsample_levels = downsample_levels - self.num_variants_per_mask = num_variants_per_mask - self.max_mask_intersection = max_mask_intersection - self.max_foreground_coverage = max_foreground_coverage - self.max_foreground_intersection = max_foreground_intersection - self.max_hidden_area = max_hidden_area - self.position_shuffle = position_shuffle - - self.max_scale_change = max_scale_change - self.horizontal_flip = horizontal_flip - self.max_vertical_shift = max_vertical_shift - - def get_segmentation(self, img): - im = img_as_ubyte(img) - panoptic_seg, segment_info = self.predictor(im)["panoptic_seg"] - return panoptic_seg, segment_info - - @staticmethod - def _is_power_of_two(n): - return (n != 0) and (n & (n-1) == 0) - - def identify_candidates(self, panoptic_seg, segments_info): - potential_mask_ids = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = (panoptic_seg == segment["id"]).int().detach().cpu().numpy() - area = mask.sum().item() / np.prod(panoptic_seg.shape) - if area >= self.max_object_area: - continue - potential_mask_ids.append(segment["id"]) - return potential_mask_ids - - def downsample_mask(self, mask): - height, width = mask.shape - if not (self._is_power_of_two(height) and self._is_power_of_two(width)): - raise ValueError("Image sides are not power of 2.") - - num_iterations = width.bit_length() - 1 - self.downsample_levels - if num_iterations < 0: - raise ValueError(f"Width is lower than 2^{self.downsample_levels}.") - - if height.bit_length() - 1 < num_iterations: - raise ValueError("Height is too low to perform downsampling") - - downsampled = mask - for _ in range(num_iterations): - downsampled = zero_corrected_countless(downsampled) - - return downsampled - - def _augmentation_params(self): - scaling_factor = np.random.uniform(1 - self.max_scale_change, 1 + self.max_scale_change) - if self.horizontal_flip: - horizontal_flip = bool(np.random.choice(2)) - else: - horizontal_flip = False - vertical_shift = np.random.uniform(-self.max_vertical_shift, self.max_vertical_shift) - - return { - "scaling_factor": scaling_factor, - "horizontal_flip": horizontal_flip, - "vertical_shift": vertical_shift - } - - def _get_intersection(self, mask_array, mask_object): - intersection = mask_array[ - mask_object.up:mask_object.down, mask_object.left:mask_object.right - ] & mask_object.mask - return intersection - - def _check_masks_intersection(self, aug_mask, total_mask_area, prev_masks): - for existing_mask in prev_masks: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - intersection_current = 1 - (aug_mask.area() - intersection_area) / total_mask_area - if (intersection_existing > self.max_mask_intersection) or \ - (intersection_current > self.max_mask_intersection): - return False - return True - - def _check_foreground_intersection(self, aug_mask, foreground): - for existing_mask in foreground: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - if intersection_existing > self.max_foreground_coverage: - return False - intersection_mask = intersection_area / aug_mask.area() - if intersection_mask > self.max_foreground_intersection: - return False - return True - - def _move_mask(self, mask, foreground): - # Obtaining properties of the original mask_object: - orig_mask = ObjectMask(mask) - - chosen_masks = [] - chosen_parameters = [] - # to fix the case when resizing gives mask_object consisting only of False - scaling_factor_lower_bound = 0. - - for var_idx in range(self.num_variants_per_mask): - # Obtaining augmentation parameters and applying them to the downscaled mask_object - augmentation_params = self._augmentation_params() - augmentation_params["scaling_factor"] = min([ - augmentation_params["scaling_factor"], - 2 * min(orig_mask.up, orig_mask.height - orig_mask.down) / orig_mask.height + 1., - 2 * min(orig_mask.left, orig_mask.width - orig_mask.right) / orig_mask.width + 1. - ]) - augmentation_params["scaling_factor"] = max([ - augmentation_params["scaling_factor"], scaling_factor_lower_bound - ]) - - aug_mask = deepcopy(orig_mask) - aug_mask.rescale(augmentation_params["scaling_factor"], inplace=True) - if augmentation_params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - total_aug_area = aug_mask.area() - if total_aug_area == 0: - scaling_factor_lower_bound = 1. - continue - - # Fix if the element vertical shift is too strong and shown area is too small: - vertical_area = aug_mask.mask.sum(axis=1) / total_aug_area # share of area taken by rows - # number of rows which are allowed to be hidden from upper and lower parts of image respectively - max_hidden_up = np.searchsorted(vertical_area.cumsum(), self.max_hidden_area) - max_hidden_down = np.searchsorted(vertical_area[::-1].cumsum(), self.max_hidden_area) - # correcting vertical shift, so not too much area will be hidden - augmentation_params["vertical_shift"] = np.clip( - augmentation_params["vertical_shift"], - -(aug_mask.up + max_hidden_up) / aug_mask.height, - (aug_mask.height - aug_mask.down + max_hidden_down) / aug_mask.height - ) - # Applying vertical shift: - vertical_shift = int(round(aug_mask.height * augmentation_params["vertical_shift"])) - aug_mask.shift(vertical=vertical_shift, inplace=True) - aug_mask.crop_to_canvas(vertical=True, horizontal=False, inplace=True) - - # Choosing horizontal shift: - max_hidden_area = self.max_hidden_area - (1 - aug_mask.area() / total_aug_area) - horizontal_area = aug_mask.mask.sum(axis=0) / total_aug_area - max_hidden_left = np.searchsorted(horizontal_area.cumsum(), max_hidden_area) - max_hidden_right = np.searchsorted(horizontal_area[::-1].cumsum(), max_hidden_area) - allowed_shifts = np.arange(-max_hidden_left, aug_mask.width - - (aug_mask.right - aug_mask.left) + max_hidden_right + 1) - allowed_shifts = - (aug_mask.left - allowed_shifts) - - if self.position_shuffle: - np.random.shuffle(allowed_shifts) - - mask_is_found = False - for horizontal_shift in allowed_shifts: - aug_mask_left = deepcopy(aug_mask) - aug_mask_left.shift(horizontal=horizontal_shift, inplace=True) - aug_mask_left.crop_to_canvas(inplace=True) - - prev_masks = [mask] + chosen_masks - is_mask_suitable = self._check_masks_intersection(aug_mask_left, total_aug_area, prev_masks) & \ - self._check_foreground_intersection(aug_mask_left, foreground) - if is_mask_suitable: - aug_draw = aug_mask_left.restore_full_mask() - chosen_masks.append(aug_draw) - augmentation_params["horizontal_shift"] = horizontal_shift / aug_mask_left.width - chosen_parameters.append(augmentation_params) - mask_is_found = True - break - - if not mask_is_found: - break - - return chosen_parameters - - def _prepare_mask(self, mask): - height, width = mask.shape - target_width = width if self._is_power_of_two(width) else (1 << width.bit_length()) - target_height = height if self._is_power_of_two(height) else (1 << height.bit_length()) - - return resize(mask.astype('float32'), (target_height, target_width), order=0, mode='edge').round().astype('int32') - - def get_masks(self, im, return_panoptic=False): - panoptic_seg, segments_info = self.get_segmentation(im) - potential_mask_ids = self.identify_candidates(panoptic_seg, segments_info) - - panoptic_seg_scaled = self._prepare_mask(panoptic_seg.detach().cpu().numpy()) - downsampled = self.downsample_mask(panoptic_seg_scaled) - scene_objects = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = downsampled == segment["id"] - if not np.any(mask): - continue - scene_objects.append(mask) - - mask_set = [] - for mask_id in potential_mask_ids: - mask = downsampled == mask_id - if not np.any(mask): - continue - - if self.rigidness_mode is RigidnessMode.soft: - foreground = [mask] - elif self.rigidness_mode is RigidnessMode.rigid: - foreground = scene_objects - else: - raise ValueError(f'Unexpected rigidness_mode: {rigidness_mode}') - - masks_params = self._move_mask(mask, foreground) - - full_mask = ObjectMask((panoptic_seg == mask_id).detach().cpu().numpy()) - - for params in masks_params: - aug_mask = deepcopy(full_mask) - aug_mask.rescale(params["scaling_factor"], inplace=True) - if params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - - vertical_shift = int(round(aug_mask.height * params["vertical_shift"])) - horizontal_shift = int(round(aug_mask.width * params["horizontal_shift"])) - aug_mask.shift(vertical=vertical_shift, horizontal=horizontal_shift, inplace=True) - aug_mask = aug_mask.restore_full_mask().astype('uint8') - if aug_mask.mean() <= self.min_mask_area: - continue - mask_set.append(aug_mask) - - if return_panoptic: - return mask_set, panoptic_seg.detach().cpu().numpy() - else: - return mask_set - - -def propose_random_square_crop(mask, min_overlap=0.5): - height, width = mask.shape - mask_ys, mask_xs = np.where(mask > 0.5) # mask==0 is known fragment and mask==1 is missing - - if height < width: - crop_size = height - obj_left, obj_right = mask_xs.min(), mask_xs.max() - obj_width = obj_right - obj_left - left_border = max(0, min(width - crop_size - 1, obj_left + obj_width * min_overlap - crop_size)) - right_border = max(left_border + 1, min(width - crop_size, obj_left + obj_width * min_overlap)) - start_x = np.random.randint(left_border, right_border) - return start_x, 0, start_x + crop_size, height - else: - crop_size = width - obj_top, obj_bottom = mask_ys.min(), mask_ys.max() - obj_height = obj_bottom - obj_top - top_border = max(0, min(height - crop_size - 1, obj_top + obj_height * min_overlap - crop_size)) - bottom_border = max(top_border + 1, min(height - crop_size, obj_top + obj_height * min_overlap)) - start_y = np.random.randint(top_border, bottom_border) - return 0, start_y, width, start_y + crop_size diff --git a/spaces/krystaltechnology/image-video-colorization/pages/05_Image_Denoizer.py b/spaces/krystaltechnology/image-video-colorization/pages/05_Image_Denoizer.py deleted file mode 100644 index b9fa5ac42acb4d6d5f737298a9eb9a4a05faf725..0000000000000000000000000000000000000000 --- a/spaces/krystaltechnology/image-video-colorization/pages/05_Image_Denoizer.py +++ /dev/null @@ -1,256 +0,0 @@ -import streamlit as st -import cv2 -import numpy -import os -import random -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url -from PIL import Image - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - - -last_file = None -img_mode = "RGBA" - - -def realesrgan(img, model_name, denoise_strength, face_enhance, outscale): - """Real-ESRGAN function to restore (and upscale) images. - """ - if not img: - return - - # Define model parameters - if model_name == 'RealESRGAN_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'] - elif model_name == 'RealESRNet_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth'] - elif model_name == 'RealESRGAN_x4plus_anime_6B': # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'] - elif model_name == 'RealESRGAN_x2plus': # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth'] - elif model_name == 'realesr-general-x4v3': # x4 VGG-style model (S size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - netscale = 4 - file_url = [ - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth', - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth' - ] - - # Determine model paths - model_path = os.path.join('weights', model_name + '.pth') - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, model_dir=os.path.join(ROOT_DIR, 'weights'), progress=True, file_name=None) - - # Use dni to control the denoise strength - dni_weight = None - if model_name == 'realesr-general-x4v3' and denoise_strength != 1: - wdn_model_path = model_path.replace('realesr-general-x4v3', 'realesr-general-wdn-x4v3') - model_path = [model_path, wdn_model_path] - dni_weight = [denoise_strength, 1 - denoise_strength] - - # Restorer Class - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=0, - tile_pad=10, - pre_pad=10, - half=False, - gpu_id=None - ) - - # Use GFPGAN for face enhancement - if face_enhance: - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth', - upscale=outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - - # Convert the input PIL image to cv2 image, so that it can be processed by realesrgan - #cv_img = numpy.array(img.get_value(), dtype = 'uint8') - cv_img = numpy.array(img) - #img = cv2.cvtColor(cv2.UMat(imgUMat), cv2.COLOR_RGB2GRAY) - img = cv2.cvtColor(cv_img, cv2.COLOR_RGBA2BGRA) - - # Apply restoration - try: - if face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - # Save restored image and return it to the output Image component - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - else: - extension = 'jpg' - - out_filename = f"output_{rnd_string(8)}.{extension}" - cv2.imwrite(out_filename, output) - global last_file - last_file = out_filename - return out_filename - - -def rnd_string(x): - """Returns a string of 'x' random characters - """ - characters = "abcdefghijklmnopqrstuvwxyz_0123456789" - result = "".join((random.choice(characters)) for i in range(x)) - return result - - -def reset(): - """Resets the Image components of the Gradio interface and deletes - the last processed image - """ - global last_file - if last_file: - print(f"Deleting {last_file} ...") - os.remove(last_file) - last_file = None - return gr.update(value=None), gr.update(value=None) - - -def has_transparency(img): - """This function works by first checking to see if a "transparency" property is defined - in the image's info -- if so, we return "True". Then, if the image is using indexed colors - (such as in GIFs), it gets the index of the transparent color in the palette - (img.info.get("transparency", -1)) and checks if it's used anywhere in the canvas - (img.getcolors()). If the image is in RGBA mode, then presumably it has transparency in - it, but it double-checks by getting the minimum and maximum values of every color channel - (img.getextrema()), and checks if the alpha channel's smallest value falls below 255. - https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent - """ - if img.info.get("transparency", None) is not None: - return True - if img.mode == "P": - transparent = img.info.get("transparency", -1) - for _, index in img.getcolors(): - if index == transparent: - return True - elif img.mode == "RGBA": - extrema = img.getextrema() - if extrema[3][0] < 255: - return True - return False - - -def image_properties(img): - """Returns the dimensions (width and height) and color mode of the input image and - also sets the global img_mode variable to be used by the realesrgan function - """ - global img_mode - if img: - if has_transparency(img): - img_mode = "RGBA" - else: - img_mode = "RGB" - properties = f"Width: {img.size[0]}, Height: {img.size[1]} | Color Mode: {img_mode}" - return properties - -def image_properties(image): - # Function to display image properties - properties = f"Image Size: {image.size}\nImage Mode: {image.mode}" - return properties - -#---------- - -input_folder = '.' - -@st.cache_resource -def load_image(image_file): - img = Image.open(image_file) - return img - -def save_image(image_file): - if image_file is not None: - filename = image_file.name - img = load_image(image_file) - st.image(image=img, width=None) - with open(os.path.join(input_folder, filename), "wb") as f: - f.write(image_file.getbuffer()) - st.success("Succesfully uploaded file for processing".format(filename)) - -#------------ - -st.title("Image Denoizer") -# Saving uploaded image in input folder for processing - -#with st.expander("Options/Parameters"): - -input_img = st.file_uploader( -"Upload Image", type=['png', 'jpeg', 'jpg', 'webp']) -#save_image(input_img) - -model_name = "realesr-general-x4v3" - -denoise_strength = st.slider("Denoise Strength", 0.0, 1.0, 0.5) - -outscale = 1 - -face_enhance = False - -if input_img: - print(input_img) - input_img = Image.open(input_img) - # Display image properties - cols = st.columns(2) - - cols[0].image(input_img, 'Source Image') - - #input_properties = get_image_properties(input_img) - #cols[1].write(input_properties) - - # Output placeholder - output_img = st.empty() - -# Input and output placeholders -input_img = input_img -output_img = st.empty() - -# Buttons -restore = st.button('Restore') -reset = st.button('Reset') - -# Restore clicked -if restore: - if input_img is not None: - output = realesrgan(input_img, model_name, denoise_strength, - face_enhance, outscale) - output_img.image(output, 'Restored Image') - - st.download_button( - label="Download Image", - data=open(output, "rb").read(), - file_name="Image.jpg" - ) - else: - st.warning('Upload a file', icon="⚠️") - -# Reset clicked -if reset: - output_img.empty() - \ No newline at end of file diff --git a/spaces/kumasan681104/React_St/main.py b/spaces/kumasan681104/React_St/main.py deleted file mode 100644 index f2af2db5745c1e836fa24dba74f1e2b12ef04963..0000000000000000000000000000000000000000 --- a/spaces/kumasan681104/React_St/main.py +++ /dev/null @@ -1,118 +0,0 @@ -import numpy as np -import openai -import pandas as pd -import re -import warnings - - -from rdkit import Chem, DataStructs -from rdkit.Chem import AllChem - -# トレーニングデータを抽出する関数を定義する -def extract_training_data(nd_tnmt_A, - nd_tnmt_B, - df_smiles_maccsfps_id, - td_number - ): - - # データセットからsmiles部分だけ取り出す - df_smiles_id = df_smiles_maccsfps_id.loc[:, ["A", "B", "Y", "ID"]] - # 計算したタニモト係数をdf_smilesに合わせる - df_smiles_id["tnmt_A"] = pd.DataFrame(nd_tnmt_A) - df_smiles_id["tnmt_B"] = pd.DataFrame(nd_tnmt_B) - df_smiles_tnmt_id = df_smiles_id - - str_training_dataset= "" - half_number = int(td_number/2) - - # データフレームを化合物Aのタニモト係数の降順で並び変える - df_smiles_tnmt_id_A = df_smiles_tnmt_id.sort_values("tnmt_A", ascending=False) - # ソートしたdfから化合物Aに対するタニモト係数上位トレーニングデータ数の半数を抜き取る - df_training_data_A = df_smiles_tnmt_id_A.iloc[:half_number, 0:4] - # 抜き取ったdfからstr型のトレーニングデータを作る - for _, row in df_training_data_A.iterrows(): - template = "A: " + row['A'] + "\\" + "\n" + "B: " + row['B'] + "\\" + "\n" + "Y: " + row['Y'] + "\\" + "\n" + "\\" + "\n" - str_training_dataset += template - - # データフレームを化合物Bのタニモト係数の降順で並び変える - df_smiles_tnmt_id_B = df_smiles_tnmt_id.sort_values("tnmt_B", ascending=False) - # ソートしたdfから化合物Aに対するタニモト係数上位トレーニングデータ数の半数を抜き取る - df_training_data_B = df_smiles_tnmt_id_B.iloc[:half_number, 0:4] - # 抜き取ったdfからstr型のトレーニングデータを作る - for _, row in df_training_data_B.iterrows(): - template = "A: " + row['A'] + "\\" + "\n" + "B: " + row['B'] + "\\" + "\n" + "Y: " + row['Y'] + "\\" + "\n" + "\\" + "\n" - str_training_dataset += template - - df_training_dataset = pd.concat([df_training_data_A, - df_training_data_B], - ignore_index=True - ) - - return str_training_dataset, df_training_dataset - -def get_prodY_SMILES(test_A_smiles,\ - test_B_smiles,\ - training_dataset): - - question =\ - f"Synthesize compound 'Y' from the corresponding compound 'A' and 'B' below.\ - Answer 5 different each other candidates of 'y1 to y5' in this format, 'Y1:y1, Y2:y2, Y3:y3, Y4:y4, Y5:y5'.\ - If {test_A_smiles} and {test_B_smiles} are in {training_dataset}, the corresponding 'Y' should be included in y1-y5.\ - From training dataset, learn which elements (alphabet) of A and B are more likely to react for providing the corresppoing Y.\ - Each your answered Y should preferably be the result of a different combination of sites that A and B might react with.\ - \ - \ - {training_dataset}\ - \ - A: {test_A_smiles}\ - B: {test_B_smiles}\ - Y:\ - " - - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo-16k", - messages=[ - {"role": "system", "content": "Synthesize compound Y from test data compounds A and B."}, - {"role": "user", "content": f"{question}"}, - ], - max_tokens=1000, - temperature=0.25, - ) - - # gptのresponseからYの候補が含まれている部分を抜き出す - response_text = response["choices"][0]["message"]["content"] - #テンプレートの作成 - pattern = r":\s(.+)" - #テンプレートをもとにSMILESを抜き出す→リスト型 - product_Y_candidates = re.findall(pattern, response_text) - #リスト型→データフレーム - df_product_Y_candidates = pd.DataFrame({"Y_candidates":product_Y_candidates}) - #Molオブジェクトの生成 - df_product_Y_candidates["Y_candidates_mol"] =\ - df_product_Y_candidates["Y_candidates"].\ - apply(lambda smiles: Chem.MolFromSmiles(smiles)) - #maccs_fpsの生成 - df_product_Y_candidates["Y_candidates_maccs_fps"] =\ - df_product_Y_candidates["Y_candidates_mol"].\ - apply(lambda mol: AllChem.GetMACCSKeysFingerprint(mol)\ - if mol is not None else None - ) - y = response["choices"][0]["message"]["content"] - - test_A_mol = Chem.MolFromSmiles(test_A_smiles) - test_A_maccs_fps = AllChem.GetMACCSKeysFingerprint(test_A_mol) - - #テスト化合物Aを基準にタニモト係数を計算 - df_product_Y_candidates["Y_candidates_tnmt"] = \ - df_product_Y_candidates["Y_candidates_maccs_fps"].\ - apply(lambda maccs_fps: DataStructs.TanimotoSimilarity(test_A_maccs_fps, maccs_fps)\ - if maccs_fps is not None else None - ) - - #計算したタニモト係数を降順に並べる - df_Y1 = df_product_Y_candidates.sort_values("Y_candidates_tnmt", ascending=False) - df_Y2 = df_Y1.loc[:, ["Y_candidates", "Y_candidates_tnmt"]] - - #df_Y = df_product_Y_candidates.iloc[:, 0].reset_index() - - return response, df_Y2 \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css deleted file mode 100644 index 78067c2729600b4ee3e7e9c6442a129e8ffe9894..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css +++ /dev/null @@ -1 +0,0 @@ -.gradio-bokeh.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;justify-content:center}.layout.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full);color:var(--body-text-color)}.altair.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full)}.caption.svelte-1fe5ixn.svelte-1fe5ixn{font-size:var(--text-sm)}.matplotlib.svelte-1fe5ixn img.svelte-1fe5ixn{object-fit:contain} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2ec4a94d.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2ec4a94d.js deleted file mode 100644 index bbd975442320a49075793f961b4d3619b8586ff8..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2ec4a94d.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as F,i as L,s as N,e as j,H as q,G as S,C as w,m as E,g as B,z as Q,ao as V,p as M,t as T,n as z,q as R,r as W,a8 as X,I as O,K as P,ap as Y,M as C,E as y,J as H,a0 as Z,x as p,$ as x,b as I,a as J,h as $,j as ee,k as K,y as G}from"./index-7c0e54a6.js";/* empty css */import{g as le,B as te}from"./Button-661a0701.js";/* empty css */import{B as ae}from"./BlockTitle-900cfd93.js";import"./Info-3b2d34d7.js";function U(a,e,t){const l=a.slice();return l[15]=e[t],l[17]=t,l}function ie(a){let e;return{c(){e=O(a[3])},m(t,l){B(t,e,l)},p(t,l){l&8&&P(e,t[3])},d(t){t&&R(e)}}}function A(a,e){let t,l,s,o,m=!1,h,b,i=e[15]+"",_,c,n,f,v;function r(){return e[13](e[15],e[17])}return n=Y(e[12][0]),{key:a,first:null,c(){t=S("label"),l=S("input"),h=q(),b=S("span"),_=O(i),c=q(),l.disabled=e[2],w(l,"type","radio"),w(l,"name",s="radio-"+e[6]),l.__value=o=e[15],l.value=l.__value,w(l,"class","svelte-1p9xokt"),w(b,"class","ml-2 svelte-1p9xokt"),w(t,"style",e[7]),w(t,"class","svelte-1p9xokt"),C(t,"disabled",e[2]),C(t,"selected",e[0]===e[15]),n.p(l),this.first=t},m(k,g){B(k,t,g),y(t,l),l.checked=l.__value===e[0],y(t,h),y(t,b),y(b,_),y(t,c),f||(v=[H(l,"change",e[11]),H(l,"input",r)],f=!0)},p(k,g){e=k,g&4&&(l.disabled=e[2]),g&64&&s!==(s="radio-"+e[6])&&w(l,"name",s),g&2&&o!==(o=e[15])&&(l.__value=o,l.value=l.__value,m=!0),(m||g&3)&&(l.checked=l.__value===e[0]),g&2&&i!==(i=e[15]+"")&&P(_,i),g&128&&w(t,"style",e[7]),g&4&&C(t,"disabled",e[2]),g&3&&C(t,"selected",e[0]===e[15])},d(k){k&&R(t),n.r(),f=!1,Z(v)}}}function ne(a){let e,t,l,s=[],o=new Map,m;e=new ae({props:{show_label:a[5],info:a[4],$$slots:{default:[ie]},$$scope:{ctx:a}}});let h=a[1];const b=i=>i[17];for(let i=0;i{t(9,o=!1)});const r=[[]];function k(){s=this.__value,t(0,s)}const g=(d,D)=>f("select",{value:d,index:D});return a.$$set=d=>{"value"in d&&t(0,s=d.value),"value_is_output"in d&&t(9,o=d.value_is_output),"style"in d&&t(10,m=d.style),"choices"in d&&t(1,h=d.choices),"disabled"in d&&t(2,b=d.disabled),"label"in d&&t(3,i=d.label),"info"in d&&t(4,_=d.info),"show_label"in d&&t(5,c=d.show_label),"elem_id"in d&&t(6,n=d.elem_id)},a.$$.update=()=>{a.$$.dirty&1&&v(),a.$$.dirty&1024&&t(7,{item_container:l}=le(m,["item_container"]),l)},[s,h,b,i,_,c,n,l,f,o,m,k,r,g]}class ue extends F{constructor(e){super(),L(this,e,se,ne,N,{value:0,value_is_output:9,style:10,choices:1,disabled:2,label:3,info:4,show_label:5,elem_id:6})}}function _e(a){let e,t,l,s,o,m;const h=[a[11]];let b={};for(let n=0;nJ(l,"value",i)),I.push(()=>J(l,"value_is_output",_)),l.$on("change",a[14]),l.$on("input",a[15]),l.$on("select",a[16]),{c(){j(e.$$.fragment),t=q(),j(l.$$.fragment)},m(n,f){E(e,n,f),B(n,t,f),E(l,n,f),m=!0},p(n,f){const v=f&2048?$(h,[ee(n[11])]):{};e.$set(v);const r={};f&4&&(r.label=n[2]),f&8&&(r.info=n[3]),f&16&&(r.elem_id=n[4]),f&512&&(r.show_label=n[9]),f&128&&(r.choices=n[7]),f&1024&&(r.style=n[10]),f&256&&(r.disabled=n[8]==="static"),!s&&f&1&&(s=!0,r.value=n[0],K(()=>s=!1)),!o&&f&2&&(o=!0,r.value_is_output=n[1],K(()=>o=!1)),l.$set(r)},i(n){m||(M(e.$$.fragment,n),M(l.$$.fragment,n),m=!0)},o(n){T(e.$$.fragment,n),T(l.$$.fragment,n),m=!1},d(n){z(e,n),n&&R(t),z(l,n)}}}function oe(a){let e,t;return e=new te({props:{visible:a[6],type:"fieldset",elem_id:a[4],elem_classes:a[5],disable:typeof a[10].container=="boolean"&&!a[10].container,$$slots:{default:[_e]},$$scope:{ctx:a}}}),{c(){j(e.$$.fragment)},m(l,s){E(e,l,s),t=!0},p(l,[s]){const o={};s&64&&(o.visible=l[6]),s&16&&(o.elem_id=l[4]),s&32&&(o.elem_classes=l[5]),s&1024&&(o.disable=typeof l[10].container=="boolean"&&!l[10].container),s&135071&&(o.$$scope={dirty:s,ctx:l}),e.$set(o)},i(l){t||(M(e.$$.fragment,l),t=!0)},o(l){T(e.$$.fragment,l),t=!1},d(l){z(e,l)}}}function fe(a,e,t){let{label:l="Radio"}=e,{info:s=void 0}=e,{elem_id:o=""}=e,{elem_classes:m=[]}=e,{visible:h=!0}=e,{value:b=null}=e,{value_is_output:i=!1}=e,{choices:_=[]}=e,{mode:c}=e,{show_label:n}=e,{style:f={}}=e,{loading_status:v}=e;function r(u){b=u,t(0,b)}function k(u){i=u,t(1,i)}function g(u){G.call(this,a,u)}function d(u){G.call(this,a,u)}function D(u){G.call(this,a,u)}return a.$$set=u=>{"label"in u&&t(2,l=u.label),"info"in u&&t(3,s=u.info),"elem_id"in u&&t(4,o=u.elem_id),"elem_classes"in u&&t(5,m=u.elem_classes),"visible"in u&&t(6,h=u.visible),"value"in u&&t(0,b=u.value),"value_is_output"in u&&t(1,i=u.value_is_output),"choices"in u&&t(7,_=u.choices),"mode"in u&&t(8,c=u.mode),"show_label"in u&&t(9,n=u.show_label),"style"in u&&t(10,f=u.style),"loading_status"in u&&t(11,v=u.loading_status)},[b,i,l,s,o,m,h,_,c,n,f,v,r,k,g,d,D]}class ce extends F{constructor(e){super(),L(this,e,fe,oe,N,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,choices:7,mode:8,show_label:9,style:10,loading_status:11})}}const ve=ce,ke=["static","dynamic"],we=a=>({type:{payload:"string"},description:{payload:"selected choice"},example_data:a.choices.length>1?a.choices[0]:""});export{ve as Component,we as document,ke as modes}; -//# sourceMappingURL=index-2ec4a94d.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-b4c4dba3.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-b4c4dba3.js deleted file mode 100644 index f690603b2400ef7bf198e5a21f2ca3781778e4ef..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-b4c4dba3.js +++ /dev/null @@ -1,2 +0,0 @@ -import{$ as s}from"./index-7c0e54a6.js";const o=["static"];export{s as Component,o as modes}; -//# sourceMappingURL=index-b4c4dba3.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-88521967.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-88521967.js deleted file mode 100644 index a838066b5d3639966574bf9b63d931b4fb70183f..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-88521967.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as ee,i as le,s as te,aa as ne,H as D,f as pe,g as A,J as M,p,l as J,t as z,o as K,q as C,r as ye,e as I,m as L,n as N,T as ze,b as V,G as j,C as b,M as B,E as y,ai as Ge,N as ie,L as H,D as F,a0 as Be,I as se,K as oe,x as je,$ as Ae,h as Ce,j as De,y as Ee}from"./index-8c3da1d9.js";import{g as Ie,B as Le}from"./Button-62634b34.js";import{B as Ne}from"./BlockLabel-98ef75ee.js";import{E as qe}from"./Empty-5d52e655.js";import{n as O}from"./ModifyUpload.svelte_svelte_type_style_lang-ba6baa96.js";import{M as He}from"./ModifyUpload-00319b5e.js";/* empty css */import{I as ae}from"./Image-4b4cd6af.js";function P(l,t,e){const n=l.slice();return n[31]=t[e][0],n[32]=t[e][1],n[34]=e,n}function Q(l,t,e){const n=l.slice();return n[31]=t[e],n[35]=t,n[34]=e,n}function W(l){let t,e;return t=new Ne({props:{show_label:l[0],Icon:ae,label:l[1]||"Gallery",disable:typeof l[3].container=="boolean"&&!l[3].container}}),{c(){I(t.$$.fragment)},m(n,i){L(t,n,i),e=!0},p(n,i){const a={};i[0]&1&&(a.show_label=n[0]),i[0]&2&&(a.label=n[1]||"Gallery"),i[0]&8&&(a.disable=typeof n[3].container=="boolean"&&!n[3].container),t.$set(a)},i(n){e||(p(t.$$.fragment,n),e=!0)},o(n){z(t.$$.fragment,n),e=!1},d(n){N(t,n)}}}function Me(l){let t,e,n,i,a,g=l[4]!==null&&X(l),f=l[7],s=[];for(let o=0;ol[26].call(e)),B(e,"fixed-height",!l[3].height||l[3].height=="auto")},m(o,c){g&&g.m(o,c),A(o,t,c),A(o,e,c),y(e,n);for(let m=0;m{g=null}),K()),c[0]&2192){f=o[7];let m;for(m=0;ml[22](t,f),m=()=>l[22](null,f);function u(){return l[23](l[34])}return{c(){t=j("button"),e=j("img"),g=D(),H(e.src,n=l[31][0].data)||b(e,"src",n),b(e,"title",i=l[31][1]||null),b(e,"alt",a=l[31][1]||null),b(e,"class","svelte-g4rw9"),b(t,"class","thumbnail-item thumbnail-small svelte-g4rw9"),B(t,"selected",l[4]===l[34])},m(d,_){A(d,t,_),y(t,e),y(t,g),c(),s||(o=M(t,"click",u),s=!0)},p(d,_){l=d,_[0]&128&&!H(e.src,n=l[31][0].data)&&b(e,"src",n),_[0]&128&&i!==(i=l[31][1]||null)&&b(e,"title",i),_[0]&128&&a!==(a=l[31][1]||null)&&b(e,"alt",a),f!==l[34]&&(m(),f=l[34],c()),_[0]&16&&B(t,"selected",l[4]===l[34])},d(d){d&&C(t),m(),s=!1,o()}}}function $(l){let t,e=l[32]+"",n;return{c(){t=j("div"),n=se(e),b(t,"class","caption-label svelte-g4rw9")},m(i,a){A(i,t,a),y(t,n)},p(i,a){a[0]&128&&e!==(e=i[32]+"")&&oe(n,e)},d(i){i&&C(t)}}}function x(l){let t,e,n,i,a,g,f,s,o=l[32]&&$(l);function c(){return l[25](l[34])}return{c(){t=j("button"),e=j("img"),a=D(),o&&o.c(),g=D(),b(e,"alt",n=l[32]||""),H(e.src,i=typeof l[31]=="string"?l[31]:l[31].data)||b(e,"src",i),b(e,"class","svelte-g4rw9"),b(t,"class","thumbnail-item thumbnail-lg svelte-g4rw9"),B(t,"selected",l[4]===l[34])},m(m,u){A(m,t,u),y(t,e),y(t,a),o&&o.m(t,null),y(t,g),f||(s=M(t,"click",c),f=!0)},p(m,u){l=m,u[0]&128&&n!==(n=l[32]||"")&&b(e,"alt",n),u[0]&128&&!H(e.src,i=typeof l[31]=="string"?l[31]:l[31].data)&&b(e,"src",i),l[32]?o?o.p(l,u):(o=$(l),o.c(),o.m(t,g)):o&&(o.d(1),o=null),u[0]&16&&B(t,"selected",l[4]===l[34])},d(m){m&&C(t),o&&o.d(),f=!1,s()}}}function Se(l){let t,e;return t=new ae({}),{c(){I(t.$$.fragment)},m(n,i){L(t,n,i),e=!0},i(n){e||(p(t.$$.fragment,n),e=!0)},o(n){z(t.$$.fragment,n),e=!1},d(n){N(t,n)}}}function Te(l){let t,e,n,i,a,g,f;ne(l[19]);let s=l[0]&&W(l);const o=[Re,Me],c=[];function m(u,d){return u[2]===null||u[7]===null||u[7].length===0?0:1}return e=m(l),n=c[e]=o[e](l),{c(){s&&s.c(),t=D(),n.c(),i=pe()},m(u,d){s&&s.m(u,d),A(u,t,d),c[e].m(u,d),A(u,i,d),a=!0,g||(f=M(window,"resize",l[19]),g=!0)},p(u,d){u[0]?s?(s.p(u,d),d[0]&1&&p(s,1)):(s=W(u),s.c(),p(s,1),s.m(t.parentNode,t)):s&&(J(),z(s,1,1,()=>{s=null}),K());let _=e;e=m(u),e===_?c[e].p(u,d):(J(),z(c[_],1,1,()=>{c[_]=null}),K(),n=c[e],n?n.p(u,d):(n=c[e]=o[e](u),n.c()),p(n,1),n.m(i.parentNode,i))},i(u){a||(p(s),p(n),a=!0)},o(u){z(s),z(n),a=!1},d(u){s&&s.d(u),u&&C(t),c[e].d(u),u&&C(i),g=!1,f()}}}function Je(l,t,e){let n,i,a,g,f,{show_label:s=!0}=t,{label:o}=t,{root:c=""}=t,{root_url:m=null}=t,{value:u=null}=t,{style:d={grid_cols:[2],object_fit:"cover",height:"auto"}}=t;const _=ye();let G=!0,w=u,r=null,v=null;function k(h){switch(h.code){case"Escape":h.preventDefault(),e(4,r=null);break;case"ArrowLeft":h.preventDefault(),e(4,r=i);break;case"ArrowRight":h.preventDefault(),e(4,r=a);break}}let E=[],q;async function re(h){if(typeof h!="number")return;await ze(),E[h].focus();const{left:T,width:we}=q.getBoundingClientRect(),{left:ke,width:ve}=E[h].getBoundingClientRect(),U=ke-T+ve/2-we/2+q.scrollLeft;q.scrollTo({left:U<0?0:U,behavior:"smooth"})}function fe(h){return e(10,f=Ie(h,["grid_cols","grid_rows","object_fit"]).styles),f+` height: ${h.height}`}let R=0,S=0;function ue(){e(6,S=window.innerHeight)}const _e=()=>e(4,r=null),ce=()=>e(4,r=a);function me(h,T){V[h?"unshift":"push"](()=>{E[T]=h,e(8,E)})}const ge=h=>e(4,r=h);function he(h){V[h?"unshift":"push"](()=>{q=h,e(9,q)})}const be=h=>e(4,r=g?h:r);function de(){R=this.clientHeight,e(5,R)}return l.$$set=h=>{"show_label"in h&&e(0,s=h.show_label),"label"in h&&e(1,o=h.label),"root"in h&&e(14,c=h.root),"root_url"in h&&e(15,m=h.root_url),"value"in h&&e(2,u=h.value),"style"in h&&e(3,d=h.style)},l.$$.update=()=>{l.$$.dirty[0]&65540&&e(16,G=u==null||u.length==0?!0:G),l.$$.dirty[0]&49156&&e(7,n=u===null?null:u.map(h=>Array.isArray(h)?[O(h[0],c,m),h[1]]:[O(h,c,m),null])),l.$$.dirty[0]&196636&&w!==u&&(G?(e(4,r=d.preview&&u?.length?0:null),e(16,G=!1)):e(4,r=r!==null&&u!==null&&r=R),l.$$.dirty[0]&8&&e(10,f=fe(d))},[s,o,u,d,r,R,S,n,E,q,f,g,a,k,c,m,G,w,v,ue,_e,ce,me,ge,he,be,de]}class Ke extends ee{constructor(t){super(),le(this,t,Je,Te,te,{show_label:0,label:1,root:14,root_url:15,value:2,style:3},null,[-1,-1])}}function Ue(l){let t,e,n,i;const a=[l[0]];let g={};for(let f=0;f{"loading_status"in _&&e(0,n=_.loading_status),"show_label"in _&&e(1,i=_.show_label),"label"in _&&e(2,a=_.label),"root"in _&&e(3,g=_.root),"root_url"in _&&e(4,f=_.root_url),"elem_id"in _&&e(5,s=_.elem_id),"elem_classes"in _&&e(6,o=_.elem_classes),"visible"in _&&e(7,c=_.visible),"value"in _&&e(8,m=_.value),"style"in _&&e(9,u=_.style)},[n,i,a,g,f,s,o,c,m,u,d]}class Oe extends ee{constructor(t){super(),le(this,t,Fe,Ve,te,{loading_status:0,show_label:1,label:2,root:3,root_url:4,elem_id:5,elem_classes:6,visible:7,value:8,style:9})}}const ll=Oe,tl=["static"],nl=l=>({type:{payload:"Array<{ name: string } | [{ name: string }, string]>"},description:{payload:"list of objects, with filename and optional caption,"}});export{ll as Component,nl as document,tl as modes}; -//# sourceMappingURL=index-88521967.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/token.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/token.py deleted file mode 100644 index 7a41a7843ab6c3019d1385bdc245064d202a70e5..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/token.py +++ /dev/null @@ -1,180 +0,0 @@ -from __future__ import annotations - -from collections.abc import Callable, MutableMapping -import dataclasses as dc -from typing import Any -import warnings - -from markdown_it._compat import DATACLASS_KWARGS - - -def convert_attrs(value: Any) -> Any: - """Convert Token.attrs set as ``None`` or ``[[key, value], ...]`` to a dict. - - This improves compatibility with upstream markdown-it. - """ - if not value: - return {} - if isinstance(value, list): - return dict(value) - return value - - -@dc.dataclass(**DATACLASS_KWARGS) -class Token: - type: str - """Type of the token (string, e.g. "paragraph_open")""" - - tag: str - """HTML tag name, e.g. 'p'""" - - nesting: int - """Level change (number in {-1, 0, 1} set), where: - - `1` means the tag is opening - - `0` means the tag is self-closing - - `-1` means the tag is closing - """ - - attrs: dict[str, str | int | float] = dc.field(default_factory=dict) - """HTML attributes. - Note this differs from the upstream "list of lists" format, - although than an instance can still be initialised with this format. - """ - - map: list[int] | None = None - """Source map info. Format: `[ line_begin, line_end ]`""" - - level: int = 0 - """Nesting level, the same as `state.level`""" - - children: list[Token] | None = None - """Array of child nodes (inline and img tokens).""" - - content: str = "" - """Inner content, in the case of a self-closing tag (code, html, fence, etc.),""" - - markup: str = "" - """'*' or '_' for emphasis, fence string for fence, etc.""" - - info: str = "" - """Additional information: - - Info string for "fence" tokens - - The value "auto" for autolink "link_open" and "link_close" tokens - - The string value of the item marker for ordered-list "list_item_open" tokens - """ - - meta: dict = dc.field(default_factory=dict) - """A place for plugins to store any arbitrary data""" - - block: bool = False - """True for block-level tokens, false for inline tokens. - Used in renderer to calculate line breaks - """ - - hidden: bool = False - """If true, ignore this element when rendering. - Used for tight lists to hide paragraphs. - """ - - def __post_init__(self): - self.attrs = convert_attrs(self.attrs) - - def attrIndex(self, name: str) -> int: - warnings.warn( - "Token.attrIndex should not be used, since Token.attrs is a dictionary", - UserWarning, - ) - if name not in self.attrs: - return -1 - return list(self.attrs.keys()).index(name) - - def attrItems(self) -> list[tuple[str, str | int | float]]: - """Get (key, value) list of attrs.""" - return list(self.attrs.items()) - - def attrPush(self, attrData: tuple[str, str | int | float]) -> None: - """Add `[ name, value ]` attribute to list. Init attrs if necessary.""" - name, value = attrData - self.attrSet(name, value) - - def attrSet(self, name: str, value: str | int | float) -> None: - """Set `name` attribute to `value`. Override old value if exists.""" - self.attrs[name] = value - - def attrGet(self, name: str) -> None | str | int | float: - """Get the value of attribute `name`, or null if it does not exist.""" - return self.attrs.get(name, None) - - def attrJoin(self, name: str, value: str) -> None: - """Join value to existing attribute via space. - Or create new attribute if not exists. - Useful to operate with token classes. - """ - if name in self.attrs: - current = self.attrs[name] - if not isinstance(current, str): - raise TypeError( - f"existing attr 'name' is not a str: {self.attrs[name]}" - ) - self.attrs[name] = f"{current} {value}" - else: - self.attrs[name] = value - - def copy(self, **changes: Any) -> Token: - """Return a shallow copy of the instance.""" - return dc.replace(self, **changes) - - def as_dict( - self, - *, - children: bool = True, - as_upstream: bool = True, - meta_serializer: Callable[[dict], Any] | None = None, - filter: Callable[[str, Any], bool] | None = None, - dict_factory: Callable[..., MutableMapping[str, Any]] = dict, - ) -> MutableMapping[str, Any]: - """Return the token as a dictionary. - - :param children: Also convert children to dicts - :param as_upstream: Ensure the output dictionary is equal to that created by markdown-it - For example, attrs are converted to null or lists - :param meta_serializer: hook for serializing ``Token.meta`` - :param filter: A callable whose return code determines whether an - attribute or element is included (``True``) or dropped (``False``). - Is called with the (key, value) pair. - :param dict_factory: A callable to produce dictionaries from. - For example, to produce ordered dictionaries instead of normal Python - dictionaries, pass in ``collections.OrderedDict``. - - """ - mapping = dict_factory((f.name, getattr(self, f.name)) for f in dc.fields(self)) - if filter: - mapping = dict_factory((k, v) for k, v in mapping.items() if filter(k, v)) - if as_upstream and "attrs" in mapping: - mapping["attrs"] = ( - None - if not mapping["attrs"] - else [[k, v] for k, v in mapping["attrs"].items()] - ) - if meta_serializer and "meta" in mapping: - mapping["meta"] = meta_serializer(mapping["meta"]) - if children and mapping.get("children", None): - mapping["children"] = [ - child.as_dict( - children=children, - filter=filter, - dict_factory=dict_factory, - as_upstream=as_upstream, - meta_serializer=meta_serializer, - ) - for child in mapping["children"] - ] - return mapping - - @classmethod - def from_dict(cls, dct: MutableMapping[str, Any]) -> Token: - """Convert a dict to a Token.""" - token = cls(**dct) - if token.children: - token.children = [cls.from_dict(c) for c in token.children] # type: ignore[arg-type] - return token diff --git a/spaces/kzachos/PDF-chatbot/app.py b/spaces/kzachos/PDF-chatbot/app.py deleted file mode 100644 index 0929d526449e7b0038953d26febd427fb70fe05e..0000000000000000000000000000000000000000 --- a/spaces/kzachos/PDF-chatbot/app.py +++ /dev/null @@ -1,64 +0,0 @@ -# Import necessary packages -from llama_index import GPTSimpleVectorIndex, download_loader -from pathlib import Path -import os -import json -import gradio as gr -import tempfile - -def construct_index(file_path): - PDFReader = download_loader("PDFReader") - - loader = PDFReader() - documents = loader.load_data(file=Path(file_path)) - - # Construct a simple vector index - index = GPTSimpleVectorIndex.from_documents(documents) - - # Save your index to a index.json file - index.save_to_disk('index.json') - - return index - -def qabot(file, input_text): - # Check if index already exists - if not os.path.exists('index.json'): - # If index does not exist, create index from file - index = construct_index(file.name) - else: - # If index exists, load index from file - index = GPTSimpleVectorIndex.load_from_disk('index.json') - - # Query the index with the user's input text - response = index.query(input_text, response_mode="compact") - return response.response - -# Add input component for file upload -file_upload = gr.inputs.File(label="Upload PDF file") - -# Change the input components of the function to the file upload component and a text box for user input -iface = gr.Interface(fn=qabot, inputs=[file_upload, gr.inputs.Textbox(lines=7, label='Enter your query')], outputs="text", title="Custom-trained QA Application") - -# Add a separate interface to update the index -def update_index(file): - # Save the uploaded file to a temporary file - with tempfile.NamedTemporaryFile(delete=False) as temp_file: - temp_file.write(file.read()) - temp_file_path = temp_file.name - - # Construct the index from the temporary file - index = construct_index(temp_file_path) - - # Remove the temporary file - os.remove(temp_file_path) - - # Update the index file - index.save_to_disk('index.json') - - return "Index generated from uploaded file: {}".format(file.name) - -update_index_interface = gr.Interface(update_index, inputs=file_upload, outputs="text", title="Update Index") - -# Launch both interfaces -iface.launch() -update_index_interface.launch() \ No newline at end of file diff --git a/spaces/leafShen/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py b/spaces/leafShen/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py deleted file mode 100644 index 7df5be6fc260394ee9bbd0a7ae377e2ca657fe83..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py +++ /dev/null @@ -1,60 +0,0 @@ -import argparse -import os -from os import path as osp - -# from basicsr.utils.download_util import download_file_from_google_drive -import gdown - - -def download_pretrained_models(method, file_ids): - save_path_root = f'./weights/{method}' - os.makedirs(save_path_root, exist_ok=True) - - for file_name, file_id in file_ids.items(): - file_url = 'https://drive.google.com/uc?id='+file_id - save_path = osp.abspath(osp.join(save_path_root, file_name)) - if osp.exists(save_path): - user_response = input(f'{file_name} already exist. Do you want to cover it? Y/N\n') - if user_response.lower() == 'y': - print(f'Covering {file_name} to {save_path}') - gdown.download(file_url, save_path, quiet=False) - # download_file_from_google_drive(file_id, save_path) - elif user_response.lower() == 'n': - print(f'Skipping {file_name}') - else: - raise ValueError('Wrong input. Only accepts Y/N.') - else: - print(f'Downloading {file_name} to {save_path}') - gdown.download(file_url, save_path, quiet=False) - # download_file_from_google_drive(file_id, save_path) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - - parser.add_argument( - 'method', - type=str, - help=("Options: 'CodeFormer' 'facelib'. Set to 'all' to download all the models.")) - args = parser.parse_args() - - # file name: file id - # 'dlib': { - # 'mmod_human_face_detector-4cb19393.dat': '1qD-OqY8M6j4PWUP_FtqfwUPFPRMu6ubX', - # 'shape_predictor_5_face_landmarks-c4b1e980.dat': '1vF3WBUApw4662v9Pw6wke3uk1qxnmLdg', - # 'shape_predictor_68_face_landmarks-fbdc2cb8.dat': '1tJyIVdCHaU6IDMDx86BZCxLGZfsWB8yq' - # } - file_ids = { - 'CodeFormer': { - 'codeformer.pth': '1v_E_vZvP-dQPF55Kc5SRCjaKTQXDz-JB' - }, - 'facelib': { - 'yolov5l-face.pth': '131578zMA6B2x8VQHyHfa6GEPtulMCNzV', - 'parsing_parsenet.pth': '16pkohyZZ8ViHGBk3QtVqxLZKzdo466bK' - } - } - - if args.method == 'all': - for method in file_ids.keys(): - download_pretrained_models(method, file_ids[method]) - else: - download_pretrained_models(args.method, file_ids[args.method]) \ No newline at end of file diff --git a/spaces/leurez/moss/src/views/chat/hooks/useChat.ts b/spaces/leurez/moss/src/views/chat/hooks/useChat.ts deleted file mode 100644 index 1eb9fcb205370addd5c9b321086ad046dad5f0f2..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/views/chat/hooks/useChat.ts +++ /dev/null @@ -1,28 +0,0 @@ -import { useChatStore } from '@/store' - -export function useChat() { - const chatStore = useChatStore() - - const getChatByUuidAndIndex = (uuid: number, index: number) => { - return chatStore.getChatByUuidAndIndex(uuid, index) - } - - const addChat = (uuid: number, chat: Chat.Chat) => { - chatStore.addChatByUuid(uuid, chat) - } - - const updateChat = (uuid: number, index: number, chat: Chat.Chat) => { - chatStore.updateChatByUuid(uuid, index, chat) - } - - const updateChatSome = (uuid: number, index: number, chat: Partial) => { - chatStore.updateChatSomeByUuid(uuid, index, chat) - } - - return { - addChat, - updateChat, - updateChatSome, - getChatByUuidAndIndex, - } -} diff --git a/spaces/lewisliuX123/wechatllama2/bot/bot_factory.py b/spaces/lewisliuX123/wechatllama2/bot/bot_factory.py deleted file mode 100644 index dd590c7fee00925a224e3972de0e57d8955b2885..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatllama2/bot/bot_factory.py +++ /dev/null @@ -1,26 +0,0 @@ -""" -channel factory -""" - - -def create_bot(bot_type): - """ - create a channel instance - :param channel_type: channel type code - :return: channel instance - """ - if bot_type == 'baidu': - # Baidu Unit对话接口 - from bot.baidu.baidu_unit_bot import BaiduUnitBot - return BaiduUnitBot() - - elif bot_type == 'chatGPT': - # ChatGPT 网页端web接口 - from bot.chatgpt.chat_gpt_bot import ChatGPTBot - return ChatGPTBot() - - elif bot_type == 'openAI': - # OpenAI 官方对话模型API - from bot.openai.open_ai_bot import OpenAIBot - return OpenAIBot() - raise RuntimeError diff --git a/spaces/lewiswu1209/MockingBird/ppg2mel/utils/cnn_postnet.py b/spaces/lewiswu1209/MockingBird/ppg2mel/utils/cnn_postnet.py deleted file mode 100644 index 1980cdd8421838e48fc8a977731054beb5eb8cc6..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg2mel/utils/cnn_postnet.py +++ /dev/null @@ -1,52 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from .basic_layers import Linear, Conv1d - - -class Postnet(nn.Module): - """Postnet - - Five 1-d convolution with 512 channels and kernel size 5 - """ - def __init__(self, num_mels=80, - num_layers=5, - hidden_dim=512, - kernel_size=5): - super(Postnet, self).__init__() - self.convolutions = nn.ModuleList() - - self.convolutions.append( - nn.Sequential( - Conv1d( - num_mels, hidden_dim, - kernel_size=kernel_size, stride=1, - padding=int((kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hidden_dim))) - - for i in range(1, num_layers - 1): - self.convolutions.append( - nn.Sequential( - Conv1d( - hidden_dim, - hidden_dim, - kernel_size=kernel_size, stride=1, - padding=int((kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hidden_dim))) - - self.convolutions.append( - nn.Sequential( - Conv1d( - hidden_dim, num_mels, - kernel_size=kernel_size, stride=1, - padding=int((kernel_size - 1) / 2), - dilation=1, w_init_gain='linear'), - nn.BatchNorm1d(num_mels))) - - def forward(self, x): - # x: (B, num_mels, T_dec) - for i in range(len(self.convolutions) - 1): - x = F.dropout(torch.tanh(self.convolutions[i](x)), 0.5, self.training) - x = F.dropout(self.convolutions[-1](x), 0.5, self.training) - return x diff --git a/spaces/librarian-bots/huggingface-datasets-semantic-search/README.md b/spaces/librarian-bots/huggingface-datasets-semantic-search/README.md deleted file mode 100644 index 5f50a9ea465d2a12f3a6a6f694894465746cae41..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/huggingface-datasets-semantic-search/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Semantic Dataset Search -emoji: 🔎 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lighdow/anime-cute-tts/english.py b/spaces/lighdow/anime-cute-tts/english.py deleted file mode 100644 index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000 --- a/spaces/lighdow/anime-cute-tts/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/limingcv/AlignDet/finetune/finetune_detr_50e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0/detr_r50_8x2_50e_coco.py b/spaces/limingcv/AlignDet/finetune/finetune_detr_50e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0/detr_r50_8x2_50e_coco.py deleted file mode 100644 index 95a9a78225edb44a428bf08635b3b8e149d246d1..0000000000000000000000000000000000000000 --- a/spaces/limingcv/AlignDet/finetune/finetune_detr_50e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0/detr_r50_8x2_50e_coco.py +++ /dev/null @@ -1,281 +0,0 @@ -model = dict( - type='DETR', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(3, ), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - norm_eval=False, - style='pytorch', - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - bbox_head=dict( - type='DETRHead', - num_classes=80, - in_channels=2048, - transformer=dict( - type='Transformer', - encoder=dict( - type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=[ - dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1) - ], - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'ffn', 'norm'))), - decoder=dict( - type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'cross_attn', 'norm', - 'ffn', 'norm')))), - positional_encoding=dict( - type='SinePositionalEncoding', num_feats=128, normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0)), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.0), - reg_cost=dict(type='BBoxL1Cost', weight=5.0, box_format='xywh'), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100)) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='AutoAugment', - policies=[[{ - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }], - [{ - 'type': 'Resize', - 'img_scale': [(400, 1333), (500, 1333), (600, 1333)], - 'multiscale_mode': 'value', - 'keep_ratio': True - }, { - 'type': 'RandomCrop', - 'crop_type': 'absolute_range', - 'crop_size': (384, 600), - 'allow_negative_crop': True - }, { - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'override': - True, - 'keep_ratio': - True - }]]), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_train2017.json', - img_prefix='data/coco/train2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='AutoAugment', - policies=[[{ - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }], - [{ - 'type': 'Resize', - 'img_scale': [(400, 1333), (500, 1333), - (600, 1333)], - 'multiscale_mode': 'value', - 'keep_ratio': True - }, { - 'type': 'RandomCrop', - 'crop_type': 'absolute_range', - 'crop_size': (384, 600), - 'allow_negative_crop': True - }, { - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), - (544, 1333), (576, 1333), - (608, 1333), (640, 1333), - (672, 1333), (704, 1333), - (736, 1333), (768, 1333), - (800, 1333)], - 'multiscale_mode': - 'value', - 'override': - True, - 'keep_ratio': - True - }]]), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) - ]), - val=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ]), - test=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=1), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ])) -evaluation = dict( - interval=1, metric='bbox', save_best='auto', gpu_collect=True) -checkpoint_config = dict(interval=1) -log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) -custom_hooks = [ - dict(type='NumClassCheckHook'), - dict( - type='MMDetWandbHook', - init_kwargs=dict(project='I2B', group='finetune'), - interval=50, - num_eval_images=0, - log_checkpoint=False) -] -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = 'work_dirs/selfsup_detr_clusters-as-classes_add-contrastive-temp0.5-weight1.0/final_model.pth' -resume_from = None -workflow = [('train', 1)] -opencv_num_threads = 0 -mp_start_method = 'fork' -auto_scale_lr = dict(enable=False, base_batch_size=16) -custom_imports = None -norm_cfg = dict(type='SyncBN', requires_grad=True) -optimizer = dict( - type='AdamW', - lr=0.0001, - weight_decay=0.0001, - paramwise_cfg=dict( - custom_keys=dict( - backbone=dict(lr_mult=0.1, decay_mult=1.0), lr_mult=0.1))) -optimizer_config = dict(grad_clip=dict(max_norm=0.1, norm_type=2)) -lr_config = dict(policy='step', step=[40]) -runner = dict(type='EpochBasedRunner', max_epochs=50) -work_dir = 'work_dirs/finetune_detr_50e_coco_lr-mult-0.1_selfsup-clusters-as-classes_add-contrastive-temp0.5-weight1.0' -auto_resume = False -gpu_ids = range(0, 8) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/38 Dictionnaires Recueils Correspondance Crack !LINK!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/38 Dictionnaires Recueils Correspondance Crack !LINK!.md deleted file mode 100644 index 1d17e36fd2e2f86681599f8ee3084b0bb7828d83..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/38 Dictionnaires Recueils Correspondance Crack !LINK!.md +++ /dev/null @@ -1,12 +0,0 @@ -

      38 dictionnaires recueils correspondance crack


      DOWNLOAD ……… https://bytlly.com/2uGwFS



      - -38 dictionnaires recueils correspondance crack download crack keygen crack serial keygen crack keygen crack serial. -Trainer, crack, download, free. -DOWNLOAD Cracked Downloads. -Includes Trainers, Cracks, Serials, Keygens, Patches, Cheats, Patches, Downloads. -Download crack, patch, serial, keygen for MSTS Train Simulator. -The download link for MSTS Train Simulator is a link to a special site. -DOWNLOAD C 8a78ff9644
      -
      -
      -

      diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent.md deleted file mode 100644 index ca03ae3c4862dcb640aab32563f3de4e24a541b7..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent.md +++ /dev/null @@ -1,90 +0,0 @@ - -

      Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent - A Review

      - -

      Adi Kapyare Kootamani is a 2016 Malayalam comedy thriller film directed by John Varghese and starring Dhyan Sreenivasan, Namitha Pramod, Aju Varghese, Neeraj Madhav and Mukesh. The film revolves around a group of college students who share a room in a men's hostel and their hilarious escapades when a girl enters their room by mistake.

      - -

      If you are looking for a fun and entertaining movie to watch with your friends or family, you might want to check out Adi Kapyare Kootamani. The film is full of witty dialogues, comic situations, and hilarious performances by the lead actors. The film also has a suspenseful plot that keeps you guessing till the end.

      -

      Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent


      Download Ziphttps://bytlly.com/2uGyeo



      - -

      One of the best ways to watch Adi Kapyare Kootamani is to download it from a torrent site. You can find the best quality torrent for this movie on SolidTorrents, which offers Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent. This torrent has a file size of 1.07 GB and has excellent video and audio quality. You can also use a magnet link to download this torrent without any hassle.

      - -

      Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent is one of the most popular torrents for Malayalam movies on SolidTorrents. It has received many positive reviews from users who have downloaded and watched it. Some of the comments are:

      - -
        -
      • "Awesome movie...loved it...thanks for the upload"
      • -
      • "Super comedy thriller...must watch"
      • -
      • "Very good quality...thank you donsr"
      • -
      • "One of the best Malayalam movies of 2016"
      • -
      • "Hilarious and thrilling...highly recommended"
      • -
      - -

      If you are a fan of Malayalam cinema or comedy thrillers, you should not miss Adi Kapyare Kootamani. Download Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent from SolidTorrents today and enjoy this amazing movie.

      -

      What is Adi Kapyare Kootamani about?

      - -

      Adi Kapyare Kootamani is a comedy thriller that follows the adventures of Adi, Bhanu, and their friends who live in a men's hostel run by Father Alfred Kattuvilayil. One night, a girl named Adhishta Lakshmi sneaks into their room to escape from her abusive uncle. She hides in their bathroom and asks them to help her. However, they soon realize that they are in trouble as the hostel rules forbid any girl from entering the premises. They also have to deal with a gang of goons who are after Adhishta Lakshmi and a suspicious warden who is always on their tail.

      - -

      The film is a laugh riot that showcases the chemistry and camaraderie of the young actors. The film also has some thrilling moments and twists that keep the audience engaged. The film is a remake of the 2015 Tamil film Adida Melam, which was also directed by John Varghese.

      - -

      Why should you download Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent?

      - -

      There are many reasons why you should download Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent from SolidTorrents. Here are some of them:

      - -
        -
      • You can enjoy the movie in high definition with clear sound and subtitles.
      • -
      • You can save your time and money by avoiding going to the theaters or buying DVDs.
      • -
      • You can watch the movie at your own convenience and comfort.
      • -
      • You can share the movie with your friends and family who love Malayalam cinema.
      • -
      • You can support the filmmakers and actors by downloading their work legally.
      • -
      - -

      Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent is a safe and reliable torrent that you can download without any worries. You can also use a VPN service to protect your privacy and security while downloading torrents.

      -

      - -

      How to download Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent?

      - -

      Downloading Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent from SolidTorrents is very easy and simple. You just need to follow these steps:

      - -
        -
      1. Go to SolidTorrents.to and search for Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent.
      2. -
      3. Select the torrent from the list of results and click on the download button.
      4. -
      5. Choose between torrent download or magnet download depending on your preference.
      6. -
      7. Open the torrent file or magnet link with your preferred torrent client such as uTorrent, BitTorrent, or qBittorrent.
      8. -
      9. Wait for the download to complete and enjoy the movie.
      10. -
      - -

      Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent is one of the best torrents for Malayalam movies that you can find on SolidTorrents. It has a high rating and positive feedback from users who have downloaded it. It is also fast and easy to download with no ads or malware.

      - -

      If you are looking for a fun and entertaining movie to watch with your friends or family, you should not miss Adi Kapyare Kootamani. Download Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent from SolidTorrents today and enjoy this amazing movie.

      -

      Who are the cast and crew of Adi Kapyare Kootamani?

      - -

      Adi Kapyare Kootamani is a film that showcases the talent and charisma of some of the best actors and filmmakers in Malayalam cinema. The film is directed by John Varghese, who made his debut with this film. He also co-wrote the screenplay with Abhilash S Nair. The film is produced by Sandra Thomas and Vijay Babu under the banner of Friday Film House.

      - -

      The film features a stellar cast of young and veteran actors who deliver brilliant performances. The lead roles are played by Dhyan Sreenivasan, who plays Adi, a college student who loves music and photography; Namitha Pramod, who plays Adhishta Lakshmi, a girl who runs away from her uncle and seeks refuge in Adi's room; Aju Varghese, who plays Bhanu Prasad, Adi's roommate and best friend who is obsessed with martial arts; Neeraj Madhav, who plays Remo, another roommate of Adi who is a computer geek and hacker; and Mukesh, who plays Father Alfred Kattuvilayil, the strict and funny warden of the hostel.

      - -

      The film also has supporting roles played by Vineeth Mohan, Bijukuttan, Kottayam Pradeep, Dharmajan Bolgatty, Sreejith Ravi, Ramesh Pisharody, Jaffer Idukki, Lena, Ponnamma Babu, and Sneha Unnikrishnan. The film has music composed by Shaan Rahman and cinematography by Ajay David Kachappilly.

      - -

      What are the reviews and ratings of Adi Kapyare Kootamani?

      - -

      Adi Kapyare Kootamani is a film that has received positive reviews and ratings from critics and audiences alike. The film has been praised for its comedy, thriller, romance, and music elements. The film has also been appreciated for its fresh and original story, witty dialogues, and engaging direction.

      - -

      The film has a rating of 6.7 out of 10 on IMDb based on 1,028 user ratings. The film also has a rating of 3 out of 5 on Times of India based on 4 critic reviews. The film also has a rating of 3.5 out of 5 on Sify based on 1 critic review. The film also has a rating of 3 out of 5 on Filmibeat based on 1 critic review.

      - -

      Some of the positive comments from the critics are:

      - -
        -
      • "Adi Kapyare Kootamani is a fun ride that will keep you entertained throughout." - Times of India
      • -
      • "Adi Kapyare Kootamani is a laugh riot that will tickle your funny bones." - Sify
      • -
      • "Adi Kapyare Kootamani is a comedy thriller that will keep you hooked till the end." - Filmibeat
      • -
      - -

      Some of the positive comments from the users are:

      - -
        -
      • "One of the best comedy movies in Malayalam...very enjoyable...must watch..." - IMDb user
      • -
      • "A hilarious movie with a good plot...the actors did a great job...loved it..." - IMDb user
      • -
      • "A superb movie with a lot of twists and turns...very entertaining...a complete entertainer..." - IMDb user
      • -
      - -

      If you are looking for a fun and entertaining movie to watch with your friends or family, you should not miss Adi Kapyare Kootamani. Download Adi Kapyare Kootamani (2016) Malayalam DVDRip X264 AAC 5.1 E-Subs-MBRHDRG Torrent from SolidTorrents today and enjoy this amazing movie.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Free Download BIM 360 Docs 2019 Crack Keygen.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Free Download BIM 360 Docs 2019 Crack Keygen.md deleted file mode 100644 index 3490e5d4e4933e8862fd1d22789bd67ed7da0b82..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Free Download BIM 360 Docs 2019 Crack Keygen.md +++ /dev/null @@ -1,11 +0,0 @@ -
      -

      winless with crack keygen free download [updated-2022]. how to crack winless, winless. skip to main content. winless with serial key free download [updated-2022]. cracked winless with keygen is a handy and reliable application.

      -

      Free Download BIM 360 Docs 2019 Crack Keygen


      Download Ziphttps://bytlly.com/2uGwdf



      -

      добавлено: 10 апр.. the main changes are the addition of two new apis: device information api and dns resolution api. keygen 2019 for bim 360 design free download for school and home industry, industry, architecture industry etc.

      -

      azureus media client 4.5.4 crack free download (windows) azureus media client 4.4 crack pro.. vipre internet security crack 2021 keygen latest free download vipre internet security crack for windows full version 64 bit. keygen 2020 for touchlink+ free download.

      -

      download: password: as long as you would like to use autodesk bim 360 viewer, you have to crack the software.. x-force 2019 keygen. how to install software (notepad) or activation code (by clicking on it). скачать бим 360 документы 2019 года с компьютера | читать бим 360 документы. . autodesk bim 360 viewer crack and key full download. torrent. you must use the one of the above two methods to crack the keygen (with key).

      -

      2519 key (exe) full version + keygen cd 64 bit: easytousecrack and serial key [just for you].. you can check this by looking at the serial number for copies of this file to download free software can lead you to a costly audit.

      -

      -

      bim360/big step/idesign/archicad/autodesk bim 360 viewer is the fourth product in the autodesk bim 360 sdk.. 02 55 (4.0 ) autocad 2013.x-force 2018 keygen. . key: 499 (optional). microsoft excel keygen (office product key): ms excel adds security to your work, especially when you work with sensitive data. 2 kestrel. autodesk bim 360 crack is the fourth product in the autodesk bim 360 sdk. millions of users across the globe use microsoft excel for their day-to-day work, and it is the most popular.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Justin Bieber Believe Song Mp3 Free !!HOT!! Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Justin Bieber Believe Song Mp3 Free !!HOT!! Download.md deleted file mode 100644 index c7ce397443f4c72ac032238c4c6c7f573d0a7e22..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Justin Bieber Believe Song Mp3 Free !!HOT!! Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Justin Bieber Believe Song Mp3 Free Download


      Downloadhttps://bytlly.com/2uGyiU



      - -Justin Bieber – Believe | Believe by Justin Bieber mp3 download | Justin ... brand new hit song titled “Believe” you can stream and download it here. ... any of the pictures or songs displayed on this site unless stated otherwise. 1fdad05405
      -
      -
      -

      diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Men Of War 1.02.0 Trainer !EXCLUSIVE!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Men Of War 1.02.0 Trainer !EXCLUSIVE!.md deleted file mode 100644 index a40faed3299f096d6ae7e938d2dbfb311d67e323..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Men Of War 1.02.0 Trainer !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Men Of War 1.02.0 Trainer


      Download ✫✫✫ https://bytlly.com/2uGwC4



      -
      -Men Of War 1.02.0 Trainer >> http://fancli.com/1bijir f40e7c8ce2 Men of War 1.02.0 Трейнер. ... Чит-моды для серии игр Men Of War или В ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/lindeberg/whisper-webui/README.md b/spaces/lindeberg/whisper-webui/README.md deleted file mode 100644 index 9ae46c2b5cfcc4f9b28eb6f075b4f950cf006334..0000000000000000000000000000000000000000 --- a/spaces/lindeberg/whisper-webui/README.md +++ /dev/null @@ -1,150 +0,0 @@ ---- -title: Whisper Webui -emoji: ⚡ -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: aadnk/whisper-webui ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -# Running Locally - -To run this program locally, first install Python 3.9+ and Git. Then install Pytorch 10.1+ and all the other dependencies: -``` -pip install -r requirements.txt -``` - -You can find detailed instructions for how to install this on Windows 10/11 [here (PDF)](docs/windows/install_win10_win11.pdf). - -Finally, run the full version (no audio length restrictions) of the app with parallel CPU/GPU enabled: -``` -python app.py --input_audio_max_duration -1 --server_name 127.0.0.1 --auto_parallel True -``` - -You can also run the CLI interface, which is similar to Whisper's own CLI but also supports the following additional arguments: -``` -python cli.py \ -[--vad {none,silero-vad,silero-vad-skip-gaps,silero-vad-expand-into-gaps,periodic-vad}] \ -[--vad_merge_window VAD_MERGE_WINDOW] \ -[--vad_max_merge_size VAD_MAX_MERGE_SIZE] \ -[--vad_padding VAD_PADDING] \ -[--vad_prompt_window VAD_PROMPT_WINDOW] -[--vad_cpu_cores NUMBER_OF_CORES] -[--vad_parallel_devices COMMA_DELIMITED_DEVICES] -[--auto_parallel BOOLEAN] -``` -In addition, you may also use URL's in addition to file paths as input. -``` -python cli.py --model large --vad silero-vad --language Japanese "https://www.youtube.com/watch?v=4cICErqqRSM" -``` - -## Google Colab - -You can also run this Web UI directly on [Google Colab](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing), if you haven't got a GPU powerful enough to run the larger models. - -See the [colab documentation](docs/colab.md) for more information. - -## Parallel Execution - -You can also run both the Web-UI or the CLI on multiple GPUs in parallel, using the `vad_parallel_devices` option. This takes a comma-delimited list of -device IDs (0, 1, etc.) that Whisper should be distributed to and run on concurrently: -``` -python cli.py --model large --vad silero-vad --language Japanese \ ---vad_parallel_devices 0,1 "https://www.youtube.com/watch?v=4cICErqqRSM" -``` - -Note that this requires a VAD to function properly, otherwise only the first GPU will be used. Though you could use `period-vad` to avoid taking the hit -of running Silero-Vad, at a slight cost to accuracy. - -This is achieved by creating N child processes (where N is the number of selected devices), where Whisper is run concurrently. In `app.py`, you can also -set the `vad_process_timeout` option. This configures the number of seconds until a process is killed due to inactivity, freeing RAM and video memory. -The default value is 30 minutes. - -``` -python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600 -``` - -To execute the Silero VAD itself in parallel, use the `vad_cpu_cores` option: -``` -python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600 --vad_cpu_cores 4 -``` - -You may also use `vad_process_timeout` with a single device (`--vad_parallel_devices 0`), if you prefer to always free video memory after a period of time. - -### Auto Parallel - -You can also set `auto_parallel` to `True`. This will set `vad_parallel_devices` to use all the GPU devices on the system, and `vad_cpu_cores` to be equal to the number of -cores (up to 8): -``` -python app.py --input_audio_max_duration -1 --auto_parallel True -``` - -### Multiple Files - -You can upload multiple files either through the "Upload files" option, or as a playlist on YouTube. -Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section. -When more than one file is processed, the UI will also generate a "All_Output" zip file containing all the text output files. - -# Docker - -To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU. -Then either use the GitLab hosted container below, or check out this repository and build an image: -``` -sudo docker build -t whisper-webui:1 . -``` - -You can then start the WebUI with GPU support like so: -``` -sudo docker run -d --gpus=all -p 7860:7860 whisper-webui:1 -``` - -Leave out "--gpus=all" if you don't have access to a GPU with enough memory, and are fine with running it on the CPU only: -``` -sudo docker run -d -p 7860:7860 whisper-webui:1 -``` - -# GitLab Docker Registry - -This Docker container is also hosted on GitLab: - -``` -sudo docker run -d --gpus=all -p 7860:7860 registry.gitlab.com/aadnk/whisper-webui:latest -``` - -## Custom Arguments - -You can also pass custom arguments to `app.py` in the Docker container, for instance to be able to use all the GPUs in parallel: -``` -sudo docker run -d --gpus all -p 7860:7860 \ ---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \ ---restart=on-failure:15 registry.gitlab.com/aadnk/whisper-webui:latest \ -app.py --input_audio_max_duration -1 --server_name 0.0.0.0 --auto_parallel True \ ---default_vad silero-vad --default_model_name large -``` - -You can also call `cli.py` the same way: -``` -sudo docker run --gpus all \ ---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \ ---mount type=bind,source=${PWD},target=/app/data \ -registry.gitlab.com/aadnk/whisper-webui:latest \ -cli.py --model large --auto_parallel True --vad silero-vad \ ---output_dir /app/data /app/data/YOUR-FILE-HERE.mp4 -``` - -## Caching - -Note that the models themselves are currently not included in the Docker images, and will be downloaded on the demand. -To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally) -prepopulate the directory with the different Whisper models. -``` -sudo docker run -d --gpus=all -p 7860:7860 \ ---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \ -registry.gitlab.com/aadnk/whisper-webui:latest -``` \ No newline at end of file diff --git a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py b/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py deleted file mode 100644 index f490c4bbd598a35de43d36ceafcbd769e7ff21bf..0000000000000000000000000000000000000000 --- a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_B_384_22k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 0.5 -dn_label_coef = 1.0 -dn_bbox_coef = 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" -use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/ltgoslo/ssa-perin/model/module/encoder.py b/spaces/ltgoslo/ssa-perin/model/module/encoder.py deleted file mode 100644 index 1cf352886a616a1f3a4b150609ff726cb9ea6c09..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/model/module/encoder.py +++ /dev/null @@ -1,95 +0,0 @@ -#!/usr/bin/env python3 -# coding=utf-8 - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from transformers import AutoModel -from model.module.char_embedding import CharEmbedding - - -class WordDropout(nn.Dropout): - def forward(self, input_tensor): - if self.p == 0: - return input_tensor - - ones = input_tensor.new_ones(input_tensor.shape[:-1]) - dropout_mask = torch.nn.functional.dropout(ones, self.p, self.training, inplace=False) - - return dropout_mask.unsqueeze(-1) * input_tensor - - -class Encoder(nn.Module): - def __init__(self, args, dataset): - super(Encoder, self).__init__() - - self.dim = args.hidden_size - self.n_layers = args.n_encoder_layers - self.width_factor = args.query_length - - self.bert = AutoModel.from_pretrained(args.encoder, add_pooling_layer=False) - # self.bert._set_gradient_checkpointing(self.bert.encoder, value=True) - if args.encoder_freeze_embedding: - self.bert.embeddings.requires_grad_(False) - self.bert.embeddings.LayerNorm.requires_grad_(True) - - if args.freeze_bert: - self.bert.requires_grad_(False) - - self.use_char_embedding = args.char_embedding - if self.use_char_embedding: - self.form_char_embedding = CharEmbedding(dataset.char_form_vocab_size, args.char_embedding_size, self.dim) - self.word_dropout = WordDropout(args.dropout_word) - - self.post_layer_norm = nn.LayerNorm(self.dim) - self.subword_attention = nn.Linear(self.dim, 1) - - if self.width_factor > 1: - self.query_generator = nn.Linear(self.dim, self.dim * self.width_factor) - else: - self.query_generator = nn.Identity() - - self.encoded_layer_norm = nn.LayerNorm(self.dim) - self.scores = nn.Parameter(torch.zeros(self.n_layers, 1, 1, 1), requires_grad=True) - - def forward(self, bert_input, form_chars, to_scatter, n_words): - tokens, mask = bert_input - batch_size = tokens.size(0) - - encoded = self.bert(tokens, attention_mask=mask, output_hidden_states=True).hidden_states[1:] - encoded = torch.stack(encoded, dim=0) # shape: (12, B, T, H) - encoded = self.encoded_layer_norm(encoded) - - if self.training: - time_len = encoded.size(2) - scores = self.scores.expand(-1, batch_size, time_len, -1) - dropout = torch.empty(self.n_layers, batch_size, 1, 1, dtype=torch.bool, device=self.scores.device) - dropout.bernoulli_(0.1) - scores = scores.masked_fill(dropout, float("-inf")) - else: - scores = self.scores - - scores = F.softmax(scores, dim=0) - encoded = (scores * encoded).sum(0) # shape: (B, T, H) - encoded = encoded.masked_fill(mask.unsqueeze(-1) == 0, 0.0) # shape: (B, T, H) - - subword_attention = self.subword_attention(encoded) / math.sqrt(self.dim) # shape: (B, T, 1) - subword_attention = subword_attention.expand_as(to_scatter) # shape: (B, T_subword, T_word) - subword_attention = subword_attention.masked_fill(to_scatter == 0, float("-inf")) # shape: (B, T_subword, T_word) - subword_attention = torch.softmax(subword_attention, dim=1) # shape: (B, T_subword, T_word) - subword_attention = subword_attention.masked_fill(to_scatter.sum(1, keepdim=True) == 0, value=0.0) # shape: (B, T_subword, T_word) - - encoder_output = torch.einsum("bsd,bsw->bwd", encoded, subword_attention) - encoder_output = self.post_layer_norm(encoder_output) - - if self.use_char_embedding: - form_char_embedding = self.form_char_embedding(form_chars[0], form_chars[1], form_chars[2]) - encoder_output = self.word_dropout(encoder_output) + form_char_embedding - - decoder_input = self.query_generator(encoder_output) - decoder_input = decoder_input.view(batch_size, -1, self.width_factor, self.dim).flatten(1, 2) # shape: (B, T*Q, D) - - return encoder_output, decoder_input diff --git a/spaces/luodian/LoRA-DreamBooth-Training-UI/utils.py b/spaces/luodian/LoRA-DreamBooth-Training-UI/utils.py deleted file mode 100644 index 8fe82394db3a576d0b8bb94788cdc313a1b44392..0000000000000000000000000000000000000000 --- a/spaces/luodian/LoRA-DreamBooth-Training-UI/utils.py +++ /dev/null @@ -1,59 +0,0 @@ -from __future__ import annotations - -import pathlib - - -def find_exp_dirs(ignore_repo: bool = False) -> list[str]: - repo_dir = pathlib.Path(__file__).parent - exp_root_dir = repo_dir / 'experiments' - if not exp_root_dir.exists(): - return [] - exp_dirs = sorted(exp_root_dir.glob('*')) - exp_dirs = [ - exp_dir for exp_dir in exp_dirs - if (exp_dir / 'pytorch_lora_weights.bin').exists() - ] - if ignore_repo: - exp_dirs = [ - exp_dir for exp_dir in exp_dirs if not (exp_dir / '.git').exists() - ] - return [path.relative_to(repo_dir).as_posix() for path in exp_dirs] - - -def save_model_card( - save_dir: pathlib.Path, - base_model: str, - instance_prompt: str, - test_prompt: str = '', - test_image_dir: str = '', -) -> None: - image_str = '' - if test_prompt and test_image_dir: - image_paths = sorted((save_dir / test_image_dir).glob('*')) - if image_paths: - image_str = f'Test prompt: {test_prompt}\n' - for image_path in image_paths: - rel_path = image_path.relative_to(save_dir) - image_str += f'![{image_path.stem}]({rel_path})\n' - - model_card = f'''--- -license: creativeml-openrail-m -base_model: {base_model} -instance_prompt: {instance_prompt} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- lora -inference: true ---- -# LoRA DreamBooth - {save_dir.name} - -These are LoRA adaption weights for [{base_model}](https://huggingface.co/{base_model}). The weights were trained on the instance prompt "{instance_prompt}" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. - -{image_str} -''' - - with open(save_dir / 'README.md', 'w') as f: - f.write(model_card) diff --git a/spaces/luost26/DiffAb/diffab/tools/relax/__main__.py b/spaces/luost26/DiffAb/diffab/tools/relax/__main__.py deleted file mode 100644 index cbcdeb82d00dcc46488d4ff37b67e21f342de368..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/tools/relax/__main__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .run import main - -if __name__ == '__main__': - main() diff --git a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_seg.py b/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_seg.py deleted file mode 100644 index ee72ac9398309993dc23b5ac860e2b2d072efe32..0000000000000000000000000000000000000000 --- a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_seg.py +++ /dev/null @@ -1,403 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from diffusers import ControlNetModel, StableDiffusionControlNetPipeline -from PIL import Image -from transformers import AutoImageProcessor, UperNetForSemanticSegmentation - -from diffusion_webui.utils.model_list import ( - controlnet_seg_model_list, - stable_inpiant_model_list, -) -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_LIST, - get_scheduler_list, -) - -# https://github.com/mikonvergence/ControlNetInpaint - - -def ade_palette(): - """ADE20K palette that maps each class to RGB values.""" - return [ - [120, 120, 120], - [180, 120, 120], - [6, 230, 230], - [80, 50, 50], - [4, 200, 3], - [120, 120, 80], - [140, 140, 140], - [204, 5, 255], - [230, 230, 230], - [4, 250, 7], - [224, 5, 255], - [235, 255, 7], - [150, 5, 61], - [120, 120, 70], - [8, 255, 51], - [255, 6, 82], - [143, 255, 140], - [204, 255, 4], - [255, 51, 7], - [204, 70, 3], - [0, 102, 200], - [61, 230, 250], - [255, 6, 51], - [11, 102, 255], - [255, 7, 71], - [255, 9, 224], - [9, 7, 230], - [220, 220, 220], - [255, 9, 92], - [112, 9, 255], - [8, 255, 214], - [7, 255, 224], - [255, 184, 6], - [10, 255, 71], - [255, 41, 10], - [7, 255, 255], - [224, 255, 8], - [102, 8, 255], - [255, 61, 6], - [255, 194, 7], - [255, 122, 8], - [0, 255, 20], - [255, 8, 41], - [255, 5, 153], - [6, 51, 255], - [235, 12, 255], - [160, 150, 20], - [0, 163, 255], - [140, 140, 140], - [250, 10, 15], - [20, 255, 0], - [31, 255, 0], - [255, 31, 0], - [255, 224, 0], - [153, 255, 0], - [0, 0, 255], - [255, 71, 0], - [0, 235, 255], - [0, 173, 255], - [31, 0, 255], - [11, 200, 200], - [255, 82, 0], - [0, 255, 245], - [0, 61, 255], - [0, 255, 112], - [0, 255, 133], - [255, 0, 0], - [255, 163, 0], - [255, 102, 0], - [194, 255, 0], - [0, 143, 255], - [51, 255, 0], - [0, 82, 255], - [0, 255, 41], - [0, 255, 173], - [10, 0, 255], - [173, 255, 0], - [0, 255, 153], - [255, 92, 0], - [255, 0, 255], - [255, 0, 245], - [255, 0, 102], - [255, 173, 0], - [255, 0, 20], - [255, 184, 184], - [0, 31, 255], - [0, 255, 61], - [0, 71, 255], - [255, 0, 204], - [0, 255, 194], - [0, 255, 82], - [0, 10, 255], - [0, 112, 255], - [51, 0, 255], - [0, 194, 255], - [0, 122, 255], - [0, 255, 163], - [255, 153, 0], - [0, 255, 10], - [255, 112, 0], - [143, 255, 0], - [82, 0, 255], - [163, 255, 0], - [255, 235, 0], - [8, 184, 170], - [133, 0, 255], - [0, 255, 92], - [184, 0, 255], - [255, 0, 31], - [0, 184, 255], - [0, 214, 255], - [255, 0, 112], - [92, 255, 0], - [0, 224, 255], - [112, 224, 255], - [70, 184, 160], - [163, 0, 255], - [153, 0, 255], - [71, 255, 0], - [255, 0, 163], - [255, 204, 0], - [255, 0, 143], - [0, 255, 235], - [133, 255, 0], - [255, 0, 235], - [245, 0, 255], - [255, 0, 122], - [255, 245, 0], - [10, 190, 212], - [214, 255, 0], - [0, 204, 255], - [20, 0, 255], - [255, 255, 0], - [0, 153, 255], - [0, 41, 255], - [0, 255, 204], - [41, 0, 255], - [41, 255, 0], - [173, 0, 255], - [0, 245, 255], - [71, 0, 255], - [122, 0, 255], - [0, 255, 184], - [0, 92, 255], - [184, 255, 0], - [0, 133, 255], - [255, 214, 0], - [25, 194, 194], - [102, 255, 0], - [92, 0, 255], - ] - - -class StableDiffusionControlNetInpaintSegGenerator: - def __init__(self): - self.pipe = None - - def load_model( - self, - stable_model_path, - controlnet_model_path, - scheduler, - ): - - if self.pipe is None: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - - self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def load_image(self, image_path): - image = np.array(image_path) - image = Image.fromarray(image) - return image - - def controlnet_seg_inpaint(self, image_path: str): - image_processor = AutoImageProcessor.from_pretrained( - "openmmlab/upernet-convnext-small" - ) - image_segmentor = UperNetForSemanticSegmentation.from_pretrained( - "openmmlab/upernet-convnext-small" - ) - - image = image_path["image"].convert("RGB").resize((512, 512)) - image = np.array(image) - pixel_values = image_processor(image, return_tensors="pt").pixel_values - - with torch.no_grad(): - outputs = image_segmentor(pixel_values) - - seg = image_processor.post_process_semantic_segmentation( - outputs, target_sizes=[image.size[::-1]] - )[0] - - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - palette = np.array(ade_palette()) - - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - - color_seg = color_seg.astype(np.uint8) - image = Image.fromarray(color_seg) - - return image - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - controlnet_conditioning_scale: int, - scheduler: str, - seed_generator: int, - ): - - normal_image = image_path["image"].convert("RGB").resize((512, 512)) - mask_image = image_path["mask"].convert("RGB").resize((512, 512)) - - normal_image = self.load_image(image_path=normal_image) - mask_image = self.load_image(image_path=mask_image) - - controlnet_image = self.controlnet_seg_inpaint(image_path=image_path) - - pipe = self.load_model( - stable_model_path=stable_model_path, - controlnet_model_path=controlnet_model_path, - scheduler=scheduler, - ) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=normal_image, - mask_image=mask_image, - control_image=controlnet_image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - controlnet_conditioning_scale=controlnet_conditioning_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_seg_inpaint_image_file = gr.Image( - source="upload", - tool="sketch", - elem_id="image_upload", - type="pil", - label="Upload", - ) - - controlnet_seg_inpaint_prompt = gr.Textbox( - lines=1, placeholder="Prompt", show_label=False - ) - - controlnet_seg_inpaint_negative_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Negative Prompt", - ) - with gr.Row(): - with gr.Column(): - controlnet_seg_inpaint_stable_model_id = ( - gr.Dropdown( - choices=stable_inpiant_model_list, - value=stable_inpiant_model_list[0], - label="Stable Model Id", - ) - ) - - controlnet_seg_inpaint_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - - controlnet_seg_inpaint_num_inference_step = ( - gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - ) - controlnet_seg_inpaint_num_images_per_prompt = ( - gr.Slider( - minimum=1, - maximum=10, - step=1, - value=1, - label="Number Of Images", - ) - ) - with gr.Row(): - with gr.Column(): - controlnet_seg_inpaint_model_id = gr.Dropdown( - choices=controlnet_seg_model_list, - value=controlnet_seg_model_list[0], - label="Controlnet Model Id", - ) - controlnet_seg_inpaint_scheduler = gr.Dropdown( - choices=SCHEDULER_LIST, - value=SCHEDULER_LIST[0], - label="Scheduler", - ) - controlnet_seg_inpaint_controlnet_conditioning_scale = gr.Slider( - minimum=0.1, - maximum=1.0, - step=0.1, - value=0.5, - label="Controlnet Conditioning Scale", - ) - - controlnet_seg_inpaint_seed_generator = ( - gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed Generator", - ) - ) - - controlnet_seg_inpaint_predict = gr.Button( - value="Generator" - ) - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - controlnet_seg_inpaint_predict.click( - fn=StableDiffusionControlNetInpaintSegGenerator().generate_image, - inputs=[ - controlnet_seg_inpaint_image_file, - controlnet_seg_inpaint_stable_model_id, - controlnet_seg_inpaint_model_id, - controlnet_seg_inpaint_prompt, - controlnet_seg_inpaint_negative_prompt, - controlnet_seg_inpaint_num_images_per_prompt, - controlnet_seg_inpaint_guidance_scale, - controlnet_seg_inpaint_num_inference_step, - controlnet_seg_inpaint_controlnet_conditioning_scale, - controlnet_seg_inpaint_scheduler, - controlnet_seg_inpaint_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/ma-xu/LIVE/thrust/thrust/random.h b/spaces/ma-xu/LIVE/thrust/thrust/random.h deleted file mode 100644 index c0e9e2282414b6e891808337eef41d016abbbe7e..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/random.h +++ /dev/null @@ -1,120 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file random.h - * \brief Pseudo-random number generators. - */ - -#pragma once - -#include -#include - -// RNGs -#include -#include -#include -#include -#include - -// distributions -#include -#include -#include - -namespace thrust -{ - - -/*! \addtogroup random Random Number Generation - * \{ - */ - - -/*! \namespace thrust::random - * \brief \p thrust::random is the namespace which contains random number engine class templates, - * random number engine adaptor class templates, engines with predefined parameters, - * and random number distribution class templates. They are provided in a separate namespace - * for import convenience but are also aliased in the top-level \p thrust namespace for - * easy access. - */ -namespace random -{ - -/*! \addtogroup predefined_random Random Number Engines with Predefined Parameters - * \ingroup random - * \{ - */ - -/*! \typedef ranlux24 - * \brief A random number engine with predefined parameters which implements the - * RANLUX level-3 random number generation algorithm. - * \note The 10000th consecutive invocation of a default-constructed object of type \p ranlux24 - * shall produce the value \c 9901578 . - */ -typedef discard_block_engine ranlux24; - - -/*! \typedef ranlux48 - * \brief A random number engine with predefined parameters which implements the - * RANLUX level-4 random number generation algorithm. - * \note The 10000th consecutive invocation of a default-constructed object of type \p ranlux48 - * shall produce the value \c 88229545517833 . - */ -typedef discard_block_engine ranlux48; - - -/*! \typedef taus88 - * \brief A random number engine with predefined parameters which implements - * L'Ecuyer's 1996 three-component Tausworthe random number generator. - * - * \note The 10000th consecutive invocation of a default-constructed object of type \p taus88 - * shall produce the value \c 3535848941 . - */ -typedef xor_combine_engine< - linear_feedback_shift_engine, - 0, - xor_combine_engine< - linear_feedback_shift_engine, 0, - linear_feedback_shift_engine, 0 - >, - 0 -> taus88; - -/*! \typedef default_random_engine - * \brief An implementation-defined "default" random number engine. - * \note \p default_random_engine is currently an alias for \p minstd_rand, and may change - * in a future version. - */ -typedef minstd_rand default_random_engine; - -/*! \} // end predefined_random - */ - -} // end random - - -/*! \} // end random - */ - -// import names into thrust:: -using random::ranlux24; -using random::ranlux48; -using random::taus88; -using random::default_random_engine; - -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/tabulate.h b/spaces/ma-xu/LIVE/thrust/thrust/tabulate.h deleted file mode 100644 index 1dcd2c9ee388056d338cfe689deb8ebbb70a96d3..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/tabulate.h +++ /dev/null @@ -1,129 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file tabulate.h - * \brief Fills a range with the tabulation of a function - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup transformations - * \{ - */ - - -/*! \p tabulate fills the range [first, last) with the value of a function applied to each - * element's index. - * - * For each iterator \c i in the range [first, last), \p tabulate performs the assignment - * *i = unary_op(i - first). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the range. - * \param last The end of the range. - * \param unary_op The unary operation to apply. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * \tparam UnaryOperation is a model of Unary Function - * and \c UnaryFunction's \c result_type is convertible to \c OutputIterator's \c value_type. - * - * The following code snippet demonstrates how to use \p tabulate to generate the first \c n non-positive integers - * using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * const int N = 10; - * int A[N]; - * thrust::tabulate(thrust::host, A, A + 10, thrust::negate()); - * // A is now {0, -1, -2, -3, -4, -5, -6, -7, -8, -9} - * \endcode - * - * \see thrust::fill - * \see thrust::generate - * \see thrust::sequence - */ -template -__host__ __device__ - void tabulate(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - UnaryOperation unary_op); - - -/*! \p tabulate fills the range [first, last) with the value of a function applied to each - * element's index. - * - * For each iterator \c i in the range [first, last), \p tabulate performs the assignment - * *i = unary_op(i - first). - * - * \param first The beginning of the range. - * \param last The end of the range. - * \param unary_op The unary operation to apply. - * - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * \tparam UnaryOperation is a model of Unary Function - * and \c UnaryFunction's \c result_type is convertible to \c OutputIterator's \c value_type. - * - * The following code snippet demonstrates how to use \p tabulate to generate the first \c n non-positive integers: - * - * \code - * #include - * #include - * ... - * const int N = 10; - * int A[N]; - * thrust::tabulate(A, A + 10, thrust::negate()); - * // A is now {0, -1, -2, -3, -4, -5, -6, -7, -8, -9} - * \endcode - * - * \see thrust::fill - * \see thrust::generate - * \see thrust::sequence - */ -template - void tabulate(ForwardIterator first, - ForwardIterator last, - UnaryOperation unary_op); - - -/*! \} // end transformations - */ - - -} // end namespace thrust - -#include - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/__init__.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/__init__.py deleted file mode 100644 index 5fc6c783308652d3f1cd3aca7507c616a8e421b8..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# https://github.com/xinntao/BasicSR -# flake8: noqa -from .archs import * -from .data import * -from .losses import * -from .metrics import * -from .models import * -from .ops import * -from .test import * -from .train import * -from .utils import * -#from .version import __gitsha__, __version__ diff --git a/spaces/manymoon22173/RVC_MODELS/infer_pack/models.py b/spaces/manymoon22173/RVC_MODELS/infer_pack/models.py deleted file mode 100644 index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000 --- a/spaces/manymoon22173/RVC_MODELS/infer_pack/models.py +++ /dev/null @@ -1,982 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y_lengths, ds - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - z_slice, ids_slice = commons.rand_slice_segments( - x, y_lengths, self.segment_size - ) - - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice - - def infer( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o, o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/mascIT/AgeGuesser/yolov5/models/yolo.c b/spaces/mascIT/AgeGuesser/yolov5/models/yolo.c deleted file mode 100644 index f0a29aaa288ff1414995eb4cbe98ccfeaaf0dc79..0000000000000000000000000000000000000000 --- a/spaces/mascIT/AgeGuesser/yolov5/models/yolo.c +++ /dev/null @@ -1,28583 +0,0 @@ -/* Generated by Cython 3.0.0a10 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "pdf_toolbox.lib.dia_yolov5.models.yolo", - "sources": [ - "pdf_toolbox\\lib\\dia_yolov5\\models\\yolo.py" - ] - }, - "module_name": "pdf_toolbox.lib.dia_yolov5.models.yolo" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#if defined(CYTHON_LIMITED_API) && 0 - #ifndef Py_LIMITED_API - #if CYTHON_LIMITED_API+0 > 0x03030000 - #define Py_LIMITED_API CYTHON_LIMITED_API - #else - #define Py_LIMITED_API 0x03030000 - #endif - #endif -#endif - -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02070000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.7+ or Python 3.3+. -#else -#define CYTHON_ABI "3_0_0a10" -#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI -#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "." -#define CYTHON_HEX_VERSION 0x030000AA -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #define HAVE_LONG_LONG -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#if defined(GRAALVM_PYTHON) - /* For very preliminary testing purposes. Most variables are set the same as PyPy. - The existence of this section does not imply that anything works or is even tested */ - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 1 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(PYPY_VERSION) - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(CYTHON_LIMITED_API) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 1 - #define CYTHON_COMPILING_IN_GRAAL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 1 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #endif - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 1 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #ifndef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #ifndef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL (PY_MAJOR_VERSION < 3 || PY_VERSION_HEX >= 0x03060000) - #endif - #ifndef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL (PY_VERSION_HEX >= 0x030700A1) - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #if PY_VERSION_HEX < 0x030400a1 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #elif !defined(CYTHON_USE_TP_FINALIZE) - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #if PY_VERSION_HEX < 0x030600B1 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #elif !defined(CYTHON_USE_DICT_VERSIONS) - #define CYTHON_USE_DICT_VERSIONS 1 - #endif - #if PY_VERSION_HEX < 0x030700A3 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if !defined(CYTHON_VECTORCALL) -#define CYTHON_VECTORCALL (CYTHON_FAST_PYCCALL && PY_VERSION_HEX >= 0x030800B1) -#endif -#define CYTHON_BACKPORT_VECTORCALL (CYTHON_METH_FASTCALL && PY_VERSION_HEX < 0x030800B1) -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR - #define CYTHON_MAYBE_UNUSED_VAR(x) CYTHON_UNUSED_VAR(x) -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - #endif - #endif - #if _MSC_VER < 1300 - #ifdef _WIN64 - typedef unsigned long long __pyx_uintptr_t; - #else - typedef unsigned int __pyx_uintptr_t; - #endif - #else - #ifdef _WIN64 - typedef unsigned __int64 __pyx_uintptr_t; - #else - typedef unsigned __int32 __pyx_uintptr_t; - #endif - #endif -#else - #include - typedef uintptr_t __pyx_uintptr_t; -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_DefaultClassType PyClass_Type - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject* co=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(p))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too; - if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here - if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too; - Py_XDECREF((PyObject*)co); - co = (PyCodeObject*)call_result; - call_result = NULL; - if (0) { - cleanup_code_too: - Py_XDECREF((PyObject*)co); - co = NULL; - } - end: - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(call_result); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - } - return co; - } -#elif PY_VERSION_HEX >= 0x030800B2 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_NewWithPosOnlyArgs(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif -#endif -#if PY_VERSION_HEX >= 0x030900A4 || defined(Py_IS_TYPE) - #define __Pyx_IS_TYPE(ob, type) Py_IS_TYPE(ob, type) -#else - #define __Pyx_IS_TYPE(ob, type) (((const PyObject*)ob)->ob_type == (type)) -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_METH_FASTCALL - #define __Pyx_METH_FASTCALL METH_FASTCALL - #define __Pyx_PyCFunction_FastCall __Pyx_PyCFunctionFast - #define __Pyx_PyCFunction_FastCallWithKeywords __Pyx_PyCFunctionFastWithKeywords -#else - #define __Pyx_METH_FASTCALL METH_VARARGS - #define __Pyx_PyCFunction_FastCall PyCFunction - #define __Pyx_PyCFunction_FastCallWithKeywords PyCFunctionWithKeywords -#endif -#if CYTHON_VECTORCALL - #define __pyx_vectorcallfunc vectorcallfunc - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET PY_VECTORCALL_ARGUMENTS_OFFSET - #define __Pyx_PyVectorcall_NARGS(n) PyVectorcall_NARGS((size_t)(n)) -#elif CYTHON_BACKPORT_VECTORCALL - typedef PyObject *(*__pyx_vectorcallfunc)(PyObject *callable, PyObject *const *args, - size_t nargsf, PyObject *kwnames); - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET ((size_t)1 << (8 * sizeof(size_t) - 1)) - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(((size_t)(n)) & ~__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)) -#else - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET 0 - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(n)) -#endif -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) ((void)m, PyType_FromSpecWithBases(s, b)) - typedef PyObject *(*__Pyx_PyCMethod)(PyObject *, PyTypeObject *, PyObject *const *, size_t, PyObject *); -#else - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) PyType_FromModuleAndSpec(m, s, b) - #define __Pyx_PyCMethod PyCMethod -#endif -#ifndef METH_METHOD - #define METH_METHOD 0x200 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyThreadState_Current PyThreadState_Get() -#elif !CYTHON_FAST_THREAD_STATE - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE void *__Pyx_PyModule_GetState(PyObject *op) -{ - void *result; - result = PyModule_GetState(op); - if (!result) - Py_FatalError("Couldn't find the module state"); - return result; -} -#endif -#define __Pyx_PyObject_GetSlot(obj, name, func_ctype) __Pyx_PyType_GetSlot(Py_TYPE(obj), name, func_ctype) -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((func_ctype) PyType_GetSlot((type), Py_##name)) -#else - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((type)->name) -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if PY_MAJOR_VERSION < 3 - #if CYTHON_COMPILING_IN_PYPY - #if PYPY_VERSION_NUM < 0x07030600 - #if defined(__cplusplus) && __cplusplus >= 201402L - [[deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")]] - #elif defined(__GNUC__) || defined(__clang__) - __attribute__ ((__deprecated__("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6"))) - #elif defined(_MSC_VER) - __declspec(deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")) - #endif - static CYTHON_INLINE int PyGILState_Check(void) { - return 0; - } - #else // PYPY_VERSION_NUM < 0x07030600 - #endif // PYPY_VERSION_NUM < 0x07030600 - #else - static CYTHON_INLINE int PyGILState_Check(void) { - PyThreadState * tstate = _PyThreadState_Current; - return tstate && (tstate == PyGILState_GetThisThreadState()); - } - #endif -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX > 0x030600B4 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStrWithError(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStr(PyObject *dict, PyObject *name) { - PyObject *res = __Pyx_PyDict_GetItemStrWithError(dict, name); - if (res == NULL) PyErr_Clear(); - return res; -} -#elif PY_MAJOR_VERSION >= 3 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000) -#define __Pyx_PyDict_GetItemStrWithError PyDict_GetItemWithError -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#else -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStrWithError(PyObject *dict, PyObject *name) { -#if CYTHON_COMPILING_IN_PYPY - return PyDict_GetItem(dict, name); -#else - PyDictEntry *ep; - PyDictObject *mp = (PyDictObject*) dict; - long hash = ((PyStringObject *) name)->ob_shash; - assert(hash != -1); - ep = (mp->ma_lookup)(mp, name, hash); - if (ep == NULL) { - return NULL; - } - return ep->me_value; -#endif -} -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#endif -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetFlags(tp) (((PyTypeObject *)tp)->tp_flags) - #define __Pyx_PyType_HasFeature(type, feature) ((__Pyx_PyType_GetFlags(type) & (feature)) != 0) - #define __Pyx_PyObject_GetIterNextFunc(obj) (Py_TYPE(obj)->tp_iternext) -#else - #define __Pyx_PyType_GetFlags(tp) (PyType_GetFlags((PyTypeObject *)tp)) - #define __Pyx_PyType_HasFeature(type, feature) PyType_HasFeature(type, feature) - #define __Pyx_PyObject_GetIterNextFunc(obj) PyIter_Next -#endif -#if CYTHON_USE_TYPE_SPECS && PY_VERSION_HEX >= 0x03080000 -#define __Pyx_PyHeapTypeObject_GC_Del(obj) {\ - PyTypeObject *type = Py_TYPE(obj);\ - assert(__Pyx_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE));\ - PyObject_GC_Del(obj);\ - Py_DECREF(type);\ -} -#else -#define __Pyx_PyHeapTypeObject_GC_Del(obj) PyObject_GC_Del(obj) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GetLength(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_ReadChar(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((void)u, 1114111) - #define __Pyx_PyUnicode_KIND(u) ((void)u, (0)) - #define __Pyx_PyUnicode_DATA(u) ((void*)u) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)k, PyUnicode_ReadChar((PyObject*)(d), i)) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GetLength(u)) -#elif PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if defined(PyUnicode_IS_READY) - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #else - #define __Pyx_PyUnicode_READY(op) (0) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) ((int)PyUnicode_KIND(u)) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) ((int)sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #if !defined(PyUnicode_DecodeUnicodeEscape) - #define PyUnicode_DecodeUnicodeEscape(s, size, errors) PyUnicode_Decode(s, size, "unicode_escape", errors) - #endif - #if !defined(PyUnicode_Contains) || (PY_MAJOR_VERSION == 2 && PYPY_VERSION_NUM < 0x07030500) - #undef PyUnicode_Contains - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) - #endif - #if !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) - #endif - #if !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) - #endif -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#if CYTHON_COMPILING_IN_CPYTHON - #define __Pyx_PySequence_ListKeepNew(obj)\ - (likely(PyList_CheckExact(obj) && Py_REFCNT(obj) == 1) ? __Pyx_NewRef(obj) : PySequence_List(obj)) -#else - #define __Pyx_PySequence_ListKeepNew(obj) PySequence_List(obj) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) __Pyx_IS_TYPE(obj, &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #define _USE_MATH_DEFINES -#endif -#include -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__pdf_toolbox__lib__dia_yolov5__models__yolo -#define __PYX_HAVE_API__pdf_toolbox__lib__dia_yolov5__models__yolo -/* Early includes */ -#ifdef _OPENMP -#include -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const wchar_t *u) -{ - const wchar_t *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#else -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) -{ - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#endif -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -#endif -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm = __FILE__; -static const char *__pyx_filename; - -/* #### Code section: filename_table ### */ - -static const char *__pyx_f[] = { - "pdf_toolbox\\\\lib\\\\dia_yolov5\\\\models\\\\yolo.py", -}; -/* #### Code section: utility_code_proto_before_types ### */ -/* #### Code section: numeric_typedefs ### */ -/* #### Code section: complex_type_declarations ### */ -/* #### Code section: type_declarations ### */ - -/*--- Type declarations ---*/ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr; - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":36 - * onnx_dynamic = False # ONNX export parameter - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer # <<<<<<<<<<<<<< - * super().__init__() - * self.nc = nc # number of classes - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ { - PyObject_HEAD - PyObject *__pyx_v_ch; - PyObject *__pyx_v_self; -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":45 - * self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid - * self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - * self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv # <<<<<<<<<<<<<< - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - * - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":167 - * return p - * - * def _clip_augmented(self, y): # <<<<<<<<<<<<<< - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented { - PyObject_HEAD - long __pyx_v_e; - PyObject *__pyx_v_nl; -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":170 - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - * g = sum(4 ** x for x in range(nl)) # grid points # <<<<<<<<<<<<<< - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":172 - * g = sum(4 ** x for x in range(nl)) # grid points - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices # <<<<<<<<<<<<<< - * y[0] = y[0][:, :-i] # large - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":174 - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - * y[0] = y[0][:, :-i] # large - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices # <<<<<<<<<<<<<< - * y[-1] = y[-1][:, i:] # small - * return y - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":238 - * - * - * def parse_model(d, ch): # model_dict, input_channels(3) # <<<<<<<<<<<<<< - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model { - PyObject_HEAD - PyObject *__pyx_v_args; - PyObject *__pyx_v_ch; - PyObject *__pyx_v_f; - PyObject *__pyx_v_i; - PyObject *__pyx_v_m; - PyObject *__pyx_v_m_; - PyObject *__pyx_v_n; -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":267 - * args = [ch[f]] - * elif m is Concat: - * c2 = sum(ch[x] for x in f) # <<<<<<<<<<<<<< - * elif m is Detect: - * args.append([ch[x] for x in f]) - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":279 - * c2 = ch[f] - * - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module # <<<<<<<<<<<<<< - * t = str(m)[8:-2].replace('__main__.', '') # module type - * np = sum(x.numel() for x in m_.parameters()) # number params - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *__pyx_outer_scope; - PyObject *__pyx_v__; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":281 - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - * t = str(m)[8:-2].replace('__main__.', '') # module type - * np = sum(x.numel() for x in m_.parameters()) # number params # <<<<<<<<<<<<<< - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":284 - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist # <<<<<<<<<<<<<< - * layers.append(m_) - * if i == 0: - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - -/* #### Code section: utility_code_proto ### */ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, Py_ssize_t); - void (*DECREF)(void*, PyObject*, Py_ssize_t); - void (*GOTREF)(void*, PyObject*, Py_ssize_t); - void (*GIVEREF)(void*, PyObject*, Py_ssize_t); - void* (*SetupContext)(const char*, Py_ssize_t, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__)) - #define __Pyx_RefNannyFinishContextNogil() __Pyx_RefNannyFinishContext() -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_XINCREF(r) do { if((r) == NULL); else {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) == NULL); else {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) == NULL); else {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) == NULL); else {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContextNogil() - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_Py_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; Py_XDECREF(tmp);\ - } while (0) -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* TupleAndListFromArray.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n); -#endif - -/* IncludeStringH.proto */ -#include - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* fastcall.proto */ -#define __Pyx_Arg_VARARGS(args, i) PyTuple_GET_ITEM(args, i) -#define __Pyx_NumKwargs_VARARGS(kwds) PyDict_Size(kwds) -#define __Pyx_KwValues_VARARGS(args, nargs) NULL -#define __Pyx_GetKwValue_VARARGS(kw, kwvalues, s) __Pyx_PyDict_GetItemStrWithError(kw, s) -#define __Pyx_KwargsAsDict_VARARGS(kw, kwvalues) PyDict_Copy(kw) -#if CYTHON_METH_FASTCALL - #define __Pyx_Arg_FASTCALL(args, i) args[i] - #define __Pyx_NumKwargs_FASTCALL(kwds) PyTuple_GET_SIZE(kwds) - #define __Pyx_KwValues_FASTCALL(args, nargs) (&args[nargs]) - static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s); - #define __Pyx_KwargsAsDict_FASTCALL(kw, kwvalues) _PyStack_AsDict(kwvalues, kw) -#else - #define __Pyx_Arg_FASTCALL __Pyx_Arg_VARARGS - #define __Pyx_NumKwargs_FASTCALL __Pyx_NumKwargs_VARARGS - #define __Pyx_KwValues_FASTCALL __Pyx_KwValues_VARARGS - #define __Pyx_GetKwValue_FASTCALL __Pyx_GetKwValue_VARARGS - #define __Pyx_KwargsAsDict_FASTCALL __Pyx_KwargsAsDict_VARARGS -#endif -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_VARARGS(args, start), stop - start) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_FASTCALL(args, start), stop - start) -#else -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) PyTuple_GetSlice(args, start, stop) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) PyTuple_GetSlice(args, start, stop) -#endif - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, - const char* function_name); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseClosureNameError.proto */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#if !CYTHON_VECTORCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if !CYTHON_VECTORCALL - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif // !CYTHON_VECTORCALL -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectFastCall.proto */ -#define __Pyx_PyObject_FastCall(func, args, nargs) __Pyx_PyObject_FastCallDict(func, args, (size_t)(nargs), NULL) -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs); - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* pep479.proto */ -static void __Pyx_Generator_Replace_StopIteration(int in_async_gen); - -/* PyObjectSetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL) -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value); -#else -#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n) -#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* None.proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* SliceObject.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice( - PyObject* obj, Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** py_start, PyObject** py_stop, PyObject** py_slice, - int has_cstart, int has_cstop, int wraparound); - -/* PyFloatBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_SubtractObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyFloat_SubtractObjC(op1, op2, floatval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceSubtract(op1, op2) : PyNumber_Subtract(op1, op2)) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_MultiplyObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_MultiplyObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceMultiply(op1, op2) : PyNumber_Multiply(op1, op2)) -#endif - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportDottedModule.proto */ -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple); - -/* PyObjectLookupSpecial.proto */ -#if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_LookupSpecialNoError(obj, attr_name) __Pyx__PyObject_LookupSpecial(obj, attr_name, 0) -#define __Pyx_PyObject_LookupSpecial(obj, attr_name) __Pyx__PyObject_LookupSpecial(obj, attr_name, 1) -static CYTHON_INLINE PyObject* __Pyx__PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name, int with_error); -#else -#define __Pyx_PyObject_LookupSpecialNoError(o,n) __Pyx_PyObject_GetAttrStrNoError(o,n) -#define __Pyx_PyObject_LookupSpecial(o,n) __Pyx_PyObject_GetAttrStr(o,n) -#endif - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* DictGetItem.proto */ -#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY -static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key); -#define __Pyx_PyObject_Dict_GetItem(obj, name)\ - (likely(PyDict_CheckExact(obj)) ?\ - __Pyx_PyDict_GetItem(obj, name) : PyObject_GetItem(obj, name)) -#else -#define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key) -#define __Pyx_PyObject_Dict_GetItem(obj, name) PyObject_GetItem(obj, name) -#endif - -/* PyObjectFormatSimple.proto */ -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#elif PY_MAJOR_VERSION < 3 - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyString_CheckExact(s)) ? PyUnicode_FromEncodedObject(s, NULL, "strict") :\ - PyObject_Format(s, f)) -#elif CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyLong_CheckExact(s)) ? PyLong_Type.tp_repr(s) :\ - likely(PyFloat_CheckExact(s)) ? PyFloat_Type.tp_repr(s) :\ - PyObject_Format(s, f)) -#else - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#endif - -/* JoinPyUnicode.proto */ -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char); - -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyObject_Str.proto */ -#define __Pyx_PyObject_Str(obj)\ - (likely(PyString_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Str(obj)) - -/* PyObjectCall2Args.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod1.proto */ -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); - -/* append.proto */ -static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x); - -/* PyIntCompare.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_NeObjC(PyObject *op1, PyObject *op2, long intval, long inplace); - -/* PyIntCompare.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, long intval, long inplace); - -/* PySequenceContains.proto */ -static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { - int result = PySequence_Contains(seq, item); - return unlikely(result < 0) ? result : (result == (eq == Py_EQ)); -} - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_SubtractObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceSubtract(op1, op2) : PyNumber_Subtract(op1, op2)) -#endif - -/* SetItemInt.proto */ -#define __Pyx_SetItemInt(o, i, v, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_SetItemInt_Fast(o, (Py_ssize_t)i, v, is_list, wraparound, boundscheck) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list assignment index out of range"), -1) :\ - __Pyx_SetItemInt_Generic(o, to_py_func(i), v))) -static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v); -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, - int is_list, int wraparound, int boundscheck); - -/* PyFloatBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_TrueDivideObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyFloat_TrueDivideObjC(op1, op2, floatval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceTrueDivide(op1, op2) : PyNumber_TrueDivide(op1, op2)) -#endif - -/* PyObjectFormat.proto */ -#if CYTHON_USE_UNICODE_WRITER -static PyObject* __Pyx_PyObject_Format(PyObject* s, PyObject* f); -#else -#define __Pyx_PyObject_Format(s, f) PyObject_Format(s, f) -#endif - -/* PyFloatBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_TrueDivideCObj(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyFloat_TrueDivideCObj(op1, op2, floatval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceTrueDivide(op1, op2) : PyNumber_TrueDivide(op1, op2)) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* IncludeStructmemberH.proto */ -#include - -/* FixUpExtensionType.proto */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type); -#endif - -/* PyObjectCallNoArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); - -/* PyObjectCallMethod0.proto */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* ValidateBasesTuple.proto */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases); -#endif - -/* PyType_Ready.proto */ -static CYTHON_UNUSED int __Pyx_PyType_Ready(PyTypeObject *t); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* Py3UpdateBases.proto */ -static PyObject* __Pyx_PEP560_update_bases(PyObject *bases); - -/* CalculateMetaclass.proto */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases); - -/* SetNameInClass.proto */ -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? _PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value)) -#elif CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value)) -#else -#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value) -#endif - -/* FetchCommonType.proto */ -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); -#else -static PyTypeObject* __Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases); -#endif - -/* PyMethodNew.proto */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - return PyMethod_New(func, self); -} -#else - #define __Pyx_PyMethod_New PyMethod_New -#endif - -/* PyVectorcallFastCallDict.proto */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); -#endif - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CYFUNCTION_COROUTINE 0x08 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#else - #define __Pyx_CyFunction_GetClassObj(f)\ - ((PyObject*) ((PyCMethodObject *) (f))->mm_class) -#endif -#define __Pyx_CyFunction_SetClassObj(f, classobj)\ - __Pyx__CyFunction_SetClassObj((__pyx_CyFunctionObject *) (f), (classobj)) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { -#if PY_VERSION_HEX < 0x030900B1 - PyCFunctionObject func; -#else - PyCMethodObject func; -#endif -#if CYTHON_BACKPORT_VECTORCALL - __pyx_vectorcallfunc func_vectorcall; -#endif -#if PY_VERSION_HEX < 0x030500A0 - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; -#if PY_VERSION_HEX < 0x030900B1 - PyObject *func_classobj; -#endif - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; - PyObject *func_is_coroutine; -} __pyx_CyFunctionObject; -#if !CYTHON_USE_MODULE_STATE -static PyTypeObject *__pyx_CyFunctionType = 0; -#endif -#define __Pyx_CyFunction_Check(obj) __Pyx_TypeCheck(obj, __pyx_CyFunctionType) -#define __Pyx_IsCyOrPyCFunction(obj) __Pyx_TypeCheck2(obj, __pyx_CyFunctionType, &PyCFunction_Type) -#define __Pyx_CyFunction_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_CyFunctionType) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(PyObject *module); -#if CYTHON_METH_FASTCALL -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -#if CYTHON_BACKPORT_VECTORCALL -#define __Pyx_CyFunction_func_vectorcall(f) (((__pyx_CyFunctionObject*)f)->func_vectorcall) -#else -#define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) -#endif -#endif - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* Py3ClassCreate.proto */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname, - PyObject *mkw, PyObject *modname, PyObject *doc); -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict, - PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass); - -/* CyFunctionClassCell.proto */ -static int __Pyx_CyFunction_InitClassCell(PyObject *cyfunctions, PyObject *classobj); - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); -#endif - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* GCCDiagnostics.proto */ -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* Globals.proto */ -static PyObject* __Pyx_Globals(void); - -/* FormatTypeName.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef PyObject *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%U" -static __Pyx_TypeName __Pyx_PyType_GetName(PyTypeObject* tp); -#define __Pyx_DECREF_TypeName(obj) Py_XDECREF(obj) -#else -typedef const char *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%.200s" -#define __Pyx_PyType_GetName(tp) ((tp)->tp_name) -#define __Pyx_DECREF_TypeName(obj) -#endif - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) __Pyx_IsAnySubtype2(Py_TYPE(obj), (PyTypeObject *)type1, (PyTypeObject *)type2) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) (PyObject_TypeCheck(obj, (PyTypeObject *)type1) || PyObject_TypeCheck(obj, (PyTypeObject *)type2)) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyErr_ExceptionMatches2(err1, err2) __Pyx_PyErr_GivenExceptionMatches2(__Pyx_PyErr_Occurred(), err1, err2) -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* CoroutineBase.proto */ -struct __pyx_CoroutineObject; -typedef PyObject *(*__pyx_coroutine_body_t)(struct __pyx_CoroutineObject *, PyThreadState *, PyObject *); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_ExcInfoStruct _PyErr_StackItem -#else -typedef struct { - PyObject *exc_type; - PyObject *exc_value; - PyObject *exc_traceback; -} __Pyx_ExcInfoStruct; -#endif -typedef struct __pyx_CoroutineObject { - PyObject_HEAD - __pyx_coroutine_body_t body; - PyObject *closure; - __Pyx_ExcInfoStruct gi_exc_state; - PyObject *gi_weakreflist; - PyObject *classobj; - PyObject *yieldfrom; - PyObject *gi_name; - PyObject *gi_qualname; - PyObject *gi_modulename; - PyObject *gi_code; - PyObject *gi_frame; - int resume_label; - char is_running; -} __pyx_CoroutineObject; -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject *type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static CYTHON_INLINE void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *self); -static int __Pyx_Coroutine_clear(PyObject *self); -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value); -static PyObject *__Pyx_Coroutine_Close(PyObject *self); -static PyObject *__Pyx_Coroutine_Throw(PyObject *gen, PyObject *args); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_Coroutine_SwapException(self) -#define __Pyx_Coroutine_ResetAndClearException(self) __Pyx_Coroutine_ExceptionClear(&(self)->gi_exc_state) -#else -#define __Pyx_Coroutine_SwapException(self) {\ - __Pyx_ExceptionSwap(&(self)->gi_exc_state.exc_type, &(self)->gi_exc_state.exc_value, &(self)->gi_exc_state.exc_traceback);\ - __Pyx_Coroutine_ResetFrameBackpointer(&(self)->gi_exc_state);\ - } -#define __Pyx_Coroutine_ResetAndClearException(self) {\ - __Pyx_ExceptionReset((self)->gi_exc_state.exc_type, (self)->gi_exc_state.exc_value, (self)->gi_exc_state.exc_traceback);\ - (self)->gi_exc_state.exc_type = (self)->gi_exc_state.exc_value = (self)->gi_exc_state.exc_traceback = NULL;\ - } -#endif -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__pyx_tstate, pvalue) -#else -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, pvalue) -#endif -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *tstate, PyObject **pvalue); -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state); - -/* PatchModuleWithCoroutine.proto */ -static PyObject* __Pyx_Coroutine_patch_module(PyObject* module, const char* py_code); - -/* PatchGeneratorABC.proto */ -static int __Pyx_patch_abc(void); - -/* Generator.proto */ -#define __Pyx_Generator_USED -static PyTypeObject *__pyx_GeneratorType = 0; -#define __Pyx_Generator_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_GeneratorType) -#define __Pyx_Generator_New(body, code, closure, name, qualname, module_name)\ - __Pyx__Coroutine_New(__pyx_GeneratorType, body, code, closure, name, qualname, module_name) -static PyObject *__Pyx_Generator_Next(PyObject *self); -static int __pyx_Generator_init(PyObject *module); - -/* CStringEquals.proto */ -static CYTHON_INLINE int __Pyx_StrEq(const char *, const char *); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str); -#else -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); -#endif - -/* #### Code section: module_declarations ### */ - -/* Module declarations from "pdf_toolbox.lib.dia_yolov5.models.yolo" */ -#if !CYTHON_USE_MODULE_STATE -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr = 0; -#endif -/* #### Code section: typeinfo ### */ -/* #### Code section: before_global_var ### */ -#define __Pyx_MODULE_NAME "pdf_toolbox.lib.dia_yolov5.models.yolo" -extern int __pyx_module_is_main_pdf_toolbox__lib__dia_yolov5__models__yolo; -int __pyx_module_is_main_pdf_toolbox__lib__dia_yolov5__models__yolo = 0; - -/* Implementation of "pdf_toolbox.lib.dia_yolov5.models.yolo" */ -/* #### Code section: global_var ### */ -static PyObject *__pyx_builtin_ImportError; -static PyObject *__pyx_builtin_print; -static PyObject *__pyx_builtin_super; -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_open; -static PyObject *__pyx_builtin_round; -static PyObject *__pyx_builtin_zip; -static PyObject *__pyx_builtin_sum; -static PyObject *__pyx_builtin_map; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_eval; -static PyObject *__pyx_builtin_NameError; -/* #### Code section: string_decls ### */ -static const char __pyx_k_3[] = ">3"; -static const char __pyx_k_T[] = "T"; -static const char __pyx_k_a[] = "a"; -static const char __pyx_k_b[] = "b"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_d[] = "d"; -static const char __pyx_k_e[] = "e"; -static const char __pyx_k_f[] = "f"; -static const char __pyx_k_g[] = "g"; -static const char __pyx_k_i[] = "i"; -static const char __pyx_k_j[] = "j"; -static const char __pyx_k_m[] = "m"; -static const char __pyx_k_n[] = "n"; -static const char __pyx_k_o[] = "o"; -static const char __pyx_k_p[] = "p"; -static const char __pyx_k_s[] = "s"; -static const char __pyx_k_t[] = "t"; -static const char __pyx_k_x[] = "x"; -static const char __pyx_k_y[] = "y"; -static const char __pyx_k_z[] = "z"; -static const char __pyx_k_10[] = ">10"; -static const char __pyx_k_18[] = ">18"; -static const char __pyx_k_30[] = "<30"; -static const char __pyx_k_40[] = "<40"; -static const char __pyx_k_C3[] = "C3"; -static const char __pyx_k__8[] = "*"; -static const char __pyx_k_bn[] = "bn"; -static const char __pyx_k_bs[] = "bs"; -static const char __pyx_k_c1[] = "c1"; -static const char __pyx_k_c2[] = "c2"; -static const char __pyx_k_cf[] = "cf"; -static const char __pyx_k_ch[] = "ch"; -static const char __pyx_k_dt[] = "dt"; -static const char __pyx_k_fi[] = "fi"; -static const char __pyx_k_fn[] = "fn"; -static const char __pyx_k_gc[] = "gc"; -static const char __pyx_k_gd[] = "gd"; -static const char __pyx_k_gs[] = "gs"; -static const char __pyx_k_gw[] = "gw"; -static const char __pyx_k_ij[] = "ij"; -static const char __pyx_k_mi[] = "mi"; -static const char __pyx_k_na[] = "na"; -static const char __pyx_k_nc[] = "nc"; -static const char __pyx_k_nl[] = "nl"; -static const char __pyx_k_nn[] = "nn"; -static const char __pyx_k_no[] = "no"; -static const char __pyx_k_np[] = "np"; -static const char __pyx_k_nx[] = "nx"; -static const char __pyx_k_ny[] = "ny"; -static const char __pyx_k_si[] = "si"; -static const char __pyx_k_to[] = "to"; -static const char __pyx_k_wh[] = "wh"; -static const char __pyx_k_xi[] = "xi"; -static const char __pyx_k_xv[] = "xv"; -static const char __pyx_k_xy[] = "xy"; -static const char __pyx_k_yi[] = "yi"; -static const char __pyx_k_yv[] = "yv"; -static const char __pyx_k_10s[] = ">10s"; -static const char __pyx_k_SPP[] = "SPP"; -static const char __pyx_k__12[] = ""; -static const char __pyx_k__23[] = " "; -static const char __pyx_k__24[] = " "; -static const char __pyx_k__25[] = "-"; -static const char __pyx_k__30[] = "\n"; -static const char __pyx_k__32[] = "."; -static const char __pyx_k__36[] = "_"; -static const char __pyx_k__77[] = ": "; -static const char __pyx_k__78[] = "?"; -static const char __pyx_k_cat[] = "cat"; -static const char __pyx_k_cfg[] = "cfg"; -static const char __pyx_k_doc[] = "__doc__"; -static const char __pyx_k_get[] = "get"; -static const char __pyx_k_img[] = "img"; -static const char __pyx_k_log[] = "log"; -static const char __pyx_k_m_2[] = "m_"; -static const char __pyx_k_map[] = "map"; -static const char __pyx_k_max[] = "max"; -static const char __pyx_k_n_2[] = "n_"; -static const char __pyx_k_opt[] = "opt"; -static const char __pyx_k_sum[] = "sum"; -static const char __pyx_k_sys[] = "sys"; -static const char __pyx_k_zip[] = "zip"; -static const char __pyx_k_C3TR[] = "C3TR"; -static const char __pyx_k_Conv[] = "Conv"; -static const char __pyx_k_FILE[] = "FILE"; -static const char __pyx_k_Path[] = "Path"; -static const char __pyx_k_ROOT[] = "ROOT"; -static const char __pyx_k_SPPF[] = "SPPF"; -static const char __pyx_k_args[] = "args"; -static const char __pyx_k_bias[] = "bias"; -static const char __pyx_k_conv[] = "conv"; -static const char __pyx_k_copy[] = "copy"; -static const char __pyx_k_cuda[] = "cuda"; -static const char __pyx_k_data[] = "data"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_eval[] = "eval"; -static const char __pyx_k_exit[] = "__exit__"; -static const char __pyx_k_file[] = "__file__"; -static const char __pyx_k_flip[] = "flip"; -static const char __pyx_k_from[] = "from"; -static const char __pyx_k_fuse[] = "fuse"; -static const char __pyx_k_grid[] = "grid"; -static const char __pyx_k_head[] = "head"; -static const char __pyx_k_help[] = "help"; -static const char __pyx_k_info[] = "info"; -static const char __pyx_k_init[] = "__init__"; -static const char __pyx_k_main[] = "__main__."; -static const char __pyx_k_math[] = "math"; -static const char __pyx_k_mean[] = "mean"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_open[] = "open"; -static const char __pyx_k_path[] = "path"; -static const char __pyx_k_rand[] = "rand"; -static const char __pyx_k_save[] = "save"; -static const char __pyx_k_self[] = "self"; -static const char __pyx_k_send[] = "send"; -static const char __pyx_k_spec[] = "__spec__"; -static const char __pyx_k_stem[] = "stem"; -static const char __pyx_k_test[] = "--test"; -static const char __pyx_k_thop[] = "thop"; -static const char __pyx_k_type[] = "type"; -static const char __pyx_k_view[] = "view"; -static const char __pyx_k_yaml[] = "yaml"; -static const char __pyx_k_10_0f[] = "10.0f"; -static const char __pyx_k_10_2f[] = "10.2f"; -static const char __pyx_k_C3SPP[] = "C3SPP"; -static const char __pyx_k_Focus[] = "Focus"; -static const char __pyx_k_Model[] = "Model"; -static const char __pyx_k_Total[] = " Total"; -static const char __pyx_k_apply[] = "_apply"; -static const char __pyx_k_ascii[] = "ascii"; -static const char __pyx_k_cfg_2[] = "--cfg"; -static const char __pyx_k_clone[] = "clone"; -static const char __pyx_k_close[] = "close"; -static const char __pyx_k_enter[] = "__enter__"; -static const char __pyx_k_flips[] = "flips"; -static const char __pyx_k_float[] = "float"; -static const char __pyx_k_model[] = "model"; -static const char __pyx_k_names[] = "names"; -static const char __pyx_k_numel[] = "numel"; -static const char __pyx_k_print[] = "print"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_rglob[] = "rglob"; -static const char __pyx_k_round[] = "round"; -static const char __pyx_k_scale[] = "scale"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_stack[] = "stack"; -static const char __pyx_k_super[] = "super"; -static const char __pyx_k_throw[] = "throw"; -static const char __pyx_k_torch[] = "torch"; -static const char __pyx_k_train[] = "train"; -static const char __pyx_k_zeros[] = "zeros"; -static const char __pyx_k_Concat[] = "Concat"; -static const char __pyx_k_Conv2d[] = "Conv2d"; -static const char __pyx_k_DWConv[] = "DWConv"; -static const char __pyx_k_Detect[] = "Detect"; -static const char __pyx_k_Expand[] = "Expand"; -static const char __pyx_k_GFLOPs[] = "GFLOPs"; -static const char __pyx_k_LOGGER[] = "LOGGER"; -static const char __pyx_k_Module[] = "Module"; -static const char __pyx_k_action[] = "action"; -static const char __pyx_k_append[] = "append"; -static const char __pyx_k_arange[] = "arange"; -static const char __pyx_k_detach[] = "detach"; -static const char __pyx_k_device[] = "device"; -static const char __pyx_k_enable[] = "enable"; -static const char __pyx_k_errors[] = "errors"; -static const char __pyx_k_expand[] = "expand"; -static const char __pyx_k_ignore[] = "ignore"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_inputs[] = "inputs"; -static const char __pyx_k_insert[] = "insert"; -static const char __pyx_k_layers[] = "layers"; -static const char __pyx_k_main_2[] = "__main__"; -static const char __pyx_k_models[] = "models"; -static const char __pyx_k_module[] = " module"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_params[] = "params"; -static const char __pyx_k_parser[] = "parser"; -static const char __pyx_k_stride[] = "stride"; -static const char __pyx_k_tensor[] = "tensor"; -static const char __pyx_k_test_2[] = "test"; -static const char __pyx_k_test_3[] = "__test__"; -static const char __pyx_k_tolist[] = "tolist"; -static const char __pyx_k_weight[] = "weight"; -static const char __pyx_k_C3Ghost[] = "C3Ghost"; -static const char __pyx_k_anchors[] = "anchors"; -static const char __pyx_k_augment[] = "augment"; -static const char __pyx_k_default[] = "default"; -static const char __pyx_k_disable[] = "disable"; -static const char __pyx_k_forward[] = "forward"; -static const char __pyx_k_genexpr[] = "genexpr"; -static const char __pyx_k_inplace[] = "inplace"; -static const char __pyx_k_modules[] = "modules"; -static const char __pyx_k_parents[] = "parents"; -static const char __pyx_k_pathlib[] = "pathlib"; -static const char __pyx_k_permute[] = "permute"; -static const char __pyx_k_prepare[] = "__prepare__"; -static const char __pyx_k_profile[] = "profile"; -static const char __pyx_k_resolve[] = "resolve"; -static const char __pyx_k_sigmoid[] = "sigmoid"; -static const char __pyx_k_time_ms[] = "time (ms)"; -static const char __pyx_k_verbose[] = "verbose"; -static const char __pyx_k_with_nc[] = " with nc="; -static const char __pyx_k_Contract[] = "Contract"; -static const char __pyx_k_Error_in[] = "Error in "; -static const char __pyx_k_argparse[] = "argparse"; -static const char __pyx_k_backbone[] = "backbone"; -static const char __pyx_k_deepcopy[] = "deepcopy"; -static const char __pyx_k_device_2[] = "--device"; -static const char __pyx_k_encoding[] = "encoding"; -static const char __pyx_k_img_size[] = "img_size"; -static const char __pyx_k_indexing[] = "indexing"; -static const char __pyx_k_meshgrid[] = "meshgrid"; -static const char __pyx_k_module_2[] = "module"; -static const char __pyx_k_module_3[] = "__module__"; -static const char __pyx_k_qualname[] = "__qualname__"; -static const char __pyx_k_set_name[] = "__set_name__"; -static const char __pyx_k_training[] = "training"; -static const char __pyx_k_CrossConv[] = "CrossConv"; -static const char __pyx_k_GhostConv[] = "GhostConv"; -static const char __pyx_k_MixConv2d[] = "MixConv2d"; -static const char __pyx_k_NameError[] = "NameError"; -static const char __pyx_k_Parameter[] = "Parameter"; -static const char __pyx_k_arguments[] = "arguments"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_isenabled[] = "isenabled"; -static const char __pyx_k_make_grid[] = "_make_grid"; -static const char __pyx_k_metaclass[] = "__metaclass__"; -static const char __pyx_k_profile_2[] = "--profile"; -static const char __pyx_k_safe_load[] = "safe_load"; -static const char __pyx_k_scale_img[] = "scale_img"; -static const char __pyx_k_time_sync[] = "time_sync"; -static const char __pyx_k_visualize[] = "visualize"; -static const char __pyx_k_yaml_file[] = "yaml_file"; -static const char __pyx_k_yolo_yaml[] = "yolo*.yaml"; -static const char __pyx_k_Bottleneck[] = "Bottleneck"; -static const char __pyx_k_Model_fuse[] = "Model.fuse"; -static const char __pyx_k_Model_info[] = "Model.info"; -static const char __pyx_k_ModuleList[] = "ModuleList"; -static const char __pyx_k_Sequential[] = "Sequential"; -static const char __pyx_k_contiguous[] = "contiguous"; -static const char __pyx_k_model_info[] = "model_info"; -static const char __pyx_k_model_yaml[] = "model.yaml"; -static const char __pyx_k_parameters[] = "parameters"; -static const char __pyx_k_parse_args[] = "parse_args"; -static const char __pyx_k_print_args[] = "print_args"; -static const char __pyx_k_store_true[] = "store_true"; -static const char __pyx_k_BatchNorm2d[] = "BatchNorm2d"; -static const char __pyx_k_ImportError[] = "ImportError"; -static const char __pyx_k_anchor_grid[] = "anchor_grid"; -static const char __pyx_k_mro_entries[] = "__mro_entries__"; -static const char __pyx_k_parse_model[] = "parse_model"; -static const char __pyx_k_Model___init[] = "Model.__init__"; -static const char __pyx_k_Model__apply[] = "Model._apply"; -static const char __pyx_k_add_argument[] = "add_argument"; -static const char __pyx_k_descale_pred[] = "_descale_pred"; -static const char __pyx_k_forward_fuse[] = "forward_fuse"; -static const char __pyx_k_forward_once[] = "_forward_once"; -static const char __pyx_k_initializing[] = "_initializing"; -static const char __pyx_k_is_available[] = "is_available"; -static const char __pyx_k_is_coroutine[] = "_is_coroutine"; -static const char __pyx_k_onnx_dynamic[] = "onnx_dynamic"; -static const char __pyx_k_print_biases[] = "_print_biases"; -static const char __pyx_k_yolov5s_yaml[] = "yolov5s.yaml"; -static const char __pyx_k_BottleneckCSP[] = "BottleneckCSP"; -static const char __pyx_k_Detect___init[] = "Detect.__init__"; -static const char __pyx_k_Fusing_layers[] = "Fusing layers... "; -static const char __pyx_k_Model_forward[] = "Model.forward"; -static const char __pyx_k_class_getitem[] = "__class_getitem__"; -static const char __pyx_k_init_subclass[] = "__init_subclass__"; -static const char __pyx_k_requires_grad[] = "requires_grad"; -static const char __pyx_k_select_device[] = "select_device"; -static const char __pyx_k_ArgumentParser[] = "ArgumentParser"; -static const char __pyx_k_Detect_forward[] = "Detect.forward"; -static const char __pyx_k_clip_augmented[] = "_clip_augmented"; -static const char __pyx_k_depth_multiple[] = "depth_multiple"; -static const char __pyx_k_make_divisible[] = "make_divisible"; -static const char __pyx_k_width_multiple[] = "width_multiple"; -static const char __pyx_k_GhostBottleneck[] = "GhostBottleneck"; -static const char __pyx_k_forward_augment[] = "_forward_augment"; -static const char __pyx_k_register_buffer[] = "register_buffer"; -static const char __pyx_k_fuse_conv_and_bn[] = "fuse_conv_and_bn"; -static const char __pyx_k_Detect__make_grid[] = "Detect._make_grid"; -static const char __pyx_k_initialize_biases[] = "_initialize_biases"; -static const char __pyx_k_profile_one_layer[] = "_profile_one_layer"; -static const char __pyx_k_asyncio_coroutines[] = "asyncio.coroutines"; -static const char __pyx_k_check_anchor_order[] = "check_anchor_order"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_initialize_weights[] = "initialize_weights"; -static const char __pyx_k_test_all_yolo_yaml[] = "test all yolo*.yaml"; -static const char __pyx_k_Model__descale_pred[] = "Model._descale_pred"; -static const char __pyx_k_Model__forward_once[] = "Model._forward_once"; -static const char __pyx_k_Model__print_biases[] = "Model._print_biases"; -static const char __pyx_k_profile_model_speed[] = "profile model speed"; -static const char __pyx_k_Model__clip_augmented[] = "Model._clip_augmented"; -static const char __pyx_k_Model__forward_augment[] = "Model._forward_augment"; -static const char __pyx_k_Model__initialize_biases[] = "Model._initialize_biases"; -static const char __pyx_k_Model__profile_one_layer[] = "Model._profile_one_layer"; -static const char __pyx_k_Overriding_model_yaml_nc[] = "Overriding model.yaml nc="; -static const char __pyx_k_parse_model_locals_genexpr[] = "parse_model..genexpr"; -static const char __pyx_k_Detect___init___locals_genexpr[] = "Detect.__init__..genexpr"; -static const char __pyx_k_6g_Conv2d_bias_10_3g_10_3g_10_3[] = "%6g Conv2d.bias:%10.3g%10.3g%10.3g%10.3g%10.3g%10.3g"; -static const char __pyx_k_YOLO_specific_modules_Usage_pyt[] = "\nYOLO-specific modules\n\nUsage:\n $ python path/to/models/yolo.py --cfg yolov5s.yaml\n"; -static const char __pyx_k_cuda_device_i_e_0_or_0_1_2_3_or[] = "cuda device, i.e. 0 or 0,1,2,3 or cpu"; -static const char __pyx_k_Model__clip_augmented_locals_gen[] = "Model._clip_augmented..genexpr"; -static const char __pyx_k_Overriding_model_yaml_anchors_wi[] = "Overriding model.yaml anchors with anchors="; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_model[] = "pdf_toolbox.lib.dia_yolov5.models.yolo"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_utils[] = "pdf_toolbox.lib.dia_yolov5.utils.autoanchor"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_model_2[] = "pdf_toolbox.lib.dia_yolov5.models.common"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_model_3[] = "pdf_toolbox.lib.dia_yolov5.models.experimental"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_model_4[] = "pdf_toolbox\\lib\\dia_yolov5\\models\\yolo.py"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2[] = "pdf_toolbox.lib.dia_yolov5.utils.general"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3[] = "pdf_toolbox.lib.dia_yolov5.utils.torch_utils"; -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_kp_u_10; -static PyObject *__pyx_kp_u_10_0f; -static PyObject *__pyx_kp_u_10_2f; -static PyObject *__pyx_kp_u_10s; -static PyObject *__pyx_kp_u_18; -static PyObject *__pyx_kp_u_3; -static PyObject *__pyx_kp_u_30; -static PyObject *__pyx_kp_u_40; -static PyObject *__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3; -static PyObject *__pyx_n_s_ArgumentParser; -static PyObject *__pyx_n_s_BatchNorm2d; -static PyObject *__pyx_n_s_Bottleneck; -static PyObject *__pyx_n_s_BottleneckCSP; -static PyObject *__pyx_n_s_C3; -static PyObject *__pyx_n_s_C3Ghost; -static PyObject *__pyx_n_s_C3SPP; -static PyObject *__pyx_n_s_C3TR; -static PyObject *__pyx_n_s_Concat; -static PyObject *__pyx_n_s_Contract; -static PyObject *__pyx_n_s_Conv; -static PyObject *__pyx_n_s_Conv2d; -static PyObject *__pyx_n_s_CrossConv; -static PyObject *__pyx_n_s_DWConv; -static PyObject *__pyx_n_s_Detect; -static PyObject *__pyx_n_s_Detect___init; -static PyObject *__pyx_n_s_Detect___init___locals_genexpr; -static PyObject *__pyx_n_s_Detect__make_grid; -static PyObject *__pyx_n_s_Detect_forward; -static PyObject *__pyx_kp_u_Error_in; -static PyObject *__pyx_n_s_Expand; -static PyObject *__pyx_n_s_FILE; -static PyObject *__pyx_n_s_Focus; -static PyObject *__pyx_kp_u_Fusing_layers; -static PyObject *__pyx_n_u_GFLOPs; -static PyObject *__pyx_n_s_GhostBottleneck; -static PyObject *__pyx_n_s_GhostConv; -static PyObject *__pyx_n_s_ImportError; -static PyObject *__pyx_n_s_LOGGER; -static PyObject *__pyx_n_s_MixConv2d; -static PyObject *__pyx_n_s_Model; -static PyObject *__pyx_n_s_Model___init; -static PyObject *__pyx_n_s_Model__apply; -static PyObject *__pyx_n_s_Model__clip_augmented; -static PyObject *__pyx_n_s_Model__clip_augmented_locals_gen; -static PyObject *__pyx_n_s_Model__descale_pred; -static PyObject *__pyx_n_s_Model__forward_augment; -static PyObject *__pyx_n_s_Model__forward_once; -static PyObject *__pyx_n_s_Model__initialize_biases; -static PyObject *__pyx_n_s_Model__print_biases; -static PyObject *__pyx_n_s_Model__profile_one_layer; -static PyObject *__pyx_n_s_Model_forward; -static PyObject *__pyx_n_s_Model_fuse; -static PyObject *__pyx_n_s_Model_info; -static PyObject *__pyx_n_s_Module; -static PyObject *__pyx_n_s_ModuleList; -static PyObject *__pyx_n_s_NameError; -static PyObject *__pyx_kp_u_Overriding_model_yaml_anchors_wi; -static PyObject *__pyx_kp_u_Overriding_model_yaml_nc; -static PyObject *__pyx_n_s_Parameter; -static PyObject *__pyx_n_s_Path; -static PyObject *__pyx_n_s_ROOT; -static PyObject *__pyx_n_s_SPP; -static PyObject *__pyx_n_s_SPPF; -static PyObject *__pyx_n_s_Sequential; -static PyObject *__pyx_n_s_T; -static PyObject *__pyx_kp_u_Total; -static PyObject *__pyx_kp_u__12; -static PyObject *__pyx_kp_u__23; -static PyObject *__pyx_kp_u__24; -static PyObject *__pyx_kp_u__25; -static PyObject *__pyx_kp_u__30; -static PyObject *__pyx_kp_u__32; -static PyObject *__pyx_n_s__36; -static PyObject *__pyx_kp_u__77; -static PyObject *__pyx_n_s__78; -static PyObject *__pyx_n_s__8; -static PyObject *__pyx_n_s_a; -static PyObject *__pyx_n_s_action; -static PyObject *__pyx_n_s_add_argument; -static PyObject *__pyx_n_s_anchor_grid; -static PyObject *__pyx_n_s_anchors; -static PyObject *__pyx_n_u_anchors; -static PyObject *__pyx_n_s_append; -static PyObject *__pyx_n_s_apply; -static PyObject *__pyx_n_s_arange; -static PyObject *__pyx_n_s_argparse; -static PyObject *__pyx_n_s_args; -static PyObject *__pyx_n_u_arguments; -static PyObject *__pyx_n_u_ascii; -static PyObject *__pyx_n_s_asyncio_coroutines; -static PyObject *__pyx_n_s_augment; -static PyObject *__pyx_n_s_b; -static PyObject *__pyx_n_u_backbone; -static PyObject *__pyx_n_s_bias; -static PyObject *__pyx_n_s_bn; -static PyObject *__pyx_n_u_bn; -static PyObject *__pyx_n_s_bs; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_s_c1; -static PyObject *__pyx_n_s_c2; -static PyObject *__pyx_n_s_cat; -static PyObject *__pyx_n_s_cf; -static PyObject *__pyx_n_s_cfg; -static PyObject *__pyx_kp_u_cfg_2; -static PyObject *__pyx_n_s_ch; -static PyObject *__pyx_n_u_ch; -static PyObject *__pyx_n_s_check_anchor_order; -static PyObject *__pyx_n_s_class_getitem; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_n_s_clip_augmented; -static PyObject *__pyx_n_s_clone; -static PyObject *__pyx_n_s_close; -static PyObject *__pyx_n_s_contiguous; -static PyObject *__pyx_n_s_conv; -static PyObject *__pyx_n_s_copy; -static PyObject *__pyx_n_s_cuda; -static PyObject *__pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or; -static PyObject *__pyx_n_s_d; -static PyObject *__pyx_n_s_data; -static PyObject *__pyx_n_s_deepcopy; -static PyObject *__pyx_n_s_default; -static PyObject *__pyx_n_u_depth_multiple; -static PyObject *__pyx_n_s_descale_pred; -static PyObject *__pyx_n_s_detach; -static PyObject *__pyx_n_s_device; -static PyObject *__pyx_kp_u_device_2; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_kp_u_disable; -static PyObject *__pyx_n_s_doc; -static PyObject *__pyx_n_s_dt; -static PyObject *__pyx_n_s_e; -static PyObject *__pyx_kp_u_enable; -static PyObject *__pyx_n_s_encoding; -static PyObject *__pyx_n_s_enter; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_errors; -static PyObject *__pyx_n_s_eval; -static PyObject *__pyx_n_s_exit; -static PyObject *__pyx_n_s_expand; -static PyObject *__pyx_n_s_f; -static PyObject *__pyx_n_s_fi; -static PyObject *__pyx_n_s_file; -static PyObject *__pyx_n_s_flip; -static PyObject *__pyx_n_s_flips; -static PyObject *__pyx_n_s_float; -static PyObject *__pyx_n_s_fn; -static PyObject *__pyx_n_s_forward; -static PyObject *__pyx_n_s_forward_augment; -static PyObject *__pyx_n_s_forward_fuse; -static PyObject *__pyx_n_s_forward_once; -static PyObject *__pyx_n_u_from; -static PyObject *__pyx_n_s_fuse; -static PyObject *__pyx_n_s_fuse_conv_and_bn; -static PyObject *__pyx_n_s_g; -static PyObject *__pyx_kp_u_gc; -static PyObject *__pyx_n_s_gd; -static PyObject *__pyx_n_s_genexpr; -static PyObject *__pyx_n_s_get; -static PyObject *__pyx_n_s_grid; -static PyObject *__pyx_n_s_gs; -static PyObject *__pyx_n_s_gw; -static PyObject *__pyx_n_u_head; -static PyObject *__pyx_n_s_help; -static PyObject *__pyx_n_s_i; -static PyObject *__pyx_n_u_ignore; -static PyObject *__pyx_n_u_ij; -static PyObject *__pyx_n_s_img; -static PyObject *__pyx_n_s_img_size; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_indexing; -static PyObject *__pyx_n_s_info; -static PyObject *__pyx_n_s_init; -static PyObject *__pyx_n_s_init_subclass; -static PyObject *__pyx_n_s_initialize_biases; -static PyObject *__pyx_n_s_initialize_weights; -static PyObject *__pyx_n_s_initializing; -static PyObject *__pyx_n_s_inplace; -static PyObject *__pyx_n_u_inplace; -static PyObject *__pyx_n_s_inputs; -static PyObject *__pyx_n_s_insert; -static PyObject *__pyx_n_s_is_available; -static PyObject *__pyx_n_s_is_coroutine; -static PyObject *__pyx_kp_u_isenabled; -static PyObject *__pyx_n_s_j; -static PyObject *__pyx_n_s_layers; -static PyObject *__pyx_n_s_log; -static PyObject *__pyx_n_s_m; -static PyObject *__pyx_n_s_m_2; -static PyObject *__pyx_kp_u_main; -static PyObject *__pyx_n_s_main_2; -static PyObject *__pyx_n_u_main_2; -static PyObject *__pyx_n_s_make_divisible; -static PyObject *__pyx_n_s_make_grid; -static PyObject *__pyx_n_s_map; -static PyObject *__pyx_n_s_math; -static PyObject *__pyx_n_s_max; -static PyObject *__pyx_n_s_mean; -static PyObject *__pyx_n_s_meshgrid; -static PyObject *__pyx_n_s_metaclass; -static PyObject *__pyx_n_s_mi; -static PyObject *__pyx_n_s_model; -static PyObject *__pyx_n_s_model_info; -static PyObject *__pyx_kp_u_model_yaml; -static PyObject *__pyx_n_u_models; -static PyObject *__pyx_kp_u_module; -static PyObject *__pyx_n_u_module_2; -static PyObject *__pyx_n_s_module_3; -static PyObject *__pyx_n_s_modules; -static PyObject *__pyx_n_s_mro_entries; -static PyObject *__pyx_n_s_n; -static PyObject *__pyx_n_u_n; -static PyObject *__pyx_n_s_n_2; -static PyObject *__pyx_n_s_na; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_n_s_names; -static PyObject *__pyx_n_s_nc; -static PyObject *__pyx_n_u_nc; -static PyObject *__pyx_n_s_nl; -static PyObject *__pyx_n_s_nn; -static PyObject *__pyx_n_s_no; -static PyObject *__pyx_n_s_np; -static PyObject *__pyx_n_s_numel; -static PyObject *__pyx_n_s_nx; -static PyObject *__pyx_n_s_ny; -static PyObject *__pyx_n_s_o; -static PyObject *__pyx_n_s_onnx_dynamic; -static PyObject *__pyx_n_s_open; -static PyObject *__pyx_n_s_opt; -static PyObject *__pyx_n_s_p; -static PyObject *__pyx_n_s_parameters; -static PyObject *__pyx_n_u_params; -static PyObject *__pyx_n_s_parents; -static PyObject *__pyx_n_s_parse_args; -static PyObject *__pyx_n_s_parse_model; -static PyObject *__pyx_n_s_parse_model_locals_genexpr; -static PyObject *__pyx_n_s_parser; -static PyObject *__pyx_n_s_path; -static PyObject *__pyx_n_s_pathlib; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3; -static PyObject *__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3; -static PyObject *__pyx_n_s_permute; -static PyObject *__pyx_n_s_prepare; -static PyObject *__pyx_n_s_print; -static PyObject *__pyx_n_s_print_args; -static PyObject *__pyx_n_s_print_biases; -static PyObject *__pyx_n_s_profile; -static PyObject *__pyx_kp_u_profile_2; -static PyObject *__pyx_kp_u_profile_model_speed; -static PyObject *__pyx_n_s_profile_one_layer; -static PyObject *__pyx_n_s_qualname; -static PyObject *__pyx_n_s_rand; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_register_buffer; -static PyObject *__pyx_n_s_requires_grad; -static PyObject *__pyx_n_s_resolve; -static PyObject *__pyx_n_s_rglob; -static PyObject *__pyx_n_s_round; -static PyObject *__pyx_n_s_s; -static PyObject *__pyx_n_s_safe_load; -static PyObject *__pyx_n_s_save; -static PyObject *__pyx_n_s_scale; -static PyObject *__pyx_n_s_scale_img; -static PyObject *__pyx_n_s_select_device; -static PyObject *__pyx_n_s_self; -static PyObject *__pyx_n_s_send; -static PyObject *__pyx_n_s_set_name; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_si; -static PyObject *__pyx_n_s_sigmoid; -static PyObject *__pyx_n_s_spec; -static PyObject *__pyx_n_s_stack; -static PyObject *__pyx_n_s_stem; -static PyObject *__pyx_n_u_store_true; -static PyObject *__pyx_n_s_stride; -static PyObject *__pyx_n_s_sum; -static PyObject *__pyx_n_s_super; -static PyObject *__pyx_n_s_sys; -static PyObject *__pyx_n_s_t; -static PyObject *__pyx_n_s_tensor; -static PyObject *__pyx_kp_u_test; -static PyObject *__pyx_n_s_test_2; -static PyObject *__pyx_n_s_test_3; -static PyObject *__pyx_kp_u_test_all_yolo_yaml; -static PyObject *__pyx_n_s_thop; -static PyObject *__pyx_n_s_throw; -static PyObject *__pyx_kp_u_time_ms; -static PyObject *__pyx_n_s_time_sync; -static PyObject *__pyx_n_s_to; -static PyObject *__pyx_n_s_tolist; -static PyObject *__pyx_n_s_torch; -static PyObject *__pyx_n_s_train; -static PyObject *__pyx_n_s_training; -static PyObject *__pyx_n_s_type; -static PyObject *__pyx_n_s_verbose; -static PyObject *__pyx_n_s_view; -static PyObject *__pyx_n_s_visualize; -static PyObject *__pyx_n_s_weight; -static PyObject *__pyx_n_s_wh; -static PyObject *__pyx_n_u_width_multiple; -static PyObject *__pyx_kp_u_with_nc; -static PyObject *__pyx_n_s_x; -static PyObject *__pyx_n_s_xi; -static PyObject *__pyx_n_s_xv; -static PyObject *__pyx_n_s_xy; -static PyObject *__pyx_n_s_y; -static PyObject *__pyx_n_s_yaml; -static PyObject *__pyx_n_s_yaml_file; -static PyObject *__pyx_n_s_yi; -static PyObject *__pyx_kp_u_yolo_yaml; -static PyObject *__pyx_kp_u_yolov5s_yaml; -static PyObject *__pyx_n_s_yv; -static PyObject *__pyx_n_s_z; -static PyObject *__pyx_n_s_zeros; -static PyObject *__pyx_n_s_zip; -#endif -/* #### Code section: decls ### */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_nc, PyObject *__pyx_v_anchors, PyObject *__pyx_v_ch, PyObject *__pyx_v_inplace); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_2forward(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_4_make_grid(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_nx, PyObject *__pyx_v_ny, PyObject *__pyx_v_i); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_cfg, PyObject *__pyx_v_ch, PyObject *__pyx_v_nc, PyObject *__pyx_v_anchors); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_2forward(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_augment, PyObject *__pyx_v_profile, PyObject *__pyx_v_visualize); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_4_forward_augment(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_6_forward_once(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_profile, CYTHON_UNUSED PyObject *__pyx_v_visualize); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_8_descale_pred(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p, PyObject *__pyx_v_flips, PyObject *__pyx_v_scale, PyObject *__pyx_v_img_size); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_3genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_6genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_10_clip_augmented(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_y); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_12_profile_one_layer(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_m, PyObject *__pyx_v_x, PyObject *__pyx_v_dt); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_14_initialize_biases(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_cf); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_16_print_biases(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_18fuse(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_20info(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_verbose, PyObject *__pyx_v_img_size); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_22_apply(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_fn); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_3genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_6genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_9genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_parse_model(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_d, PyObject *__pyx_v_ch); /* proto */ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_float_0_5; -static PyObject *__pyx_float_0_6; -static PyObject *__pyx_float_1E9; -static PyObject *__pyx_float_0_67; -static PyObject *__pyx_float_0_83; -static PyObject *__pyx_float_0_999999; -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_2; -static PyObject *__pyx_int_3; -static PyObject *__pyx_int_4; -static PyObject *__pyx_int_5; -static PyObject *__pyx_int_8; -static PyObject *__pyx_int_20; -static PyObject *__pyx_int_80; -static PyObject *__pyx_int_100; -static PyObject *__pyx_int_256; -static PyObject *__pyx_int_640; -static PyObject *__pyx_int_neg_1; -static PyObject *__pyx_int_neg_2; -#endif -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_tuple_; -static PyObject *__pyx_slice__2; -static PyObject *__pyx_slice__3; -static PyObject *__pyx_slice__6; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_slice__13; -static PyObject *__pyx_slice__14; -static PyObject *__pyx_slice__18; -static PyObject *__pyx_slice__20; -static PyObject *__pyx_slice__22; -static PyObject *__pyx_slice__27; -static PyObject *__pyx_slice__29; -static PyObject *__pyx_slice__31; -static PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__15; -static PyObject *__pyx_tuple__16; -static PyObject *__pyx_tuple__17; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__21; -static PyObject *__pyx_tuple__26; -static PyObject *__pyx_tuple__28; -static PyObject *__pyx_tuple__33; -static PyObject *__pyx_tuple__35; -static PyObject *__pyx_tuple__37; -static PyObject *__pyx_tuple__39; -static PyObject *__pyx_tuple__41; -static PyObject *__pyx_tuple__42; -static PyObject *__pyx_tuple__44; -static PyObject *__pyx_tuple__45; -static PyObject *__pyx_tuple__47; -static PyObject *__pyx_tuple__48; -static PyObject *__pyx_tuple__50; -static PyObject *__pyx_tuple__52; -static PyObject *__pyx_tuple__53; -static PyObject *__pyx_tuple__55; -static PyObject *__pyx_tuple__57; -static PyObject *__pyx_tuple__59; -static PyObject *__pyx_tuple__61; -static PyObject *__pyx_tuple__62; -static PyObject *__pyx_tuple__64; -static PyObject *__pyx_tuple__66; -static PyObject *__pyx_tuple__68; -static PyObject *__pyx_tuple__69; -static PyObject *__pyx_tuple__71; -static PyObject *__pyx_tuple__73; -static PyObject *__pyx_tuple__74; -static PyObject *__pyx_tuple__75; -static PyObject *__pyx_tuple__76; -static PyObject *__pyx_codeobj__34; -static PyObject *__pyx_codeobj__38; -static PyObject *__pyx_codeobj__40; -static PyObject *__pyx_codeobj__43; -static PyObject *__pyx_codeobj__46; -static PyObject *__pyx_codeobj__49; -static PyObject *__pyx_codeobj__51; -static PyObject *__pyx_codeobj__54; -static PyObject *__pyx_codeobj__56; -static PyObject *__pyx_codeobj__58; -static PyObject *__pyx_codeobj__60; -static PyObject *__pyx_codeobj__63; -static PyObject *__pyx_codeobj__65; -static PyObject *__pyx_codeobj__67; -static PyObject *__pyx_codeobj__70; -static PyObject *__pyx_codeobj__72; -#endif -/* #### Code section: late_includes ### */ -/* #### Code section: module_state ### */ -#if CYTHON_USE_MODULE_STATE -typedef struct { - PyObject *__pyx_d; - PyObject *__pyx_b; - PyObject *__pyx_cython_runtime; - PyObject *__pyx_empty_tuple; - PyObject *__pyx_empty_bytes; - PyObject *__pyx_empty_unicode; - #ifdef __Pyx_CyFunction_USED - PyTypeObject *__pyx_CyFunctionType; - #endif - #ifdef __Pyx_FusedFunction_USED - PyTypeObject *__pyx_FusedFunctionType; - #endif - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr; - PyObject *__pyx_kp_u_10; - PyObject *__pyx_kp_u_10_0f; - PyObject *__pyx_kp_u_10_2f; - PyObject *__pyx_kp_u_10s; - PyObject *__pyx_kp_u_18; - PyObject *__pyx_kp_u_3; - PyObject *__pyx_kp_u_30; - PyObject *__pyx_kp_u_40; - PyObject *__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3; - PyObject *__pyx_n_s_ArgumentParser; - PyObject *__pyx_n_s_BatchNorm2d; - PyObject *__pyx_n_s_Bottleneck; - PyObject *__pyx_n_s_BottleneckCSP; - PyObject *__pyx_n_s_C3; - PyObject *__pyx_n_s_C3Ghost; - PyObject *__pyx_n_s_C3SPP; - PyObject *__pyx_n_s_C3TR; - PyObject *__pyx_n_s_Concat; - PyObject *__pyx_n_s_Contract; - PyObject *__pyx_n_s_Conv; - PyObject *__pyx_n_s_Conv2d; - PyObject *__pyx_n_s_CrossConv; - PyObject *__pyx_n_s_DWConv; - PyObject *__pyx_n_s_Detect; - PyObject *__pyx_n_s_Detect___init; - PyObject *__pyx_n_s_Detect___init___locals_genexpr; - PyObject *__pyx_n_s_Detect__make_grid; - PyObject *__pyx_n_s_Detect_forward; - PyObject *__pyx_kp_u_Error_in; - PyObject *__pyx_n_s_Expand; - PyObject *__pyx_n_s_FILE; - PyObject *__pyx_n_s_Focus; - PyObject *__pyx_kp_u_Fusing_layers; - PyObject *__pyx_n_u_GFLOPs; - PyObject *__pyx_n_s_GhostBottleneck; - PyObject *__pyx_n_s_GhostConv; - PyObject *__pyx_n_s_ImportError; - PyObject *__pyx_n_s_LOGGER; - PyObject *__pyx_n_s_MixConv2d; - PyObject *__pyx_n_s_Model; - PyObject *__pyx_n_s_Model___init; - PyObject *__pyx_n_s_Model__apply; - PyObject *__pyx_n_s_Model__clip_augmented; - PyObject *__pyx_n_s_Model__clip_augmented_locals_gen; - PyObject *__pyx_n_s_Model__descale_pred; - PyObject *__pyx_n_s_Model__forward_augment; - PyObject *__pyx_n_s_Model__forward_once; - PyObject *__pyx_n_s_Model__initialize_biases; - PyObject *__pyx_n_s_Model__print_biases; - PyObject *__pyx_n_s_Model__profile_one_layer; - PyObject *__pyx_n_s_Model_forward; - PyObject *__pyx_n_s_Model_fuse; - PyObject *__pyx_n_s_Model_info; - PyObject *__pyx_n_s_Module; - PyObject *__pyx_n_s_ModuleList; - PyObject *__pyx_n_s_NameError; - PyObject *__pyx_kp_u_Overriding_model_yaml_anchors_wi; - PyObject *__pyx_kp_u_Overriding_model_yaml_nc; - PyObject *__pyx_n_s_Parameter; - PyObject *__pyx_n_s_Path; - PyObject *__pyx_n_s_ROOT; - PyObject *__pyx_n_s_SPP; - PyObject *__pyx_n_s_SPPF; - PyObject *__pyx_n_s_Sequential; - PyObject *__pyx_n_s_T; - PyObject *__pyx_kp_u_Total; - PyObject *__pyx_kp_u__12; - PyObject *__pyx_kp_u__23; - PyObject *__pyx_kp_u__24; - PyObject *__pyx_kp_u__25; - PyObject *__pyx_kp_u__30; - PyObject *__pyx_kp_u__32; - PyObject *__pyx_n_s__36; - PyObject *__pyx_kp_u__77; - PyObject *__pyx_n_s__78; - PyObject *__pyx_n_s__8; - PyObject *__pyx_n_s_a; - PyObject *__pyx_n_s_action; - PyObject *__pyx_n_s_add_argument; - PyObject *__pyx_n_s_anchor_grid; - PyObject *__pyx_n_s_anchors; - PyObject *__pyx_n_u_anchors; - PyObject *__pyx_n_s_append; - PyObject *__pyx_n_s_apply; - PyObject *__pyx_n_s_arange; - PyObject *__pyx_n_s_argparse; - PyObject *__pyx_n_s_args; - PyObject *__pyx_n_u_arguments; - PyObject *__pyx_n_u_ascii; - PyObject *__pyx_n_s_asyncio_coroutines; - PyObject *__pyx_n_s_augment; - PyObject *__pyx_n_s_b; - PyObject *__pyx_n_u_backbone; - PyObject *__pyx_n_s_bias; - PyObject *__pyx_n_s_bn; - PyObject *__pyx_n_u_bn; - PyObject *__pyx_n_s_bs; - PyObject *__pyx_n_s_c; - PyObject *__pyx_n_s_c1; - PyObject *__pyx_n_s_c2; - PyObject *__pyx_n_s_cat; - PyObject *__pyx_n_s_cf; - PyObject *__pyx_n_s_cfg; - PyObject *__pyx_kp_u_cfg_2; - PyObject *__pyx_n_s_ch; - PyObject *__pyx_n_u_ch; - PyObject *__pyx_n_s_check_anchor_order; - PyObject *__pyx_n_s_class_getitem; - PyObject *__pyx_n_s_cline_in_traceback; - PyObject *__pyx_n_s_clip_augmented; - PyObject *__pyx_n_s_clone; - PyObject *__pyx_n_s_close; - PyObject *__pyx_n_s_contiguous; - PyObject *__pyx_n_s_conv; - PyObject *__pyx_n_s_copy; - PyObject *__pyx_n_s_cuda; - PyObject *__pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or; - PyObject *__pyx_n_s_d; - PyObject *__pyx_n_s_data; - PyObject *__pyx_n_s_deepcopy; - PyObject *__pyx_n_s_default; - PyObject *__pyx_n_u_depth_multiple; - PyObject *__pyx_n_s_descale_pred; - PyObject *__pyx_n_s_detach; - PyObject *__pyx_n_s_device; - PyObject *__pyx_kp_u_device_2; - PyObject *__pyx_n_s_dict; - PyObject *__pyx_kp_u_disable; - PyObject *__pyx_n_s_doc; - PyObject *__pyx_n_s_dt; - PyObject *__pyx_n_s_e; - PyObject *__pyx_kp_u_enable; - PyObject *__pyx_n_s_encoding; - PyObject *__pyx_n_s_enter; - PyObject *__pyx_n_s_enumerate; - PyObject *__pyx_n_s_errors; - PyObject *__pyx_n_s_eval; - PyObject *__pyx_n_s_exit; - PyObject *__pyx_n_s_expand; - PyObject *__pyx_n_s_f; - PyObject *__pyx_n_s_fi; - PyObject *__pyx_n_s_file; - PyObject *__pyx_n_s_flip; - PyObject *__pyx_n_s_flips; - PyObject *__pyx_n_s_float; - PyObject *__pyx_n_s_fn; - PyObject *__pyx_n_s_forward; - PyObject *__pyx_n_s_forward_augment; - PyObject *__pyx_n_s_forward_fuse; - PyObject *__pyx_n_s_forward_once; - PyObject *__pyx_n_u_from; - PyObject *__pyx_n_s_fuse; - PyObject *__pyx_n_s_fuse_conv_and_bn; - PyObject *__pyx_n_s_g; - PyObject *__pyx_kp_u_gc; - PyObject *__pyx_n_s_gd; - PyObject *__pyx_n_s_genexpr; - PyObject *__pyx_n_s_get; - PyObject *__pyx_n_s_grid; - PyObject *__pyx_n_s_gs; - PyObject *__pyx_n_s_gw; - PyObject *__pyx_n_u_head; - PyObject *__pyx_n_s_help; - PyObject *__pyx_n_s_i; - PyObject *__pyx_n_u_ignore; - PyObject *__pyx_n_u_ij; - PyObject *__pyx_n_s_img; - PyObject *__pyx_n_s_img_size; - PyObject *__pyx_n_s_import; - PyObject *__pyx_n_s_indexing; - PyObject *__pyx_n_s_info; - PyObject *__pyx_n_s_init; - PyObject *__pyx_n_s_init_subclass; - PyObject *__pyx_n_s_initialize_biases; - PyObject *__pyx_n_s_initialize_weights; - PyObject *__pyx_n_s_initializing; - PyObject *__pyx_n_s_inplace; - PyObject *__pyx_n_u_inplace; - PyObject *__pyx_n_s_inputs; - PyObject *__pyx_n_s_insert; - PyObject *__pyx_n_s_is_available; - PyObject *__pyx_n_s_is_coroutine; - PyObject *__pyx_kp_u_isenabled; - PyObject *__pyx_n_s_j; - PyObject *__pyx_n_s_layers; - PyObject *__pyx_n_s_log; - PyObject *__pyx_n_s_m; - PyObject *__pyx_n_s_m_2; - PyObject *__pyx_kp_u_main; - PyObject *__pyx_n_s_main_2; - PyObject *__pyx_n_u_main_2; - PyObject *__pyx_n_s_make_divisible; - PyObject *__pyx_n_s_make_grid; - PyObject *__pyx_n_s_map; - PyObject *__pyx_n_s_math; - PyObject *__pyx_n_s_max; - PyObject *__pyx_n_s_mean; - PyObject *__pyx_n_s_meshgrid; - PyObject *__pyx_n_s_metaclass; - PyObject *__pyx_n_s_mi; - PyObject *__pyx_n_s_model; - PyObject *__pyx_n_s_model_info; - PyObject *__pyx_kp_u_model_yaml; - PyObject *__pyx_n_u_models; - PyObject *__pyx_kp_u_module; - PyObject *__pyx_n_u_module_2; - PyObject *__pyx_n_s_module_3; - PyObject *__pyx_n_s_modules; - PyObject *__pyx_n_s_mro_entries; - PyObject *__pyx_n_s_n; - PyObject *__pyx_n_u_n; - PyObject *__pyx_n_s_n_2; - PyObject *__pyx_n_s_na; - PyObject *__pyx_n_s_name; - PyObject *__pyx_n_s_name_2; - PyObject *__pyx_n_s_names; - PyObject *__pyx_n_s_nc; - PyObject *__pyx_n_u_nc; - PyObject *__pyx_n_s_nl; - PyObject *__pyx_n_s_nn; - PyObject *__pyx_n_s_no; - PyObject *__pyx_n_s_np; - PyObject *__pyx_n_s_numel; - PyObject *__pyx_n_s_nx; - PyObject *__pyx_n_s_ny; - PyObject *__pyx_n_s_o; - PyObject *__pyx_n_s_onnx_dynamic; - PyObject *__pyx_n_s_open; - PyObject *__pyx_n_s_opt; - PyObject *__pyx_n_s_p; - PyObject *__pyx_n_s_parameters; - PyObject *__pyx_n_u_params; - PyObject *__pyx_n_s_parents; - PyObject *__pyx_n_s_parse_args; - PyObject *__pyx_n_s_parse_model; - PyObject *__pyx_n_s_parse_model_locals_genexpr; - PyObject *__pyx_n_s_parser; - PyObject *__pyx_n_s_path; - PyObject *__pyx_n_s_pathlib; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3; - PyObject *__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3; - PyObject *__pyx_n_s_permute; - PyObject *__pyx_n_s_prepare; - PyObject *__pyx_n_s_print; - PyObject *__pyx_n_s_print_args; - PyObject *__pyx_n_s_print_biases; - PyObject *__pyx_n_s_profile; - PyObject *__pyx_kp_u_profile_2; - PyObject *__pyx_kp_u_profile_model_speed; - PyObject *__pyx_n_s_profile_one_layer; - PyObject *__pyx_n_s_qualname; - PyObject *__pyx_n_s_rand; - PyObject *__pyx_n_s_range; - PyObject *__pyx_n_s_register_buffer; - PyObject *__pyx_n_s_requires_grad; - PyObject *__pyx_n_s_resolve; - PyObject *__pyx_n_s_rglob; - PyObject *__pyx_n_s_round; - PyObject *__pyx_n_s_s; - PyObject *__pyx_n_s_safe_load; - PyObject *__pyx_n_s_save; - PyObject *__pyx_n_s_scale; - PyObject *__pyx_n_s_scale_img; - PyObject *__pyx_n_s_select_device; - PyObject *__pyx_n_s_self; - PyObject *__pyx_n_s_send; - PyObject *__pyx_n_s_set_name; - PyObject *__pyx_n_s_shape; - PyObject *__pyx_n_s_si; - PyObject *__pyx_n_s_sigmoid; - PyObject *__pyx_n_s_spec; - PyObject *__pyx_n_s_stack; - PyObject *__pyx_n_s_stem; - PyObject *__pyx_n_u_store_true; - PyObject *__pyx_n_s_stride; - PyObject *__pyx_n_s_sum; - PyObject *__pyx_n_s_super; - PyObject *__pyx_n_s_sys; - PyObject *__pyx_n_s_t; - PyObject *__pyx_n_s_tensor; - PyObject *__pyx_kp_u_test; - PyObject *__pyx_n_s_test_2; - PyObject *__pyx_n_s_test_3; - PyObject *__pyx_kp_u_test_all_yolo_yaml; - PyObject *__pyx_n_s_thop; - PyObject *__pyx_n_s_throw; - PyObject *__pyx_kp_u_time_ms; - PyObject *__pyx_n_s_time_sync; - PyObject *__pyx_n_s_to; - PyObject *__pyx_n_s_tolist; - PyObject *__pyx_n_s_torch; - PyObject *__pyx_n_s_train; - PyObject *__pyx_n_s_training; - PyObject *__pyx_n_s_type; - PyObject *__pyx_n_s_verbose; - PyObject *__pyx_n_s_view; - PyObject *__pyx_n_s_visualize; - PyObject *__pyx_n_s_weight; - PyObject *__pyx_n_s_wh; - PyObject *__pyx_n_u_width_multiple; - PyObject *__pyx_kp_u_with_nc; - PyObject *__pyx_n_s_x; - PyObject *__pyx_n_s_xi; - PyObject *__pyx_n_s_xv; - PyObject *__pyx_n_s_xy; - PyObject *__pyx_n_s_y; - PyObject *__pyx_n_s_yaml; - PyObject *__pyx_n_s_yaml_file; - PyObject *__pyx_n_s_yi; - PyObject *__pyx_kp_u_yolo_yaml; - PyObject *__pyx_kp_u_yolov5s_yaml; - PyObject *__pyx_n_s_yv; - PyObject *__pyx_n_s_z; - PyObject *__pyx_n_s_zeros; - PyObject *__pyx_n_s_zip; - PyObject *__pyx_float_0_5; - PyObject *__pyx_float_0_6; - PyObject *__pyx_float_1E9; - PyObject *__pyx_float_0_67; - PyObject *__pyx_float_0_83; - PyObject *__pyx_float_0_999999; - PyObject *__pyx_int_0; - PyObject *__pyx_int_1; - PyObject *__pyx_int_2; - PyObject *__pyx_int_3; - PyObject *__pyx_int_4; - PyObject *__pyx_int_5; - PyObject *__pyx_int_8; - PyObject *__pyx_int_20; - PyObject *__pyx_int_80; - PyObject *__pyx_int_100; - PyObject *__pyx_int_256; - PyObject *__pyx_int_640; - PyObject *__pyx_int_neg_1; - PyObject *__pyx_int_neg_2; - PyObject *__pyx_tuple_; - PyObject *__pyx_slice__2; - PyObject *__pyx_slice__3; - PyObject *__pyx_slice__6; - PyObject *__pyx_tuple__4; - PyObject *__pyx_tuple__5; - PyObject *__pyx_tuple__7; - PyObject *__pyx_tuple__9; - PyObject *__pyx_slice__13; - PyObject *__pyx_slice__14; - PyObject *__pyx_slice__18; - PyObject *__pyx_slice__20; - PyObject *__pyx_slice__22; - PyObject *__pyx_slice__27; - PyObject *__pyx_slice__29; - PyObject *__pyx_slice__31; - PyObject *__pyx_tuple__10; - PyObject *__pyx_tuple__11; - PyObject *__pyx_tuple__15; - PyObject *__pyx_tuple__16; - PyObject *__pyx_tuple__17; - PyObject *__pyx_tuple__19; - PyObject *__pyx_tuple__21; - PyObject *__pyx_tuple__26; - PyObject *__pyx_tuple__28; - PyObject *__pyx_tuple__33; - PyObject *__pyx_tuple__35; - PyObject *__pyx_tuple__37; - PyObject *__pyx_tuple__39; - PyObject *__pyx_tuple__41; - PyObject *__pyx_tuple__42; - PyObject *__pyx_tuple__44; - PyObject *__pyx_tuple__45; - PyObject *__pyx_tuple__47; - PyObject *__pyx_tuple__48; - PyObject *__pyx_tuple__50; - PyObject *__pyx_tuple__52; - PyObject *__pyx_tuple__53; - PyObject *__pyx_tuple__55; - PyObject *__pyx_tuple__57; - PyObject *__pyx_tuple__59; - PyObject *__pyx_tuple__61; - PyObject *__pyx_tuple__62; - PyObject *__pyx_tuple__64; - PyObject *__pyx_tuple__66; - PyObject *__pyx_tuple__68; - PyObject *__pyx_tuple__69; - PyObject *__pyx_tuple__71; - PyObject *__pyx_tuple__73; - PyObject *__pyx_tuple__74; - PyObject *__pyx_tuple__75; - PyObject *__pyx_tuple__76; - PyObject *__pyx_codeobj__34; - PyObject *__pyx_codeobj__38; - PyObject *__pyx_codeobj__40; - PyObject *__pyx_codeobj__43; - PyObject *__pyx_codeobj__46; - PyObject *__pyx_codeobj__49; - PyObject *__pyx_codeobj__51; - PyObject *__pyx_codeobj__54; - PyObject *__pyx_codeobj__56; - PyObject *__pyx_codeobj__58; - PyObject *__pyx_codeobj__60; - PyObject *__pyx_codeobj__63; - PyObject *__pyx_codeobj__65; - PyObject *__pyx_codeobj__67; - PyObject *__pyx_codeobj__70; - PyObject *__pyx_codeobj__72; -} __pyx_mstate; - -#ifdef __cplusplus -namespace { - extern struct PyModuleDef __pyx_moduledef; -} /* anonymous namespace */ -#else -static struct PyModuleDef __pyx_moduledef; -#endif - -#define __pyx_mstate(o) ((__pyx_mstate *)__Pyx_PyModule_GetState(o)) - -#define __pyx_mstate_global (__pyx_mstate(PyState_FindModule(&__pyx_moduledef))) - -#define __pyx_m (PyState_FindModule(&__pyx_moduledef)) -#endif -/* #### Code section: module_state_clear ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_clear(PyObject *m) { - __pyx_mstate *clear_module_state = __pyx_mstate(m); - if (!clear_module_state) return 0; - Py_CLEAR(clear_module_state->__pyx_d); - Py_CLEAR(clear_module_state->__pyx_b); - Py_CLEAR(clear_module_state->__pyx_cython_runtime); - Py_CLEAR(clear_module_state->__pyx_empty_tuple); - Py_CLEAR(clear_module_state->__pyx_empty_bytes); - Py_CLEAR(clear_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_CLEAR(clear_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_CLEAR(clear_module_state->__pyx_FusedFunctionType); - #endif - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr); - Py_CLEAR(clear_module_state->__pyx_kp_u_10); - Py_CLEAR(clear_module_state->__pyx_kp_u_10_0f); - Py_CLEAR(clear_module_state->__pyx_kp_u_10_2f); - Py_CLEAR(clear_module_state->__pyx_kp_u_10s); - Py_CLEAR(clear_module_state->__pyx_kp_u_18); - Py_CLEAR(clear_module_state->__pyx_kp_u_3); - Py_CLEAR(clear_module_state->__pyx_kp_u_30); - Py_CLEAR(clear_module_state->__pyx_kp_u_40); - Py_CLEAR(clear_module_state->__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3); - Py_CLEAR(clear_module_state->__pyx_n_s_ArgumentParser); - Py_CLEAR(clear_module_state->__pyx_n_s_BatchNorm2d); - Py_CLEAR(clear_module_state->__pyx_n_s_Bottleneck); - Py_CLEAR(clear_module_state->__pyx_n_s_BottleneckCSP); - Py_CLEAR(clear_module_state->__pyx_n_s_C3); - Py_CLEAR(clear_module_state->__pyx_n_s_C3Ghost); - Py_CLEAR(clear_module_state->__pyx_n_s_C3SPP); - Py_CLEAR(clear_module_state->__pyx_n_s_C3TR); - Py_CLEAR(clear_module_state->__pyx_n_s_Concat); - Py_CLEAR(clear_module_state->__pyx_n_s_Contract); - Py_CLEAR(clear_module_state->__pyx_n_s_Conv); - Py_CLEAR(clear_module_state->__pyx_n_s_Conv2d); - Py_CLEAR(clear_module_state->__pyx_n_s_CrossConv); - Py_CLEAR(clear_module_state->__pyx_n_s_DWConv); - Py_CLEAR(clear_module_state->__pyx_n_s_Detect); - Py_CLEAR(clear_module_state->__pyx_n_s_Detect___init); - Py_CLEAR(clear_module_state->__pyx_n_s_Detect___init___locals_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_Detect__make_grid); - Py_CLEAR(clear_module_state->__pyx_n_s_Detect_forward); - Py_CLEAR(clear_module_state->__pyx_kp_u_Error_in); - Py_CLEAR(clear_module_state->__pyx_n_s_Expand); - Py_CLEAR(clear_module_state->__pyx_n_s_FILE); - Py_CLEAR(clear_module_state->__pyx_n_s_Focus); - Py_CLEAR(clear_module_state->__pyx_kp_u_Fusing_layers); - Py_CLEAR(clear_module_state->__pyx_n_u_GFLOPs); - Py_CLEAR(clear_module_state->__pyx_n_s_GhostBottleneck); - Py_CLEAR(clear_module_state->__pyx_n_s_GhostConv); - Py_CLEAR(clear_module_state->__pyx_n_s_ImportError); - Py_CLEAR(clear_module_state->__pyx_n_s_LOGGER); - Py_CLEAR(clear_module_state->__pyx_n_s_MixConv2d); - Py_CLEAR(clear_module_state->__pyx_n_s_Model); - Py_CLEAR(clear_module_state->__pyx_n_s_Model___init); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__apply); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__clip_augmented); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__clip_augmented_locals_gen); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__descale_pred); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__forward_augment); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__forward_once); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__initialize_biases); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__print_biases); - Py_CLEAR(clear_module_state->__pyx_n_s_Model__profile_one_layer); - Py_CLEAR(clear_module_state->__pyx_n_s_Model_forward); - Py_CLEAR(clear_module_state->__pyx_n_s_Model_fuse); - Py_CLEAR(clear_module_state->__pyx_n_s_Model_info); - Py_CLEAR(clear_module_state->__pyx_n_s_Module); - Py_CLEAR(clear_module_state->__pyx_n_s_ModuleList); - Py_CLEAR(clear_module_state->__pyx_n_s_NameError); - Py_CLEAR(clear_module_state->__pyx_kp_u_Overriding_model_yaml_anchors_wi); - Py_CLEAR(clear_module_state->__pyx_kp_u_Overriding_model_yaml_nc); - Py_CLEAR(clear_module_state->__pyx_n_s_Parameter); - Py_CLEAR(clear_module_state->__pyx_n_s_Path); - Py_CLEAR(clear_module_state->__pyx_n_s_ROOT); - Py_CLEAR(clear_module_state->__pyx_n_s_SPP); - Py_CLEAR(clear_module_state->__pyx_n_s_SPPF); - Py_CLEAR(clear_module_state->__pyx_n_s_Sequential); - Py_CLEAR(clear_module_state->__pyx_n_s_T); - Py_CLEAR(clear_module_state->__pyx_kp_u_Total); - Py_CLEAR(clear_module_state->__pyx_kp_u__12); - Py_CLEAR(clear_module_state->__pyx_kp_u__23); - Py_CLEAR(clear_module_state->__pyx_kp_u__24); - Py_CLEAR(clear_module_state->__pyx_kp_u__25); - Py_CLEAR(clear_module_state->__pyx_kp_u__30); - Py_CLEAR(clear_module_state->__pyx_kp_u__32); - Py_CLEAR(clear_module_state->__pyx_n_s__36); - Py_CLEAR(clear_module_state->__pyx_kp_u__77); - Py_CLEAR(clear_module_state->__pyx_n_s__78); - Py_CLEAR(clear_module_state->__pyx_n_s__8); - Py_CLEAR(clear_module_state->__pyx_n_s_a); - Py_CLEAR(clear_module_state->__pyx_n_s_action); - Py_CLEAR(clear_module_state->__pyx_n_s_add_argument); - Py_CLEAR(clear_module_state->__pyx_n_s_anchor_grid); - Py_CLEAR(clear_module_state->__pyx_n_s_anchors); - Py_CLEAR(clear_module_state->__pyx_n_u_anchors); - Py_CLEAR(clear_module_state->__pyx_n_s_append); - Py_CLEAR(clear_module_state->__pyx_n_s_apply); - Py_CLEAR(clear_module_state->__pyx_n_s_arange); - Py_CLEAR(clear_module_state->__pyx_n_s_argparse); - Py_CLEAR(clear_module_state->__pyx_n_s_args); - Py_CLEAR(clear_module_state->__pyx_n_u_arguments); - Py_CLEAR(clear_module_state->__pyx_n_u_ascii); - Py_CLEAR(clear_module_state->__pyx_n_s_asyncio_coroutines); - Py_CLEAR(clear_module_state->__pyx_n_s_augment); - Py_CLEAR(clear_module_state->__pyx_n_s_b); - Py_CLEAR(clear_module_state->__pyx_n_u_backbone); - Py_CLEAR(clear_module_state->__pyx_n_s_bias); - Py_CLEAR(clear_module_state->__pyx_n_s_bn); - Py_CLEAR(clear_module_state->__pyx_n_u_bn); - Py_CLEAR(clear_module_state->__pyx_n_s_bs); - Py_CLEAR(clear_module_state->__pyx_n_s_c); - Py_CLEAR(clear_module_state->__pyx_n_s_c1); - Py_CLEAR(clear_module_state->__pyx_n_s_c2); - Py_CLEAR(clear_module_state->__pyx_n_s_cat); - Py_CLEAR(clear_module_state->__pyx_n_s_cf); - Py_CLEAR(clear_module_state->__pyx_n_s_cfg); - Py_CLEAR(clear_module_state->__pyx_kp_u_cfg_2); - Py_CLEAR(clear_module_state->__pyx_n_s_ch); - Py_CLEAR(clear_module_state->__pyx_n_u_ch); - Py_CLEAR(clear_module_state->__pyx_n_s_check_anchor_order); - Py_CLEAR(clear_module_state->__pyx_n_s_class_getitem); - Py_CLEAR(clear_module_state->__pyx_n_s_cline_in_traceback); - Py_CLEAR(clear_module_state->__pyx_n_s_clip_augmented); - Py_CLEAR(clear_module_state->__pyx_n_s_clone); - Py_CLEAR(clear_module_state->__pyx_n_s_close); - Py_CLEAR(clear_module_state->__pyx_n_s_contiguous); - Py_CLEAR(clear_module_state->__pyx_n_s_conv); - Py_CLEAR(clear_module_state->__pyx_n_s_copy); - Py_CLEAR(clear_module_state->__pyx_n_s_cuda); - Py_CLEAR(clear_module_state->__pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or); - Py_CLEAR(clear_module_state->__pyx_n_s_d); - Py_CLEAR(clear_module_state->__pyx_n_s_data); - Py_CLEAR(clear_module_state->__pyx_n_s_deepcopy); - Py_CLEAR(clear_module_state->__pyx_n_s_default); - Py_CLEAR(clear_module_state->__pyx_n_u_depth_multiple); - Py_CLEAR(clear_module_state->__pyx_n_s_descale_pred); - Py_CLEAR(clear_module_state->__pyx_n_s_detach); - Py_CLEAR(clear_module_state->__pyx_n_s_device); - Py_CLEAR(clear_module_state->__pyx_kp_u_device_2); - Py_CLEAR(clear_module_state->__pyx_n_s_dict); - Py_CLEAR(clear_module_state->__pyx_kp_u_disable); - Py_CLEAR(clear_module_state->__pyx_n_s_doc); - Py_CLEAR(clear_module_state->__pyx_n_s_dt); - Py_CLEAR(clear_module_state->__pyx_n_s_e); - Py_CLEAR(clear_module_state->__pyx_kp_u_enable); - Py_CLEAR(clear_module_state->__pyx_n_s_encoding); - Py_CLEAR(clear_module_state->__pyx_n_s_enter); - Py_CLEAR(clear_module_state->__pyx_n_s_enumerate); - Py_CLEAR(clear_module_state->__pyx_n_s_errors); - Py_CLEAR(clear_module_state->__pyx_n_s_eval); - Py_CLEAR(clear_module_state->__pyx_n_s_exit); - Py_CLEAR(clear_module_state->__pyx_n_s_expand); - Py_CLEAR(clear_module_state->__pyx_n_s_f); - Py_CLEAR(clear_module_state->__pyx_n_s_fi); - Py_CLEAR(clear_module_state->__pyx_n_s_file); - Py_CLEAR(clear_module_state->__pyx_n_s_flip); - Py_CLEAR(clear_module_state->__pyx_n_s_flips); - Py_CLEAR(clear_module_state->__pyx_n_s_float); - Py_CLEAR(clear_module_state->__pyx_n_s_fn); - Py_CLEAR(clear_module_state->__pyx_n_s_forward); - Py_CLEAR(clear_module_state->__pyx_n_s_forward_augment); - Py_CLEAR(clear_module_state->__pyx_n_s_forward_fuse); - Py_CLEAR(clear_module_state->__pyx_n_s_forward_once); - Py_CLEAR(clear_module_state->__pyx_n_u_from); - Py_CLEAR(clear_module_state->__pyx_n_s_fuse); - Py_CLEAR(clear_module_state->__pyx_n_s_fuse_conv_and_bn); - Py_CLEAR(clear_module_state->__pyx_n_s_g); - Py_CLEAR(clear_module_state->__pyx_kp_u_gc); - Py_CLEAR(clear_module_state->__pyx_n_s_gd); - Py_CLEAR(clear_module_state->__pyx_n_s_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_get); - Py_CLEAR(clear_module_state->__pyx_n_s_grid); - Py_CLEAR(clear_module_state->__pyx_n_s_gs); - Py_CLEAR(clear_module_state->__pyx_n_s_gw); - Py_CLEAR(clear_module_state->__pyx_n_u_head); - Py_CLEAR(clear_module_state->__pyx_n_s_help); - Py_CLEAR(clear_module_state->__pyx_n_s_i); - Py_CLEAR(clear_module_state->__pyx_n_u_ignore); - Py_CLEAR(clear_module_state->__pyx_n_u_ij); - Py_CLEAR(clear_module_state->__pyx_n_s_img); - Py_CLEAR(clear_module_state->__pyx_n_s_img_size); - Py_CLEAR(clear_module_state->__pyx_n_s_import); - Py_CLEAR(clear_module_state->__pyx_n_s_indexing); - Py_CLEAR(clear_module_state->__pyx_n_s_info); - Py_CLEAR(clear_module_state->__pyx_n_s_init); - Py_CLEAR(clear_module_state->__pyx_n_s_init_subclass); - Py_CLEAR(clear_module_state->__pyx_n_s_initialize_biases); - Py_CLEAR(clear_module_state->__pyx_n_s_initialize_weights); - Py_CLEAR(clear_module_state->__pyx_n_s_initializing); - Py_CLEAR(clear_module_state->__pyx_n_s_inplace); - Py_CLEAR(clear_module_state->__pyx_n_u_inplace); - Py_CLEAR(clear_module_state->__pyx_n_s_inputs); - Py_CLEAR(clear_module_state->__pyx_n_s_insert); - Py_CLEAR(clear_module_state->__pyx_n_s_is_available); - Py_CLEAR(clear_module_state->__pyx_n_s_is_coroutine); - Py_CLEAR(clear_module_state->__pyx_kp_u_isenabled); - Py_CLEAR(clear_module_state->__pyx_n_s_j); - Py_CLEAR(clear_module_state->__pyx_n_s_layers); - Py_CLEAR(clear_module_state->__pyx_n_s_log); - Py_CLEAR(clear_module_state->__pyx_n_s_m); - Py_CLEAR(clear_module_state->__pyx_n_s_m_2); - Py_CLEAR(clear_module_state->__pyx_kp_u_main); - Py_CLEAR(clear_module_state->__pyx_n_s_main_2); - Py_CLEAR(clear_module_state->__pyx_n_u_main_2); - Py_CLEAR(clear_module_state->__pyx_n_s_make_divisible); - Py_CLEAR(clear_module_state->__pyx_n_s_make_grid); - Py_CLEAR(clear_module_state->__pyx_n_s_map); - Py_CLEAR(clear_module_state->__pyx_n_s_math); - Py_CLEAR(clear_module_state->__pyx_n_s_max); - Py_CLEAR(clear_module_state->__pyx_n_s_mean); - Py_CLEAR(clear_module_state->__pyx_n_s_meshgrid); - Py_CLEAR(clear_module_state->__pyx_n_s_metaclass); - Py_CLEAR(clear_module_state->__pyx_n_s_mi); - Py_CLEAR(clear_module_state->__pyx_n_s_model); - Py_CLEAR(clear_module_state->__pyx_n_s_model_info); - Py_CLEAR(clear_module_state->__pyx_kp_u_model_yaml); - Py_CLEAR(clear_module_state->__pyx_n_u_models); - Py_CLEAR(clear_module_state->__pyx_kp_u_module); - Py_CLEAR(clear_module_state->__pyx_n_u_module_2); - Py_CLEAR(clear_module_state->__pyx_n_s_module_3); - Py_CLEAR(clear_module_state->__pyx_n_s_modules); - Py_CLEAR(clear_module_state->__pyx_n_s_mro_entries); - Py_CLEAR(clear_module_state->__pyx_n_s_n); - Py_CLEAR(clear_module_state->__pyx_n_u_n); - Py_CLEAR(clear_module_state->__pyx_n_s_n_2); - Py_CLEAR(clear_module_state->__pyx_n_s_na); - Py_CLEAR(clear_module_state->__pyx_n_s_name); - Py_CLEAR(clear_module_state->__pyx_n_s_name_2); - Py_CLEAR(clear_module_state->__pyx_n_s_names); - Py_CLEAR(clear_module_state->__pyx_n_s_nc); - Py_CLEAR(clear_module_state->__pyx_n_u_nc); - Py_CLEAR(clear_module_state->__pyx_n_s_nl); - Py_CLEAR(clear_module_state->__pyx_n_s_nn); - Py_CLEAR(clear_module_state->__pyx_n_s_no); - Py_CLEAR(clear_module_state->__pyx_n_s_np); - Py_CLEAR(clear_module_state->__pyx_n_s_numel); - Py_CLEAR(clear_module_state->__pyx_n_s_nx); - Py_CLEAR(clear_module_state->__pyx_n_s_ny); - Py_CLEAR(clear_module_state->__pyx_n_s_o); - Py_CLEAR(clear_module_state->__pyx_n_s_onnx_dynamic); - Py_CLEAR(clear_module_state->__pyx_n_s_open); - Py_CLEAR(clear_module_state->__pyx_n_s_opt); - Py_CLEAR(clear_module_state->__pyx_n_s_p); - Py_CLEAR(clear_module_state->__pyx_n_s_parameters); - Py_CLEAR(clear_module_state->__pyx_n_u_params); - Py_CLEAR(clear_module_state->__pyx_n_s_parents); - Py_CLEAR(clear_module_state->__pyx_n_s_parse_args); - Py_CLEAR(clear_module_state->__pyx_n_s_parse_model); - Py_CLEAR(clear_module_state->__pyx_n_s_parse_model_locals_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_parser); - Py_CLEAR(clear_module_state->__pyx_n_s_path); - Py_CLEAR(clear_module_state->__pyx_n_s_pathlib); - Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); - Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2); - Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3); - Py_CLEAR(clear_module_state->__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4); - Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils); - Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2); - Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3); - Py_CLEAR(clear_module_state->__pyx_n_s_permute); - Py_CLEAR(clear_module_state->__pyx_n_s_prepare); - Py_CLEAR(clear_module_state->__pyx_n_s_print); - Py_CLEAR(clear_module_state->__pyx_n_s_print_args); - Py_CLEAR(clear_module_state->__pyx_n_s_print_biases); - Py_CLEAR(clear_module_state->__pyx_n_s_profile); - Py_CLEAR(clear_module_state->__pyx_kp_u_profile_2); - Py_CLEAR(clear_module_state->__pyx_kp_u_profile_model_speed); - Py_CLEAR(clear_module_state->__pyx_n_s_profile_one_layer); - Py_CLEAR(clear_module_state->__pyx_n_s_qualname); - Py_CLEAR(clear_module_state->__pyx_n_s_rand); - Py_CLEAR(clear_module_state->__pyx_n_s_range); - Py_CLEAR(clear_module_state->__pyx_n_s_register_buffer); - Py_CLEAR(clear_module_state->__pyx_n_s_requires_grad); - Py_CLEAR(clear_module_state->__pyx_n_s_resolve); - Py_CLEAR(clear_module_state->__pyx_n_s_rglob); - Py_CLEAR(clear_module_state->__pyx_n_s_round); - Py_CLEAR(clear_module_state->__pyx_n_s_s); - Py_CLEAR(clear_module_state->__pyx_n_s_safe_load); - Py_CLEAR(clear_module_state->__pyx_n_s_save); - Py_CLEAR(clear_module_state->__pyx_n_s_scale); - Py_CLEAR(clear_module_state->__pyx_n_s_scale_img); - Py_CLEAR(clear_module_state->__pyx_n_s_select_device); - Py_CLEAR(clear_module_state->__pyx_n_s_self); - Py_CLEAR(clear_module_state->__pyx_n_s_send); - Py_CLEAR(clear_module_state->__pyx_n_s_set_name); - Py_CLEAR(clear_module_state->__pyx_n_s_shape); - Py_CLEAR(clear_module_state->__pyx_n_s_si); - Py_CLEAR(clear_module_state->__pyx_n_s_sigmoid); - Py_CLEAR(clear_module_state->__pyx_n_s_spec); - Py_CLEAR(clear_module_state->__pyx_n_s_stack); - Py_CLEAR(clear_module_state->__pyx_n_s_stem); - Py_CLEAR(clear_module_state->__pyx_n_u_store_true); - Py_CLEAR(clear_module_state->__pyx_n_s_stride); - Py_CLEAR(clear_module_state->__pyx_n_s_sum); - Py_CLEAR(clear_module_state->__pyx_n_s_super); - Py_CLEAR(clear_module_state->__pyx_n_s_sys); - Py_CLEAR(clear_module_state->__pyx_n_s_t); - Py_CLEAR(clear_module_state->__pyx_n_s_tensor); - Py_CLEAR(clear_module_state->__pyx_kp_u_test); - Py_CLEAR(clear_module_state->__pyx_n_s_test_2); - Py_CLEAR(clear_module_state->__pyx_n_s_test_3); - Py_CLEAR(clear_module_state->__pyx_kp_u_test_all_yolo_yaml); - Py_CLEAR(clear_module_state->__pyx_n_s_thop); - Py_CLEAR(clear_module_state->__pyx_n_s_throw); - Py_CLEAR(clear_module_state->__pyx_kp_u_time_ms); - Py_CLEAR(clear_module_state->__pyx_n_s_time_sync); - Py_CLEAR(clear_module_state->__pyx_n_s_to); - Py_CLEAR(clear_module_state->__pyx_n_s_tolist); - Py_CLEAR(clear_module_state->__pyx_n_s_torch); - Py_CLEAR(clear_module_state->__pyx_n_s_train); - Py_CLEAR(clear_module_state->__pyx_n_s_training); - Py_CLEAR(clear_module_state->__pyx_n_s_type); - Py_CLEAR(clear_module_state->__pyx_n_s_verbose); - Py_CLEAR(clear_module_state->__pyx_n_s_view); - Py_CLEAR(clear_module_state->__pyx_n_s_visualize); - Py_CLEAR(clear_module_state->__pyx_n_s_weight); - Py_CLEAR(clear_module_state->__pyx_n_s_wh); - Py_CLEAR(clear_module_state->__pyx_n_u_width_multiple); - Py_CLEAR(clear_module_state->__pyx_kp_u_with_nc); - Py_CLEAR(clear_module_state->__pyx_n_s_x); - Py_CLEAR(clear_module_state->__pyx_n_s_xi); - Py_CLEAR(clear_module_state->__pyx_n_s_xv); - Py_CLEAR(clear_module_state->__pyx_n_s_xy); - Py_CLEAR(clear_module_state->__pyx_n_s_y); - Py_CLEAR(clear_module_state->__pyx_n_s_yaml); - Py_CLEAR(clear_module_state->__pyx_n_s_yaml_file); - Py_CLEAR(clear_module_state->__pyx_n_s_yi); - Py_CLEAR(clear_module_state->__pyx_kp_u_yolo_yaml); - Py_CLEAR(clear_module_state->__pyx_kp_u_yolov5s_yaml); - Py_CLEAR(clear_module_state->__pyx_n_s_yv); - Py_CLEAR(clear_module_state->__pyx_n_s_z); - Py_CLEAR(clear_module_state->__pyx_n_s_zeros); - Py_CLEAR(clear_module_state->__pyx_n_s_zip); - Py_CLEAR(clear_module_state->__pyx_float_0_5); - Py_CLEAR(clear_module_state->__pyx_float_0_6); - Py_CLEAR(clear_module_state->__pyx_float_1E9); - Py_CLEAR(clear_module_state->__pyx_float_0_67); - Py_CLEAR(clear_module_state->__pyx_float_0_83); - Py_CLEAR(clear_module_state->__pyx_float_0_999999); - Py_CLEAR(clear_module_state->__pyx_int_0); - Py_CLEAR(clear_module_state->__pyx_int_1); - Py_CLEAR(clear_module_state->__pyx_int_2); - Py_CLEAR(clear_module_state->__pyx_int_3); - Py_CLEAR(clear_module_state->__pyx_int_4); - Py_CLEAR(clear_module_state->__pyx_int_5); - Py_CLEAR(clear_module_state->__pyx_int_8); - Py_CLEAR(clear_module_state->__pyx_int_20); - Py_CLEAR(clear_module_state->__pyx_int_80); - Py_CLEAR(clear_module_state->__pyx_int_100); - Py_CLEAR(clear_module_state->__pyx_int_256); - Py_CLEAR(clear_module_state->__pyx_int_640); - Py_CLEAR(clear_module_state->__pyx_int_neg_1); - Py_CLEAR(clear_module_state->__pyx_int_neg_2); - Py_CLEAR(clear_module_state->__pyx_tuple_); - Py_CLEAR(clear_module_state->__pyx_slice__2); - Py_CLEAR(clear_module_state->__pyx_slice__3); - Py_CLEAR(clear_module_state->__pyx_slice__6); - Py_CLEAR(clear_module_state->__pyx_tuple__4); - Py_CLEAR(clear_module_state->__pyx_tuple__5); - Py_CLEAR(clear_module_state->__pyx_tuple__7); - Py_CLEAR(clear_module_state->__pyx_tuple__9); - Py_CLEAR(clear_module_state->__pyx_slice__13); - Py_CLEAR(clear_module_state->__pyx_slice__14); - Py_CLEAR(clear_module_state->__pyx_slice__18); - Py_CLEAR(clear_module_state->__pyx_slice__20); - Py_CLEAR(clear_module_state->__pyx_slice__22); - Py_CLEAR(clear_module_state->__pyx_slice__27); - Py_CLEAR(clear_module_state->__pyx_slice__29); - Py_CLEAR(clear_module_state->__pyx_slice__31); - Py_CLEAR(clear_module_state->__pyx_tuple__10); - Py_CLEAR(clear_module_state->__pyx_tuple__11); - Py_CLEAR(clear_module_state->__pyx_tuple__15); - Py_CLEAR(clear_module_state->__pyx_tuple__16); - Py_CLEAR(clear_module_state->__pyx_tuple__17); - Py_CLEAR(clear_module_state->__pyx_tuple__19); - Py_CLEAR(clear_module_state->__pyx_tuple__21); - Py_CLEAR(clear_module_state->__pyx_tuple__26); - Py_CLEAR(clear_module_state->__pyx_tuple__28); - Py_CLEAR(clear_module_state->__pyx_tuple__33); - Py_CLEAR(clear_module_state->__pyx_tuple__35); - Py_CLEAR(clear_module_state->__pyx_tuple__37); - Py_CLEAR(clear_module_state->__pyx_tuple__39); - Py_CLEAR(clear_module_state->__pyx_tuple__41); - Py_CLEAR(clear_module_state->__pyx_tuple__42); - Py_CLEAR(clear_module_state->__pyx_tuple__44); - Py_CLEAR(clear_module_state->__pyx_tuple__45); - Py_CLEAR(clear_module_state->__pyx_tuple__47); - Py_CLEAR(clear_module_state->__pyx_tuple__48); - Py_CLEAR(clear_module_state->__pyx_tuple__50); - Py_CLEAR(clear_module_state->__pyx_tuple__52); - Py_CLEAR(clear_module_state->__pyx_tuple__53); - Py_CLEAR(clear_module_state->__pyx_tuple__55); - Py_CLEAR(clear_module_state->__pyx_tuple__57); - Py_CLEAR(clear_module_state->__pyx_tuple__59); - Py_CLEAR(clear_module_state->__pyx_tuple__61); - Py_CLEAR(clear_module_state->__pyx_tuple__62); - Py_CLEAR(clear_module_state->__pyx_tuple__64); - Py_CLEAR(clear_module_state->__pyx_tuple__66); - Py_CLEAR(clear_module_state->__pyx_tuple__68); - Py_CLEAR(clear_module_state->__pyx_tuple__69); - Py_CLEAR(clear_module_state->__pyx_tuple__71); - Py_CLEAR(clear_module_state->__pyx_tuple__73); - Py_CLEAR(clear_module_state->__pyx_tuple__74); - Py_CLEAR(clear_module_state->__pyx_tuple__75); - Py_CLEAR(clear_module_state->__pyx_tuple__76); - Py_CLEAR(clear_module_state->__pyx_codeobj__34); - Py_CLEAR(clear_module_state->__pyx_codeobj__38); - Py_CLEAR(clear_module_state->__pyx_codeobj__40); - Py_CLEAR(clear_module_state->__pyx_codeobj__43); - Py_CLEAR(clear_module_state->__pyx_codeobj__46); - Py_CLEAR(clear_module_state->__pyx_codeobj__49); - Py_CLEAR(clear_module_state->__pyx_codeobj__51); - Py_CLEAR(clear_module_state->__pyx_codeobj__54); - Py_CLEAR(clear_module_state->__pyx_codeobj__56); - Py_CLEAR(clear_module_state->__pyx_codeobj__58); - Py_CLEAR(clear_module_state->__pyx_codeobj__60); - Py_CLEAR(clear_module_state->__pyx_codeobj__63); - Py_CLEAR(clear_module_state->__pyx_codeobj__65); - Py_CLEAR(clear_module_state->__pyx_codeobj__67); - Py_CLEAR(clear_module_state->__pyx_codeobj__70); - Py_CLEAR(clear_module_state->__pyx_codeobj__72); - return 0; -} -#endif -/* #### Code section: module_state_traverse ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_traverse(PyObject *m, visitproc visit, void *arg) { - __pyx_mstate *traverse_module_state = __pyx_mstate(m); - if (!traverse_module_state) return 0; - Py_VISIT(traverse_module_state->__pyx_d); - Py_VISIT(traverse_module_state->__pyx_b); - Py_VISIT(traverse_module_state->__pyx_cython_runtime); - Py_VISIT(traverse_module_state->__pyx_empty_tuple); - Py_VISIT(traverse_module_state->__pyx_empty_bytes); - Py_VISIT(traverse_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_VISIT(traverse_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_VISIT(traverse_module_state->__pyx_FusedFunctionType); - #endif - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr); - Py_VISIT(traverse_module_state->__pyx_kp_u_10); - Py_VISIT(traverse_module_state->__pyx_kp_u_10_0f); - Py_VISIT(traverse_module_state->__pyx_kp_u_10_2f); - Py_VISIT(traverse_module_state->__pyx_kp_u_10s); - Py_VISIT(traverse_module_state->__pyx_kp_u_18); - Py_VISIT(traverse_module_state->__pyx_kp_u_3); - Py_VISIT(traverse_module_state->__pyx_kp_u_30); - Py_VISIT(traverse_module_state->__pyx_kp_u_40); - Py_VISIT(traverse_module_state->__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3); - Py_VISIT(traverse_module_state->__pyx_n_s_ArgumentParser); - Py_VISIT(traverse_module_state->__pyx_n_s_BatchNorm2d); - Py_VISIT(traverse_module_state->__pyx_n_s_Bottleneck); - Py_VISIT(traverse_module_state->__pyx_n_s_BottleneckCSP); - Py_VISIT(traverse_module_state->__pyx_n_s_C3); - Py_VISIT(traverse_module_state->__pyx_n_s_C3Ghost); - Py_VISIT(traverse_module_state->__pyx_n_s_C3SPP); - Py_VISIT(traverse_module_state->__pyx_n_s_C3TR); - Py_VISIT(traverse_module_state->__pyx_n_s_Concat); - Py_VISIT(traverse_module_state->__pyx_n_s_Contract); - Py_VISIT(traverse_module_state->__pyx_n_s_Conv); - Py_VISIT(traverse_module_state->__pyx_n_s_Conv2d); - Py_VISIT(traverse_module_state->__pyx_n_s_CrossConv); - Py_VISIT(traverse_module_state->__pyx_n_s_DWConv); - Py_VISIT(traverse_module_state->__pyx_n_s_Detect); - Py_VISIT(traverse_module_state->__pyx_n_s_Detect___init); - Py_VISIT(traverse_module_state->__pyx_n_s_Detect___init___locals_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_Detect__make_grid); - Py_VISIT(traverse_module_state->__pyx_n_s_Detect_forward); - Py_VISIT(traverse_module_state->__pyx_kp_u_Error_in); - Py_VISIT(traverse_module_state->__pyx_n_s_Expand); - Py_VISIT(traverse_module_state->__pyx_n_s_FILE); - Py_VISIT(traverse_module_state->__pyx_n_s_Focus); - Py_VISIT(traverse_module_state->__pyx_kp_u_Fusing_layers); - Py_VISIT(traverse_module_state->__pyx_n_u_GFLOPs); - Py_VISIT(traverse_module_state->__pyx_n_s_GhostBottleneck); - Py_VISIT(traverse_module_state->__pyx_n_s_GhostConv); - Py_VISIT(traverse_module_state->__pyx_n_s_ImportError); - Py_VISIT(traverse_module_state->__pyx_n_s_LOGGER); - Py_VISIT(traverse_module_state->__pyx_n_s_MixConv2d); - Py_VISIT(traverse_module_state->__pyx_n_s_Model); - Py_VISIT(traverse_module_state->__pyx_n_s_Model___init); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__apply); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__clip_augmented); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__clip_augmented_locals_gen); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__descale_pred); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__forward_augment); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__forward_once); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__initialize_biases); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__print_biases); - Py_VISIT(traverse_module_state->__pyx_n_s_Model__profile_one_layer); - Py_VISIT(traverse_module_state->__pyx_n_s_Model_forward); - Py_VISIT(traverse_module_state->__pyx_n_s_Model_fuse); - Py_VISIT(traverse_module_state->__pyx_n_s_Model_info); - Py_VISIT(traverse_module_state->__pyx_n_s_Module); - Py_VISIT(traverse_module_state->__pyx_n_s_ModuleList); - Py_VISIT(traverse_module_state->__pyx_n_s_NameError); - Py_VISIT(traverse_module_state->__pyx_kp_u_Overriding_model_yaml_anchors_wi); - Py_VISIT(traverse_module_state->__pyx_kp_u_Overriding_model_yaml_nc); - Py_VISIT(traverse_module_state->__pyx_n_s_Parameter); - Py_VISIT(traverse_module_state->__pyx_n_s_Path); - Py_VISIT(traverse_module_state->__pyx_n_s_ROOT); - Py_VISIT(traverse_module_state->__pyx_n_s_SPP); - Py_VISIT(traverse_module_state->__pyx_n_s_SPPF); - Py_VISIT(traverse_module_state->__pyx_n_s_Sequential); - Py_VISIT(traverse_module_state->__pyx_n_s_T); - Py_VISIT(traverse_module_state->__pyx_kp_u_Total); - Py_VISIT(traverse_module_state->__pyx_kp_u__12); - Py_VISIT(traverse_module_state->__pyx_kp_u__23); - Py_VISIT(traverse_module_state->__pyx_kp_u__24); - Py_VISIT(traverse_module_state->__pyx_kp_u__25); - Py_VISIT(traverse_module_state->__pyx_kp_u__30); - Py_VISIT(traverse_module_state->__pyx_kp_u__32); - Py_VISIT(traverse_module_state->__pyx_n_s__36); - Py_VISIT(traverse_module_state->__pyx_kp_u__77); - Py_VISIT(traverse_module_state->__pyx_n_s__78); - Py_VISIT(traverse_module_state->__pyx_n_s__8); - Py_VISIT(traverse_module_state->__pyx_n_s_a); - Py_VISIT(traverse_module_state->__pyx_n_s_action); - Py_VISIT(traverse_module_state->__pyx_n_s_add_argument); - Py_VISIT(traverse_module_state->__pyx_n_s_anchor_grid); - Py_VISIT(traverse_module_state->__pyx_n_s_anchors); - Py_VISIT(traverse_module_state->__pyx_n_u_anchors); - Py_VISIT(traverse_module_state->__pyx_n_s_append); - Py_VISIT(traverse_module_state->__pyx_n_s_apply); - Py_VISIT(traverse_module_state->__pyx_n_s_arange); - Py_VISIT(traverse_module_state->__pyx_n_s_argparse); - Py_VISIT(traverse_module_state->__pyx_n_s_args); - Py_VISIT(traverse_module_state->__pyx_n_u_arguments); - Py_VISIT(traverse_module_state->__pyx_n_u_ascii); - Py_VISIT(traverse_module_state->__pyx_n_s_asyncio_coroutines); - Py_VISIT(traverse_module_state->__pyx_n_s_augment); - Py_VISIT(traverse_module_state->__pyx_n_s_b); - Py_VISIT(traverse_module_state->__pyx_n_u_backbone); - Py_VISIT(traverse_module_state->__pyx_n_s_bias); - Py_VISIT(traverse_module_state->__pyx_n_s_bn); - Py_VISIT(traverse_module_state->__pyx_n_u_bn); - Py_VISIT(traverse_module_state->__pyx_n_s_bs); - Py_VISIT(traverse_module_state->__pyx_n_s_c); - Py_VISIT(traverse_module_state->__pyx_n_s_c1); - Py_VISIT(traverse_module_state->__pyx_n_s_c2); - Py_VISIT(traverse_module_state->__pyx_n_s_cat); - Py_VISIT(traverse_module_state->__pyx_n_s_cf); - Py_VISIT(traverse_module_state->__pyx_n_s_cfg); - Py_VISIT(traverse_module_state->__pyx_kp_u_cfg_2); - Py_VISIT(traverse_module_state->__pyx_n_s_ch); - Py_VISIT(traverse_module_state->__pyx_n_u_ch); - Py_VISIT(traverse_module_state->__pyx_n_s_check_anchor_order); - Py_VISIT(traverse_module_state->__pyx_n_s_class_getitem); - Py_VISIT(traverse_module_state->__pyx_n_s_cline_in_traceback); - Py_VISIT(traverse_module_state->__pyx_n_s_clip_augmented); - Py_VISIT(traverse_module_state->__pyx_n_s_clone); - Py_VISIT(traverse_module_state->__pyx_n_s_close); - Py_VISIT(traverse_module_state->__pyx_n_s_contiguous); - Py_VISIT(traverse_module_state->__pyx_n_s_conv); - Py_VISIT(traverse_module_state->__pyx_n_s_copy); - Py_VISIT(traverse_module_state->__pyx_n_s_cuda); - Py_VISIT(traverse_module_state->__pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or); - Py_VISIT(traverse_module_state->__pyx_n_s_d); - Py_VISIT(traverse_module_state->__pyx_n_s_data); - Py_VISIT(traverse_module_state->__pyx_n_s_deepcopy); - Py_VISIT(traverse_module_state->__pyx_n_s_default); - Py_VISIT(traverse_module_state->__pyx_n_u_depth_multiple); - Py_VISIT(traverse_module_state->__pyx_n_s_descale_pred); - Py_VISIT(traverse_module_state->__pyx_n_s_detach); - Py_VISIT(traverse_module_state->__pyx_n_s_device); - Py_VISIT(traverse_module_state->__pyx_kp_u_device_2); - Py_VISIT(traverse_module_state->__pyx_n_s_dict); - Py_VISIT(traverse_module_state->__pyx_kp_u_disable); - Py_VISIT(traverse_module_state->__pyx_n_s_doc); - Py_VISIT(traverse_module_state->__pyx_n_s_dt); - Py_VISIT(traverse_module_state->__pyx_n_s_e); - Py_VISIT(traverse_module_state->__pyx_kp_u_enable); - Py_VISIT(traverse_module_state->__pyx_n_s_encoding); - Py_VISIT(traverse_module_state->__pyx_n_s_enter); - Py_VISIT(traverse_module_state->__pyx_n_s_enumerate); - Py_VISIT(traverse_module_state->__pyx_n_s_errors); - Py_VISIT(traverse_module_state->__pyx_n_s_eval); - Py_VISIT(traverse_module_state->__pyx_n_s_exit); - Py_VISIT(traverse_module_state->__pyx_n_s_expand); - Py_VISIT(traverse_module_state->__pyx_n_s_f); - Py_VISIT(traverse_module_state->__pyx_n_s_fi); - Py_VISIT(traverse_module_state->__pyx_n_s_file); - Py_VISIT(traverse_module_state->__pyx_n_s_flip); - Py_VISIT(traverse_module_state->__pyx_n_s_flips); - Py_VISIT(traverse_module_state->__pyx_n_s_float); - Py_VISIT(traverse_module_state->__pyx_n_s_fn); - Py_VISIT(traverse_module_state->__pyx_n_s_forward); - Py_VISIT(traverse_module_state->__pyx_n_s_forward_augment); - Py_VISIT(traverse_module_state->__pyx_n_s_forward_fuse); - Py_VISIT(traverse_module_state->__pyx_n_s_forward_once); - Py_VISIT(traverse_module_state->__pyx_n_u_from); - Py_VISIT(traverse_module_state->__pyx_n_s_fuse); - Py_VISIT(traverse_module_state->__pyx_n_s_fuse_conv_and_bn); - Py_VISIT(traverse_module_state->__pyx_n_s_g); - Py_VISIT(traverse_module_state->__pyx_kp_u_gc); - Py_VISIT(traverse_module_state->__pyx_n_s_gd); - Py_VISIT(traverse_module_state->__pyx_n_s_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_get); - Py_VISIT(traverse_module_state->__pyx_n_s_grid); - Py_VISIT(traverse_module_state->__pyx_n_s_gs); - Py_VISIT(traverse_module_state->__pyx_n_s_gw); - Py_VISIT(traverse_module_state->__pyx_n_u_head); - Py_VISIT(traverse_module_state->__pyx_n_s_help); - Py_VISIT(traverse_module_state->__pyx_n_s_i); - Py_VISIT(traverse_module_state->__pyx_n_u_ignore); - Py_VISIT(traverse_module_state->__pyx_n_u_ij); - Py_VISIT(traverse_module_state->__pyx_n_s_img); - Py_VISIT(traverse_module_state->__pyx_n_s_img_size); - Py_VISIT(traverse_module_state->__pyx_n_s_import); - Py_VISIT(traverse_module_state->__pyx_n_s_indexing); - Py_VISIT(traverse_module_state->__pyx_n_s_info); - Py_VISIT(traverse_module_state->__pyx_n_s_init); - Py_VISIT(traverse_module_state->__pyx_n_s_init_subclass); - Py_VISIT(traverse_module_state->__pyx_n_s_initialize_biases); - Py_VISIT(traverse_module_state->__pyx_n_s_initialize_weights); - Py_VISIT(traverse_module_state->__pyx_n_s_initializing); - Py_VISIT(traverse_module_state->__pyx_n_s_inplace); - Py_VISIT(traverse_module_state->__pyx_n_u_inplace); - Py_VISIT(traverse_module_state->__pyx_n_s_inputs); - Py_VISIT(traverse_module_state->__pyx_n_s_insert); - Py_VISIT(traverse_module_state->__pyx_n_s_is_available); - Py_VISIT(traverse_module_state->__pyx_n_s_is_coroutine); - Py_VISIT(traverse_module_state->__pyx_kp_u_isenabled); - Py_VISIT(traverse_module_state->__pyx_n_s_j); - Py_VISIT(traverse_module_state->__pyx_n_s_layers); - Py_VISIT(traverse_module_state->__pyx_n_s_log); - Py_VISIT(traverse_module_state->__pyx_n_s_m); - Py_VISIT(traverse_module_state->__pyx_n_s_m_2); - Py_VISIT(traverse_module_state->__pyx_kp_u_main); - Py_VISIT(traverse_module_state->__pyx_n_s_main_2); - Py_VISIT(traverse_module_state->__pyx_n_u_main_2); - Py_VISIT(traverse_module_state->__pyx_n_s_make_divisible); - Py_VISIT(traverse_module_state->__pyx_n_s_make_grid); - Py_VISIT(traverse_module_state->__pyx_n_s_map); - Py_VISIT(traverse_module_state->__pyx_n_s_math); - Py_VISIT(traverse_module_state->__pyx_n_s_max); - Py_VISIT(traverse_module_state->__pyx_n_s_mean); - Py_VISIT(traverse_module_state->__pyx_n_s_meshgrid); - Py_VISIT(traverse_module_state->__pyx_n_s_metaclass); - Py_VISIT(traverse_module_state->__pyx_n_s_mi); - Py_VISIT(traverse_module_state->__pyx_n_s_model); - Py_VISIT(traverse_module_state->__pyx_n_s_model_info); - Py_VISIT(traverse_module_state->__pyx_kp_u_model_yaml); - Py_VISIT(traverse_module_state->__pyx_n_u_models); - Py_VISIT(traverse_module_state->__pyx_kp_u_module); - Py_VISIT(traverse_module_state->__pyx_n_u_module_2); - Py_VISIT(traverse_module_state->__pyx_n_s_module_3); - Py_VISIT(traverse_module_state->__pyx_n_s_modules); - Py_VISIT(traverse_module_state->__pyx_n_s_mro_entries); - Py_VISIT(traverse_module_state->__pyx_n_s_n); - Py_VISIT(traverse_module_state->__pyx_n_u_n); - Py_VISIT(traverse_module_state->__pyx_n_s_n_2); - Py_VISIT(traverse_module_state->__pyx_n_s_na); - Py_VISIT(traverse_module_state->__pyx_n_s_name); - Py_VISIT(traverse_module_state->__pyx_n_s_name_2); - Py_VISIT(traverse_module_state->__pyx_n_s_names); - Py_VISIT(traverse_module_state->__pyx_n_s_nc); - Py_VISIT(traverse_module_state->__pyx_n_u_nc); - Py_VISIT(traverse_module_state->__pyx_n_s_nl); - Py_VISIT(traverse_module_state->__pyx_n_s_nn); - Py_VISIT(traverse_module_state->__pyx_n_s_no); - Py_VISIT(traverse_module_state->__pyx_n_s_np); - Py_VISIT(traverse_module_state->__pyx_n_s_numel); - Py_VISIT(traverse_module_state->__pyx_n_s_nx); - Py_VISIT(traverse_module_state->__pyx_n_s_ny); - Py_VISIT(traverse_module_state->__pyx_n_s_o); - Py_VISIT(traverse_module_state->__pyx_n_s_onnx_dynamic); - Py_VISIT(traverse_module_state->__pyx_n_s_open); - Py_VISIT(traverse_module_state->__pyx_n_s_opt); - Py_VISIT(traverse_module_state->__pyx_n_s_p); - Py_VISIT(traverse_module_state->__pyx_n_s_parameters); - Py_VISIT(traverse_module_state->__pyx_n_u_params); - Py_VISIT(traverse_module_state->__pyx_n_s_parents); - Py_VISIT(traverse_module_state->__pyx_n_s_parse_args); - Py_VISIT(traverse_module_state->__pyx_n_s_parse_model); - Py_VISIT(traverse_module_state->__pyx_n_s_parse_model_locals_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_parser); - Py_VISIT(traverse_module_state->__pyx_n_s_path); - Py_VISIT(traverse_module_state->__pyx_n_s_pathlib); - Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); - Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2); - Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3); - Py_VISIT(traverse_module_state->__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4); - Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils); - Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2); - Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3); - Py_VISIT(traverse_module_state->__pyx_n_s_permute); - Py_VISIT(traverse_module_state->__pyx_n_s_prepare); - Py_VISIT(traverse_module_state->__pyx_n_s_print); - Py_VISIT(traverse_module_state->__pyx_n_s_print_args); - Py_VISIT(traverse_module_state->__pyx_n_s_print_biases); - Py_VISIT(traverse_module_state->__pyx_n_s_profile); - Py_VISIT(traverse_module_state->__pyx_kp_u_profile_2); - Py_VISIT(traverse_module_state->__pyx_kp_u_profile_model_speed); - Py_VISIT(traverse_module_state->__pyx_n_s_profile_one_layer); - Py_VISIT(traverse_module_state->__pyx_n_s_qualname); - Py_VISIT(traverse_module_state->__pyx_n_s_rand); - Py_VISIT(traverse_module_state->__pyx_n_s_range); - Py_VISIT(traverse_module_state->__pyx_n_s_register_buffer); - Py_VISIT(traverse_module_state->__pyx_n_s_requires_grad); - Py_VISIT(traverse_module_state->__pyx_n_s_resolve); - Py_VISIT(traverse_module_state->__pyx_n_s_rglob); - Py_VISIT(traverse_module_state->__pyx_n_s_round); - Py_VISIT(traverse_module_state->__pyx_n_s_s); - Py_VISIT(traverse_module_state->__pyx_n_s_safe_load); - Py_VISIT(traverse_module_state->__pyx_n_s_save); - Py_VISIT(traverse_module_state->__pyx_n_s_scale); - Py_VISIT(traverse_module_state->__pyx_n_s_scale_img); - Py_VISIT(traverse_module_state->__pyx_n_s_select_device); - Py_VISIT(traverse_module_state->__pyx_n_s_self); - Py_VISIT(traverse_module_state->__pyx_n_s_send); - Py_VISIT(traverse_module_state->__pyx_n_s_set_name); - Py_VISIT(traverse_module_state->__pyx_n_s_shape); - Py_VISIT(traverse_module_state->__pyx_n_s_si); - Py_VISIT(traverse_module_state->__pyx_n_s_sigmoid); - Py_VISIT(traverse_module_state->__pyx_n_s_spec); - Py_VISIT(traverse_module_state->__pyx_n_s_stack); - Py_VISIT(traverse_module_state->__pyx_n_s_stem); - Py_VISIT(traverse_module_state->__pyx_n_u_store_true); - Py_VISIT(traverse_module_state->__pyx_n_s_stride); - Py_VISIT(traverse_module_state->__pyx_n_s_sum); - Py_VISIT(traverse_module_state->__pyx_n_s_super); - Py_VISIT(traverse_module_state->__pyx_n_s_sys); - Py_VISIT(traverse_module_state->__pyx_n_s_t); - Py_VISIT(traverse_module_state->__pyx_n_s_tensor); - Py_VISIT(traverse_module_state->__pyx_kp_u_test); - Py_VISIT(traverse_module_state->__pyx_n_s_test_2); - Py_VISIT(traverse_module_state->__pyx_n_s_test_3); - Py_VISIT(traverse_module_state->__pyx_kp_u_test_all_yolo_yaml); - Py_VISIT(traverse_module_state->__pyx_n_s_thop); - Py_VISIT(traverse_module_state->__pyx_n_s_throw); - Py_VISIT(traverse_module_state->__pyx_kp_u_time_ms); - Py_VISIT(traverse_module_state->__pyx_n_s_time_sync); - Py_VISIT(traverse_module_state->__pyx_n_s_to); - Py_VISIT(traverse_module_state->__pyx_n_s_tolist); - Py_VISIT(traverse_module_state->__pyx_n_s_torch); - Py_VISIT(traverse_module_state->__pyx_n_s_train); - Py_VISIT(traverse_module_state->__pyx_n_s_training); - Py_VISIT(traverse_module_state->__pyx_n_s_type); - Py_VISIT(traverse_module_state->__pyx_n_s_verbose); - Py_VISIT(traverse_module_state->__pyx_n_s_view); - Py_VISIT(traverse_module_state->__pyx_n_s_visualize); - Py_VISIT(traverse_module_state->__pyx_n_s_weight); - Py_VISIT(traverse_module_state->__pyx_n_s_wh); - Py_VISIT(traverse_module_state->__pyx_n_u_width_multiple); - Py_VISIT(traverse_module_state->__pyx_kp_u_with_nc); - Py_VISIT(traverse_module_state->__pyx_n_s_x); - Py_VISIT(traverse_module_state->__pyx_n_s_xi); - Py_VISIT(traverse_module_state->__pyx_n_s_xv); - Py_VISIT(traverse_module_state->__pyx_n_s_xy); - Py_VISIT(traverse_module_state->__pyx_n_s_y); - Py_VISIT(traverse_module_state->__pyx_n_s_yaml); - Py_VISIT(traverse_module_state->__pyx_n_s_yaml_file); - Py_VISIT(traverse_module_state->__pyx_n_s_yi); - Py_VISIT(traverse_module_state->__pyx_kp_u_yolo_yaml); - Py_VISIT(traverse_module_state->__pyx_kp_u_yolov5s_yaml); - Py_VISIT(traverse_module_state->__pyx_n_s_yv); - Py_VISIT(traverse_module_state->__pyx_n_s_z); - Py_VISIT(traverse_module_state->__pyx_n_s_zeros); - Py_VISIT(traverse_module_state->__pyx_n_s_zip); - Py_VISIT(traverse_module_state->__pyx_float_0_5); - Py_VISIT(traverse_module_state->__pyx_float_0_6); - Py_VISIT(traverse_module_state->__pyx_float_1E9); - Py_VISIT(traverse_module_state->__pyx_float_0_67); - Py_VISIT(traverse_module_state->__pyx_float_0_83); - Py_VISIT(traverse_module_state->__pyx_float_0_999999); - Py_VISIT(traverse_module_state->__pyx_int_0); - Py_VISIT(traverse_module_state->__pyx_int_1); - Py_VISIT(traverse_module_state->__pyx_int_2); - Py_VISIT(traverse_module_state->__pyx_int_3); - Py_VISIT(traverse_module_state->__pyx_int_4); - Py_VISIT(traverse_module_state->__pyx_int_5); - Py_VISIT(traverse_module_state->__pyx_int_8); - Py_VISIT(traverse_module_state->__pyx_int_20); - Py_VISIT(traverse_module_state->__pyx_int_80); - Py_VISIT(traverse_module_state->__pyx_int_100); - Py_VISIT(traverse_module_state->__pyx_int_256); - Py_VISIT(traverse_module_state->__pyx_int_640); - Py_VISIT(traverse_module_state->__pyx_int_neg_1); - Py_VISIT(traverse_module_state->__pyx_int_neg_2); - Py_VISIT(traverse_module_state->__pyx_tuple_); - Py_VISIT(traverse_module_state->__pyx_slice__2); - Py_VISIT(traverse_module_state->__pyx_slice__3); - Py_VISIT(traverse_module_state->__pyx_slice__6); - Py_VISIT(traverse_module_state->__pyx_tuple__4); - Py_VISIT(traverse_module_state->__pyx_tuple__5); - Py_VISIT(traverse_module_state->__pyx_tuple__7); - Py_VISIT(traverse_module_state->__pyx_tuple__9); - Py_VISIT(traverse_module_state->__pyx_slice__13); - Py_VISIT(traverse_module_state->__pyx_slice__14); - Py_VISIT(traverse_module_state->__pyx_slice__18); - Py_VISIT(traverse_module_state->__pyx_slice__20); - Py_VISIT(traverse_module_state->__pyx_slice__22); - Py_VISIT(traverse_module_state->__pyx_slice__27); - Py_VISIT(traverse_module_state->__pyx_slice__29); - Py_VISIT(traverse_module_state->__pyx_slice__31); - Py_VISIT(traverse_module_state->__pyx_tuple__10); - Py_VISIT(traverse_module_state->__pyx_tuple__11); - Py_VISIT(traverse_module_state->__pyx_tuple__15); - Py_VISIT(traverse_module_state->__pyx_tuple__16); - Py_VISIT(traverse_module_state->__pyx_tuple__17); - Py_VISIT(traverse_module_state->__pyx_tuple__19); - Py_VISIT(traverse_module_state->__pyx_tuple__21); - Py_VISIT(traverse_module_state->__pyx_tuple__26); - Py_VISIT(traverse_module_state->__pyx_tuple__28); - Py_VISIT(traverse_module_state->__pyx_tuple__33); - Py_VISIT(traverse_module_state->__pyx_tuple__35); - Py_VISIT(traverse_module_state->__pyx_tuple__37); - Py_VISIT(traverse_module_state->__pyx_tuple__39); - Py_VISIT(traverse_module_state->__pyx_tuple__41); - Py_VISIT(traverse_module_state->__pyx_tuple__42); - Py_VISIT(traverse_module_state->__pyx_tuple__44); - Py_VISIT(traverse_module_state->__pyx_tuple__45); - Py_VISIT(traverse_module_state->__pyx_tuple__47); - Py_VISIT(traverse_module_state->__pyx_tuple__48); - Py_VISIT(traverse_module_state->__pyx_tuple__50); - Py_VISIT(traverse_module_state->__pyx_tuple__52); - Py_VISIT(traverse_module_state->__pyx_tuple__53); - Py_VISIT(traverse_module_state->__pyx_tuple__55); - Py_VISIT(traverse_module_state->__pyx_tuple__57); - Py_VISIT(traverse_module_state->__pyx_tuple__59); - Py_VISIT(traverse_module_state->__pyx_tuple__61); - Py_VISIT(traverse_module_state->__pyx_tuple__62); - Py_VISIT(traverse_module_state->__pyx_tuple__64); - Py_VISIT(traverse_module_state->__pyx_tuple__66); - Py_VISIT(traverse_module_state->__pyx_tuple__68); - Py_VISIT(traverse_module_state->__pyx_tuple__69); - Py_VISIT(traverse_module_state->__pyx_tuple__71); - Py_VISIT(traverse_module_state->__pyx_tuple__73); - Py_VISIT(traverse_module_state->__pyx_tuple__74); - Py_VISIT(traverse_module_state->__pyx_tuple__75); - Py_VISIT(traverse_module_state->__pyx_tuple__76); - Py_VISIT(traverse_module_state->__pyx_codeobj__34); - Py_VISIT(traverse_module_state->__pyx_codeobj__38); - Py_VISIT(traverse_module_state->__pyx_codeobj__40); - Py_VISIT(traverse_module_state->__pyx_codeobj__43); - Py_VISIT(traverse_module_state->__pyx_codeobj__46); - Py_VISIT(traverse_module_state->__pyx_codeobj__49); - Py_VISIT(traverse_module_state->__pyx_codeobj__51); - Py_VISIT(traverse_module_state->__pyx_codeobj__54); - Py_VISIT(traverse_module_state->__pyx_codeobj__56); - Py_VISIT(traverse_module_state->__pyx_codeobj__58); - Py_VISIT(traverse_module_state->__pyx_codeobj__60); - Py_VISIT(traverse_module_state->__pyx_codeobj__63); - Py_VISIT(traverse_module_state->__pyx_codeobj__65); - Py_VISIT(traverse_module_state->__pyx_codeobj__67); - Py_VISIT(traverse_module_state->__pyx_codeobj__70); - Py_VISIT(traverse_module_state->__pyx_codeobj__72); - return 0; -} -#endif -/* #### Code section: module_state_defines ### */ -#if CYTHON_USE_MODULE_STATE -#define __pyx_d __pyx_mstate_global->__pyx_d -#define __pyx_b __pyx_mstate_global->__pyx_b -#define __pyx_cython_runtime __pyx_mstate_global->__pyx_cython_runtime -#define __pyx_empty_tuple __pyx_mstate_global->__pyx_empty_tuple -#define __pyx_empty_bytes __pyx_mstate_global->__pyx_empty_bytes -#define __pyx_empty_unicode __pyx_mstate_global->__pyx_empty_unicode -#ifdef __Pyx_CyFunction_USED -#define __pyx_CyFunctionType __pyx_mstate_global->__pyx_CyFunctionType -#endif -#ifdef __Pyx_FusedFunction_USED -#define __pyx_FusedFunctionType __pyx_mstate_global->__pyx_FusedFunctionType -#endif -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr -#define __pyx_kp_u_10 __pyx_mstate_global->__pyx_kp_u_10 -#define __pyx_kp_u_10_0f __pyx_mstate_global->__pyx_kp_u_10_0f -#define __pyx_kp_u_10_2f __pyx_mstate_global->__pyx_kp_u_10_2f -#define __pyx_kp_u_10s __pyx_mstate_global->__pyx_kp_u_10s -#define __pyx_kp_u_18 __pyx_mstate_global->__pyx_kp_u_18 -#define __pyx_kp_u_3 __pyx_mstate_global->__pyx_kp_u_3 -#define __pyx_kp_u_30 __pyx_mstate_global->__pyx_kp_u_30 -#define __pyx_kp_u_40 __pyx_mstate_global->__pyx_kp_u_40 -#define __pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3 __pyx_mstate_global->__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3 -#define __pyx_n_s_ArgumentParser __pyx_mstate_global->__pyx_n_s_ArgumentParser -#define __pyx_n_s_BatchNorm2d __pyx_mstate_global->__pyx_n_s_BatchNorm2d -#define __pyx_n_s_Bottleneck __pyx_mstate_global->__pyx_n_s_Bottleneck -#define __pyx_n_s_BottleneckCSP __pyx_mstate_global->__pyx_n_s_BottleneckCSP -#define __pyx_n_s_C3 __pyx_mstate_global->__pyx_n_s_C3 -#define __pyx_n_s_C3Ghost __pyx_mstate_global->__pyx_n_s_C3Ghost -#define __pyx_n_s_C3SPP __pyx_mstate_global->__pyx_n_s_C3SPP -#define __pyx_n_s_C3TR __pyx_mstate_global->__pyx_n_s_C3TR -#define __pyx_n_s_Concat __pyx_mstate_global->__pyx_n_s_Concat -#define __pyx_n_s_Contract __pyx_mstate_global->__pyx_n_s_Contract -#define __pyx_n_s_Conv __pyx_mstate_global->__pyx_n_s_Conv -#define __pyx_n_s_Conv2d __pyx_mstate_global->__pyx_n_s_Conv2d -#define __pyx_n_s_CrossConv __pyx_mstate_global->__pyx_n_s_CrossConv -#define __pyx_n_s_DWConv __pyx_mstate_global->__pyx_n_s_DWConv -#define __pyx_n_s_Detect __pyx_mstate_global->__pyx_n_s_Detect -#define __pyx_n_s_Detect___init __pyx_mstate_global->__pyx_n_s_Detect___init -#define __pyx_n_s_Detect___init___locals_genexpr __pyx_mstate_global->__pyx_n_s_Detect___init___locals_genexpr -#define __pyx_n_s_Detect__make_grid __pyx_mstate_global->__pyx_n_s_Detect__make_grid -#define __pyx_n_s_Detect_forward __pyx_mstate_global->__pyx_n_s_Detect_forward -#define __pyx_kp_u_Error_in __pyx_mstate_global->__pyx_kp_u_Error_in -#define __pyx_n_s_Expand __pyx_mstate_global->__pyx_n_s_Expand -#define __pyx_n_s_FILE __pyx_mstate_global->__pyx_n_s_FILE -#define __pyx_n_s_Focus __pyx_mstate_global->__pyx_n_s_Focus -#define __pyx_kp_u_Fusing_layers __pyx_mstate_global->__pyx_kp_u_Fusing_layers -#define __pyx_n_u_GFLOPs __pyx_mstate_global->__pyx_n_u_GFLOPs -#define __pyx_n_s_GhostBottleneck __pyx_mstate_global->__pyx_n_s_GhostBottleneck -#define __pyx_n_s_GhostConv __pyx_mstate_global->__pyx_n_s_GhostConv -#define __pyx_n_s_ImportError __pyx_mstate_global->__pyx_n_s_ImportError -#define __pyx_n_s_LOGGER __pyx_mstate_global->__pyx_n_s_LOGGER -#define __pyx_n_s_MixConv2d __pyx_mstate_global->__pyx_n_s_MixConv2d -#define __pyx_n_s_Model __pyx_mstate_global->__pyx_n_s_Model -#define __pyx_n_s_Model___init __pyx_mstate_global->__pyx_n_s_Model___init -#define __pyx_n_s_Model__apply __pyx_mstate_global->__pyx_n_s_Model__apply -#define __pyx_n_s_Model__clip_augmented __pyx_mstate_global->__pyx_n_s_Model__clip_augmented -#define __pyx_n_s_Model__clip_augmented_locals_gen __pyx_mstate_global->__pyx_n_s_Model__clip_augmented_locals_gen -#define __pyx_n_s_Model__descale_pred __pyx_mstate_global->__pyx_n_s_Model__descale_pred -#define __pyx_n_s_Model__forward_augment __pyx_mstate_global->__pyx_n_s_Model__forward_augment -#define __pyx_n_s_Model__forward_once __pyx_mstate_global->__pyx_n_s_Model__forward_once -#define __pyx_n_s_Model__initialize_biases __pyx_mstate_global->__pyx_n_s_Model__initialize_biases -#define __pyx_n_s_Model__print_biases __pyx_mstate_global->__pyx_n_s_Model__print_biases -#define __pyx_n_s_Model__profile_one_layer __pyx_mstate_global->__pyx_n_s_Model__profile_one_layer -#define __pyx_n_s_Model_forward __pyx_mstate_global->__pyx_n_s_Model_forward -#define __pyx_n_s_Model_fuse __pyx_mstate_global->__pyx_n_s_Model_fuse -#define __pyx_n_s_Model_info __pyx_mstate_global->__pyx_n_s_Model_info -#define __pyx_n_s_Module __pyx_mstate_global->__pyx_n_s_Module -#define __pyx_n_s_ModuleList __pyx_mstate_global->__pyx_n_s_ModuleList -#define __pyx_n_s_NameError __pyx_mstate_global->__pyx_n_s_NameError -#define __pyx_kp_u_Overriding_model_yaml_anchors_wi __pyx_mstate_global->__pyx_kp_u_Overriding_model_yaml_anchors_wi -#define __pyx_kp_u_Overriding_model_yaml_nc __pyx_mstate_global->__pyx_kp_u_Overriding_model_yaml_nc -#define __pyx_n_s_Parameter __pyx_mstate_global->__pyx_n_s_Parameter -#define __pyx_n_s_Path __pyx_mstate_global->__pyx_n_s_Path -#define __pyx_n_s_ROOT __pyx_mstate_global->__pyx_n_s_ROOT -#define __pyx_n_s_SPP __pyx_mstate_global->__pyx_n_s_SPP -#define __pyx_n_s_SPPF __pyx_mstate_global->__pyx_n_s_SPPF -#define __pyx_n_s_Sequential __pyx_mstate_global->__pyx_n_s_Sequential -#define __pyx_n_s_T __pyx_mstate_global->__pyx_n_s_T -#define __pyx_kp_u_Total __pyx_mstate_global->__pyx_kp_u_Total -#define __pyx_kp_u__12 __pyx_mstate_global->__pyx_kp_u__12 -#define __pyx_kp_u__23 __pyx_mstate_global->__pyx_kp_u__23 -#define __pyx_kp_u__24 __pyx_mstate_global->__pyx_kp_u__24 -#define __pyx_kp_u__25 __pyx_mstate_global->__pyx_kp_u__25 -#define __pyx_kp_u__30 __pyx_mstate_global->__pyx_kp_u__30 -#define __pyx_kp_u__32 __pyx_mstate_global->__pyx_kp_u__32 -#define __pyx_n_s__36 __pyx_mstate_global->__pyx_n_s__36 -#define __pyx_kp_u__77 __pyx_mstate_global->__pyx_kp_u__77 -#define __pyx_n_s__78 __pyx_mstate_global->__pyx_n_s__78 -#define __pyx_n_s__8 __pyx_mstate_global->__pyx_n_s__8 -#define __pyx_n_s_a __pyx_mstate_global->__pyx_n_s_a -#define __pyx_n_s_action __pyx_mstate_global->__pyx_n_s_action -#define __pyx_n_s_add_argument __pyx_mstate_global->__pyx_n_s_add_argument -#define __pyx_n_s_anchor_grid __pyx_mstate_global->__pyx_n_s_anchor_grid -#define __pyx_n_s_anchors __pyx_mstate_global->__pyx_n_s_anchors -#define __pyx_n_u_anchors __pyx_mstate_global->__pyx_n_u_anchors -#define __pyx_n_s_append __pyx_mstate_global->__pyx_n_s_append -#define __pyx_n_s_apply __pyx_mstate_global->__pyx_n_s_apply -#define __pyx_n_s_arange __pyx_mstate_global->__pyx_n_s_arange -#define __pyx_n_s_argparse __pyx_mstate_global->__pyx_n_s_argparse -#define __pyx_n_s_args __pyx_mstate_global->__pyx_n_s_args -#define __pyx_n_u_arguments __pyx_mstate_global->__pyx_n_u_arguments -#define __pyx_n_u_ascii __pyx_mstate_global->__pyx_n_u_ascii -#define __pyx_n_s_asyncio_coroutines __pyx_mstate_global->__pyx_n_s_asyncio_coroutines -#define __pyx_n_s_augment __pyx_mstate_global->__pyx_n_s_augment -#define __pyx_n_s_b __pyx_mstate_global->__pyx_n_s_b -#define __pyx_n_u_backbone __pyx_mstate_global->__pyx_n_u_backbone -#define __pyx_n_s_bias __pyx_mstate_global->__pyx_n_s_bias -#define __pyx_n_s_bn __pyx_mstate_global->__pyx_n_s_bn -#define __pyx_n_u_bn __pyx_mstate_global->__pyx_n_u_bn -#define __pyx_n_s_bs __pyx_mstate_global->__pyx_n_s_bs -#define __pyx_n_s_c __pyx_mstate_global->__pyx_n_s_c -#define __pyx_n_s_c1 __pyx_mstate_global->__pyx_n_s_c1 -#define __pyx_n_s_c2 __pyx_mstate_global->__pyx_n_s_c2 -#define __pyx_n_s_cat __pyx_mstate_global->__pyx_n_s_cat -#define __pyx_n_s_cf __pyx_mstate_global->__pyx_n_s_cf -#define __pyx_n_s_cfg __pyx_mstate_global->__pyx_n_s_cfg -#define __pyx_kp_u_cfg_2 __pyx_mstate_global->__pyx_kp_u_cfg_2 -#define __pyx_n_s_ch __pyx_mstate_global->__pyx_n_s_ch -#define __pyx_n_u_ch __pyx_mstate_global->__pyx_n_u_ch -#define __pyx_n_s_check_anchor_order __pyx_mstate_global->__pyx_n_s_check_anchor_order -#define __pyx_n_s_class_getitem __pyx_mstate_global->__pyx_n_s_class_getitem -#define __pyx_n_s_cline_in_traceback __pyx_mstate_global->__pyx_n_s_cline_in_traceback -#define __pyx_n_s_clip_augmented __pyx_mstate_global->__pyx_n_s_clip_augmented -#define __pyx_n_s_clone __pyx_mstate_global->__pyx_n_s_clone -#define __pyx_n_s_close __pyx_mstate_global->__pyx_n_s_close -#define __pyx_n_s_contiguous __pyx_mstate_global->__pyx_n_s_contiguous -#define __pyx_n_s_conv __pyx_mstate_global->__pyx_n_s_conv -#define __pyx_n_s_copy __pyx_mstate_global->__pyx_n_s_copy -#define __pyx_n_s_cuda __pyx_mstate_global->__pyx_n_s_cuda -#define __pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or __pyx_mstate_global->__pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or -#define __pyx_n_s_d __pyx_mstate_global->__pyx_n_s_d -#define __pyx_n_s_data __pyx_mstate_global->__pyx_n_s_data -#define __pyx_n_s_deepcopy __pyx_mstate_global->__pyx_n_s_deepcopy -#define __pyx_n_s_default __pyx_mstate_global->__pyx_n_s_default -#define __pyx_n_u_depth_multiple __pyx_mstate_global->__pyx_n_u_depth_multiple -#define __pyx_n_s_descale_pred __pyx_mstate_global->__pyx_n_s_descale_pred -#define __pyx_n_s_detach __pyx_mstate_global->__pyx_n_s_detach -#define __pyx_n_s_device __pyx_mstate_global->__pyx_n_s_device -#define __pyx_kp_u_device_2 __pyx_mstate_global->__pyx_kp_u_device_2 -#define __pyx_n_s_dict __pyx_mstate_global->__pyx_n_s_dict -#define __pyx_kp_u_disable __pyx_mstate_global->__pyx_kp_u_disable -#define __pyx_n_s_doc __pyx_mstate_global->__pyx_n_s_doc -#define __pyx_n_s_dt __pyx_mstate_global->__pyx_n_s_dt -#define __pyx_n_s_e __pyx_mstate_global->__pyx_n_s_e -#define __pyx_kp_u_enable __pyx_mstate_global->__pyx_kp_u_enable -#define __pyx_n_s_encoding __pyx_mstate_global->__pyx_n_s_encoding -#define __pyx_n_s_enter __pyx_mstate_global->__pyx_n_s_enter -#define __pyx_n_s_enumerate __pyx_mstate_global->__pyx_n_s_enumerate -#define __pyx_n_s_errors __pyx_mstate_global->__pyx_n_s_errors -#define __pyx_n_s_eval __pyx_mstate_global->__pyx_n_s_eval -#define __pyx_n_s_exit __pyx_mstate_global->__pyx_n_s_exit -#define __pyx_n_s_expand __pyx_mstate_global->__pyx_n_s_expand -#define __pyx_n_s_f __pyx_mstate_global->__pyx_n_s_f -#define __pyx_n_s_fi __pyx_mstate_global->__pyx_n_s_fi -#define __pyx_n_s_file __pyx_mstate_global->__pyx_n_s_file -#define __pyx_n_s_flip __pyx_mstate_global->__pyx_n_s_flip -#define __pyx_n_s_flips __pyx_mstate_global->__pyx_n_s_flips -#define __pyx_n_s_float __pyx_mstate_global->__pyx_n_s_float -#define __pyx_n_s_fn __pyx_mstate_global->__pyx_n_s_fn -#define __pyx_n_s_forward __pyx_mstate_global->__pyx_n_s_forward -#define __pyx_n_s_forward_augment __pyx_mstate_global->__pyx_n_s_forward_augment -#define __pyx_n_s_forward_fuse __pyx_mstate_global->__pyx_n_s_forward_fuse -#define __pyx_n_s_forward_once __pyx_mstate_global->__pyx_n_s_forward_once -#define __pyx_n_u_from __pyx_mstate_global->__pyx_n_u_from -#define __pyx_n_s_fuse __pyx_mstate_global->__pyx_n_s_fuse -#define __pyx_n_s_fuse_conv_and_bn __pyx_mstate_global->__pyx_n_s_fuse_conv_and_bn -#define __pyx_n_s_g __pyx_mstate_global->__pyx_n_s_g -#define __pyx_kp_u_gc __pyx_mstate_global->__pyx_kp_u_gc -#define __pyx_n_s_gd __pyx_mstate_global->__pyx_n_s_gd -#define __pyx_n_s_genexpr __pyx_mstate_global->__pyx_n_s_genexpr -#define __pyx_n_s_get __pyx_mstate_global->__pyx_n_s_get -#define __pyx_n_s_grid __pyx_mstate_global->__pyx_n_s_grid -#define __pyx_n_s_gs __pyx_mstate_global->__pyx_n_s_gs -#define __pyx_n_s_gw __pyx_mstate_global->__pyx_n_s_gw -#define __pyx_n_u_head __pyx_mstate_global->__pyx_n_u_head -#define __pyx_n_s_help __pyx_mstate_global->__pyx_n_s_help -#define __pyx_n_s_i __pyx_mstate_global->__pyx_n_s_i -#define __pyx_n_u_ignore __pyx_mstate_global->__pyx_n_u_ignore -#define __pyx_n_u_ij __pyx_mstate_global->__pyx_n_u_ij -#define __pyx_n_s_img __pyx_mstate_global->__pyx_n_s_img -#define __pyx_n_s_img_size __pyx_mstate_global->__pyx_n_s_img_size -#define __pyx_n_s_import __pyx_mstate_global->__pyx_n_s_import -#define __pyx_n_s_indexing __pyx_mstate_global->__pyx_n_s_indexing -#define __pyx_n_s_info __pyx_mstate_global->__pyx_n_s_info -#define __pyx_n_s_init __pyx_mstate_global->__pyx_n_s_init -#define __pyx_n_s_init_subclass __pyx_mstate_global->__pyx_n_s_init_subclass -#define __pyx_n_s_initialize_biases __pyx_mstate_global->__pyx_n_s_initialize_biases -#define __pyx_n_s_initialize_weights __pyx_mstate_global->__pyx_n_s_initialize_weights -#define __pyx_n_s_initializing __pyx_mstate_global->__pyx_n_s_initializing -#define __pyx_n_s_inplace __pyx_mstate_global->__pyx_n_s_inplace -#define __pyx_n_u_inplace __pyx_mstate_global->__pyx_n_u_inplace -#define __pyx_n_s_inputs __pyx_mstate_global->__pyx_n_s_inputs -#define __pyx_n_s_insert __pyx_mstate_global->__pyx_n_s_insert -#define __pyx_n_s_is_available __pyx_mstate_global->__pyx_n_s_is_available -#define __pyx_n_s_is_coroutine __pyx_mstate_global->__pyx_n_s_is_coroutine -#define __pyx_kp_u_isenabled __pyx_mstate_global->__pyx_kp_u_isenabled -#define __pyx_n_s_j __pyx_mstate_global->__pyx_n_s_j -#define __pyx_n_s_layers __pyx_mstate_global->__pyx_n_s_layers -#define __pyx_n_s_log __pyx_mstate_global->__pyx_n_s_log -#define __pyx_n_s_m __pyx_mstate_global->__pyx_n_s_m -#define __pyx_n_s_m_2 __pyx_mstate_global->__pyx_n_s_m_2 -#define __pyx_kp_u_main __pyx_mstate_global->__pyx_kp_u_main -#define __pyx_n_s_main_2 __pyx_mstate_global->__pyx_n_s_main_2 -#define __pyx_n_u_main_2 __pyx_mstate_global->__pyx_n_u_main_2 -#define __pyx_n_s_make_divisible __pyx_mstate_global->__pyx_n_s_make_divisible -#define __pyx_n_s_make_grid __pyx_mstate_global->__pyx_n_s_make_grid -#define __pyx_n_s_map __pyx_mstate_global->__pyx_n_s_map -#define __pyx_n_s_math __pyx_mstate_global->__pyx_n_s_math -#define __pyx_n_s_max __pyx_mstate_global->__pyx_n_s_max -#define __pyx_n_s_mean __pyx_mstate_global->__pyx_n_s_mean -#define __pyx_n_s_meshgrid __pyx_mstate_global->__pyx_n_s_meshgrid -#define __pyx_n_s_metaclass __pyx_mstate_global->__pyx_n_s_metaclass -#define __pyx_n_s_mi __pyx_mstate_global->__pyx_n_s_mi -#define __pyx_n_s_model __pyx_mstate_global->__pyx_n_s_model -#define __pyx_n_s_model_info __pyx_mstate_global->__pyx_n_s_model_info -#define __pyx_kp_u_model_yaml __pyx_mstate_global->__pyx_kp_u_model_yaml -#define __pyx_n_u_models __pyx_mstate_global->__pyx_n_u_models -#define __pyx_kp_u_module __pyx_mstate_global->__pyx_kp_u_module -#define __pyx_n_u_module_2 __pyx_mstate_global->__pyx_n_u_module_2 -#define __pyx_n_s_module_3 __pyx_mstate_global->__pyx_n_s_module_3 -#define __pyx_n_s_modules __pyx_mstate_global->__pyx_n_s_modules -#define __pyx_n_s_mro_entries __pyx_mstate_global->__pyx_n_s_mro_entries -#define __pyx_n_s_n __pyx_mstate_global->__pyx_n_s_n -#define __pyx_n_u_n __pyx_mstate_global->__pyx_n_u_n -#define __pyx_n_s_n_2 __pyx_mstate_global->__pyx_n_s_n_2 -#define __pyx_n_s_na __pyx_mstate_global->__pyx_n_s_na -#define __pyx_n_s_name __pyx_mstate_global->__pyx_n_s_name -#define __pyx_n_s_name_2 __pyx_mstate_global->__pyx_n_s_name_2 -#define __pyx_n_s_names __pyx_mstate_global->__pyx_n_s_names -#define __pyx_n_s_nc __pyx_mstate_global->__pyx_n_s_nc -#define __pyx_n_u_nc __pyx_mstate_global->__pyx_n_u_nc -#define __pyx_n_s_nl __pyx_mstate_global->__pyx_n_s_nl -#define __pyx_n_s_nn __pyx_mstate_global->__pyx_n_s_nn -#define __pyx_n_s_no __pyx_mstate_global->__pyx_n_s_no -#define __pyx_n_s_np __pyx_mstate_global->__pyx_n_s_np -#define __pyx_n_s_numel __pyx_mstate_global->__pyx_n_s_numel -#define __pyx_n_s_nx __pyx_mstate_global->__pyx_n_s_nx -#define __pyx_n_s_ny __pyx_mstate_global->__pyx_n_s_ny -#define __pyx_n_s_o __pyx_mstate_global->__pyx_n_s_o -#define __pyx_n_s_onnx_dynamic __pyx_mstate_global->__pyx_n_s_onnx_dynamic -#define __pyx_n_s_open __pyx_mstate_global->__pyx_n_s_open -#define __pyx_n_s_opt __pyx_mstate_global->__pyx_n_s_opt -#define __pyx_n_s_p __pyx_mstate_global->__pyx_n_s_p -#define __pyx_n_s_parameters __pyx_mstate_global->__pyx_n_s_parameters -#define __pyx_n_u_params __pyx_mstate_global->__pyx_n_u_params -#define __pyx_n_s_parents __pyx_mstate_global->__pyx_n_s_parents -#define __pyx_n_s_parse_args __pyx_mstate_global->__pyx_n_s_parse_args -#define __pyx_n_s_parse_model __pyx_mstate_global->__pyx_n_s_parse_model -#define __pyx_n_s_parse_model_locals_genexpr __pyx_mstate_global->__pyx_n_s_parse_model_locals_genexpr -#define __pyx_n_s_parser __pyx_mstate_global->__pyx_n_s_parser -#define __pyx_n_s_path __pyx_mstate_global->__pyx_n_s_path -#define __pyx_n_s_pathlib __pyx_mstate_global->__pyx_n_s_pathlib -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2 __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2 -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3 __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3 -#define __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4 __pyx_mstate_global->__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4 -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2 __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2 -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3 __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3 -#define __pyx_n_s_permute __pyx_mstate_global->__pyx_n_s_permute -#define __pyx_n_s_prepare __pyx_mstate_global->__pyx_n_s_prepare -#define __pyx_n_s_print __pyx_mstate_global->__pyx_n_s_print -#define __pyx_n_s_print_args __pyx_mstate_global->__pyx_n_s_print_args -#define __pyx_n_s_print_biases __pyx_mstate_global->__pyx_n_s_print_biases -#define __pyx_n_s_profile __pyx_mstate_global->__pyx_n_s_profile -#define __pyx_kp_u_profile_2 __pyx_mstate_global->__pyx_kp_u_profile_2 -#define __pyx_kp_u_profile_model_speed __pyx_mstate_global->__pyx_kp_u_profile_model_speed -#define __pyx_n_s_profile_one_layer __pyx_mstate_global->__pyx_n_s_profile_one_layer -#define __pyx_n_s_qualname __pyx_mstate_global->__pyx_n_s_qualname -#define __pyx_n_s_rand __pyx_mstate_global->__pyx_n_s_rand -#define __pyx_n_s_range __pyx_mstate_global->__pyx_n_s_range -#define __pyx_n_s_register_buffer __pyx_mstate_global->__pyx_n_s_register_buffer -#define __pyx_n_s_requires_grad __pyx_mstate_global->__pyx_n_s_requires_grad -#define __pyx_n_s_resolve __pyx_mstate_global->__pyx_n_s_resolve -#define __pyx_n_s_rglob __pyx_mstate_global->__pyx_n_s_rglob -#define __pyx_n_s_round __pyx_mstate_global->__pyx_n_s_round -#define __pyx_n_s_s __pyx_mstate_global->__pyx_n_s_s -#define __pyx_n_s_safe_load __pyx_mstate_global->__pyx_n_s_safe_load -#define __pyx_n_s_save __pyx_mstate_global->__pyx_n_s_save -#define __pyx_n_s_scale __pyx_mstate_global->__pyx_n_s_scale -#define __pyx_n_s_scale_img __pyx_mstate_global->__pyx_n_s_scale_img -#define __pyx_n_s_select_device __pyx_mstate_global->__pyx_n_s_select_device -#define __pyx_n_s_self __pyx_mstate_global->__pyx_n_s_self -#define __pyx_n_s_send __pyx_mstate_global->__pyx_n_s_send -#define __pyx_n_s_set_name __pyx_mstate_global->__pyx_n_s_set_name -#define __pyx_n_s_shape __pyx_mstate_global->__pyx_n_s_shape -#define __pyx_n_s_si __pyx_mstate_global->__pyx_n_s_si -#define __pyx_n_s_sigmoid __pyx_mstate_global->__pyx_n_s_sigmoid -#define __pyx_n_s_spec __pyx_mstate_global->__pyx_n_s_spec -#define __pyx_n_s_stack __pyx_mstate_global->__pyx_n_s_stack -#define __pyx_n_s_stem __pyx_mstate_global->__pyx_n_s_stem -#define __pyx_n_u_store_true __pyx_mstate_global->__pyx_n_u_store_true -#define __pyx_n_s_stride __pyx_mstate_global->__pyx_n_s_stride -#define __pyx_n_s_sum __pyx_mstate_global->__pyx_n_s_sum -#define __pyx_n_s_super __pyx_mstate_global->__pyx_n_s_super -#define __pyx_n_s_sys __pyx_mstate_global->__pyx_n_s_sys -#define __pyx_n_s_t __pyx_mstate_global->__pyx_n_s_t -#define __pyx_n_s_tensor __pyx_mstate_global->__pyx_n_s_tensor -#define __pyx_kp_u_test __pyx_mstate_global->__pyx_kp_u_test -#define __pyx_n_s_test_2 __pyx_mstate_global->__pyx_n_s_test_2 -#define __pyx_n_s_test_3 __pyx_mstate_global->__pyx_n_s_test_3 -#define __pyx_kp_u_test_all_yolo_yaml __pyx_mstate_global->__pyx_kp_u_test_all_yolo_yaml -#define __pyx_n_s_thop __pyx_mstate_global->__pyx_n_s_thop -#define __pyx_n_s_throw __pyx_mstate_global->__pyx_n_s_throw -#define __pyx_kp_u_time_ms __pyx_mstate_global->__pyx_kp_u_time_ms -#define __pyx_n_s_time_sync __pyx_mstate_global->__pyx_n_s_time_sync -#define __pyx_n_s_to __pyx_mstate_global->__pyx_n_s_to -#define __pyx_n_s_tolist __pyx_mstate_global->__pyx_n_s_tolist -#define __pyx_n_s_torch __pyx_mstate_global->__pyx_n_s_torch -#define __pyx_n_s_train __pyx_mstate_global->__pyx_n_s_train -#define __pyx_n_s_training __pyx_mstate_global->__pyx_n_s_training -#define __pyx_n_s_type __pyx_mstate_global->__pyx_n_s_type -#define __pyx_n_s_verbose __pyx_mstate_global->__pyx_n_s_verbose -#define __pyx_n_s_view __pyx_mstate_global->__pyx_n_s_view -#define __pyx_n_s_visualize __pyx_mstate_global->__pyx_n_s_visualize -#define __pyx_n_s_weight __pyx_mstate_global->__pyx_n_s_weight -#define __pyx_n_s_wh __pyx_mstate_global->__pyx_n_s_wh -#define __pyx_n_u_width_multiple __pyx_mstate_global->__pyx_n_u_width_multiple -#define __pyx_kp_u_with_nc __pyx_mstate_global->__pyx_kp_u_with_nc -#define __pyx_n_s_x __pyx_mstate_global->__pyx_n_s_x -#define __pyx_n_s_xi __pyx_mstate_global->__pyx_n_s_xi -#define __pyx_n_s_xv __pyx_mstate_global->__pyx_n_s_xv -#define __pyx_n_s_xy __pyx_mstate_global->__pyx_n_s_xy -#define __pyx_n_s_y __pyx_mstate_global->__pyx_n_s_y -#define __pyx_n_s_yaml __pyx_mstate_global->__pyx_n_s_yaml -#define __pyx_n_s_yaml_file __pyx_mstate_global->__pyx_n_s_yaml_file -#define __pyx_n_s_yi __pyx_mstate_global->__pyx_n_s_yi -#define __pyx_kp_u_yolo_yaml __pyx_mstate_global->__pyx_kp_u_yolo_yaml -#define __pyx_kp_u_yolov5s_yaml __pyx_mstate_global->__pyx_kp_u_yolov5s_yaml -#define __pyx_n_s_yv __pyx_mstate_global->__pyx_n_s_yv -#define __pyx_n_s_z __pyx_mstate_global->__pyx_n_s_z -#define __pyx_n_s_zeros __pyx_mstate_global->__pyx_n_s_zeros -#define __pyx_n_s_zip __pyx_mstate_global->__pyx_n_s_zip -#define __pyx_float_0_5 __pyx_mstate_global->__pyx_float_0_5 -#define __pyx_float_0_6 __pyx_mstate_global->__pyx_float_0_6 -#define __pyx_float_1E9 __pyx_mstate_global->__pyx_float_1E9 -#define __pyx_float_0_67 __pyx_mstate_global->__pyx_float_0_67 -#define __pyx_float_0_83 __pyx_mstate_global->__pyx_float_0_83 -#define __pyx_float_0_999999 __pyx_mstate_global->__pyx_float_0_999999 -#define __pyx_int_0 __pyx_mstate_global->__pyx_int_0 -#define __pyx_int_1 __pyx_mstate_global->__pyx_int_1 -#define __pyx_int_2 __pyx_mstate_global->__pyx_int_2 -#define __pyx_int_3 __pyx_mstate_global->__pyx_int_3 -#define __pyx_int_4 __pyx_mstate_global->__pyx_int_4 -#define __pyx_int_5 __pyx_mstate_global->__pyx_int_5 -#define __pyx_int_8 __pyx_mstate_global->__pyx_int_8 -#define __pyx_int_20 __pyx_mstate_global->__pyx_int_20 -#define __pyx_int_80 __pyx_mstate_global->__pyx_int_80 -#define __pyx_int_100 __pyx_mstate_global->__pyx_int_100 -#define __pyx_int_256 __pyx_mstate_global->__pyx_int_256 -#define __pyx_int_640 __pyx_mstate_global->__pyx_int_640 -#define __pyx_int_neg_1 __pyx_mstate_global->__pyx_int_neg_1 -#define __pyx_int_neg_2 __pyx_mstate_global->__pyx_int_neg_2 -#define __pyx_tuple_ __pyx_mstate_global->__pyx_tuple_ -#define __pyx_slice__2 __pyx_mstate_global->__pyx_slice__2 -#define __pyx_slice__3 __pyx_mstate_global->__pyx_slice__3 -#define __pyx_slice__6 __pyx_mstate_global->__pyx_slice__6 -#define __pyx_tuple__4 __pyx_mstate_global->__pyx_tuple__4 -#define __pyx_tuple__5 __pyx_mstate_global->__pyx_tuple__5 -#define __pyx_tuple__7 __pyx_mstate_global->__pyx_tuple__7 -#define __pyx_tuple__9 __pyx_mstate_global->__pyx_tuple__9 -#define __pyx_slice__13 __pyx_mstate_global->__pyx_slice__13 -#define __pyx_slice__14 __pyx_mstate_global->__pyx_slice__14 -#define __pyx_slice__18 __pyx_mstate_global->__pyx_slice__18 -#define __pyx_slice__20 __pyx_mstate_global->__pyx_slice__20 -#define __pyx_slice__22 __pyx_mstate_global->__pyx_slice__22 -#define __pyx_slice__27 __pyx_mstate_global->__pyx_slice__27 -#define __pyx_slice__29 __pyx_mstate_global->__pyx_slice__29 -#define __pyx_slice__31 __pyx_mstate_global->__pyx_slice__31 -#define __pyx_tuple__10 __pyx_mstate_global->__pyx_tuple__10 -#define __pyx_tuple__11 __pyx_mstate_global->__pyx_tuple__11 -#define __pyx_tuple__15 __pyx_mstate_global->__pyx_tuple__15 -#define __pyx_tuple__16 __pyx_mstate_global->__pyx_tuple__16 -#define __pyx_tuple__17 __pyx_mstate_global->__pyx_tuple__17 -#define __pyx_tuple__19 __pyx_mstate_global->__pyx_tuple__19 -#define __pyx_tuple__21 __pyx_mstate_global->__pyx_tuple__21 -#define __pyx_tuple__26 __pyx_mstate_global->__pyx_tuple__26 -#define __pyx_tuple__28 __pyx_mstate_global->__pyx_tuple__28 -#define __pyx_tuple__33 __pyx_mstate_global->__pyx_tuple__33 -#define __pyx_tuple__35 __pyx_mstate_global->__pyx_tuple__35 -#define __pyx_tuple__37 __pyx_mstate_global->__pyx_tuple__37 -#define __pyx_tuple__39 __pyx_mstate_global->__pyx_tuple__39 -#define __pyx_tuple__41 __pyx_mstate_global->__pyx_tuple__41 -#define __pyx_tuple__42 __pyx_mstate_global->__pyx_tuple__42 -#define __pyx_tuple__44 __pyx_mstate_global->__pyx_tuple__44 -#define __pyx_tuple__45 __pyx_mstate_global->__pyx_tuple__45 -#define __pyx_tuple__47 __pyx_mstate_global->__pyx_tuple__47 -#define __pyx_tuple__48 __pyx_mstate_global->__pyx_tuple__48 -#define __pyx_tuple__50 __pyx_mstate_global->__pyx_tuple__50 -#define __pyx_tuple__52 __pyx_mstate_global->__pyx_tuple__52 -#define __pyx_tuple__53 __pyx_mstate_global->__pyx_tuple__53 -#define __pyx_tuple__55 __pyx_mstate_global->__pyx_tuple__55 -#define __pyx_tuple__57 __pyx_mstate_global->__pyx_tuple__57 -#define __pyx_tuple__59 __pyx_mstate_global->__pyx_tuple__59 -#define __pyx_tuple__61 __pyx_mstate_global->__pyx_tuple__61 -#define __pyx_tuple__62 __pyx_mstate_global->__pyx_tuple__62 -#define __pyx_tuple__64 __pyx_mstate_global->__pyx_tuple__64 -#define __pyx_tuple__66 __pyx_mstate_global->__pyx_tuple__66 -#define __pyx_tuple__68 __pyx_mstate_global->__pyx_tuple__68 -#define __pyx_tuple__69 __pyx_mstate_global->__pyx_tuple__69 -#define __pyx_tuple__71 __pyx_mstate_global->__pyx_tuple__71 -#define __pyx_tuple__73 __pyx_mstate_global->__pyx_tuple__73 -#define __pyx_tuple__74 __pyx_mstate_global->__pyx_tuple__74 -#define __pyx_tuple__75 __pyx_mstate_global->__pyx_tuple__75 -#define __pyx_tuple__76 __pyx_mstate_global->__pyx_tuple__76 -#define __pyx_codeobj__34 __pyx_mstate_global->__pyx_codeobj__34 -#define __pyx_codeobj__38 __pyx_mstate_global->__pyx_codeobj__38 -#define __pyx_codeobj__40 __pyx_mstate_global->__pyx_codeobj__40 -#define __pyx_codeobj__43 __pyx_mstate_global->__pyx_codeobj__43 -#define __pyx_codeobj__46 __pyx_mstate_global->__pyx_codeobj__46 -#define __pyx_codeobj__49 __pyx_mstate_global->__pyx_codeobj__49 -#define __pyx_codeobj__51 __pyx_mstate_global->__pyx_codeobj__51 -#define __pyx_codeobj__54 __pyx_mstate_global->__pyx_codeobj__54 -#define __pyx_codeobj__56 __pyx_mstate_global->__pyx_codeobj__56 -#define __pyx_codeobj__58 __pyx_mstate_global->__pyx_codeobj__58 -#define __pyx_codeobj__60 __pyx_mstate_global->__pyx_codeobj__60 -#define __pyx_codeobj__63 __pyx_mstate_global->__pyx_codeobj__63 -#define __pyx_codeobj__65 __pyx_mstate_global->__pyx_codeobj__65 -#define __pyx_codeobj__67 __pyx_mstate_global->__pyx_codeobj__67 -#define __pyx_codeobj__70 __pyx_mstate_global->__pyx_codeobj__70 -#define __pyx_codeobj__72 __pyx_mstate_global->__pyx_codeobj__72 -#endif -/* #### Code section: module_code ### */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":36 - * onnx_dynamic = False # ONNX export parameter - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer # <<<<<<<<<<<<<< - * super().__init__() - * self.nc = nc # number of classes - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_1__init__ = {"__init__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_1__init__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_nc = 0; - PyObject *__pyx_v_anchors = 0; - PyObject *__pyx_v_ch = 0; - PyObject *__pyx_v_inplace = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_nc,&__pyx_n_s_anchors,&__pyx_n_s_ch,&__pyx_n_s_inplace,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_nc,&__pyx_n_s_anchors,&__pyx_n_s_ch,&__pyx_n_s_inplace,0}; - #endif - PyObject* values[5] = {0,0,0,0,0}; - values[1] = ((PyObject *)((PyObject *)__pyx_int_80)); - values[2] = ((PyObject *)((PyObject*)__pyx_empty_tuple)); - values[3] = ((PyObject *)((PyObject*)__pyx_empty_tuple)); - values[4] = ((PyObject *)((PyObject *)Py_True)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 36, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_nc); - if (value) { values[1] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 36, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_anchors); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 36, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_ch); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 36, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_inplace); - if (value) { values[4] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 36, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__init__") < 0)) __PYX_ERR(0, 36, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_nc = values[1]; - __pyx_v_anchors = values[2]; - __pyx_v_ch = values[3]; - __pyx_v_inplace = values[4]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 0, 1, 5, __pyx_nargs); __PYX_ERR(0, 36, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Detect.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect___init__(__pyx_self, __pyx_v_self, __pyx_v_nc, __pyx_v_anchors, __pyx_v_ch, __pyx_v_inplace); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":45 - * self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid - * self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - * self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv # <<<<<<<<<<<<<< - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - * - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 45, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___2generator, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_Detect___init___locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Detect.__init__.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 45, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_ch)) { __Pyx_RaiseClosureNameError("ch"); __PYX_ERR(0, 45, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_ch)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_ch)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_outer_scope->__pyx_v_ch; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_ch); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 45, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_2 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 45, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 45, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 45, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_nn); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_Conv2d); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self)) { __Pyx_RaiseClosureNameError("self"); __PYX_ERR(0, 45, __pyx_L1_error) } - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self, __pyx_n_s_no); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self)) { __Pyx_RaiseClosureNameError("self"); __PYX_ERR(0, 45, __pyx_L1_error) } - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self, __pyx_n_s_na); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PyNumber_Multiply(__pyx_t_5, __pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_7, __pyx_cur_scope->__pyx_v_x, __pyx_t_8, __pyx_int_1}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_9, 3+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_3; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_3 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 45, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":36 - * onnx_dynamic = False # ONNX export parameter - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer # <<<<<<<<<<<<<< - * super().__init__() - * self.nc = nc # number of classes - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_nc, PyObject *__pyx_v_anchors, PyObject *__pyx_v_ch, PyObject *__pyx_v_inplace) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *__pyx_cur_scope; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___2generator = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__init__", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 36, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_self = __pyx_v_self; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_self); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_self); - __pyx_cur_scope->__pyx_v_ch = __pyx_v_ch; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_ch); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_ch); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":37 - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer - * super().__init__() # <<<<<<<<<<<<<< - * self.nc = nc # number of classes - * self.no = nc + 5 # number of outputs per anchor - */ - __pyx_t_2 = __Pyx_CyFunction_GetClassObj(__pyx_self); - if (!__pyx_t_2) { PyErr_SetString(PyExc_SystemError, "super(): empty __class__ cell"); __PYX_ERR(0, 37, __pyx_L1_error) } - __Pyx_INCREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_self); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_self); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_cur_scope->__pyx_v_self); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_super, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_init); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":38 - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer - * super().__init__() - * self.nc = nc # number of classes # <<<<<<<<<<<<<< - * self.no = nc + 5 # number of outputs per anchor - * self.nl = len(anchors) # number of detection layers - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_nc, __pyx_v_nc) < 0) __PYX_ERR(0, 38, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":39 - * super().__init__() - * self.nc = nc # number of classes - * self.no = nc + 5 # number of outputs per anchor # <<<<<<<<<<<<<< - * self.nl = len(anchors) # number of detection layers - * self.na = len(anchors[0]) // 2 # number of anchors - */ - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_v_nc, __pyx_int_5, 5, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 39, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_no, __pyx_t_1) < 0) __PYX_ERR(0, 39, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":40 - * self.nc = nc # number of classes - * self.no = nc + 5 # number of outputs per anchor - * self.nl = len(anchors) # number of detection layers # <<<<<<<<<<<<<< - * self.na = len(anchors[0]) // 2 # number of anchors - * self.grid = [torch.zeros(1)] * self.nl # init grid - */ - __pyx_t_5 = PyObject_Length(__pyx_v_anchors); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(0, 40, __pyx_L1_error) - __pyx_t_1 = PyInt_FromSsize_t(__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 40, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_nl, __pyx_t_1) < 0) __PYX_ERR(0, 40, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":41 - * self.no = nc + 5 # number of outputs per anchor - * self.nl = len(anchors) # number of detection layers - * self.na = len(anchors[0]) // 2 # number of anchors # <<<<<<<<<<<<<< - * self.grid = [torch.zeros(1)] * self.nl # init grid - * self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_anchors, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 41, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(0, 41, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyInt_FromSsize_t(__Pyx_div_Py_ssize_t(__pyx_t_5, 2)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 41, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_na, __pyx_t_1) < 0) __PYX_ERR(0, 41, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":42 - * self.nl = len(anchors) # number of detection layers - * self.na = len(anchors[0]) // 2 # number of anchors - * self.grid = [torch.zeros(1)] * self.nl # init grid # <<<<<<<<<<<<<< - * self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid - * self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_torch); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_int_1}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_nl); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - { PyObject* __pyx_temp = PyNumber_InPlaceMultiply(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_temp)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_temp); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_temp; - } - __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_grid, __pyx_t_3) < 0) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":43 - * self.na = len(anchors[0]) // 2 # number of anchors - * self.grid = [torch.zeros(1)] * self.nl # init grid - * self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid # <<<<<<<<<<<<<< - * self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - * self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_torch); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_zeros); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_int_1}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_nl); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_t_3); - { PyObject* __pyx_temp = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_temp)) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_GOTREF(__pyx_temp); - __Pyx_DECREF(__pyx_t_2); - __pyx_t_2 = __pyx_temp; - } - __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_anchor_grid, __pyx_t_2) < 0) __PYX_ERR(0, 43, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":44 - * self.grid = [torch.zeros(1)] * self.nl # init grid - * self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid - * self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) # <<<<<<<<<<<<<< - * self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_register_buffer); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_torch); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_tensor); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_v_anchors}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_float); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_7, }; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_view); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_nl); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_7, __pyx_t_6, __pyx_int_neg_1, __pyx_int_2}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __pyx_t_9 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_9)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_9, __pyx_n_u_anchors, __pyx_t_3}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":45 - * self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid - * self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - * self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv # <<<<<<<<<<<<<< - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_nn); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_ModuleList); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_9)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_9, __pyx_t_1}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_m, __pyx_t_2) < 0) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":46 - * self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - * self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - * self.inplace = inplace # use in-place ops (e.g. slice assignment) # <<<<<<<<<<<<<< - * - * def forward(self, x): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_inplace, __pyx_v_inplace) < 0) __PYX_ERR(0, 46, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":36 - * onnx_dynamic = False # ONNX export parameter - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer # <<<<<<<<<<<<<< - * super().__init__() - * self.nc = nc # number of classes - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Detect.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_8__init___2generator); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":48 - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - * - * def forward(self, x): # <<<<<<<<<<<<<< - * z = [] # inference output - * for i in range(self.nl): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_3forward(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_3forward = {"forward", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_3forward, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_3forward(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_x = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("forward (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 48, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_x)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 48, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("forward", 1, 2, 2, 1); __PYX_ERR(0, 48, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "forward") < 0)) __PYX_ERR(0, 48, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_x = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("forward", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 48, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Detect.forward", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_2forward(__pyx_self, __pyx_v_self, __pyx_v_x); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_2forward(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x) { - PyObject *__pyx_v_z = NULL; - PyObject *__pyx_v_i = NULL; - PyObject *__pyx_v_bs = NULL; - CYTHON_UNUSED PyObject *__pyx_v__ = NULL; - PyObject *__pyx_v_ny = NULL; - PyObject *__pyx_v_nx = NULL; - PyObject *__pyx_v_y = NULL; - PyObject *__pyx_v_xy = NULL; - PyObject *__pyx_v_wh = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *(*__pyx_t_11)(PyObject *); - int __pyx_t_12; - int __pyx_t_13; - int __pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("forward", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":49 - * - * def forward(self, x): - * z = [] # inference output # <<<<<<<<<<<<<< - * for i in range(self.nl): - * x[i] = self.m[i](x[i]) # conv - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 49, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_z = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":50 - * def forward(self, x): - * z = [] # inference output - * for i in range(self.nl): # <<<<<<<<<<<<<< - * x[i] = self.m[i](x[i]) # conv - * bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_nl); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 50, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 50, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 50, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_4(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 50, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":51 - * z = [] # inference output - * for i in range(self.nl): - * x[i] = self.m[i](x[i]) # conv # <<<<<<<<<<<<<< - * bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - * x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_m); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 51, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_t_5, __pyx_v_i); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 51, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_v_x, __pyx_v_i); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 51, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_5}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 51, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - if (unlikely((PyObject_SetItem(__pyx_v_x, __pyx_v_i, __pyx_t_2) < 0))) __PYX_ERR(0, 51, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":52 - * for i in range(self.nl): - * x[i] = self.m[i](x[i]) # conv - * bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) # <<<<<<<<<<<<<< - * x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - * - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_x, __pyx_v_i); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_shape); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_6))) || (PyList_CheckExact(__pyx_t_6))) { - PyObject* sequence = __pyx_t_6; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 52, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_5 = PyList_GET_ITEM(sequence, 1); - __pyx_t_7 = PyList_GET_ITEM(sequence, 2); - __pyx_t_9 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(__pyx_t_9); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_5,&__pyx_t_7,&__pyx_t_9}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_5,&__pyx_t_7,&__pyx_t_9}; - __pyx_t_10 = PyObject_GetIter(__pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_11 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_11(__pyx_t_10); if (unlikely(!item)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_11(__pyx_t_10), 4) < 0) __PYX_ERR(0, 52, __pyx_L1_error) - __pyx_t_11 = NULL; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_11 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 52, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_bs, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v__, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_XDECREF_SET(__pyx_v_ny, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_XDECREF_SET(__pyx_v_nx, __pyx_t_9); - __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":53 - * x[i] = self.m[i](x[i]) # conv - * bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - * x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() # <<<<<<<<<<<<<< - * - * if not self.training: # inference - */ - __pyx_t_7 = __Pyx_PyObject_GetItem(__pyx_v_x, __pyx_v_i); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_view); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_na); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_no); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_10 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[6] = {__pyx_t_10, __pyx_v_bs, __pyx_t_7, __pyx_t_2, __pyx_v_ny, __pyx_v_nx}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_8, 5+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_9, __pyx_n_s_permute); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_9, __pyx_n_s_contiguous); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_9)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_9, }; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - if (unlikely((PyObject_SetItem(__pyx_v_x, __pyx_v_i, __pyx_t_6) < 0))) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":55 - * x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - * - * if not self.training: # inference # <<<<<<<<<<<<<< - * if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: - * self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_training); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 55, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_12 < 0))) __PYX_ERR(0, 55, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_13 = ((!__pyx_t_12) != 0); - if (__pyx_t_13) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":56 - * - * if not self.training: # inference - * if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: # <<<<<<<<<<<<<< - * self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - * - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_onnx_dynamic); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_12 < 0))) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!__pyx_t_12) { - } else { - __pyx_t_13 = __pyx_t_12; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_grid); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_t_6, __pyx_v_i); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_shape); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetSlice(__pyx_t_6, 2, 4, NULL, NULL, &__pyx_slice__2, 1, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_v_x, __pyx_v_i); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_shape); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetSlice(__pyx_t_9, 2, 4, NULL, NULL, &__pyx_slice__2, 1, 1, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyObject_RichCompare(__pyx_t_5, __pyx_t_6, Py_NE); __Pyx_XGOTREF(__pyx_t_9); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_12 < 0))) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_13 = __pyx_t_12; - __pyx_L9_bool_binop_done:; - if (__pyx_t_13) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":57 - * if not self.training: # inference - * if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: - * self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) # <<<<<<<<<<<<<< - * - * y = x[i].sigmoid() - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_make_grid); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_5, __pyx_v_nx, __pyx_v_ny, __pyx_v_i}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 3+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_9))) || (PyList_CheckExact(__pyx_t_9))) { - PyObject* sequence = __pyx_t_9; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 57, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_6 = PyList_GET_ITEM(sequence, 0); - __pyx_t_5 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_6 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_t_9); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_11 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); - index = 0; __pyx_t_6 = __pyx_t_11(__pyx_t_2); if (unlikely(!__pyx_t_6)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - index = 1; __pyx_t_5 = __pyx_t_11(__pyx_t_2); if (unlikely(!__pyx_t_5)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_5); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_11(__pyx_t_2), 2) < 0) __PYX_ERR(0, 57, __pyx_L1_error) - __pyx_t_11 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L12_unpacking_done; - __pyx_L11_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_11 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 57, __pyx_L1_error) - __pyx_L12_unpacking_done:; - } - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_grid); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely((PyObject_SetItem(__pyx_t_2, __pyx_v_i, __pyx_t_6) < 0))) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_anchor_grid); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely((PyObject_SetItem(__pyx_t_2, __pyx_v_i, __pyx_t_5) < 0))) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":56 - * - * if not self.training: # inference - * if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: # <<<<<<<<<<<<<< - * self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - * - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":59 - * self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - * - * y = x[i].sigmoid() # <<<<<<<<<<<<<< - * if self.inplace: - * y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - */ - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_v_x, __pyx_v_i); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_sigmoid); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_5, }; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_y, __pyx_t_9); - __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":60 - * - * y = x[i].sigmoid() - * if self.inplace: # <<<<<<<<<<<<<< - * y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - * y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_inplace); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_13 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_13 < 0))) __PYX_ERR(0, 60, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_13) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":61 - * y = x[i].sigmoid() - * if self.inplace: - * y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy # <<<<<<<<<<<<<< - * y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - * else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - */ - __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_v_y, __pyx_tuple__4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_6 = __Pyx_PyInt_MultiplyObjC(__pyx_t_9, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyFloat_SubtractObjC(__pyx_t_6, __pyx_float_0_5, 0.5, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_grid); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_t_6, __pyx_v_i); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyNumber_Add(__pyx_t_9, __pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_stride); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_t_5, __pyx_v_i); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Multiply(__pyx_t_6, __pyx_t_9); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_y, __pyx_tuple__4, __pyx_t_5) < 0))) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":62 - * if self.inplace: - * y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - * y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh # <<<<<<<<<<<<<< - * else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - * xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - */ - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_v_y, __pyx_tuple__5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = __Pyx_PyInt_MultiplyObjC(__pyx_t_5, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Power(__pyx_t_9, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_anchor_grid); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_t_9, __pyx_v_i); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyNumber_Multiply(__pyx_t_5, __pyx_t_6); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_y, __pyx_tuple__5, __pyx_t_9) < 0))) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":60 - * - * y = x[i].sigmoid() - * if self.inplace: # <<<<<<<<<<<<<< - * y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - * y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - */ - goto __pyx_L13; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":64 - * y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - * else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - * xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy # <<<<<<<<<<<<<< - * wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - * y = torch.cat((xy, wh, y[..., 4:]), -1) - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_v_y, __pyx_tuple__4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_6 = __Pyx_PyInt_MultiplyObjC(__pyx_t_9, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyFloat_SubtractObjC(__pyx_t_6, __pyx_float_0_5, 0.5, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_grid); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_t_6, __pyx_v_i); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyNumber_Add(__pyx_t_9, __pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_stride); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_t_5, __pyx_v_i); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Multiply(__pyx_t_6, __pyx_t_9); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 64, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_XDECREF_SET(__pyx_v_xy, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":65 - * else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - * xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - * wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh # <<<<<<<<<<<<<< - * y = torch.cat((xy, wh, y[..., 4:]), -1) - * z.append(y.view(bs, -1, self.no)) - */ - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_v_y, __pyx_tuple__5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = __Pyx_PyInt_MultiplyObjC(__pyx_t_5, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Power(__pyx_t_9, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_anchor_grid); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_t_9, __pyx_v_i); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyNumber_Multiply(__pyx_t_5, __pyx_t_6); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF_SET(__pyx_v_wh, __pyx_t_9); - __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":66 - * xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - * wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - * y = torch.cat((xy, wh, y[..., 4:]), -1) # <<<<<<<<<<<<<< - * z.append(y.view(bs, -1, self.no)) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_torch); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_cat); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_v_y, __pyx_tuple__7); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_xy); - __Pyx_GIVEREF(__pyx_v_xy); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_xy); - __Pyx_INCREF(__pyx_v_wh); - __Pyx_GIVEREF(__pyx_v_wh); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_wh); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_6, __pyx_t_2, __pyx_int_neg_1}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_8, 2+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF_SET(__pyx_v_y, __pyx_t_9); - __pyx_t_9 = 0; - } - __pyx_L13:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":67 - * wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - * y = torch.cat((xy, wh, y[..., 4:]), -1) - * z.append(y.view(bs, -1, self.no)) # <<<<<<<<<<<<<< - * - * return x if self.training else (torch.cat(z, 1), x) - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_y, __pyx_n_s_view); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 67, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_no); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 67, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_6, __pyx_v_bs, __pyx_int_neg_1, __pyx_t_2}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_8, 3+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 67, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_14 = __Pyx_PyList_Append(__pyx_v_z, __pyx_t_9); if (unlikely(__pyx_t_14 == ((int)-1))) __PYX_ERR(0, 67, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":55 - * x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - * - * if not self.training: # inference # <<<<<<<<<<<<<< - * if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: - * self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":50 - * def forward(self, x): - * z = [] # inference output - * for i in range(self.nl): # <<<<<<<<<<<<<< - * x[i] = self.m[i](x[i]) # conv - * bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":69 - * z.append(y.view(bs, -1, self.no)) - * - * return x if self.training else (torch.cat(z, 1), x) # <<<<<<<<<<<<<< - * - * def _make_grid(self, nx=20, ny=20, i=0): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_training); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_13 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_13 < 0))) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_13) { - __Pyx_INCREF(__pyx_v_x); - __pyx_t_1 = __pyx_v_x; - } else { - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_torch); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_cat); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_5, __pyx_v_z, __pyx_int_1}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_8, 2+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 69, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_9); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_x); - __pyx_t_9 = 0; - __pyx_t_1 = __pyx_t_2; - __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":48 - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - * - * def forward(self, x): # <<<<<<<<<<<<<< - * z = [] # inference output - * for i in range(self.nl): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Detect.forward", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_z); - __Pyx_XDECREF(__pyx_v_i); - __Pyx_XDECREF(__pyx_v_bs); - __Pyx_XDECREF(__pyx_v__); - __Pyx_XDECREF(__pyx_v_ny); - __Pyx_XDECREF(__pyx_v_nx); - __Pyx_XDECREF(__pyx_v_y); - __Pyx_XDECREF(__pyx_v_xy); - __Pyx_XDECREF(__pyx_v_wh); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":71 - * return x if self.training else (torch.cat(z, 1), x) - * - * def _make_grid(self, nx=20, ny=20, i=0): # <<<<<<<<<<<<<< - * d = self.anchors[i].device - * yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_5_make_grid(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_5_make_grid = {"_make_grid", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_5_make_grid, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_5_make_grid(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_nx = 0; - PyObject *__pyx_v_ny = 0; - PyObject *__pyx_v_i = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_make_grid (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_nx,&__pyx_n_s_ny,&__pyx_n_s_i,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_nx,&__pyx_n_s_ny,&__pyx_n_s_i,0}; - #endif - PyObject* values[4] = {0,0,0,0}; - values[1] = ((PyObject *)((PyObject *)__pyx_int_20)); - values[2] = ((PyObject *)((PyObject *)__pyx_int_20)); - values[3] = ((PyObject *)((PyObject *)__pyx_int_0)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 71, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_nx); - if (value) { values[1] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 71, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_ny); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 71, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_i); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 71, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_make_grid") < 0)) __PYX_ERR(0, 71, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_nx = values[1]; - __pyx_v_ny = values[2]; - __pyx_v_i = values[3]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_make_grid", 0, 1, 4, __pyx_nargs); __PYX_ERR(0, 71, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Detect._make_grid", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_4_make_grid(__pyx_self, __pyx_v_self, __pyx_v_nx, __pyx_v_ny, __pyx_v_i); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_4_make_grid(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_nx, PyObject *__pyx_v_ny, PyObject *__pyx_v_i) { - PyObject *__pyx_v_d = NULL; - PyObject *__pyx_v_yv = NULL; - PyObject *__pyx_v_xv = NULL; - PyObject *__pyx_v_grid = NULL; - PyObject *__pyx_v_anchor_grid = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_make_grid", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":72 - * - * def _make_grid(self, nx=20, ny=20, i=0): - * d = self.anchors[i].device # <<<<<<<<<<<<<< - * yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_anchors); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 72, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_i); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 72, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_device); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 72, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_d = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":73 - * def _make_grid(self, nx=20, ny=20, i=0): - * d = self.anchors[i].device - * yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') # <<<<<<<<<<<<<< - * - * grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float() - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_torch); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_meshgrid); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_torch); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_arange); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_ny); - __Pyx_GIVEREF(__pyx_v_ny); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_ny); - __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_device, __pyx_v_d) < 0) __PYX_ERR(0, 73, __pyx_L1_error) - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_torch); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_arange); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_nx); - __Pyx_GIVEREF(__pyx_v_nx); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_nx); - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_device, __pyx_v_d) < 0) __PYX_ERR(0, 73, __pyx_L1_error) - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyList_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_5); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyList_SET_ITEM(__pyx_t_3, 1, __pyx_t_6); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = PyTuple_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_indexing, __pyx_n_u_ij) < 0) __PYX_ERR(0, 73, __pyx_L1_error) - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_5))) || (PyList_CheckExact(__pyx_t_5))) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 73, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_6 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_6); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); - index = 0; __pyx_t_3 = __pyx_t_7(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_6 = __pyx_t_7(__pyx_t_2); if (unlikely(!__pyx_t_6)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_2), 2) < 0) __PYX_ERR(0, 73, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 73, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_yv = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_xv = __pyx_t_6; - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":75 - * yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') - * - * grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float() # <<<<<<<<<<<<<< - * anchor_grid = (self.anchors[i].clone() * self.stride[i]) \ - * .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float() - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_torch); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_stack); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_xv); - __Pyx_GIVEREF(__pyx_v_xv); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_xv); - __Pyx_INCREF(__pyx_v_yv); - __Pyx_GIVEREF(__pyx_v_yv); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_yv); - __pyx_t_1 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_t_2, __pyx_int_2}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_8, 2+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_expand); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_na); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_int_1); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); - __Pyx_INCREF(__pyx_v_ny); - __Pyx_GIVEREF(__pyx_v_ny); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_ny); - __Pyx_INCREF(__pyx_v_nx); - __Pyx_GIVEREF(__pyx_v_nx); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_v_nx); - __Pyx_INCREF(__pyx_int_2); - __Pyx_GIVEREF(__pyx_int_2); - PyTuple_SET_ITEM(__pyx_t_2, 4, __pyx_int_2); - __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_2}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_float); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_6, }; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __pyx_v_grid = __pyx_t_5; - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":76 - * - * grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float() - * anchor_grid = (self.anchors[i].clone() * self.stride[i]) \ # <<<<<<<<<<<<<< - * .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float() - * return grid, anchor_grid - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_anchors); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_3, __pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_clone); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_1, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_stride); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_3, __pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":77 - * grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float() - * anchor_grid = (self.anchors[i].clone() * self.stride[i]) \ - * .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float() # <<<<<<<<<<<<<< - * return grid, anchor_grid - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_view); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_na); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_int_1); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_int_1); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_int_1); - __Pyx_INCREF(__pyx_int_2); - __Pyx_GIVEREF(__pyx_int_2); - PyTuple_SET_ITEM(__pyx_t_2, 4, __pyx_int_2); - __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_2}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_expand); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_na); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = PyTuple_New(5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_int_1); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_6); - __Pyx_INCREF(__pyx_v_ny); - __Pyx_GIVEREF(__pyx_v_ny); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_ny); - __Pyx_INCREF(__pyx_v_nx); - __Pyx_GIVEREF(__pyx_v_nx); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_v_nx); - __Pyx_INCREF(__pyx_int_2); - __Pyx_GIVEREF(__pyx_int_2); - PyTuple_SET_ITEM(__pyx_t_2, 4, __pyx_int_2); - __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_t_2}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_float); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_4, }; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_v_anchor_grid = __pyx_t_5; - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":78 - * anchor_grid = (self.anchors[i].clone() * self.stride[i]) \ - * .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float() - * return grid, anchor_grid # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 78, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_grid); - __Pyx_GIVEREF(__pyx_v_grid); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_grid); - __Pyx_INCREF(__pyx_v_anchor_grid); - __Pyx_GIVEREF(__pyx_v_anchor_grid); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_v_anchor_grid); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":71 - * return x if self.training else (torch.cat(z, 1), x) - * - * def _make_grid(self, nx=20, ny=20, i=0): # <<<<<<<<<<<<<< - * d = self.anchors[i].device - * yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Detect._make_grid", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_d); - __Pyx_XDECREF(__pyx_v_yv); - __Pyx_XDECREF(__pyx_v_xv); - __Pyx_XDECREF(__pyx_v_grid); - __Pyx_XDECREF(__pyx_v_anchor_grid); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":82 - * - * class Model(nn.Module): - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes # <<<<<<<<<<<<<< - * super().__init__() - * if isinstance(cfg, dict): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_1__init__ = {"__init__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_1__init__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_cfg = 0; - PyObject *__pyx_v_ch = 0; - PyObject *__pyx_v_nc = 0; - PyObject *__pyx_v_anchors = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_cfg,&__pyx_n_s_ch,&__pyx_n_s_nc,&__pyx_n_s_anchors,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_cfg,&__pyx_n_s_ch,&__pyx_n_s_nc,&__pyx_n_s_anchors,0}; - #endif - PyObject* values[5] = {0,0,0,0,0}; - values[1] = ((PyObject *)((PyObject*)__pyx_kp_u_yolov5s_yaml)); - values[2] = ((PyObject *)((PyObject *)__pyx_int_3)); - values[3] = ((PyObject *)((PyObject *)Py_None)); - values[4] = ((PyObject *)((PyObject *)Py_None)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 82, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_cfg); - if (value) { values[1] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 82, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_ch); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 82, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_nc); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 82, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_anchors); - if (value) { values[4] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 82, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__init__") < 0)) __PYX_ERR(0, 82, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_cfg = values[1]; - __pyx_v_ch = values[2]; - __pyx_v_nc = values[3]; - __pyx_v_anchors = values[4]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 0, 1, 5, __pyx_nargs); __PYX_ERR(0, 82, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model___init__(__pyx_self, __pyx_v_self, __pyx_v_cfg, __pyx_v_ch, __pyx_v_nc, __pyx_v_anchors); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_cfg, PyObject *__pyx_v_ch, PyObject *__pyx_v_nc, PyObject *__pyx_v_anchors) { - PyObject *__pyx_v_yaml = NULL; - PyObject *__pyx_v_f = NULL; - PyObject *__pyx_v_m = NULL; - PyObject *__pyx_v_s = NULL; - PyObject *__pyx_8genexpr1__pyx_v_i = NULL; - PyObject *__pyx_8genexpr2__pyx_v_x = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - Py_ssize_t __pyx_t_13; - Py_UCS4 __pyx_t_14; - PyObject *__pyx_t_15 = NULL; - PyObject *(*__pyx_t_16)(PyObject *); - PyObject *(*__pyx_t_17)(PyObject *); - PyObject *__pyx_t_18 = NULL; - PyObject *__pyx_t_19 = NULL; - PyObject *__pyx_t_20 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__init__", 0); - __Pyx_INCREF(__pyx_v_ch); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":83 - * class Model(nn.Module): - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - * super().__init__() # <<<<<<<<<<<<<< - * if isinstance(cfg, dict): - * self.yaml = cfg # model dict - */ - __pyx_t_2 = __Pyx_CyFunction_GetClassObj(__pyx_self); - if (!__pyx_t_2) { PyErr_SetString(PyExc_SystemError, "super(): empty __class__ cell"); __PYX_ERR(0, 83, __pyx_L1_error) } - __Pyx_INCREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_INCREF(__pyx_v_self); - __Pyx_GIVEREF(__pyx_v_self); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_self); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_super, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_init); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":84 - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - * super().__init__() - * if isinstance(cfg, dict): # <<<<<<<<<<<<<< - * self.yaml = cfg # model dict - * else: # is *.yaml - */ - __pyx_t_5 = PyDict_Check(__pyx_v_cfg); - __pyx_t_6 = (__pyx_t_5 != 0); - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":85 - * super().__init__() - * if isinstance(cfg, dict): - * self.yaml = cfg # model dict # <<<<<<<<<<<<<< - * else: # is *.yaml - * import yaml # for torch hub - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_yaml, __pyx_v_cfg) < 0) __PYX_ERR(0, 85, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":84 - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - * super().__init__() - * if isinstance(cfg, dict): # <<<<<<<<<<<<<< - * self.yaml = cfg # model dict - * else: # is *.yaml - */ - goto __pyx_L3; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":87 - * self.yaml = cfg # model dict - * else: # is *.yaml - * import yaml # for torch hub # <<<<<<<<<<<<<< - * self.yaml_file = Path(cfg).name - * with open(cfg, encoding='ascii', errors='ignore') as f: - */ - /*else*/ { - __pyx_t_1 = __Pyx_ImportDottedModule(__pyx_n_s_yaml, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_yaml = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":88 - * else: # is *.yaml - * import yaml # for torch hub - * self.yaml_file = Path(cfg).name # <<<<<<<<<<<<<< - * with open(cfg, encoding='ascii', errors='ignore') as f: - * self.yaml = yaml.safe_load(f) # model dict - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Path); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 88, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_cfg}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 88, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_name); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 88, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_yaml_file, __pyx_t_3) < 0) __PYX_ERR(0, 88, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":89 - * import yaml # for torch hub - * self.yaml_file = Path(cfg).name - * with open(cfg, encoding='ascii', errors='ignore') as f: # <<<<<<<<<<<<<< - * self.yaml = yaml.safe_load(f) # model dict - * - */ - /*with:*/ { - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 89, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_cfg); - __Pyx_GIVEREF(__pyx_v_cfg); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_cfg); - __pyx_t_1 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 89, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_encoding, __pyx_n_u_ascii) < 0) __PYX_ERR(0, 89, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_errors, __pyx_n_u_ignore) < 0) __PYX_ERR(0, 89, __pyx_L1_error) - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_open, __pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 89, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_LookupSpecial(__pyx_t_2, __pyx_n_s_exit); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 89, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = __Pyx_PyObject_LookupSpecial(__pyx_t_2, __pyx_n_s_enter); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 89, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_8, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 89, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __pyx_t_1; - __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - /*try:*/ { - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_9, &__pyx_t_10, &__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - /*try:*/ { - __pyx_v_f = __pyx_t_3; - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":90 - * self.yaml_file = Path(cfg).name - * with open(cfg, encoding='ascii', errors='ignore') as f: - * self.yaml = yaml.safe_load(f) # model dict # <<<<<<<<<<<<<< - * - * # Define model - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_yaml, __pyx_n_s_safe_load); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 90, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_f}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 90, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_yaml, __pyx_t_3) < 0) __PYX_ERR(0, 90, __pyx_L8_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":89 - * import yaml # for torch hub - * self.yaml_file = Path(cfg).name - * with open(cfg, encoding='ascii', errors='ignore') as f: # <<<<<<<<<<<<<< - * self.yaml = yaml.safe_load(f) # model dict - * - */ - } - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - goto __pyx_L13_try_end; - __pyx_L8_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - /*except:*/ { - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_3, &__pyx_t_2, &__pyx_t_1) < 0) __PYX_ERR(0, 89, __pyx_L10_except_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = PyTuple_Pack(3, __pyx_t_3, __pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 89, __pyx_L10_except_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_8, NULL); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 89, __pyx_L10_except_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_12); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_6 < 0) __PYX_ERR(0, 89, __pyx_L10_except_error) - __pyx_t_5 = ((!(__pyx_t_6 != 0)) != 0); - if (unlikely(__pyx_t_5)) { - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_ErrRestoreWithState(__pyx_t_3, __pyx_t_2, __pyx_t_1); - __pyx_t_3 = 0; __pyx_t_2 = 0; __pyx_t_1 = 0; - __PYX_ERR(0, 89, __pyx_L10_except_error) - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L9_exception_handled; - } - __pyx_L10_except_error:; - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_ExceptionReset(__pyx_t_9, __pyx_t_10, __pyx_t_11); - goto __pyx_L1_error; - __pyx_L9_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_ExceptionReset(__pyx_t_9, __pyx_t_10, __pyx_t_11); - __pyx_L13_try_end:; - } - } - /*finally:*/ { - /*normal exit:*/{ - if (__pyx_t_7) { - __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_tuple__9, NULL); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 89, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - goto __pyx_L7; - } - __pyx_L7:; - } - goto __pyx_L17; - __pyx_L4_error:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L1_error; - __pyx_L17:; - } - } - __pyx_L3:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":93 - * - * # Define model - * ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels # <<<<<<<<<<<<<< - * if nc and nc != self.yaml['nc']: - * LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_get); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_n_u_ch, __pyx_v_ch}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_ch, __pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (unlikely((PyObject_SetItem(__pyx_t_3, __pyx_n_u_ch, __pyx_t_1) < 0))) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":94 - * # Define model - * ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - * if nc and nc != self.yaml['nc']: # <<<<<<<<<<<<<< - * LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - * self.yaml['nc'] = nc # override yaml value - */ - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_v_nc); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 94, __pyx_L1_error) - if (__pyx_t_6) { - } else { - __pyx_t_5 = __pyx_t_6; - goto __pyx_L19_bool_binop_done; - } - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_Dict_GetItem(__pyx_t_1, __pyx_n_u_nc); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_RichCompare(__pyx_v_nc, __pyx_t_3, Py_NE); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = __pyx_t_6; - __pyx_L19_bool_binop_done:; - if (__pyx_t_5) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":95 - * ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - * if nc and nc != self.yaml['nc']: - * LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") # <<<<<<<<<<<<<< - * self.yaml['nc'] = nc # override yaml value - * if anchors: - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_info); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_13 = 0; - __pyx_t_14 = 127; - __Pyx_INCREF(__pyx_kp_u_Overriding_model_yaml_nc); - __pyx_t_13 += 25; - __Pyx_GIVEREF(__pyx_kp_u_Overriding_model_yaml_nc); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_Overriding_model_yaml_nc); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_15 = __Pyx_PyObject_Dict_GetItem(__pyx_t_8, __pyx_n_u_nc); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_15); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_FormatSimple(__pyx_t_15, __pyx_empty_unicode); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_8) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_8) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_8); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_8); - __pyx_t_8 = 0; - __Pyx_INCREF(__pyx_kp_u_with_nc); - __pyx_t_13 += 9; - __Pyx_GIVEREF(__pyx_kp_u_with_nc); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_kp_u_with_nc); - __pyx_t_8 = __Pyx_PyObject_FormatSimple(__pyx_v_nc, __pyx_empty_unicode); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_8) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_8) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_8); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_t_8); - __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyUnicode_Join(__pyx_t_3, 4, __pyx_t_13, __pyx_t_14); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_8}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":96 - * if nc and nc != self.yaml['nc']: - * LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - * self.yaml['nc'] = nc # override yaml value # <<<<<<<<<<<<<< - * if anchors: - * LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely((PyObject_SetItem(__pyx_t_1, __pyx_n_u_nc, __pyx_v_nc) < 0))) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":94 - * # Define model - * ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - * if nc and nc != self.yaml['nc']: # <<<<<<<<<<<<<< - * LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - * self.yaml['nc'] = nc # override yaml value - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":97 - * LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - * self.yaml['nc'] = nc # override yaml value - * if anchors: # <<<<<<<<<<<<<< - * LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') - * self.yaml['anchors'] = round(anchors) # override yaml value - */ - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_v_anchors); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 97, __pyx_L1_error) - if (__pyx_t_5) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":98 - * self.yaml['nc'] = nc # override yaml value - * if anchors: - * LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') # <<<<<<<<<<<<<< - * self.yaml['anchors'] = round(anchors) # override yaml value - * self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_info); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_FormatSimple(__pyx_v_anchors, __pyx_empty_unicode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Overriding_model_yaml_anchors_wi, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":99 - * if anchors: - * LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') - * self.yaml['anchors'] = round(anchors) # override yaml value # <<<<<<<<<<<<<< - * self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - * self.names = [str(i) for i in range(self.yaml['nc'])] # default names - */ - __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_builtin_round, __pyx_v_anchors); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (unlikely((PyObject_SetItem(__pyx_t_8, __pyx_n_u_anchors, __pyx_t_1) < 0))) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":97 - * LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - * self.yaml['nc'] = nc # override yaml value - * if anchors: # <<<<<<<<<<<<<< - * LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') - * self.yaml['anchors'] = round(anchors) # override yaml value - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":100 - * LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') - * self.yaml['anchors'] = round(anchors) # override yaml value - * self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist # <<<<<<<<<<<<<< - * self.names = [str(i) for i in range(self.yaml['nc'])] # default names - * self.inplace = self.yaml.get('inplace', True) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_parse_model); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_deepcopy); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_15 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_15 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_15)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_15); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_15, __pyx_t_2}; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_8); - __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_ch); - __Pyx_GIVEREF(__pyx_v_ch); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_v_ch); - if (PyDict_SetItem(__pyx_t_8, __pyx_n_s_ch, __pyx_t_2) < 0) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, __pyx_t_8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 100, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_8 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_8 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_16 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); - index = 0; __pyx_t_8 = __pyx_t_16(__pyx_t_1); if (unlikely(!__pyx_t_8)) goto __pyx_L22_unpacking_failed; - __Pyx_GOTREF(__pyx_t_8); - index = 1; __pyx_t_3 = __pyx_t_16(__pyx_t_1); if (unlikely(!__pyx_t_3)) goto __pyx_L22_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_16(__pyx_t_1), 2) < 0) __PYX_ERR(0, 100, __pyx_L1_error) - __pyx_t_16 = NULL; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L23_unpacking_done; - __pyx_L22_unpacking_failed:; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_16 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 100, __pyx_L1_error) - __pyx_L23_unpacking_done:; - } - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_model, __pyx_t_8) < 0) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_save, __pyx_t_3) < 0) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":101 - * self.yaml['anchors'] = round(anchors) # override yaml value - * self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - * self.names = [str(i) for i in range(self.yaml['nc'])] # default names # <<<<<<<<<<<<<< - * self.inplace = self.yaml.get('inplace', True) - * - */ - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = __Pyx_PyObject_Dict_GetItem(__pyx_t_3, __pyx_n_u_nc); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_8); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (likely(PyList_CheckExact(__pyx_t_3)) || PyTuple_CheckExact(__pyx_t_3)) { - __pyx_t_8 = __pyx_t_3; __Pyx_INCREF(__pyx_t_8); __pyx_t_13 = 0; - __pyx_t_17 = NULL; - } else { - __pyx_t_13 = -1; __pyx_t_8 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_17 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_8); if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 101, __pyx_L26_error) - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - for (;;) { - if (likely(!__pyx_t_17)) { - if (likely(PyList_CheckExact(__pyx_t_8))) { - if (__pyx_t_13 >= PyList_GET_SIZE(__pyx_t_8)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyList_GET_ITEM(__pyx_t_8, __pyx_t_13); __Pyx_INCREF(__pyx_t_3); __pyx_t_13++; if (unlikely((0 < 0))) __PYX_ERR(0, 101, __pyx_L26_error) - #else - __pyx_t_3 = PySequence_ITEM(__pyx_t_8, __pyx_t_13); __pyx_t_13++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - if (__pyx_t_13 >= PyTuple_GET_SIZE(__pyx_t_8)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_8, __pyx_t_13); __Pyx_INCREF(__pyx_t_3); __pyx_t_13++; if (unlikely((0 < 0))) __PYX_ERR(0, 101, __pyx_L26_error) - #else - __pyx_t_3 = PySequence_ITEM(__pyx_t_8, __pyx_t_13); __pyx_t_13++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } - } else { - __pyx_t_3 = __pyx_t_17(__pyx_t_8); - if (unlikely(!__pyx_t_3)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 101, __pyx_L26_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_3); - } - __Pyx_XDECREF_SET(__pyx_8genexpr1__pyx_v_i, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Str(__pyx_8genexpr1__pyx_v_i); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_GOTREF(__pyx_t_3); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_3))) __PYX_ERR(0, 101, __pyx_L26_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_i); __pyx_8genexpr1__pyx_v_i = 0; - goto __pyx_L29_exit_scope; - __pyx_L26_error:; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_i); __pyx_8genexpr1__pyx_v_i = 0; - goto __pyx_L1_error; - __pyx_L29_exit_scope:; - } /* exit inner scope */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_names, __pyx_t_2) < 0) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":102 - * self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - * self.names = [str(i) for i in range(self.yaml['nc'])] # default names - * self.inplace = self.yaml.get('inplace', True) # <<<<<<<<<<<<<< - * - * # Build strides, anchors - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_yaml); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_get); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_8, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_inplace, __pyx_t_2) < 0) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":105 - * - * # Build strides, anchors - * m = self.model[-1] # Detect() # <<<<<<<<<<<<<< - * if isinstance(m, Detect): - * s = 256 # 2x min stride - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 105, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_GetItemInt(__pyx_t_2, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 105, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_m = __pyx_t_8; - __pyx_t_8 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":106 - * # Build strides, anchors - * m = self.model[-1] # Detect() - * if isinstance(m, Detect): # <<<<<<<<<<<<<< - * s = 256 # 2x min stride - * m.inplace = self.inplace - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_Detect); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 106, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_5 = PyObject_IsInstance(__pyx_v_m, __pyx_t_8); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(0, 106, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_6 = (__pyx_t_5 != 0); - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":107 - * m = self.model[-1] # Detect() - * if isinstance(m, Detect): - * s = 256 # 2x min stride # <<<<<<<<<<<<<< - * m.inplace = self.inplace - * m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - */ - __Pyx_INCREF(__pyx_int_256); - __pyx_v_s = __pyx_int_256; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":108 - * if isinstance(m, Detect): - * s = 256 # 2x min stride - * m.inplace = self.inplace # <<<<<<<<<<<<<< - * m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - * m.anchors /= m.stride.view(-1, 1, 1) - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_inplace); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 108, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_inplace, __pyx_t_8) < 0) __PYX_ERR(0, 108, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":109 - * s = 256 # 2x min stride - * m.inplace = self.inplace - * m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward # <<<<<<<<<<<<<< - * m.anchors /= m.stride.view(-1, 1, 1) - * check_anchor_order(m) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_torch); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_tensor); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_15 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_forward); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_15); - __Pyx_GetModuleGlobalName(__pyx_t_19, __pyx_n_s_torch); if (unlikely(!__pyx_t_19)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_19); - __pyx_t_20 = __Pyx_PyObject_GetAttrStr(__pyx_t_19, __pyx_n_s_zeros); if (unlikely(!__pyx_t_20)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_20); - __Pyx_DECREF(__pyx_t_19); __pyx_t_19 = 0; - __pyx_t_19 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_20))) { - __pyx_t_19 = PyMethod_GET_SELF(__pyx_t_20); - if (likely(__pyx_t_19)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_20); - __Pyx_INCREF(__pyx_t_19); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_20, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[5] = {__pyx_t_19, __pyx_int_1, __pyx_v_ch, __pyx_v_s, __pyx_v_s}; - __pyx_t_18 = __Pyx_PyObject_FastCall(__pyx_t_20, __pyx_callargs+1-__pyx_t_4, 4+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_19); __pyx_t_19 = 0; - if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_18); - __Pyx_DECREF(__pyx_t_20); __pyx_t_20 = 0; - } - __pyx_t_20 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_15))) { - __pyx_t_20 = PyMethod_GET_SELF(__pyx_t_15); - if (likely(__pyx_t_20)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_15); - __Pyx_INCREF(__pyx_t_20); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_15, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_20, __pyx_t_18}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_15, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_20); __pyx_t_20 = 0; - __Pyx_DECREF(__pyx_t_18); __pyx_t_18 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_15 = __pyx_t_1; __Pyx_INCREF(__pyx_t_15); __pyx_t_13 = 0; - __pyx_t_17 = NULL; - } else { - __pyx_t_13 = -1; __pyx_t_15 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_15); - __pyx_t_17 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_15); if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 109, __pyx_L33_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_17)) { - if (likely(PyList_CheckExact(__pyx_t_15))) { - if (__pyx_t_13 >= PyList_GET_SIZE(__pyx_t_15)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_15, __pyx_t_13); __Pyx_INCREF(__pyx_t_1); __pyx_t_13++; if (unlikely((0 < 0))) __PYX_ERR(0, 109, __pyx_L33_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_15, __pyx_t_13); __pyx_t_13++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_13 >= PyTuple_GET_SIZE(__pyx_t_15)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_15, __pyx_t_13); __Pyx_INCREF(__pyx_t_1); __pyx_t_13++; if (unlikely((0 < 0))) __PYX_ERR(0, 109, __pyx_L33_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_15, __pyx_t_13); __pyx_t_13++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_17(__pyx_t_15); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 109, __pyx_L33_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XDECREF_SET(__pyx_8genexpr2__pyx_v_x, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr2__pyx_v_x, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_18 = __Pyx_GetItemInt(__pyx_t_1, -2L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_18); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_v_s, __pyx_t_18); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_18); __pyx_t_18 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_1))) __PYX_ERR(0, 109, __pyx_L33_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); __pyx_8genexpr2__pyx_v_x = 0; - goto __pyx_L36_exit_scope; - __pyx_L33_error:; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); __pyx_8genexpr2__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L36_exit_scope:; - } /* exit inner scope */ - __pyx_t_15 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_15 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_15)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_15); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_15, __pyx_t_2}; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_stride, __pyx_t_8) < 0) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":110 - * m.inplace = self.inplace - * m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - * m.anchors /= m.stride.view(-1, 1, 1) # <<<<<<<<<<<<<< - * check_anchor_order(m) - * self.stride = m.stride - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_anchors); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_stride); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_view); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyNumber_InPlaceDivide(__pyx_t_8, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_anchors, __pyx_t_2) < 0) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":111 - * m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - * m.anchors /= m.stride.view(-1, 1, 1) - * check_anchor_order(m) # <<<<<<<<<<<<<< - * self.stride = m.stride - * self._initialize_biases() # only run once - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_check_anchor_order); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_v_m}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":112 - * m.anchors /= m.stride.view(-1, 1, 1) - * check_anchor_order(m) - * self.stride = m.stride # <<<<<<<<<<<<<< - * self._initialize_biases() # only run once - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_stride); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 112, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_stride, __pyx_t_2) < 0) __PYX_ERR(0, 112, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":113 - * check_anchor_order(m) - * self.stride = m.stride - * self._initialize_biases() # only run once # <<<<<<<<<<<<<< - * - * # Init weights, biases - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_initialize_biases); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 113, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_8, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 113, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":106 - * # Build strides, anchors - * m = self.model[-1] # Detect() - * if isinstance(m, Detect): # <<<<<<<<<<<<<< - * s = 256 # 2x min stride - * m.inplace = self.inplace - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":116 - * - * # Init weights, biases - * initialize_weights(self) # <<<<<<<<<<<<<< - * self.info() - * LOGGER.info('') - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_initialize_weights); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 116, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_v_self}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 116, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":117 - * # Init weights, biases - * initialize_weights(self) - * self.info() # <<<<<<<<<<<<<< - * LOGGER.info('') - * - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_info); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 117, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_8, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 117, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":118 - * initialize_weights(self) - * self.info() - * LOGGER.info('') # <<<<<<<<<<<<<< - * - * def forward(self, x, augment=False, profile=False, visualize=False): - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_info); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_kp_u__12}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":82 - * - * class Model(nn.Module): - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes # <<<<<<<<<<<<<< - * super().__init__() - * if isinstance(cfg, dict): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_XDECREF(__pyx_t_18); - __Pyx_XDECREF(__pyx_t_19); - __Pyx_XDECREF(__pyx_t_20); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_yaml); - __Pyx_XDECREF(__pyx_v_f); - __Pyx_XDECREF(__pyx_v_m); - __Pyx_XDECREF(__pyx_v_s); - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_i); - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); - __Pyx_XDECREF(__pyx_v_ch); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":120 - * LOGGER.info('') - * - * def forward(self, x, augment=False, profile=False, visualize=False): # <<<<<<<<<<<<<< - * if augment: - * return self._forward_augment(x) # augmented inference, None - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_3forward(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_3forward = {"forward", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_3forward, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_3forward(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_x = 0; - PyObject *__pyx_v_augment = 0; - PyObject *__pyx_v_profile = 0; - PyObject *__pyx_v_visualize = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("forward (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,&__pyx_n_s_augment,&__pyx_n_s_profile,&__pyx_n_s_visualize,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,&__pyx_n_s_augment,&__pyx_n_s_profile,&__pyx_n_s_visualize,0}; - #endif - PyObject* values[5] = {0,0,0,0,0}; - values[2] = ((PyObject *)((PyObject *)Py_False)); - values[3] = ((PyObject *)((PyObject *)Py_False)); - values[4] = ((PyObject *)((PyObject *)Py_False)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 120, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_x)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 120, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("forward", 0, 2, 5, 1); __PYX_ERR(0, 120, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_augment); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 120, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_profile); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 120, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_visualize); - if (value) { values[4] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 120, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "forward") < 0)) __PYX_ERR(0, 120, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_x = values[1]; - __pyx_v_augment = values[2]; - __pyx_v_profile = values[3]; - __pyx_v_visualize = values[4]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("forward", 0, 2, 5, __pyx_nargs); __PYX_ERR(0, 120, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.forward", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_2forward(__pyx_self, __pyx_v_self, __pyx_v_x, __pyx_v_augment, __pyx_v_profile, __pyx_v_visualize); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_2forward(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_augment, PyObject *__pyx_v_profile, PyObject *__pyx_v_visualize) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("forward", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":121 - * - * def forward(self, x, augment=False, profile=False, visualize=False): - * if augment: # <<<<<<<<<<<<<< - * return self._forward_augment(x) # augmented inference, None - * return self._forward_once(x, profile, visualize) # single-scale inference, train - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_augment); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(0, 121, __pyx_L1_error) - if (__pyx_t_1) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":122 - * def forward(self, x, augment=False, profile=False, visualize=False): - * if augment: - * return self._forward_augment(x) # augmented inference, None # <<<<<<<<<<<<<< - * return self._forward_once(x, profile, visualize) # single-scale inference, train - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_forward_augment); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 122, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_v_x}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 122, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":121 - * - * def forward(self, x, augment=False, profile=False, visualize=False): - * if augment: # <<<<<<<<<<<<<< - * return self._forward_augment(x) # augmented inference, None - * return self._forward_once(x, profile, visualize) # single-scale inference, train - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":123 - * if augment: - * return self._forward_augment(x) # augmented inference, None - * return self._forward_once(x, profile, visualize) # single-scale inference, train # <<<<<<<<<<<<<< - * - * def _forward_augment(self, x): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_forward_once); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 123, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_4, __pyx_v_x, __pyx_v_profile, __pyx_v_visualize}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 3+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 123, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":120 - * LOGGER.info('') - * - * def forward(self, x, augment=False, profile=False, visualize=False): # <<<<<<<<<<<<<< - * if augment: - * return self._forward_augment(x) # augmented inference, None - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.forward", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":125 - * return self._forward_once(x, profile, visualize) # single-scale inference, train - * - * def _forward_augment(self, x): # <<<<<<<<<<<<<< - * img_size = x.shape[-2:] # height, width - * s = [1, 0.83, 0.67] # scales - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_5_forward_augment(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_5_forward_augment = {"_forward_augment", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_5_forward_augment, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_5_forward_augment(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_x = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_forward_augment (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 125, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_x)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 125, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_forward_augment", 1, 2, 2, 1); __PYX_ERR(0, 125, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_forward_augment") < 0)) __PYX_ERR(0, 125, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_x = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_forward_augment", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 125, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._forward_augment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_4_forward_augment(__pyx_self, __pyx_v_self, __pyx_v_x); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_4_forward_augment(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x) { - PyObject *__pyx_v_img_size = NULL; - PyObject *__pyx_v_s = NULL; - PyObject *__pyx_v_f = NULL; - PyObject *__pyx_v_y = NULL; - PyObject *__pyx_v_si = NULL; - PyObject *__pyx_v_fi = NULL; - PyObject *__pyx_v_xi = NULL; - PyObject *__pyx_v_yi = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - int __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - int __pyx_t_11; - PyObject *__pyx_t_12 = NULL; - int __pyx_t_13; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_forward_augment", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":126 - * - * def _forward_augment(self, x): - * img_size = x.shape[-2:] # height, width # <<<<<<<<<<<<<< - * s = [1, 0.83, 0.67] # scales - * f = [None, 3, None] # flips (2-ud, 3-lr) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_x, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetSlice(__pyx_t_1, -2L, 0, NULL, NULL, &__pyx_slice__13, 1, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_img_size = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":127 - * def _forward_augment(self, x): - * img_size = x.shape[-2:] # height, width - * s = [1, 0.83, 0.67] # scales # <<<<<<<<<<<<<< - * f = [None, 3, None] # flips (2-ud, 3-lr) - * y = [] # outputs - */ - __pyx_t_2 = PyList_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 127, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_int_1); - __Pyx_INCREF(__pyx_float_0_83); - __Pyx_GIVEREF(__pyx_float_0_83); - PyList_SET_ITEM(__pyx_t_2, 1, __pyx_float_0_83); - __Pyx_INCREF(__pyx_float_0_67); - __Pyx_GIVEREF(__pyx_float_0_67); - PyList_SET_ITEM(__pyx_t_2, 2, __pyx_float_0_67); - __pyx_v_s = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":128 - * img_size = x.shape[-2:] # height, width - * s = [1, 0.83, 0.67] # scales - * f = [None, 3, None] # flips (2-ud, 3-lr) # <<<<<<<<<<<<<< - * y = [] # outputs - * for si, fi in zip(s, f): - */ - __pyx_t_2 = PyList_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyList_SET_ITEM(__pyx_t_2, 0, Py_None); - __Pyx_INCREF(__pyx_int_3); - __Pyx_GIVEREF(__pyx_int_3); - PyList_SET_ITEM(__pyx_t_2, 1, __pyx_int_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyList_SET_ITEM(__pyx_t_2, 2, Py_None); - __pyx_v_f = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":129 - * s = [1, 0.83, 0.67] # scales - * f = [None, 3, None] # flips (2-ud, 3-lr) - * y = [] # outputs # <<<<<<<<<<<<<< - * for si, fi in zip(s, f): - * xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 129, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_y = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":130 - * f = [None, 3, None] # flips (2-ud, 3-lr) - * y = [] # outputs - * for si, fi in zip(s, f): # <<<<<<<<<<<<<< - * xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - * yi = self._forward_once(xi)[0] # forward - */ - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_s); - __Pyx_GIVEREF(__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_s); - __Pyx_INCREF(__pyx_v_f); - __Pyx_GIVEREF(__pyx_v_f); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_f); - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_zip, __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 130, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 130, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 130, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 130, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 130, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_5 = PyList_GET_ITEM(sequence, 0); - __pyx_t_6 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - #else - __pyx_t_5 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - index = 0; __pyx_t_5 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_5)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_5); - index = 1; __pyx_t_6 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_6)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 2) < 0) __PYX_ERR(0, 130, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 130, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_si, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_XDECREF_SET(__pyx_v_fi, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":131 - * y = [] # outputs - * for si, fi in zip(s, f): - * xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) # <<<<<<<<<<<<<< - * yi = self._forward_once(xi)[0] # forward - * # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_scale_img); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_v_fi); if (unlikely((__pyx_t_9 < 0))) __PYX_ERR(0, 131, __pyx_L1_error) - if (__pyx_t_9) { - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_x, __pyx_n_s_flip); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_10 = NULL; - __pyx_t_11 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_11 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_10, __pyx_v_fi}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_11, 1+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_6 = __pyx_t_5; - __pyx_t_5 = 0; - } else { - __Pyx_INCREF(__pyx_v_x); - __pyx_t_6 = __pyx_v_x; - } - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_6); - __Pyx_INCREF(__pyx_v_si); - __Pyx_GIVEREF(__pyx_v_si); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_v_si); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_stride); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_10, __pyx_n_s_max); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = NULL; - __pyx_t_11 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_11 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_10, }; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_11, 0+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __pyx_t_12 = __Pyx_PyNumber_Int(__pyx_t_7); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_gs, __pyx_t_12) < 0) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_5, __pyx_t_6); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF_SET(__pyx_v_xi, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":132 - * for si, fi in zip(s, f): - * xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - * yi = self._forward_once(xi)[0] # forward # <<<<<<<<<<<<<< - * # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - * yi = self._descale_pred(yi, fi, si, img_size) - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_forward_once); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 132, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = NULL; - __pyx_t_11 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_11 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_v_xi}; - __pyx_t_12 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_11, 1+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 132, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_12, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 132, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF_SET(__pyx_v_yi, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":134 - * yi = self._forward_once(xi)[0] # forward - * # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - * yi = self._descale_pred(yi, fi, si, img_size) # <<<<<<<<<<<<<< - * y.append(yi) - * y = self._clip_augmented(y) # clip augmented tails - */ - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_descale_pred); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_5 = NULL; - __pyx_t_11 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_11 = 1; - } - } - { - PyObject *__pyx_callargs[5] = {__pyx_t_5, __pyx_v_yi, __pyx_v_fi, __pyx_v_si, __pyx_v_img_size}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_11, 4+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __Pyx_DECREF_SET(__pyx_v_yi, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":135 - * # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - * yi = self._descale_pred(yi, fi, si, img_size) - * y.append(yi) # <<<<<<<<<<<<<< - * y = self._clip_augmented(y) # clip augmented tails - * return torch.cat(y, 1), None # augmented inference, train - */ - __pyx_t_13 = __Pyx_PyObject_Append(__pyx_v_y, __pyx_v_yi); if (unlikely(__pyx_t_13 == ((int)-1))) __PYX_ERR(0, 135, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":130 - * f = [None, 3, None] # flips (2-ud, 3-lr) - * y = [] # outputs - * for si, fi in zip(s, f): # <<<<<<<<<<<<<< - * xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - * yi = self._forward_once(xi)[0] # forward - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":136 - * yi = self._descale_pred(yi, fi, si, img_size) - * y.append(yi) - * y = self._clip_augmented(y) # clip augmented tails # <<<<<<<<<<<<<< - * return torch.cat(y, 1), None # augmented inference, train - * - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_clip_augmented); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_12 = NULL; - __pyx_t_11 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_12)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_11 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_12, __pyx_v_y}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_11, 1+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __Pyx_DECREF_SET(__pyx_v_y, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":137 - * y.append(yi) - * y = self._clip_augmented(y) # clip augmented tails - * return torch.cat(y, 1), None # augmented inference, train # <<<<<<<<<<<<<< - * - * def _forward_once(self, x, profile=False, visualize=False): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_torch); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_cat); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_11 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_11 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_6, __pyx_v_y, __pyx_int_1}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_11, 2+__pyx_t_11); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_2); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_12, 1, Py_None); - __pyx_t_2 = 0; - __pyx_r = __pyx_t_12; - __pyx_t_12 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":125 - * return self._forward_once(x, profile, visualize) # single-scale inference, train - * - * def _forward_augment(self, x): # <<<<<<<<<<<<<< - * img_size = x.shape[-2:] # height, width - * s = [1, 0.83, 0.67] # scales - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._forward_augment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_img_size); - __Pyx_XDECREF(__pyx_v_s); - __Pyx_XDECREF(__pyx_v_f); - __Pyx_XDECREF(__pyx_v_y); - __Pyx_XDECREF(__pyx_v_si); - __Pyx_XDECREF(__pyx_v_fi); - __Pyx_XDECREF(__pyx_v_xi); - __Pyx_XDECREF(__pyx_v_yi); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":139 - * return torch.cat(y, 1), None # augmented inference, train - * - * def _forward_once(self, x, profile=False, visualize=False): # <<<<<<<<<<<<<< - * y, dt = [], [] # outputs - * for m in self.model: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_7_forward_once(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_7_forward_once = {"_forward_once", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_7_forward_once, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_7_forward_once(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_x = 0; - PyObject *__pyx_v_profile = 0; - CYTHON_UNUSED PyObject *__pyx_v_visualize = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_forward_once (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,&__pyx_n_s_profile,&__pyx_n_s_visualize,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_x,&__pyx_n_s_profile,&__pyx_n_s_visualize,0}; - #endif - PyObject* values[4] = {0,0,0,0}; - values[2] = ((PyObject *)((PyObject *)Py_False)); - values[3] = ((PyObject *)((PyObject *)Py_False)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 139, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_x)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 139, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_forward_once", 0, 2, 4, 1); __PYX_ERR(0, 139, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_profile); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 139, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_visualize); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 139, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_forward_once") < 0)) __PYX_ERR(0, 139, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_x = values[1]; - __pyx_v_profile = values[2]; - __pyx_v_visualize = values[3]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_forward_once", 0, 2, 4, __pyx_nargs); __PYX_ERR(0, 139, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._forward_once", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_6_forward_once(__pyx_self, __pyx_v_self, __pyx_v_x, __pyx_v_profile, __pyx_v_visualize); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_6_forward_once(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_profile, CYTHON_UNUSED PyObject *__pyx_v_visualize) { - PyObject *__pyx_v_y = NULL; - PyObject *__pyx_v_dt = NULL; - PyObject *__pyx_v_m = NULL; - PyObject *__pyx_8genexpr3__pyx_v_j = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - PyObject *(*__pyx_t_10)(PyObject *); - PyObject *__pyx_t_11 = NULL; - int __pyx_t_12; - int __pyx_t_13; - int __pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_forward_once", 0); - __Pyx_INCREF(__pyx_v_x); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":140 - * - * def _forward_once(self, x, profile=False, visualize=False): - * y, dt = [], [] # outputs # <<<<<<<<<<<<<< - * for m in self.model: - * if m.f != -1: # if not from previous layer - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_y = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - __pyx_v_dt = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":141 - * def _forward_once(self, x, profile=False, visualize=False): - * y, dt = [], [] # outputs - * for m in self.model: # <<<<<<<<<<<<<< - * if m.f != -1: # if not from previous layer - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 141, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 141, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 141, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_4(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 141, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XDECREF_SET(__pyx_v_m, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":142 - * y, dt = [], [] # outputs - * for m in self.model: - * if m.f != -1: # if not from previous layer # <<<<<<<<<<<<<< - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - * if profile: - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_f); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyInt_NeObjC(__pyx_t_2, __pyx_int_neg_1, -1L, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":143 - * for m in self.model: - * if m.f != -1: # if not from previous layer - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers # <<<<<<<<<<<<<< - * if profile: - * self._profile_one_layer(m, x, dt) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_f); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = PyInt_Check(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((__pyx_t_6 != 0)) { - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_f); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetItem(__pyx_v_y, __pyx_t_2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = __pyx_t_7; - __pyx_t_7 = 0; - } else { - { /* enter inner scope */ - __pyx_t_7 = PyList_New(0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_f); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_8 = __pyx_t_2; __Pyx_INCREF(__pyx_t_8); __pyx_t_9 = 0; - __pyx_t_10 = NULL; - } else { - __pyx_t_9 = -1; __pyx_t_8 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_10 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_8); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 143, __pyx_L8_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_10)) { - if (likely(PyList_CheckExact(__pyx_t_8))) { - if (__pyx_t_9 >= PyList_GET_SIZE(__pyx_t_8)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_8, __pyx_t_9); __Pyx_INCREF(__pyx_t_2); __pyx_t_9++; if (unlikely((0 < 0))) __PYX_ERR(0, 143, __pyx_L8_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_8, __pyx_t_9); __pyx_t_9++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_9 >= PyTuple_GET_SIZE(__pyx_t_8)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_8, __pyx_t_9); __Pyx_INCREF(__pyx_t_2); __pyx_t_9++; if (unlikely((0 < 0))) __PYX_ERR(0, 143, __pyx_L8_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_8, __pyx_t_9); __pyx_t_9++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_10(__pyx_t_8); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 143, __pyx_L8_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XDECREF_SET(__pyx_8genexpr3__pyx_v_j, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_11 = __Pyx_PyInt_EqObjC(__pyx_8genexpr3__pyx_v_j, __pyx_int_neg_1, -1L, 0); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_12 < 0))) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__pyx_t_12) { - __Pyx_INCREF(__pyx_v_x); - __pyx_t_2 = __pyx_v_x; - } else { - __pyx_t_11 = __Pyx_PyObject_GetItem(__pyx_v_y, __pyx_8genexpr3__pyx_v_j); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_2 = __pyx_t_11; - __pyx_t_11 = 0; - } - if (unlikely(__Pyx_ListComp_Append(__pyx_t_7, (PyObject*)__pyx_t_2))) __PYX_ERR(0, 143, __pyx_L8_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_j); __pyx_8genexpr3__pyx_v_j = 0; - goto __pyx_L11_exit_scope; - __pyx_L8_error:; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_j); __pyx_8genexpr3__pyx_v_j = 0; - goto __pyx_L1_error; - __pyx_L11_exit_scope:; - } /* exit inner scope */ - __pyx_t_5 = __pyx_t_7; - __pyx_t_7 = 0; - } - __Pyx_DECREF_SET(__pyx_v_x, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":142 - * y, dt = [], [] # outputs - * for m in self.model: - * if m.f != -1: # if not from previous layer # <<<<<<<<<<<<<< - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - * if profile: - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":144 - * if m.f != -1: # if not from previous layer - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - * if profile: # <<<<<<<<<<<<<< - * self._profile_one_layer(m, x, dt) - * x = m(x) # run - */ - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_v_profile); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 144, __pyx_L1_error) - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":145 - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - * if profile: - * self._profile_one_layer(m, x, dt) # <<<<<<<<<<<<<< - * x = m(x) # run - * y.append(x if m.i in self.save else None) # save output - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_profile_one_layer); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = NULL; - __pyx_t_13 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_13 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_8, __pyx_v_m, __pyx_v_x, __pyx_v_dt}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_13, 3+__pyx_t_13); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":144 - * if m.f != -1: # if not from previous layer - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - * if profile: # <<<<<<<<<<<<<< - * self._profile_one_layer(m, x, dt) - * x = m(x) # run - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":146 - * if profile: - * self._profile_one_layer(m, x, dt) - * x = m(x) # run # <<<<<<<<<<<<<< - * y.append(x if m.i in self.save else None) # save output - * return x - */ - __Pyx_INCREF(__pyx_v_m); - __pyx_t_7 = __pyx_v_m; __pyx_t_8 = NULL; - __pyx_t_13 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_13 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_v_x}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_13, 1+__pyx_t_13); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 146, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __Pyx_DECREF_SET(__pyx_v_x, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":147 - * self._profile_one_layer(m, x, dt) - * x = m(x) # run - * y.append(x if m.i in self.save else None) # save output # <<<<<<<<<<<<<< - * return x - * - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_i); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_save); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_6 = (__Pyx_PySequence_ContainsTF(__pyx_t_7, __pyx_t_8, Py_EQ)); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if ((__pyx_t_6 != 0)) { - __Pyx_INCREF(__pyx_v_x); - __pyx_t_5 = __pyx_v_x; - } else { - __Pyx_INCREF(Py_None); - __pyx_t_5 = Py_None; - } - __pyx_t_14 = __Pyx_PyList_Append(__pyx_v_y, __pyx_t_5); if (unlikely(__pyx_t_14 == ((int)-1))) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":141 - * def _forward_once(self, x, profile=False, visualize=False): - * y, dt = [], [] # outputs - * for m in self.model: # <<<<<<<<<<<<<< - * if m.f != -1: # if not from previous layer - * x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":148 - * x = m(x) # run - * y.append(x if m.i in self.save else None) # save output - * return x # <<<<<<<<<<<<<< - * - * def _descale_pred(self, p, flips, scale, img_size): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_x); - __pyx_r = __pyx_v_x; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":139 - * return torch.cat(y, 1), None # augmented inference, train - * - * def _forward_once(self, x, profile=False, visualize=False): # <<<<<<<<<<<<<< - * y, dt = [], [] # outputs - * for m in self.model: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._forward_once", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_y); - __Pyx_XDECREF(__pyx_v_dt); - __Pyx_XDECREF(__pyx_v_m); - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_j); - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":150 - * return x - * - * def _descale_pred(self, p, flips, scale, img_size): # <<<<<<<<<<<<<< - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_9_descale_pred(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_9_descale_pred = {"_descale_pred", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_9_descale_pred, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_9_descale_pred(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p = 0; - PyObject *__pyx_v_flips = 0; - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_img_size = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_descale_pred (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p,&__pyx_n_s_flips,&__pyx_n_s_scale,&__pyx_n_s_img_size,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p,&__pyx_n_s_flips,&__pyx_n_s_scale,&__pyx_n_s_img_size,0}; - #endif - PyObject* values[5] = {0,0,0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 150, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 150, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_descale_pred", 1, 5, 5, 1); __PYX_ERR(0, 150, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_flips)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 150, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_descale_pred", 1, 5, 5, 2); __PYX_ERR(0, 150, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_scale)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 150, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_descale_pred", 1, 5, 5, 3); __PYX_ERR(0, 150, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (likely((values[4] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_img_size)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 150, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_descale_pred", 1, 5, 5, 4); __PYX_ERR(0, 150, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_descale_pred") < 0)) __PYX_ERR(0, 150, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 5)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - } - __pyx_v_self = values[0]; - __pyx_v_p = values[1]; - __pyx_v_flips = values[2]; - __pyx_v_scale = values[3]; - __pyx_v_img_size = values[4]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_descale_pred", 1, 5, 5, __pyx_nargs); __PYX_ERR(0, 150, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._descale_pred", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_8_descale_pred(__pyx_self, __pyx_v_self, __pyx_v_p, __pyx_v_flips, __pyx_v_scale, __pyx_v_img_size); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_8_descale_pred(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p, PyObject *__pyx_v_flips, PyObject *__pyx_v_scale, PyObject *__pyx_v_img_size) { - PyObject *__pyx_v_x = NULL; - PyObject *__pyx_v_y = NULL; - PyObject *__pyx_v_wh = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_descale_pred", 0); - __Pyx_INCREF(__pyx_v_p); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":152 - * def _descale_pred(self, p, flips, scale, img_size): - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: # <<<<<<<<<<<<<< - * p[..., :4] /= scale # de-scale - * if flips == 2: - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_inplace); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_2) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":153 - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: - * p[..., :4] /= scale # de-scale # <<<<<<<<<<<<<< - * if flips == 2: - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - */ - __Pyx_INCREF(__pyx_tuple__15); - __pyx_t_3 = __pyx_tuple__15; - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_p, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyNumber_InPlaceDivide(__pyx_t_1, __pyx_v_scale); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_p, __pyx_t_3, __pyx_t_4) < 0))) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":154 - * if self.inplace: - * p[..., :4] /= scale # de-scale - * if flips == 2: # <<<<<<<<<<<<<< - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - * elif flips == 3: - */ - __pyx_t_4 = __Pyx_PyInt_EqObjC(__pyx_v_flips, __pyx_int_2, 2, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 154, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_2) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":155 - * p[..., :4] /= scale # de-scale - * if flips == 2: - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud # <<<<<<<<<<<<<< - * elif flips == 3: - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - */ - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_img_size, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_p, __pyx_tuple__16); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = PyNumber_Subtract(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_p, __pyx_tuple__16, __pyx_t_5) < 0))) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":154 - * if self.inplace: - * p[..., :4] /= scale # de-scale - * if flips == 2: # <<<<<<<<<<<<<< - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - * elif flips == 3: - */ - goto __pyx_L4; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":156 - * if flips == 2: - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - * elif flips == 3: # <<<<<<<<<<<<<< - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - * else: - */ - __pyx_t_5 = __Pyx_PyInt_EqObjC(__pyx_v_flips, __pyx_int_3, 3, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 156, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 156, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_2) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":157 - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - * elif flips == 3: - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr # <<<<<<<<<<<<<< - * else: - * x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale - */ - __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_img_size, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 157, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_p, __pyx_tuple__17); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Subtract(__pyx_t_5, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 157, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_p, __pyx_tuple__17, __pyx_t_4) < 0))) __PYX_ERR(0, 157, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":156 - * if flips == 2: - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - * elif flips == 3: # <<<<<<<<<<<<<< - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - * else: - */ - } - __pyx_L4:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":152 - * def _descale_pred(self, p, flips, scale, img_size): - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: # <<<<<<<<<<<<<< - * p[..., :4] /= scale # de-scale - * if flips == 2: - */ - goto __pyx_L3; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":159 - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - * else: - * x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale # <<<<<<<<<<<<<< - * if flips == 2: - * y = img_size[0] - y # de-flip ud - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_p, __pyx_tuple__19); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_4, __pyx_v_scale); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_p, __pyx_tuple__21); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyNumber_Divide(__pyx_t_4, __pyx_v_scale); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_p, __pyx_tuple__5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = __Pyx_PyNumber_Divide(__pyx_t_4, __pyx_v_scale); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_x = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_y = __pyx_t_5; - __pyx_t_5 = 0; - __pyx_v_wh = __pyx_t_6; - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":160 - * else: - * x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale - * if flips == 2: # <<<<<<<<<<<<<< - * y = img_size[0] - y # de-flip ud - * elif flips == 3: - */ - __pyx_t_6 = __Pyx_PyInt_EqObjC(__pyx_v_flips, __pyx_int_2, 2, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_2) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":161 - * x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale - * if flips == 2: - * y = img_size[0] - y # de-flip ud # <<<<<<<<<<<<<< - * elif flips == 3: - * x = img_size[1] - x # de-flip lr - */ - __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_img_size, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = PyNumber_Subtract(__pyx_t_6, __pyx_v_y); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF_SET(__pyx_v_y, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":160 - * else: - * x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale - * if flips == 2: # <<<<<<<<<<<<<< - * y = img_size[0] - y # de-flip ud - * elif flips == 3: - */ - goto __pyx_L5; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":162 - * if flips == 2: - * y = img_size[0] - y # de-flip ud - * elif flips == 3: # <<<<<<<<<<<<<< - * x = img_size[1] - x # de-flip lr - * p = torch.cat((x, y, wh, p[..., 4:]), -1) - */ - __pyx_t_5 = __Pyx_PyInt_EqObjC(__pyx_v_flips, __pyx_int_3, 3, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_2) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":163 - * y = img_size[0] - y # de-flip ud - * elif flips == 3: - * x = img_size[1] - x # de-flip lr # <<<<<<<<<<<<<< - * p = torch.cat((x, y, wh, p[..., 4:]), -1) - * return p - */ - __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_img_size, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyNumber_Subtract(__pyx_t_5, __pyx_v_x); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF_SET(__pyx_v_x, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":162 - * if flips == 2: - * y = img_size[0] - y # de-flip ud - * elif flips == 3: # <<<<<<<<<<<<<< - * x = img_size[1] - x # de-flip lr - * p = torch.cat((x, y, wh, p[..., 4:]), -1) - */ - } - __pyx_L5:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":164 - * elif flips == 3: - * x = img_size[1] - x # de-flip lr - * p = torch.cat((x, y, wh, p[..., 4:]), -1) # <<<<<<<<<<<<<< - * return p - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_torch); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_cat); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_v_p, __pyx_tuple__7); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_x); - __Pyx_INCREF(__pyx_v_y); - __Pyx_GIVEREF(__pyx_v_y); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_v_y); - __Pyx_INCREF(__pyx_v_wh); - __Pyx_GIVEREF(__pyx_v_wh); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_wh); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_5, __pyx_t_4, __pyx_int_neg_1}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF_SET(__pyx_v_p, __pyx_t_6); - __pyx_t_6 = 0; - } - __pyx_L3:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":165 - * x = img_size[1] - x # de-flip lr - * p = torch.cat((x, y, wh, p[..., 4:]), -1) - * return p # <<<<<<<<<<<<<< - * - * def _clip_augmented(self, y): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_p); - __pyx_r = __pyx_v_p; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":150 - * return x - * - * def _descale_pred(self, p, flips, scale, img_size): # <<<<<<<<<<<<<< - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._descale_pred", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_v_y); - __Pyx_XDECREF(__pyx_v_wh); - __Pyx_XDECREF(__pyx_v_p); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":167 - * return p - * - * def _clip_augmented(self, y): # <<<<<<<<<<<<<< - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_11_clip_augmented(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_11_clip_augmented = {"_clip_augmented", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_11_clip_augmented, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_11_clip_augmented(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_y = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_clip_augmented (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_y,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_y,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 167, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_y)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 167, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_clip_augmented", 1, 2, 2, 1); __PYX_ERR(0, 167, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_clip_augmented") < 0)) __PYX_ERR(0, 167, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_y = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_clip_augmented", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 167, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._clip_augmented", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_10_clip_augmented(__pyx_self, __pyx_v_self, __pyx_v_y); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_2generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":170 - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - * g = sum(4 ** x for x in range(nl)) # grid points # <<<<<<<<<<<<<< - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 170, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_2generator1, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_Model__clip_augmented_locals_gen, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._clip_augmented.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_2generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 170, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_nl)) { __Pyx_RaiseClosureNameError("nl"); __PYX_ERR(0, 170, __pyx_L1_error) } - __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_nl); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 170, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 170, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 170, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 170, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Power(__pyx_int_4, __pyx_cur_scope->__pyx_v_x, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - __Pyx_XGIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_3; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_4; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_2 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_3 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_4 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 170, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_5generator2(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":172 - * g = sum(4 ** x for x in range(nl)) # grid points - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices # <<<<<<<<<<<<<< - * y[0] = y[0][:, :-i] # large - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_3genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 172, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_5generator2, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_Model__clip_augmented_locals_gen, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._clip_augmented.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_5generator2(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 172, __pyx_L1_error) - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_e); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 172, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 172, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 172, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_4(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 172, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Power(__pyx_int_4, __pyx_cur_scope->__pyx_v_x, Py_None); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_3; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_4; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_3 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_4 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 172, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_8generator3(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":174 - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - * y[0] = y[0][:, :-i] # large - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices # <<<<<<<<<<<<<< - * y[-1] = y[-1][:, i:] # small - * return y - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_6genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 174, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_8generator3, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_Model__clip_augmented_locals_gen, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._clip_augmented.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_8generator3(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 174, __pyx_L1_error) - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_e); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 174, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 174, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 174, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_4(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 174, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_nl)) { __Pyx_RaiseClosureNameError("nl"); __PYX_ERR(0, 174, __pyx_L1_error) } - __pyx_t_2 = __Pyx_PyInt_SubtractObjC(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_nl, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyNumber_Subtract(__pyx_t_2, __pyx_cur_scope->__pyx_v_x); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Power(__pyx_int_4, __pyx_t_5, Py_None); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_3; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_4; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_3 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_4 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 174, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":167 - * return p - * - * def _clip_augmented(self, y): # <<<<<<<<<<<<<< - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_10_clip_augmented(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_y) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *__pyx_cur_scope; - PyObject *__pyx_v_g = NULL; - PyObject *__pyx_v_i = NULL; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_2generator1 = 0; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_5generator2 = 0; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_8generator3 = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_clip_augmented", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 167, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":169 - * def _clip_augmented(self, y): - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) # <<<<<<<<<<<<<< - * g = sum(4 ** x for x in range(nl)) # grid points - * e = 1 # exclude layer count - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 169, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 169, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_nl); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 169, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_v_nl = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":170 - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - * g = sum(4 ** x for x in range(nl)) # grid points # <<<<<<<<<<<<<< - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - */ - __pyx_t_1 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_sum, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_g = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":171 - * nl = self.model[-1].nl # number of detection layers (P3-P5) - * g = sum(4 ** x for x in range(nl)) # grid points - * e = 1 # exclude layer count # <<<<<<<<<<<<<< - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - * y[0] = y[0][:, :-i] # large - */ - __pyx_cur_scope->__pyx_v_e = 1; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":172 - * g = sum(4 ** x for x in range(nl)) # grid points - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices # <<<<<<<<<<<<<< - * y[0] = y[0][:, :-i] # large - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - */ - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_y, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_FloorDivide(__pyx_t_2, __pyx_v_g); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_3genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_sum, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_i = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":173 - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - * y[0] = y[0][:, :-i] # large # <<<<<<<<<<<<<< - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - * y[-1] = y[-1][:, i:] # small - */ - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_y, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Negative(__pyx_v_i); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PySlice_New(Py_None, __pyx_t_3, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_slice__22); - __Pyx_GIVEREF(__pyx_slice__22); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_slice__22); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely((__Pyx_SetItemInt(__pyx_v_y, 0, __pyx_t_1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1) < 0))) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":174 - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - * y[0] = y[0][:, :-i] # large - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices # <<<<<<<<<<<<<< - * y[-1] = y[-1][:, i:] # small - * return y - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_y, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_3, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_FloorDivide(__pyx_t_1, __pyx_v_g); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_6genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_sum, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF_SET(__pyx_v_i, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":175 - * y[0] = y[0][:, :-i] # large - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - * y[-1] = y[-1][:, i:] # small # <<<<<<<<<<<<<< - * return y - * - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_y, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySlice_New(__pyx_v_i, Py_None, Py_None); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_slice__22); - __Pyx_GIVEREF(__pyx_slice__22); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_slice__22); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely((__Pyx_SetItemInt(__pyx_v_y, -1L, __pyx_t_2, long, 1, __Pyx_PyInt_From_long, 0, 1, 1) < 0))) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":176 - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - * y[-1] = y[-1][:, i:] # small - * return y # <<<<<<<<<<<<<< - * - * def _profile_one_layer(self, m, x, dt): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_y); - __pyx_r = __pyx_v_y; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":167 - * return p - * - * def _clip_augmented(self, y): # <<<<<<<<<<<<<< - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._clip_augmented", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_g); - __Pyx_XDECREF(__pyx_v_i); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_2generator1); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_5generator2); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_clip_augmented_8generator3); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":178 - * return y - * - * def _profile_one_layer(self, m, x, dt): # <<<<<<<<<<<<<< - * c = isinstance(m, Detect) # is final layer, copy input as inplace fix - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_13_profile_one_layer(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_13_profile_one_layer = {"_profile_one_layer", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_13_profile_one_layer, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_13_profile_one_layer(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_m = 0; - PyObject *__pyx_v_x = 0; - PyObject *__pyx_v_dt = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_profile_one_layer (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_m,&__pyx_n_s_x,&__pyx_n_s_dt,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_m,&__pyx_n_s_x,&__pyx_n_s_dt,0}; - #endif - PyObject* values[4] = {0,0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 178, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_m)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 178, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_profile_one_layer", 1, 4, 4, 1); __PYX_ERR(0, 178, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_x)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 178, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_profile_one_layer", 1, 4, 4, 2); __PYX_ERR(0, 178, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_dt)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 178, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_profile_one_layer", 1, 4, 4, 3); __PYX_ERR(0, 178, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_profile_one_layer") < 0)) __PYX_ERR(0, 178, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_self = values[0]; - __pyx_v_m = values[1]; - __pyx_v_x = values[2]; - __pyx_v_dt = values[3]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_profile_one_layer", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 178, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._profile_one_layer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_12_profile_one_layer(__pyx_self, __pyx_v_self, __pyx_v_m, __pyx_v_x, __pyx_v_dt); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_12_profile_one_layer(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_m, PyObject *__pyx_v_x, PyObject *__pyx_v_dt) { - int __pyx_v_c; - PyObject *__pyx_v_o = NULL; - PyObject *__pyx_v_t = NULL; - CYTHON_UNUSED long __pyx_v__; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - long __pyx_t_11; - int __pyx_t_12; - Py_ssize_t __pyx_t_13; - Py_UCS4 __pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_profile_one_layer", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":179 - * - * def _profile_one_layer(self, m, x, dt): - * c = isinstance(m, Detect) # is final layer, copy input as inplace fix # <<<<<<<<<<<<<< - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - * t = time_sync() - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Detect); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_IsInstance(__pyx_v_m, __pyx_t_1); if (unlikely(__pyx_t_2 == ((int)-1))) __PYX_ERR(0, 179, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_c = __pyx_t_2; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":180 - * def _profile_one_layer(self, m, x, dt): - * c = isinstance(m, Detect) # is final layer, copy input as inplace fix - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs # <<<<<<<<<<<<<< - * t = time_sync() - * for _ in range(10): - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_thop); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_2) { - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_thop); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_profile); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_m); - __Pyx_GIVEREF(__pyx_v_m); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_m); - __pyx_t_5 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if ((__pyx_v_c != 0)) { - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_x, __pyx_n_s_copy); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_9)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_9, }; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_10, 0+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __pyx_t_6 = __pyx_t_7; - __pyx_t_7 = 0; - } else { - __Pyx_INCREF(__pyx_v_x); - __pyx_t_6 = __pyx_v_x; - } - __pyx_t_7 = PyTuple_New(1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_6); - __pyx_t_6 = 0; - if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_inputs, __pyx_t_7) < 0) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_verbose, Py_False) < 0) __PYX_ERR(0, 180, __pyx_L1_error) - __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_3, __pyx_t_5); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_7, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_5, __pyx_float_1E9, 1E9, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyInt_MultiplyObjC(__pyx_t_7, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 180, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_1 = __pyx_t_5; - __pyx_t_5 = 0; - } else { - __Pyx_INCREF(__pyx_int_0); - __pyx_t_1 = __pyx_int_0; - } - __pyx_v_o = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":181 - * c = isinstance(m, Detect) # is final layer, copy input as inplace fix - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - * t = time_sync() # <<<<<<<<<<<<<< - * for _ in range(10): - * m(x.copy() if c else x) - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_time_sync); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 181, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_7, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_10, 0+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 181, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_v_t = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":182 - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - * t = time_sync() - * for _ in range(10): # <<<<<<<<<<<<<< - * m(x.copy() if c else x) - * dt.append((time_sync() - t) * 100) - */ - for (__pyx_t_11 = 0; __pyx_t_11 < 10; __pyx_t_11+=1) { - __pyx_v__ = __pyx_t_11; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":183 - * t = time_sync() - * for _ in range(10): - * m(x.copy() if c else x) # <<<<<<<<<<<<<< - * dt.append((time_sync() - t) * 100) - * if m == self.model[0]: - */ - if ((__pyx_v_c != 0)) { - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_x, __pyx_n_s_copy); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_4, }; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_10, 0+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_5 = __pyx_t_7; - __pyx_t_7 = 0; - } else { - __Pyx_INCREF(__pyx_v_x); - __pyx_t_5 = __pyx_v_x; - } - __Pyx_INCREF(__pyx_v_m); - __pyx_t_7 = __pyx_v_m; __pyx_t_3 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_5}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":184 - * for _ in range(10): - * m(x.copy() if c else x) - * dt.append((time_sync() - t) * 100) # <<<<<<<<<<<<<< - * if m == self.model[0]: - * LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_time_sync); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_5, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_10, 0+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_7 = PyNumber_Subtract(__pyx_t_1, __pyx_v_t); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyInt_MultiplyObjC(__pyx_t_7, __pyx_int_100, 0x64, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_12 = __Pyx_PyObject_Append(__pyx_v_dt, __pyx_t_1); if (unlikely(__pyx_t_12 == ((int)-1))) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":185 - * m(x.copy() if c else x) - * dt.append((time_sync() - t) * 100) - * if m == self.model[0]: # <<<<<<<<<<<<<< - * LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") - * LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_RichCompare(__pyx_v_m, __pyx_t_7, Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_2) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":186 - * dt.append((time_sync() - t) * 100) - * if m == self.model[0]: - * LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") # <<<<<<<<<<<<<< - * LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - * if c: - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_info); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = PyTuple_New(6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_13 = 0; - __pyx_t_14 = 127; - __pyx_t_3 = __Pyx_PyObject_Format(__pyx_kp_u_time_ms, __pyx_kp_u_10s); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__23); - __pyx_t_13 += 1; - __Pyx_GIVEREF(__pyx_kp_u__23); - PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_kp_u__23); - __pyx_t_3 = __Pyx_PyObject_Format(__pyx_n_u_GFLOPs, __pyx_kp_u_10s); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__23); - __pyx_t_13 += 1; - __Pyx_GIVEREF(__pyx_kp_u__23); - PyTuple_SET_ITEM(__pyx_t_7, 3, __pyx_kp_u__23); - __pyx_t_3 = __Pyx_PyObject_Format(__pyx_n_u_params, __pyx_kp_u_10s); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_7, 4, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u_module); - __pyx_t_13 += 8; - __Pyx_GIVEREF(__pyx_kp_u_module); - PyTuple_SET_ITEM(__pyx_t_7, 5, __pyx_kp_u_module); - __pyx_t_3 = __Pyx_PyUnicode_Join(__pyx_t_7, 6, __pyx_t_13, __pyx_t_14); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":185 - * m(x.copy() if c else x) - * dt.append((time_sync() - t) * 100) - * if m == self.model[0]: # <<<<<<<<<<<<<< - * LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") - * LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":187 - * if m == self.model[0]: - * LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") - * LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') # <<<<<<<<<<<<<< - * if c: - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_info); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(7); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_13 = 0; - __pyx_t_14 = 127; - __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_dt, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_4 = __Pyx_PyObject_Format(__pyx_t_7, __pyx_kp_u_10_2f); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u__23); - __pyx_t_13 += 1; - __Pyx_GIVEREF(__pyx_kp_u__23); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_kp_u__23); - __pyx_t_4 = __Pyx_PyObject_Format(__pyx_v_o, __pyx_kp_u_10_2f); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u__23); - __pyx_t_13 += 1; - __Pyx_GIVEREF(__pyx_kp_u__23); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_kp_u__23); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_7 = __Pyx_PyObject_Format(__pyx_t_4, __pyx_kp_u_10_0f); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_5, 4, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_kp_u__24); - __pyx_t_13 += 2; - __Pyx_GIVEREF(__pyx_kp_u__24); - PyTuple_SET_ITEM(__pyx_t_5, 5, __pyx_kp_u__24); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_type); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_4 = __Pyx_PyObject_FormatSimple(__pyx_t_7, __pyx_empty_unicode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 6, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyUnicode_Join(__pyx_t_5, 7, __pyx_t_13, __pyx_t_14); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_t_4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":188 - * LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") - * LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - * if c: # <<<<<<<<<<<<<< - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - * - */ - __pyx_t_2 = (__pyx_v_c != 0); - if (__pyx_t_2) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":189 - * LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - * if c: - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") # <<<<<<<<<<<<<< - * - * def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_info); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_13 = 0; - __pyx_t_14 = 127; - __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_builtin_sum, __pyx_v_dt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = __Pyx_PyObject_Format(__pyx_t_5, __pyx_kp_u_10_2f); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_kp_u__23); - __pyx_t_13 += 1; - __Pyx_GIVEREF(__pyx_kp_u__23); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_kp_u__23); - __pyx_t_7 = __Pyx_PyObject_Format(__pyx_kp_u__25, __pyx_kp_u_10s); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_kp_u__23); - __pyx_t_13 += 1; - __Pyx_GIVEREF(__pyx_kp_u__23); - PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_kp_u__23); - __pyx_t_7 = __Pyx_PyObject_Format(__pyx_kp_u__25, __pyx_kp_u_10s); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_14 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) > __pyx_t_14) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) : __pyx_t_14; - __pyx_t_13 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_3, 4, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_kp_u_Total); - __pyx_t_13 += 7; - __Pyx_GIVEREF(__pyx_kp_u_Total); - PyTuple_SET_ITEM(__pyx_t_3, 5, __pyx_kp_u_Total); - __pyx_t_7 = __Pyx_PyUnicode_Join(__pyx_t_3, 6, __pyx_t_13, __pyx_t_14); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_7}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":188 - * LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}") - * LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - * if c: # <<<<<<<<<<<<<< - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - * - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":178 - * return y - * - * def _profile_one_layer(self, m, x, dt): # <<<<<<<<<<<<<< - * c = isinstance(m, Detect) # is final layer, copy input as inplace fix - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._profile_one_layer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_o); - __Pyx_XDECREF(__pyx_v_t); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":191 - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - * - * def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency # <<<<<<<<<<<<<< - * # https://arxiv.org/abs/1708.02002 section 3.3 - * # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_initialize_biases(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_initialize_biases = {"_initialize_biases", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_initialize_biases, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_initialize_biases(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_cf = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_initialize_biases (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_cf,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_cf,0}; - #endif - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)((PyObject *)Py_None)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 191, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_cf); - if (value) { values[1] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 191, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_initialize_biases") < 0)) __PYX_ERR(0, 191, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_cf = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_initialize_biases", 0, 1, 2, __pyx_nargs); __PYX_ERR(0, 191, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._initialize_biases", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_14_initialize_biases(__pyx_self, __pyx_v_self, __pyx_v_cf); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_14_initialize_biases(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_cf) { - PyObject *__pyx_v_m = NULL; - PyObject *__pyx_v_mi = NULL; - PyObject *__pyx_v_s = NULL; - PyObject *__pyx_v_b = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t __pyx_t_4; - PyObject *(*__pyx_t_5)(PyObject *); - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - int __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_t_13; - PyObject *__pyx_t_14 = NULL; - PyObject *__pyx_t_15 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_initialize_biases", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":194 - * # https://arxiv.org/abs/1708.02002 section 3.3 - * # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - * m = self.model[-1] # Detect() module # <<<<<<<<<<<<<< - * for mi, s in zip(m.m, m.stride): # from - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_m = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":195 - * # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - * m = self.model[-1] # Detect() module - * for mi, s in zip(m.m, m.stride): # from # <<<<<<<<<<<<<< - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_m); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_stride); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_zip, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_3 = __pyx_t_1; __Pyx_INCREF(__pyx_t_3); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - } else { - __pyx_t_4 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 195, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_5)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 195, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_3, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 195, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_3, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_5(__pyx_t_3); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 195, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 195, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_6 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_6); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - index = 0; __pyx_t_2 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_6 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_6)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 2) < 0) __PYX_ERR(0, 195, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 195, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_mi, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_s, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":196 - * m = self.model[-1] # Detect() module - * for mi, s in zip(m.m, m.stride): # from - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) # <<<<<<<<<<<<<< - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - * b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_mi, __pyx_n_s_bias); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_view); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_na); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_7, __pyx_t_6, __pyx_int_neg_1}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_9, 2+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_b, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":197 - * for mi, s in zip(m.m, m.stride): # from - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) # <<<<<<<<<<<<<< - * b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_b, __pyx_n_s_data); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_tuple__26); - __pyx_t_10 = __pyx_tuple__26; - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_t_10); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_math); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_log); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyNumber_Divide(__pyx_int_640, __pyx_v_s); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_12 = PyNumber_Power(__pyx_t_7, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyNumber_Divide(__pyx_int_8, __pyx_t_12); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_11); - if (likely(__pyx_t_12)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_11, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_12, __pyx_t_7}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+1-__pyx_t_9, 1+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __pyx_t_11 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_6); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely((PyObject_SetItem(__pyx_t_1, __pyx_t_10, __pyx_t_11) < 0))) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":198 - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - * b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls # <<<<<<<<<<<<<< - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_b, __pyx_n_s_data); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_tuple__28); - __pyx_t_10 = __pyx_tuple__28; - __pyx_t_11 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_t_10); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_13 = (__pyx_v_cf == Py_None); - if ((__pyx_t_13 != 0)) { - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_math); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_log); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_nc); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_14 = __Pyx_PyFloat_SubtractObjC(__pyx_t_7, __pyx_float_0_999999, 0.999999, 0, 0); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyFloat_TrueDivideCObj(__pyx_float_0_6, __pyx_t_14, 0.6, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - __pyx_t_14 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_14 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_14)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_14); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_14, __pyx_t_7}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_9, 1+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __pyx_t_6 = __pyx_t_2; - __pyx_t_2 = 0; - } else { - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_torch); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_log); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_14 = __Pyx_PyObject_GetAttrStr(__pyx_v_cf, __pyx_n_s_sum); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_15 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_14))) { - __pyx_t_15 = PyMethod_GET_SELF(__pyx_t_14); - if (likely(__pyx_t_15)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_14); - __Pyx_INCREF(__pyx_t_15); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_14, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_15, }; - __pyx_t_12 = __Pyx_PyObject_FastCall(__pyx_t_14, __pyx_callargs+1-__pyx_t_9, 0+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - } - __pyx_t_14 = __Pyx_PyNumber_Divide(__pyx_v_cf, __pyx_t_12); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_12)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_12, __pyx_t_14}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_9, 1+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_6 = __pyx_t_2; - __pyx_t_2 = 0; - } - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_11, __pyx_t_6); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely((PyObject_SetItem(__pyx_t_1, __pyx_t_10, __pyx_t_2) < 0))) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":199 - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - * b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) # <<<<<<<<<<<<<< - * - * def _print_biases(self): - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_torch); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_nn); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_Parameter); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_b, __pyx_n_s_view); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_11 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_11)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_11, __pyx_int_neg_1}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_9, 1+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = PyTuple_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_requires_grad, Py_True) < 0) __PYX_ERR(0, 199, __pyx_L1_error) - __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, __pyx_t_2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_mi, __pyx_n_s_bias, __pyx_t_11) < 0) __PYX_ERR(0, 199, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":195 - * # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - * m = self.model[-1] # Detect() module - * for mi, s in zip(m.m, m.stride): # from # <<<<<<<<<<<<<< - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":191 - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - * - * def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency # <<<<<<<<<<<<<< - * # https://arxiv.org/abs/1708.02002 section 3.3 - * # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_14); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._initialize_biases", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_m); - __Pyx_XDECREF(__pyx_v_mi); - __Pyx_XDECREF(__pyx_v_s); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":201 - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - * - * def _print_biases(self): # <<<<<<<<<<<<<< - * m = self.model[-1] # Detect() module - * for mi in m.m: # from - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_17_print_biases(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_17_print_biases = {"_print_biases", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_17_print_biases, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_17_print_biases(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_print_biases (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 201, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_print_biases") < 0)) __PYX_ERR(0, 201, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_print_biases", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 201, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._print_biases", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_16_print_biases(__pyx_self, __pyx_v_self); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_16_print_biases(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_m = NULL; - PyObject *__pyx_v_mi = NULL; - PyObject *__pyx_v_b = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_print_biases", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":202 - * - * def _print_biases(self): - * m = self.model[-1] # Detect() module # <<<<<<<<<<<<<< - * for mi in m.m: # from - * b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 202, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 202, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_m = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":203 - * def _print_biases(self): - * m = self.model[-1] # Detect() module - * for mi in m.m: # from # <<<<<<<<<<<<<< - * b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - * LOGGER.info( - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_m); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 203, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 203, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_3); __Pyx_INCREF(__pyx_t_2); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 203, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_4(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 203, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XDECREF_SET(__pyx_v_mi, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":204 - * m = self.model[-1] # Detect() module - * for mi in m.m: # from - * b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) # <<<<<<<<<<<<<< - * LOGGER.info( - * ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_mi, __pyx_n_s_bias); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_detach); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_6, }; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_view); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_na); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_6, __pyx_t_5, __pyx_int_neg_1}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_8, 2+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_T); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_b, __pyx_t_7); - __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":205 - * for mi in m.m: # from - * b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - * LOGGER.info( # <<<<<<<<<<<<<< - * ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_info); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":206 - * b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - * LOGGER.info( - * ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) # <<<<<<<<<<<<<< - * - * # def _print_weights(self): - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_mi, __pyx_n_s_weight); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_shape); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_9, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyList_New(1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GIVEREF(__pyx_t_6); - PyList_SET_ITEM(__pyx_t_9, 0, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_2 = __pyx_t_9; - __pyx_t_9 = 0; - __pyx_t_10 = __Pyx_PyObject_GetSlice(__pyx_v_b, 0, 5, NULL, NULL, &__pyx_slice__29, 0, 1, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_10, __pyx_n_s_mean); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_11); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_11, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_10, __pyx_int_1}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_tolist); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_11); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_11, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_6, }; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - if (__Pyx_PyList_Extend(__pyx_t_2, __pyx_t_9) < 0) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_11 = __Pyx_PyObject_GetSlice(__pyx_v_b, 5, 0, NULL, NULL, &__pyx_slice__27, 1, 0, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_mean); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_11)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_11, }; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - if (__Pyx_ListComp_Append(__pyx_t_2, __pyx_t_9) < 0) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - { - PyObject *__pyx_temp = PyList_AsTuple(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); - __pyx_t_2 = __pyx_temp; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - } - __pyx_t_9 = PyUnicode_Format(__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3, __pyx_t_2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_9}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":203 - * def _print_biases(self): - * m = self.model[-1] # Detect() module - * for mi in m.m: # from # <<<<<<<<<<<<<< - * b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - * LOGGER.info( - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":201 - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - * - * def _print_biases(self): # <<<<<<<<<<<<<< - * m = self.model[-1] # Detect() module - * for mi in m.m: # from - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._print_biases", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_m); - __Pyx_XDECREF(__pyx_v_mi); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":213 - * # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - * - * def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers # <<<<<<<<<<<<<< - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_19fuse(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_19fuse = {"fuse", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_19fuse, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_19fuse(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("fuse (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 213, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "fuse") < 0)) __PYX_ERR(0, 213, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("fuse", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 213, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.fuse", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_18fuse(__pyx_self, __pyx_v_self); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_18fuse(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_m = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - PyObject *__pyx_t_13 = NULL; - int __pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("fuse", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":214 - * - * def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - * LOGGER.info('Fusing layers... ') # <<<<<<<<<<<<<< - * for m in self.model.modules(): - * if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_info); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_kp_u_Fusing_layers}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":215 - * def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): # <<<<<<<<<<<<<< - * if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - * m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_modules); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 215, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 215, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 215, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_6(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 215, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XDECREF_SET(__pyx_v_m, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":216 - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): - * if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): # <<<<<<<<<<<<<< - * m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - * delattr(m, 'bn') # remove batchnorm - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Conv); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 216, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_DWConv); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 216, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_9 = PyObject_IsInstance(__pyx_v_m, __pyx_t_1); - __pyx_t_10 = (__pyx_t_9 != 0); - if (!__pyx_t_10) { - } else { - __pyx_t_8 = __pyx_t_10; - goto __pyx_L8_bool_binop_done; - } - __pyx_t_10 = PyObject_IsInstance(__pyx_v_m, __pyx_t_3); - __pyx_t_9 = (__pyx_t_10 != 0); - __pyx_t_8 = __pyx_t_9; - __pyx_L8_bool_binop_done:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_9 = (__pyx_t_8 != 0); - if (__pyx_t_9) { - } else { - __pyx_t_7 = __pyx_t_9; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_9 = __Pyx_HasAttr(__pyx_v_m, __pyx_n_u_bn); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 216, __pyx_L1_error) - __pyx_t_8 = (__pyx_t_9 != 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":217 - * for m in self.model.modules(): - * if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - * m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv # <<<<<<<<<<<<<< - * delattr(m, 'bn') # remove batchnorm - * m.forward = m.forward_fuse # update forward - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_fuse_conv_and_bn); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_conv); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_bn); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_13 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_13 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_13)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_13); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_13, __pyx_t_11, __pyx_t_12}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_conv, __pyx_t_1) < 0) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":218 - * if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - * m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - * delattr(m, 'bn') # remove batchnorm # <<<<<<<<<<<<<< - * m.forward = m.forward_fuse # update forward - * self.info() - */ - __pyx_t_14 = PyObject_DelAttr(__pyx_v_m, __pyx_n_u_bn); if (unlikely(__pyx_t_14 == ((int)-1))) __PYX_ERR(0, 218, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":219 - * m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - * delattr(m, 'bn') # remove batchnorm - * m.forward = m.forward_fuse # update forward # <<<<<<<<<<<<<< - * self.info() - * return self - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_forward_fuse); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_forward, __pyx_t_1) < 0) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":216 - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): - * if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): # <<<<<<<<<<<<<< - * m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - * delattr(m, 'bn') # remove batchnorm - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":215 - * def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): # <<<<<<<<<<<<<< - * if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - * m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":220 - * delattr(m, 'bn') # remove batchnorm - * m.forward = m.forward_fuse # update forward - * self.info() # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_info); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":221 - * m.forward = m.forward_fuse # update forward - * self.info() - * return self # <<<<<<<<<<<<<< - * - * def info(self, verbose=False, img_size=640): # print model information - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self); - __pyx_r = __pyx_v_self; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":213 - * # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - * - * def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers # <<<<<<<<<<<<<< - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.fuse", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_m); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":223 - * return self - * - * def info(self, verbose=False, img_size=640): # print model information # <<<<<<<<<<<<<< - * model_info(self, verbose, img_size) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_21info(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_21info = {"info", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_21info, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_21info(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_verbose = 0; - PyObject *__pyx_v_img_size = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("info (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_verbose,&__pyx_n_s_img_size,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_verbose,&__pyx_n_s_img_size,0}; - #endif - PyObject* values[3] = {0,0,0}; - values[1] = ((PyObject *)((PyObject *)Py_False)); - values[2] = ((PyObject *)((PyObject *)__pyx_int_640)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 223, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_verbose); - if (value) { values[1] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 223, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_img_size); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 223, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "info") < 0)) __PYX_ERR(0, 223, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_verbose = values[1]; - __pyx_v_img_size = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("info", 0, 1, 3, __pyx_nargs); __PYX_ERR(0, 223, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.info", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_20info(__pyx_self, __pyx_v_self, __pyx_v_verbose, __pyx_v_img_size); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_20info(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_verbose, PyObject *__pyx_v_img_size) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("info", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":224 - * - * def info(self, verbose=False, img_size=640): # print model information - * model_info(self, verbose, img_size) # <<<<<<<<<<<<<< - * - * def _apply(self, fn): - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_model_info); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_v_self, __pyx_v_verbose, __pyx_v_img_size}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":223 - * return self - * - * def info(self, verbose=False, img_size=640): # print model information # <<<<<<<<<<<<<< - * model_info(self, verbose, img_size) - * - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model.info", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":226 - * model_info(self, verbose, img_size) - * - * def _apply(self, fn): # <<<<<<<<<<<<<< - * # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - * self = super()._apply(fn) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_23_apply(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_23_apply = {"_apply", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_23_apply, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_23_apply(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_fn = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_apply (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_fn,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_fn,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 226, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_fn)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 226, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_apply", 1, 2, 2, 1); __PYX_ERR(0, 226, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_apply") < 0)) __PYX_ERR(0, 226, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_fn = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_apply", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 226, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._apply", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_22_apply(__pyx_self, __pyx_v_self, __pyx_v_fn); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_22_apply(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_fn) { - PyObject *__pyx_v_m = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_apply", 0); - __Pyx_INCREF(__pyx_v_self); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":228 - * def _apply(self, fn): - * # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - * self = super()._apply(fn) # <<<<<<<<<<<<<< - * m = self.model[-1] # Detect() - * if isinstance(m, Detect): - */ - __pyx_t_2 = __Pyx_CyFunction_GetClassObj(__pyx_self); - if (!__pyx_t_2) { PyErr_SetString(PyExc_SystemError, "super(): empty __class__ cell"); __PYX_ERR(0, 228, __pyx_L1_error) } - __Pyx_INCREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_INCREF(__pyx_v_self); - __Pyx_GIVEREF(__pyx_v_self); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_self); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_super, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_apply); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_fn}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF_SET(__pyx_v_self, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":229 - * # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - * self = super()._apply(fn) - * m = self.model[-1] # Detect() # <<<<<<<<<<<<<< - * if isinstance(m, Detect): - * m.stride = fn(m.stride) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_model); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_t_1, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_m = __pyx_t_3; - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":230 - * self = super()._apply(fn) - * m = self.model[-1] # Detect() - * if isinstance(m, Detect): # <<<<<<<<<<<<<< - * m.stride = fn(m.stride) - * m.grid = list(map(fn, m.grid)) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Detect); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 230, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_IsInstance(__pyx_v_m, __pyx_t_3); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(0, 230, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_6 = (__pyx_t_5 != 0); - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":231 - * m = self.model[-1] # Detect() - * if isinstance(m, Detect): - * m.stride = fn(m.stride) # <<<<<<<<<<<<<< - * m.grid = list(map(fn, m.grid)) - * if isinstance(m.anchor_grid, list): - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_stride); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_fn); - __pyx_t_2 = __pyx_v_fn; __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_1}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_stride, __pyx_t_3) < 0) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":232 - * if isinstance(m, Detect): - * m.stride = fn(m.stride) - * m.grid = list(map(fn, m.grid)) # <<<<<<<<<<<<<< - * if isinstance(m.anchor_grid, list): - * m.anchor_grid = list(map(fn, m.anchor_grid)) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_grid); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_fn); - __Pyx_GIVEREF(__pyx_v_fn); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_fn); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_map, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PySequence_ListKeepNew(__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_grid, __pyx_t_2) < 0) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":233 - * m.stride = fn(m.stride) - * m.grid = list(map(fn, m.grid)) - * if isinstance(m.anchor_grid, list): # <<<<<<<<<<<<<< - * m.anchor_grid = list(map(fn, m.anchor_grid)) - * return self - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_anchor_grid); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = PyList_Check(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = (__pyx_t_6 != 0); - if (__pyx_t_5) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":234 - * m.grid = list(map(fn, m.grid)) - * if isinstance(m.anchor_grid, list): - * m.anchor_grid = list(map(fn, m.anchor_grid)) # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_m, __pyx_n_s_anchor_grid); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_fn); - __Pyx_GIVEREF(__pyx_v_fn); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_fn); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_map, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_ListKeepNew(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_m, __pyx_n_s_anchor_grid, __pyx_t_3) < 0) __PYX_ERR(0, 234, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":233 - * m.stride = fn(m.stride) - * m.grid = list(map(fn, m.grid)) - * if isinstance(m.anchor_grid, list): # <<<<<<<<<<<<<< - * m.anchor_grid = list(map(fn, m.anchor_grid)) - * return self - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":230 - * self = super()._apply(fn) - * m = self.model[-1] # Detect() - * if isinstance(m, Detect): # <<<<<<<<<<<<<< - * m.stride = fn(m.stride) - * m.grid = list(map(fn, m.grid)) - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":235 - * if isinstance(m.anchor_grid, list): - * m.anchor_grid = list(map(fn, m.anchor_grid)) - * return self # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self); - __pyx_r = __pyx_v_self; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":226 - * model_info(self, verbose, img_size) - * - * def _apply(self, fn): # <<<<<<<<<<<<<< - * # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - * self = super()._apply(fn) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.Model._apply", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_m); - __Pyx_XDECREF(__pyx_v_self); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":238 - * - * - * def parse_model(d, ch): # model_dict, input_channels(3) # <<<<<<<<<<<<<< - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_1parse_model(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_1parse_model = {"parse_model", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_1parse_model, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_1parse_model(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_d = 0; - PyObject *__pyx_v_ch = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("parse_model (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_d,&__pyx_n_s_ch,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_d,&__pyx_n_s_ch,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_d)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 238, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_ch)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 238, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("parse_model", 1, 2, 2, 1); __PYX_ERR(0, 238, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "parse_model") < 0)) __PYX_ERR(0, 238, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_d = values[0]; - __pyx_v_ch = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("parse_model", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 238, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.parse_model", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_parse_model(__pyx_self, __pyx_v_d, __pyx_v_ch); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_2generator4(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":267 - * args = [ch[f]] - * elif m is Concat: - * c2 = sum(ch[x] for x in f) # <<<<<<<<<<<<<< - * elif m is Detect: - * args.append([ch[x] for x in f]) - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 267, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_2generator4, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_parse_model_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.parse_model.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_2generator4(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 267, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f)) { __Pyx_RaiseClosureNameError("f"); __PYX_ERR(0, 267, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_outer_scope->__pyx_v_f; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 267, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_2 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 267, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 267, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 267, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_ch)) { __Pyx_RaiseClosureNameError("ch"); __PYX_ERR(0, 267, __pyx_L1_error) } - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_ch, __pyx_cur_scope->__pyx_v_x); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_3; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_3 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 267, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_5generator5(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":279 - * c2 = ch[f] - * - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module # <<<<<<<<<<<<<< - * t = str(m)[8:-2].replace('__main__.', '') # module type - * np = sum(x.numel() for x in m_.parameters()) # number params - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_3genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 279, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_5generator5, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_parse_model_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.parse_model.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_5generator5(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 279, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_n)) { __Pyx_RaiseClosureNameError("n"); __PYX_ERR(0, 279, __pyx_L1_error) } - __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 279, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 279, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 279, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 279, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v__); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v__, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_m)) { __Pyx_RaiseClosureNameError("m"); __PYX_ERR(0, 279, __pyx_L1_error) } - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_args)) { __Pyx_RaiseClosureNameError("args"); __PYX_ERR(0, 279, __pyx_L1_error) } - __pyx_t_1 = __Pyx_PySequence_Tuple(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_args); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_m, __pyx_t_1, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - __Pyx_XGIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_3; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_4; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_2 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_3 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_4 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 279, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_8generator6(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":281 - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - * t = str(m)[8:-2].replace('__main__.', '') # module type - * np = sum(x.numel() for x in m_.parameters()) # number params # <<<<<<<<<<<<<< - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_6genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 281, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_8generator6, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_parse_model_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.parse_model.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_8generator6(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 281, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_m_)) { __Pyx_RaiseClosureNameError("m_"); __PYX_ERR(0, 281, __pyx_L1_error) } - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_m_, __pyx_n_s_parameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 281, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 281, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 281, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_6(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 281, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_x, __pyx_n_s_numel); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_7, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - __Pyx_XGIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_5; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_6; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_2 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_5 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_6 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 281, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_11generator7(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":284 - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist # <<<<<<<<<<<<<< - * layers.append(m_) - * if i == 0: - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_9genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 284, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_11generator7, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_parse_model_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model); if (unlikely(!gen)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.parse_model.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_11generator7(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - PyObject *(*__pyx_t_5)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L7_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 284, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f)) { __Pyx_RaiseClosureNameError("f"); __PYX_ERR(0, 284, __pyx_L1_error) } - __pyx_t_2 = __pyx_cur_scope->__pyx_outer_scope->__pyx_v_f; - __Pyx_INCREF(__pyx_t_2); - __pyx_t_3 = PyInt_Check(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((__pyx_t_3 != 0)) { - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f)) { __Pyx_RaiseClosureNameError("f"); __PYX_ERR(0, 284, __pyx_L1_error) } - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_f); - __pyx_t_1 = __pyx_t_2; - __pyx_t_2 = 0; - } else { - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f)) { __Pyx_RaiseClosureNameError("f"); __PYX_ERR(0, 284, __pyx_L1_error) } - __Pyx_INCREF(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_f); - __pyx_t_1 = __pyx_cur_scope->__pyx_outer_scope->__pyx_v_f; - } - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - } else { - __pyx_t_4 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 284, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_5)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 284, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 284, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_5(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 284, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyInt_NeObjC(__pyx_cur_scope->__pyx_v_x, __pyx_int_neg_1, -1L, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_3) { - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_i)) { __Pyx_RaiseClosureNameError("i"); __PYX_ERR(0, 284, __pyx_L1_error) } - __pyx_t_1 = PyNumber_Remainder(__pyx_cur_scope->__pyx_v_x, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - __Pyx_XGIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_4; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_5; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L7_resume_from_yield:; - __pyx_t_2 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_4 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_5 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 284, __pyx_L1_error) - } - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":238 - * - * - * def parse_model(d, ch): # model_dict, input_channels(3) # <<<<<<<<<<<<<< - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_parse_model(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_d, PyObject *__pyx_v_ch) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *__pyx_cur_scope; - PyObject *__pyx_v_anchors = NULL; - PyObject *__pyx_v_nc = NULL; - PyObject *__pyx_v_gd = NULL; - PyObject *__pyx_v_gw = NULL; - PyObject *__pyx_v_na = NULL; - PyObject *__pyx_v_no = NULL; - PyObject *__pyx_v_layers = NULL; - PyObject *__pyx_v_save = NULL; - PyObject *__pyx_v_c2 = NULL; - PyObject *__pyx_v_j = NULL; - PyObject *__pyx_v_a = NULL; - PyObject *__pyx_v_n_ = NULL; - PyObject *__pyx_v_c1 = NULL; - PyObject *__pyx_v_t = NULL; - PyObject *__pyx_v_np = NULL; - PyObject *__pyx_v_genexpr = 0; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_2generator4 = 0; - PyObject *__pyx_8genexpr8__pyx_v_x = NULL; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_5generator5 = 0; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_8generator6 = 0; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_11generator7 = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t __pyx_t_4; - Py_UCS4 __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_t_8; - PyObject *(*__pyx_t_9)(PyObject *); - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - PyObject *__pyx_t_13 = NULL; - PyObject *(*__pyx_t_14)(PyObject *); - Py_ssize_t __pyx_t_15; - PyObject *(*__pyx_t_16)(PyObject *); - PyObject *__pyx_t_17 = NULL; - PyObject *__pyx_t_18 = NULL; - PyObject *__pyx_t_19 = NULL; - long __pyx_t_20; - int __pyx_t_21; - int __pyx_t_22; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("parse_model", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 238, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_ch = __pyx_v_ch; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_ch); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_ch); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":239 - * - * def parse_model(d, ch): # model_dict, input_channels(3) - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") # <<<<<<<<<<<<<< - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - * na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_info); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = 0; - __pyx_t_5 = 127; - __Pyx_INCREF(__pyx_kp_u__30); - __pyx_t_4 += 1; - __Pyx_GIVEREF(__pyx_kp_u__30); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_kp_u__30); - __pyx_t_6 = __Pyx_PyObject_Format(__pyx_kp_u__12, __pyx_kp_u_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) : __pyx_t_5; - __pyx_t_4 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Format(__pyx_n_u_from, __pyx_kp_u_18); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) : __pyx_t_5; - __pyx_t_4 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Format(__pyx_n_u_n, __pyx_kp_u_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) : __pyx_t_5; - __pyx_t_4 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Format(__pyx_n_u_params, __pyx_kp_u_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) : __pyx_t_5; - __pyx_t_4 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 4, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_INCREF(__pyx_kp_u__24); - __pyx_t_4 += 2; - __Pyx_GIVEREF(__pyx_kp_u__24); - PyTuple_SET_ITEM(__pyx_t_2, 5, __pyx_kp_u__24); - __pyx_t_6 = __Pyx_PyObject_Format(__pyx_n_u_module_2, __pyx_kp_u_40); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) : __pyx_t_5; - __pyx_t_4 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 6, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Format(__pyx_n_u_arguments, __pyx_kp_u_30); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_6) : __pyx_t_5; - __pyx_t_4 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_2, 7, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyUnicode_Join(__pyx_t_2, 8, __pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_6}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":240 - * def parse_model(d, ch): # model_dict, input_channels(3) - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] # <<<<<<<<<<<<<< - * na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - * no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - */ - __pyx_t_1 = __Pyx_PyObject_Dict_GetItem(__pyx_v_d, __pyx_n_u_anchors); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_Dict_GetItem(__pyx_v_d, __pyx_n_u_nc); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_v_d, __pyx_n_u_depth_multiple); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __Pyx_PyObject_Dict_GetItem(__pyx_v_d, __pyx_n_u_width_multiple); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_anchors = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_nc = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_gd = __pyx_t_6; - __pyx_t_6 = 0; - __pyx_v_gw = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":241 - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - * na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors # <<<<<<<<<<<<<< - * no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - * - */ - __pyx_t_8 = PyList_Check(__pyx_v_anchors); - if ((__pyx_t_8 != 0)) { - __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_anchors, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_4 = PyObject_Length(__pyx_t_6); if (unlikely(__pyx_t_4 == ((Py_ssize_t)-1))) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyInt_FromSsize_t(__Pyx_div_Py_ssize_t(__pyx_t_4, 2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __pyx_t_6; - __pyx_t_6 = 0; - } else { - __Pyx_INCREF(__pyx_v_anchors); - __pyx_t_2 = __pyx_v_anchors; - } - __pyx_v_na = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":242 - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - * na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - * no = na * (nc + 5) # number of outputs = anchors * (classes + 5) # <<<<<<<<<<<<<< - * - * layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - */ - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_v_nc, __pyx_int_5, 5, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = PyNumber_Multiply(__pyx_v_na, __pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_no = __pyx_t_6; - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":244 - * no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - * - * layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out # <<<<<<<<<<<<<< - * for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - * m = eval(m) if isinstance(m, str) else m # eval strings - */ - __pyx_t_6 = PyList_New(0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_ch, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_layers = ((PyObject*)__pyx_t_6); - __pyx_t_6 = 0; - __pyx_v_save = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - __pyx_v_c2 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":245 - * - * layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - * for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args # <<<<<<<<<<<<<< - * m = eval(m) if isinstance(m, str) else m # eval strings - * for j, a in enumerate(args): - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - __pyx_t_2 = __Pyx_PyObject_Dict_GetItem(__pyx_v_d, __pyx_n_u_backbone); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_v_d, __pyx_n_u_head); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_6 = __pyx_t_1; __Pyx_INCREF(__pyx_t_6); __pyx_t_4 = 0; - __pyx_t_9 = NULL; - } else { - __pyx_t_4 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 245, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_9)) { - if (likely(PyList_CheckExact(__pyx_t_6))) { - if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_6)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 245, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_6, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_6)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 245, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_6, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_9(__pyx_t_6); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 245, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 245, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_10 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_11 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_12 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_10 = PyList_GET_ITEM(sequence, 1); - __pyx_t_11 = PyList_GET_ITEM(sequence, 2); - __pyx_t_12 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(__pyx_t_12); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_10,&__pyx_t_11,&__pyx_t_12}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_10,&__pyx_t_11,&__pyx_t_12}; - __pyx_t_13 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_14 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_13); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_14(__pyx_t_13); if (unlikely(!item)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_14(__pyx_t_13), 4) < 0) __PYX_ERR(0, 245, __pyx_L1_error) - __pyx_t_14 = NULL; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_14 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 245, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_f); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_f, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_n); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_n, __pyx_t_10); - __Pyx_GIVEREF(__pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_m); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_m, __pyx_t_11); - __Pyx_GIVEREF(__pyx_t_11); - __pyx_t_11 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_args); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_args, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_12); - __pyx_t_12 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_i); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_i, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":246 - * layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - * for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - * m = eval(m) if isinstance(m, str) else m # eval strings # <<<<<<<<<<<<<< - * for j, a in enumerate(args): - * try: - */ - __pyx_t_12 = __pyx_cur_scope->__pyx_v_m; - __Pyx_INCREF(__pyx_t_12); - __pyx_t_8 = PyUnicode_Check(__pyx_t_12); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if ((__pyx_t_8 != 0)) { - __pyx_t_12 = __Pyx_Globals(); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_11 = __Pyx_PyDict_NewPresized(24); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - if (__pyx_v_a) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_a, __pyx_v_a) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_anchors) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_anchors, __pyx_v_anchors) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_cur_scope->__pyx_v_args) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_args, __pyx_cur_scope->__pyx_v_args) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_c1) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_c1, __pyx_v_c1) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_c2) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_c2, __pyx_v_c2) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_cur_scope->__pyx_v_ch) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_ch, __pyx_cur_scope->__pyx_v_ch) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_d) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_d, __pyx_v_d) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_cur_scope->__pyx_v_f) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_f, __pyx_cur_scope->__pyx_v_f) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_gd) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_gd, __pyx_v_gd) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_genexpr) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_genexpr, __pyx_v_genexpr) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_gw) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_gw, __pyx_v_gw) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_cur_scope->__pyx_v_i) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_i, __pyx_cur_scope->__pyx_v_i) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_j) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_j, __pyx_v_j) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_layers) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_layers, __pyx_v_layers) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_cur_scope->__pyx_v_m) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_m, __pyx_cur_scope->__pyx_v_m) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_cur_scope->__pyx_v_m_) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_m_2, __pyx_cur_scope->__pyx_v_m_) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_cur_scope->__pyx_v_n) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_n, __pyx_cur_scope->__pyx_v_n) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_n_) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_n_2, __pyx_v_n_) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_na) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_na, __pyx_v_na) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_nc) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_nc, __pyx_v_nc) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_no) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_no, __pyx_v_no) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_np) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_np, __pyx_v_np) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_save) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_save, __pyx_v_save) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - if (__pyx_v_t) { - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_t, __pyx_v_t) < 0) __PYX_ERR(0, 246, __pyx_L1_error) - } - __pyx_t_10 = PyTuple_New(3); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_m); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_m); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_cur_scope->__pyx_v_m); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_11); - PyTuple_SET_ITEM(__pyx_t_10, 2, __pyx_t_11); - __pyx_t_12 = 0; - __pyx_t_11 = 0; - __pyx_t_11 = __Pyx_PyObject_Call(__pyx_builtin_eval, __pyx_t_10, NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 246, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_1 = __pyx_t_11; - __pyx_t_11 = 0; - } else { - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_m); - __pyx_t_1 = __pyx_cur_scope->__pyx_v_m; - } - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_m); - __Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_m, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":247 - * for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - * m = eval(m) if isinstance(m, str) else m # eval strings - * for j, a in enumerate(args): # <<<<<<<<<<<<<< - * try: - * args[j] = eval(a) if isinstance(a, str) else a # eval strings - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_1 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_v_args)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_v_args)) { - __pyx_t_11 = __pyx_cur_scope->__pyx_v_args; __Pyx_INCREF(__pyx_t_11); __pyx_t_15 = 0; - __pyx_t_16 = NULL; - } else { - __pyx_t_15 = -1; __pyx_t_11 = PyObject_GetIter(__pyx_cur_scope->__pyx_v_args); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 247, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_16 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_11); if (unlikely(!__pyx_t_16)) __PYX_ERR(0, 247, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_16)) { - if (likely(PyList_CheckExact(__pyx_t_11))) { - if (__pyx_t_15 >= PyList_GET_SIZE(__pyx_t_11)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_10 = PyList_GET_ITEM(__pyx_t_11, __pyx_t_15); __Pyx_INCREF(__pyx_t_10); __pyx_t_15++; if (unlikely((0 < 0))) __PYX_ERR(0, 247, __pyx_L1_error) - #else - __pyx_t_10 = PySequence_ITEM(__pyx_t_11, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 247, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - #endif - } else { - if (__pyx_t_15 >= PyTuple_GET_SIZE(__pyx_t_11)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_10 = PyTuple_GET_ITEM(__pyx_t_11, __pyx_t_15); __Pyx_INCREF(__pyx_t_10); __pyx_t_15++; if (unlikely((0 < 0))) __PYX_ERR(0, 247, __pyx_L1_error) - #else - __pyx_t_10 = PySequence_ITEM(__pyx_t_11, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 247, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - #endif - } - } else { - __pyx_t_10 = __pyx_t_16(__pyx_t_11); - if (unlikely(!__pyx_t_10)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 247, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_10); - } - __Pyx_XDECREF_SET(__pyx_v_a, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_INCREF(__pyx_t_1); - __Pyx_XDECREF_SET(__pyx_v_j, __pyx_t_1); - __pyx_t_10 = __Pyx_PyInt_AddObjC(__pyx_t_1, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 247, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); - __pyx_t_1 = __pyx_t_10; - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":248 - * m = eval(m) if isinstance(m, str) else m # eval strings - * for j, a in enumerate(args): - * try: # <<<<<<<<<<<<<< - * args[j] = eval(a) if isinstance(a, str) else a # eval strings - * except NameError: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_17, &__pyx_t_18, &__pyx_t_19); - __Pyx_XGOTREF(__pyx_t_17); - __Pyx_XGOTREF(__pyx_t_18); - __Pyx_XGOTREF(__pyx_t_19); - /*try:*/ { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":249 - * for j, a in enumerate(args): - * try: - * args[j] = eval(a) if isinstance(a, str) else a # eval strings # <<<<<<<<<<<<<< - * except NameError: - * pass - */ - __pyx_t_8 = PyUnicode_Check(__pyx_v_a); - if ((__pyx_t_8 != 0)) { - __pyx_t_12 = __Pyx_Globals(); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 249, __pyx_L9_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_2 = __Pyx_PyDict_NewPresized(24); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 249, __pyx_L9_error) - __Pyx_GOTREF(__pyx_t_2); - if (__pyx_v_a) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_a, __pyx_v_a) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_anchors) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_anchors, __pyx_v_anchors) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_cur_scope->__pyx_v_args) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_args, __pyx_cur_scope->__pyx_v_args) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_c1) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_c1, __pyx_v_c1) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_c2) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_c2, __pyx_v_c2) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_cur_scope->__pyx_v_ch) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_ch, __pyx_cur_scope->__pyx_v_ch) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_d) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_d, __pyx_v_d) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_cur_scope->__pyx_v_f) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_f, __pyx_cur_scope->__pyx_v_f) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_gd) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_gd, __pyx_v_gd) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_genexpr) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_genexpr, __pyx_v_genexpr) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_gw) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_gw, __pyx_v_gw) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_cur_scope->__pyx_v_i) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_i, __pyx_cur_scope->__pyx_v_i) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_j) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_j, __pyx_v_j) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_layers) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_layers, __pyx_v_layers) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_cur_scope->__pyx_v_m) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_m, __pyx_cur_scope->__pyx_v_m) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_cur_scope->__pyx_v_m_) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_m_2, __pyx_cur_scope->__pyx_v_m_) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_cur_scope->__pyx_v_n) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_n, __pyx_cur_scope->__pyx_v_n) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_n_) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_n_2, __pyx_v_n_) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_na) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_na, __pyx_v_na) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_nc) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_nc, __pyx_v_nc) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_no) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_no, __pyx_v_no) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_np) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_np, __pyx_v_np) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_save) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_save, __pyx_v_save) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - if (__pyx_v_t) { - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_t, __pyx_v_t) < 0) __PYX_ERR(0, 249, __pyx_L9_error) - } - __pyx_t_13 = PyTuple_New(3); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 249, __pyx_L9_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_INCREF(__pyx_v_a); - __Pyx_GIVEREF(__pyx_v_a); - PyTuple_SET_ITEM(__pyx_t_13, 0, __pyx_v_a); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_13, 1, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_13, 2, __pyx_t_2); - __pyx_t_12 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_eval, __pyx_t_13, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 249, __pyx_L9_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_10 = __pyx_t_2; - __pyx_t_2 = 0; - } else { - __Pyx_INCREF(__pyx_v_a); - __pyx_t_10 = __pyx_v_a; - } - if (unlikely((PyObject_SetItem(__pyx_cur_scope->__pyx_v_args, __pyx_v_j, __pyx_t_10) < 0))) __PYX_ERR(0, 249, __pyx_L9_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":248 - * m = eval(m) if isinstance(m, str) else m # eval strings - * for j, a in enumerate(args): - * try: # <<<<<<<<<<<<<< - * args[j] = eval(a) if isinstance(a, str) else a # eval strings - * except NameError: - */ - } - __Pyx_XDECREF(__pyx_t_17); __pyx_t_17 = 0; - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_XDECREF(__pyx_t_19); __pyx_t_19 = 0; - goto __pyx_L16_try_end; - __pyx_L9_error:; - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":250 - * try: - * args[j] = eval(a) if isinstance(a, str) else a # eval strings - * except NameError: # <<<<<<<<<<<<<< - * pass - * - */ - __pyx_t_7 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_NameError); - if (__pyx_t_7) { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L10_exception_handled; - } - goto __pyx_L11_except_error; - __pyx_L11_except_error:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":248 - * m = eval(m) if isinstance(m, str) else m # eval strings - * for j, a in enumerate(args): - * try: # <<<<<<<<<<<<<< - * args[j] = eval(a) if isinstance(a, str) else a # eval strings - * except NameError: - */ - __Pyx_XGIVEREF(__pyx_t_17); - __Pyx_XGIVEREF(__pyx_t_18); - __Pyx_XGIVEREF(__pyx_t_19); - __Pyx_ExceptionReset(__pyx_t_17, __pyx_t_18, __pyx_t_19); - goto __pyx_L1_error; - __pyx_L10_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_17); - __Pyx_XGIVEREF(__pyx_t_18); - __Pyx_XGIVEREF(__pyx_t_19); - __Pyx_ExceptionReset(__pyx_t_17, __pyx_t_18, __pyx_t_19); - __pyx_L16_try_end:; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":247 - * for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - * m = eval(m) if isinstance(m, str) else m # eval strings - * for j, a in enumerate(args): # <<<<<<<<<<<<<< - * try: - * args[j] = eval(a) if isinstance(a, str) else a # eval strings - */ - } - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":253 - * pass - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain # <<<<<<<<<<<<<< - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - */ - __pyx_t_11 = PyObject_RichCompare(__pyx_cur_scope->__pyx_v_n, __pyx_int_1, Py_GT); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 253, __pyx_L1_error) - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__pyx_t_8) { - __pyx_t_20 = 1; - __pyx_t_11 = PyNumber_Multiply(__pyx_cur_scope->__pyx_v_n, __pyx_v_gd); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_round, __pyx_t_11); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_2 = __Pyx_PyInt_From_long(__pyx_t_20); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_13 = PyObject_RichCompare(__pyx_t_2, __pyx_t_10, Py_GT); __Pyx_XGOTREF(__pyx_t_13); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_13); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (__pyx_t_21) { - __pyx_t_13 = __Pyx_PyInt_From_long(__pyx_t_20); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __pyx_t_11 = __pyx_t_13; - __pyx_t_13 = 0; - } else { - __Pyx_INCREF(__pyx_t_10); - __pyx_t_11 = __pyx_t_10; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_INCREF(__pyx_t_11); - __pyx_t_1 = __pyx_t_11; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } else { - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_n); - __pyx_t_1 = __pyx_cur_scope->__pyx_v_n; - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_n); - __Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_n, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __Pyx_XDECREF_SET(__pyx_v_n_, __pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":254 - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, # <<<<<<<<<<<<<< - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - */ - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_m); - __pyx_t_1 = __pyx_cur_scope->__pyx_v_m; - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_Conv); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_GhostConv); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_Bottleneck); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_GhostBottleneck); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_SPP); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_SPPF); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_DWConv); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_MixConv2d); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_Focus); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_CrossConv); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":255 - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: # <<<<<<<<<<<<<< - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output - */ - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_BottleneckCSP); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":254 - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, # <<<<<<<<<<<<<< - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - */ - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":255 - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: # <<<<<<<<<<<<<< - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output - */ - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_C3); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":254 - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, # <<<<<<<<<<<<<< - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - */ - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":255 - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: # <<<<<<<<<<<<<< - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output - */ - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_C3TR); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":254 - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, # <<<<<<<<<<<<<< - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - */ - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":255 - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: # <<<<<<<<<<<<<< - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output - */ - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_C3SPP); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":254 - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, # <<<<<<<<<<<<<< - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - */ - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_21) { - } else { - __pyx_t_8 = __pyx_t_21; - goto __pyx_L18_bool_binop_done; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":255 - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: # <<<<<<<<<<<<<< - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output - */ - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_C3Ghost); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":254 - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, # <<<<<<<<<<<<<< - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - */ - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = __pyx_t_21; - __pyx_L18_bool_binop_done:; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_21 = (__pyx_t_8 != 0); - if (__pyx_t_21) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":256 - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] # <<<<<<<<<<<<<< - * if c2 != no: # if not output - * c2 = make_divisible(c2 * gw, 8) - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ch, __pyx_cur_scope->__pyx_v_f); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_args, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_XDECREF_SET(__pyx_v_c1, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_c2, __pyx_t_10); - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":257 - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output # <<<<<<<<<<<<<< - * c2 = make_divisible(c2 * gw, 8) - * - */ - __pyx_t_10 = PyObject_RichCompare(__pyx_v_c2, __pyx_v_no, Py_NE); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 257, __pyx_L1_error) - __pyx_t_21 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_21 < 0))) __PYX_ERR(0, 257, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__pyx_t_21) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":258 - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output - * c2 = make_divisible(c2 * gw, 8) # <<<<<<<<<<<<<< - * - * args = [c1, c2, *args[1:]] - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_make_divisible); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_11 = PyNumber_Multiply(__pyx_v_c2, __pyx_v_gw); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_13 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_13 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_13)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_13); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_13, __pyx_t_11, __pyx_int_8}; - __pyx_t_10 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF_SET(__pyx_v_c2, __pyx_t_10); - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":257 - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - * if c2 != no: # if not output # <<<<<<<<<<<<<< - * c2 = make_divisible(c2 * gw, 8) - * - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":260 - * c2 = make_divisible(c2 * gw, 8) - * - * args = [c1, c2, *args[1:]] # <<<<<<<<<<<<<< - * if m in [BottleneckCSP, C3, C3TR, C3Ghost]: - * args.insert(2, n) # number of repeats - */ - __pyx_t_1 = PyList_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_c1); - __Pyx_GIVEREF(__pyx_v_c1); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_v_c1); - __Pyx_INCREF(__pyx_v_c2); - __Pyx_GIVEREF(__pyx_v_c2); - PyList_SET_ITEM(__pyx_t_1, 1, __pyx_v_c2); - __pyx_t_10 = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetSlice(__pyx_cur_scope->__pyx_v_args, 1, 0, NULL, NULL, &__pyx_slice__31, 1, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_PyList_Extend(__pyx_t_10, __pyx_t_1) < 0) __PYX_ERR(0, 260, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_args); - __Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_args, __pyx_t_10); - __Pyx_GIVEREF(__pyx_t_10); - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":261 - * - * args = [c1, c2, *args[1:]] - * if m in [BottleneckCSP, C3, C3TR, C3Ghost]: # <<<<<<<<<<<<<< - * args.insert(2, n) # number of repeats - * n = 1 - */ - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_m); - __pyx_t_10 = __pyx_cur_scope->__pyx_v_m; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_BottleneckCSP); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_10, __pyx_t_1, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_8) { - } else { - __pyx_t_21 = __pyx_t_8; - goto __pyx_L35_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_C3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_1 = PyObject_RichCompare(__pyx_t_10, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (!__pyx_t_8) { - } else { - __pyx_t_21 = __pyx_t_8; - goto __pyx_L35_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_C3TR); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_10, __pyx_t_1, Py_EQ); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (!__pyx_t_8) { - } else { - __pyx_t_21 = __pyx_t_8; - goto __pyx_L35_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_C3Ghost); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_1 = PyObject_RichCompare(__pyx_t_10, __pyx_t_11, Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_21 = __pyx_t_8; - __pyx_L35_bool_binop_done:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = (__pyx_t_21 != 0); - if (__pyx_t_8) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":262 - * args = [c1, c2, *args[1:]] - * if m in [BottleneckCSP, C3, C3TR, C3Ghost]: - * args.insert(2, n) # number of repeats # <<<<<<<<<<<<<< - * n = 1 - * elif m is nn.BatchNorm2d: - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_args, __pyx_n_s_insert); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 262, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_11 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_11)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_11, __pyx_int_2, __pyx_cur_scope->__pyx_v_n}; - __pyx_t_10 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 262, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":263 - * if m in [BottleneckCSP, C3, C3TR, C3Ghost]: - * args.insert(2, n) # number of repeats - * n = 1 # <<<<<<<<<<<<<< - * elif m is nn.BatchNorm2d: - * args = [ch[f]] - */ - __Pyx_INCREF(__pyx_int_1); - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_n); - __Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_n, __pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":261 - * - * args = [c1, c2, *args[1:]] - * if m in [BottleneckCSP, C3, C3TR, C3Ghost]: # <<<<<<<<<<<<<< - * args.insert(2, n) # number of repeats - * n = 1 - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":254 - * - * n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - * if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, # <<<<<<<<<<<<<< - * BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]: - * c1, c2 = ch[f], args[0] - */ - goto __pyx_L17; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":264 - * args.insert(2, n) # number of repeats - * n = 1 - * elif m is nn.BatchNorm2d: # <<<<<<<<<<<<<< - * args = [ch[f]] - * elif m is Concat: - */ - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_nn); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 264, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_10, __pyx_n_s_BatchNorm2d); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 264, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = (__pyx_cur_scope->__pyx_v_m == __pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_21 = (__pyx_t_8 != 0); - if (__pyx_t_21) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":265 - * n = 1 - * elif m is nn.BatchNorm2d: - * args = [ch[f]] # <<<<<<<<<<<<<< - * elif m is Concat: - * c2 = sum(ch[x] for x in f) - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ch, __pyx_cur_scope->__pyx_v_f); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = PyList_New(1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_1); - PyList_SET_ITEM(__pyx_t_10, 0, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_args); - __Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_args, __pyx_t_10); - __Pyx_GIVEREF(__pyx_t_10); - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":264 - * args.insert(2, n) # number of repeats - * n = 1 - * elif m is nn.BatchNorm2d: # <<<<<<<<<<<<<< - * args = [ch[f]] - * elif m is Concat: - */ - goto __pyx_L17; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":266 - * elif m is nn.BatchNorm2d: - * args = [ch[f]] - * elif m is Concat: # <<<<<<<<<<<<<< - * c2 = sum(ch[x] for x in f) - * elif m is Detect: - */ - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_Concat); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_21 = (__pyx_cur_scope->__pyx_v_m == __pyx_t_10); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = (__pyx_t_21 != 0); - if (__pyx_t_8) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":267 - * args = [ch[f]] - * elif m is Concat: - * c2 = sum(ch[x] for x in f) # <<<<<<<<<<<<<< - * elif m is Detect: - * args.append([ch[x] for x in f]) - */ - __pyx_t_10 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_builtin_sum, __pyx_t_10); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF_SET(__pyx_v_c2, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":266 - * elif m is nn.BatchNorm2d: - * args = [ch[f]] - * elif m is Concat: # <<<<<<<<<<<<<< - * c2 = sum(ch[x] for x in f) - * elif m is Detect: - */ - goto __pyx_L17; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":268 - * elif m is Concat: - * c2 = sum(ch[x] for x in f) - * elif m is Detect: # <<<<<<<<<<<<<< - * args.append([ch[x] for x in f]) - * if isinstance(args[1], int): # number of anchors - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Detect); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = (__pyx_cur_scope->__pyx_v_m == __pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_21 = (__pyx_t_8 != 0); - if (__pyx_t_21) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":269 - * c2 = sum(ch[x] for x in f) - * elif m is Detect: - * args.append([ch[x] for x in f]) # <<<<<<<<<<<<<< - * if isinstance(args[1], int): # number of anchors - * args[1] = [list(range(args[1] * 2))] * len(f) - */ - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 269, __pyx_L41_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_v_f)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_v_f)) { - __pyx_t_10 = __pyx_cur_scope->__pyx_v_f; __Pyx_INCREF(__pyx_t_10); __pyx_t_15 = 0; - __pyx_t_16 = NULL; - } else { - __pyx_t_15 = -1; __pyx_t_10 = PyObject_GetIter(__pyx_cur_scope->__pyx_v_f); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 269, __pyx_L41_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_16 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); if (unlikely(!__pyx_t_16)) __PYX_ERR(0, 269, __pyx_L41_error) - } - for (;;) { - if (likely(!__pyx_t_16)) { - if (likely(PyList_CheckExact(__pyx_t_10))) { - if (__pyx_t_15 >= PyList_GET_SIZE(__pyx_t_10)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_11 = PyList_GET_ITEM(__pyx_t_10, __pyx_t_15); __Pyx_INCREF(__pyx_t_11); __pyx_t_15++; if (unlikely((0 < 0))) __PYX_ERR(0, 269, __pyx_L41_error) - #else - __pyx_t_11 = PySequence_ITEM(__pyx_t_10, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 269, __pyx_L41_error) - __Pyx_GOTREF(__pyx_t_11); - #endif - } else { - if (__pyx_t_15 >= PyTuple_GET_SIZE(__pyx_t_10)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_11 = PyTuple_GET_ITEM(__pyx_t_10, __pyx_t_15); __Pyx_INCREF(__pyx_t_11); __pyx_t_15++; if (unlikely((0 < 0))) __PYX_ERR(0, 269, __pyx_L41_error) - #else - __pyx_t_11 = PySequence_ITEM(__pyx_t_10, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 269, __pyx_L41_error) - __Pyx_GOTREF(__pyx_t_11); - #endif - } - } else { - __pyx_t_11 = __pyx_t_16(__pyx_t_10); - if (unlikely(!__pyx_t_11)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 269, __pyx_L41_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_11); - } - __Pyx_XDECREF_SET(__pyx_8genexpr8__pyx_v_x, __pyx_t_11); - __pyx_t_11 = 0; - __pyx_t_11 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ch, __pyx_8genexpr8__pyx_v_x); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 269, __pyx_L41_error) - __Pyx_GOTREF(__pyx_t_11); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_11))) __PYX_ERR(0, 269, __pyx_L41_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_x); __pyx_8genexpr8__pyx_v_x = 0; - goto __pyx_L44_exit_scope; - __pyx_L41_error:; - __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_x); __pyx_8genexpr8__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L44_exit_scope:; - } /* exit inner scope */ - __pyx_t_22 = __Pyx_PyObject_Append(__pyx_cur_scope->__pyx_v_args, __pyx_t_1); if (unlikely(__pyx_t_22 == ((int)-1))) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":270 - * elif m is Detect: - * args.append([ch[x] for x in f]) - * if isinstance(args[1], int): # number of anchors # <<<<<<<<<<<<<< - * args[1] = [list(range(args[1] * 2))] * len(f) - * elif m is Contract: - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_args, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 270, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_21 = PyInt_Check(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = (__pyx_t_21 != 0); - if (__pyx_t_8) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":271 - * args.append([ch[x] for x in f]) - * if isinstance(args[1], int): # number of anchors - * args[1] = [list(range(args[1] * 2))] * len(f) # <<<<<<<<<<<<<< - * elif m is Contract: - * c2 = ch[f] * args[0] ** 2 - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_args, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = __Pyx_PyInt_MultiplyObjC(__pyx_t_1, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_10); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PySequence_ListKeepNew(__pyx_t_1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_cur_scope->__pyx_v_f; - __Pyx_INCREF(__pyx_t_1); - __pyx_t_15 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_15 == ((Py_ssize_t)-1))) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyList_New(1 * ((__pyx_t_15<0) ? 0:__pyx_t_15)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_t_15; __pyx_temp++) { - __Pyx_INCREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_10); - PyList_SET_ITEM(__pyx_t_1, __pyx_temp, __pyx_t_10); - } - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely((__Pyx_SetItemInt(__pyx_cur_scope->__pyx_v_args, 1, __pyx_t_1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1) < 0))) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":270 - * elif m is Detect: - * args.append([ch[x] for x in f]) - * if isinstance(args[1], int): # number of anchors # <<<<<<<<<<<<<< - * args[1] = [list(range(args[1] * 2))] * len(f) - * elif m is Contract: - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":268 - * elif m is Concat: - * c2 = sum(ch[x] for x in f) - * elif m is Detect: # <<<<<<<<<<<<<< - * args.append([ch[x] for x in f]) - * if isinstance(args[1], int): # number of anchors - */ - goto __pyx_L17; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":272 - * if isinstance(args[1], int): # number of anchors - * args[1] = [list(range(args[1] * 2))] * len(f) - * elif m is Contract: # <<<<<<<<<<<<<< - * c2 = ch[f] * args[0] ** 2 - * elif m is Expand: - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Contract); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 272, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = (__pyx_cur_scope->__pyx_v_m == __pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_21 = (__pyx_t_8 != 0); - if (__pyx_t_21) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":273 - * args[1] = [list(range(args[1] * 2))] * len(f) - * elif m is Contract: - * c2 = ch[f] * args[0] ** 2 # <<<<<<<<<<<<<< - * elif m is Expand: - * c2 = ch[f] // args[0] ** 2 - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ch, __pyx_cur_scope->__pyx_v_f); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_args, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyNumber_Power(__pyx_t_10, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = PyNumber_Multiply(__pyx_t_1, __pyx_t_11); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF_SET(__pyx_v_c2, __pyx_t_10); - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":272 - * if isinstance(args[1], int): # number of anchors - * args[1] = [list(range(args[1] * 2))] * len(f) - * elif m is Contract: # <<<<<<<<<<<<<< - * c2 = ch[f] * args[0] ** 2 - * elif m is Expand: - */ - goto __pyx_L17; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":274 - * elif m is Contract: - * c2 = ch[f] * args[0] ** 2 - * elif m is Expand: # <<<<<<<<<<<<<< - * c2 = ch[f] // args[0] ** 2 - * else: - */ - __Pyx_GetModuleGlobalName(__pyx_t_10, __pyx_n_s_Expand); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 274, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_21 = (__pyx_cur_scope->__pyx_v_m == __pyx_t_10); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = (__pyx_t_21 != 0); - if (__pyx_t_8) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":275 - * c2 = ch[f] * args[0] ** 2 - * elif m is Expand: - * c2 = ch[f] // args[0] ** 2 # <<<<<<<<<<<<<< - * else: - * c2 = ch[f] - */ - __pyx_t_10 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ch, __pyx_cur_scope->__pyx_v_f); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = __Pyx_GetItemInt(__pyx_cur_scope->__pyx_v_args, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_1 = PyNumber_Power(__pyx_t_11, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = PyNumber_FloorDivide(__pyx_t_10, __pyx_t_1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_c2, __pyx_t_11); - __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":274 - * elif m is Contract: - * c2 = ch[f] * args[0] ** 2 - * elif m is Expand: # <<<<<<<<<<<<<< - * c2 = ch[f] // args[0] ** 2 - * else: - */ - goto __pyx_L17; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":277 - * c2 = ch[f] // args[0] ** 2 - * else: - * c2 = ch[f] # <<<<<<<<<<<<<< - * - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - */ - /*else*/ { - __pyx_t_11 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ch, __pyx_cur_scope->__pyx_v_f); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF_SET(__pyx_v_c2, __pyx_t_11); - __pyx_t_11 = 0; - } - __pyx_L17:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":279 - * c2 = ch[f] - * - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module # <<<<<<<<<<<<<< - * t = str(m)[8:-2].replace('__main__.', '') # module type - * np = sum(x.numel() for x in m_.parameters()) # number params - */ - __pyx_t_1 = PyObject_RichCompare(__pyx_cur_scope->__pyx_v_n, __pyx_int_1, Py_GT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_8) { - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_nn); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_Sequential); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_3genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_13 = __Pyx_PySequence_Tuple(__pyx_t_1); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_10, __pyx_t_13, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_11 = __pyx_t_1; - __pyx_t_1 = 0; - } else { - __pyx_t_1 = __Pyx_PySequence_Tuple(__pyx_cur_scope->__pyx_v_args); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_13 = __Pyx_PyObject_Call(__pyx_cur_scope->__pyx_v_m, __pyx_t_1, NULL); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_11 = __pyx_t_13; - __pyx_t_13 = 0; - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_m_); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_m_, __pyx_t_11); - __Pyx_GIVEREF(__pyx_t_11); - __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":280 - * - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - * t = str(m)[8:-2].replace('__main__.', '') # module type # <<<<<<<<<<<<<< - * np = sum(x.numel() for x in m_.parameters()) # number params - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - */ - __pyx_t_11 = __Pyx_PyObject_Str(__pyx_cur_scope->__pyx_v_m); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_13 = PySequence_GetSlice(__pyx_t_11, 8, -2L); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(__pyx_t_13 == Py_None)) { - PyErr_Format(PyExc_AttributeError, "'NoneType' object has no attribute '%.30s'", "replace"); - __PYX_ERR(0, 280, __pyx_L1_error) - } - __pyx_t_11 = PyUnicode_Replace(__pyx_t_13, __pyx_kp_u_main, __pyx_kp_u__12, -1L); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_11); - __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":281 - * m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - * t = str(m)[8:-2].replace('__main__.', '') # module type - * np = sum(x.numel() for x in m_.parameters()) # number params # <<<<<<<<<<<<<< - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - */ - __pyx_t_11 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_6genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_13 = __Pyx_PyObject_CallOneArg(__pyx_builtin_sum, __pyx_t_11); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_XDECREF_SET(__pyx_v_np, __pyx_t_13); - __pyx_t_13 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":282 - * t = str(m)[8:-2].replace('__main__.', '') # module type - * np = sum(x.numel() for x in m_.parameters()) # number params - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params # <<<<<<<<<<<<<< - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - */ - __pyx_t_13 = __pyx_cur_scope->__pyx_v_i; - __Pyx_INCREF(__pyx_t_13); - __pyx_t_11 = __pyx_cur_scope->__pyx_v_f; - __Pyx_INCREF(__pyx_t_11); - __pyx_t_1 = __pyx_v_t; - __Pyx_INCREF(__pyx_t_1); - __pyx_t_10 = __pyx_v_np; - __Pyx_INCREF(__pyx_t_10); - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_m_, __pyx_n_s_i, __pyx_t_13) < 0) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_m_, __pyx_n_s_f, __pyx_t_11) < 0) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_m_, __pyx_n_s_type, __pyx_t_1) < 0) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_cur_scope->__pyx_v_m_, __pyx_n_s_np, __pyx_t_10) < 0) __PYX_ERR(0, 282, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":283 - * np = sum(x.numel() for x in m_.parameters()) # number params - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print # <<<<<<<<<<<<<< - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - * layers.append(m_) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_info); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(7); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_15 = 0; - __pyx_t_5 = 127; - __pyx_t_13 = __Pyx_PyObject_Format(__pyx_cur_scope->__pyx_v_i, __pyx_kp_u_3); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_13) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_13) : __pyx_t_5; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_13); - __Pyx_GIVEREF(__pyx_t_13); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_13); - __pyx_t_13 = 0; - __pyx_t_13 = __Pyx_PyObject_Str(__pyx_cur_scope->__pyx_v_f); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __pyx_t_2 = __Pyx_PyObject_Format(__pyx_t_13, __pyx_kp_u_18); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) : __pyx_t_5; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Format(__pyx_v_n_, __pyx_kp_u_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) : __pyx_t_5; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Format(__pyx_v_np, __pyx_kp_u_10_0f); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) : __pyx_t_5; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_kp_u__24); - __pyx_t_15 += 2; - __Pyx_GIVEREF(__pyx_kp_u__24); - PyTuple_SET_ITEM(__pyx_t_1, 4, __pyx_kp_u__24); - __pyx_t_2 = __Pyx_PyObject_Format(__pyx_v_t, __pyx_kp_u_40); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) : __pyx_t_5; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 5, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Str(__pyx_cur_scope->__pyx_v_args); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_13 = __Pyx_PyObject_Format(__pyx_t_2, __pyx_kp_u_30); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_13) > __pyx_t_5) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_13) : __pyx_t_5; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_13); - __Pyx_GIVEREF(__pyx_t_13); - PyTuple_SET_ITEM(__pyx_t_1, 6, __pyx_t_13); - __pyx_t_13 = 0; - __pyx_t_13 = __Pyx_PyUnicode_Join(__pyx_t_1, 7, __pyx_t_15, __pyx_t_5); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_11); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_11, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_t_13}; - __pyx_t_10 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":284 - * m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist # <<<<<<<<<<<<<< - * layers.append(m_) - * if i == 0: - */ - __pyx_t_10 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_9genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_22 = __Pyx_PyList_Extend(__pyx_v_save, __pyx_t_10); if (unlikely(__pyx_t_22 == ((int)-1))) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":285 - * LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - * layers.append(m_) # <<<<<<<<<<<<<< - * if i == 0: - * ch = [] - */ - __pyx_t_10 = __pyx_cur_scope->__pyx_v_m_; - __Pyx_INCREF(__pyx_t_10); - __pyx_t_22 = __Pyx_PyList_Append(__pyx_v_layers, __pyx_t_10); if (unlikely(__pyx_t_22 == ((int)-1))) __PYX_ERR(0, 285, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":286 - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - * layers.append(m_) - * if i == 0: # <<<<<<<<<<<<<< - * ch = [] - * ch.append(c2) - */ - __pyx_t_10 = __Pyx_PyInt_EqObjC(__pyx_cur_scope->__pyx_v_i, __pyx_int_0, 0, 0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 286, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__pyx_t_8) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":287 - * layers.append(m_) - * if i == 0: - * ch = [] # <<<<<<<<<<<<<< - * ch.append(c2) - * return nn.Sequential(*layers), sorted(save) - */ - __pyx_t_10 = PyList_New(0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_ch); - __Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_ch, __pyx_t_10); - __Pyx_GIVEREF(__pyx_t_10); - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":286 - * save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - * layers.append(m_) - * if i == 0: # <<<<<<<<<<<<<< - * ch = [] - * ch.append(c2) - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":288 - * if i == 0: - * ch = [] - * ch.append(c2) # <<<<<<<<<<<<<< - * return nn.Sequential(*layers), sorted(save) - * - */ - __pyx_t_22 = __Pyx_PyObject_Append(__pyx_cur_scope->__pyx_v_ch, __pyx_v_c2); if (unlikely(__pyx_t_22 == ((int)-1))) __PYX_ERR(0, 288, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":245 - * - * layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - * for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args # <<<<<<<<<<<<<< - * m = eval(m) if isinstance(m, str) else m # eval strings - * for j, a in enumerate(args): - */ - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":289 - * ch = [] - * ch.append(c2) - * return nn.Sequential(*layers), sorted(save) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_nn); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_Sequential); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PySequence_Tuple(__pyx_v_layers); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_3, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_6 = PySequence_List(__pyx_v_save); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = ((PyObject*)__pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_22 = PyList_Sort(__pyx_t_3); if (unlikely(__pyx_t_22 == ((int)-1))) __PYX_ERR(0, 289, __pyx_L1_error) - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_10); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_10); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_3); - __pyx_t_10 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":238 - * - * - * def parse_model(d, ch): # model_dict, input_channels(3) # <<<<<<<<<<<<<< - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo.parse_model", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_anchors); - __Pyx_XDECREF(__pyx_v_nc); - __Pyx_XDECREF(__pyx_v_gd); - __Pyx_XDECREF(__pyx_v_gw); - __Pyx_XDECREF(__pyx_v_na); - __Pyx_XDECREF(__pyx_v_no); - __Pyx_XDECREF(__pyx_v_layers); - __Pyx_XDECREF(__pyx_v_save); - __Pyx_XDECREF(__pyx_v_c2); - __Pyx_XDECREF(__pyx_v_j); - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_n_); - __Pyx_XDECREF(__pyx_v_c1); - __Pyx_XDECREF(__pyx_v_t); - __Pyx_XDECREF(__pyx_v_np); - __Pyx_XDECREF(__pyx_v_genexpr); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_2generator4); - __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_x); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_5generator5); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_8generator6); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_11parse_model_11generator7); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_ch); - Py_CLEAR(p->__pyx_v_self); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *)o; - if (p->__pyx_v_ch) { - e = (*v)(p->__pyx_v_ch, a); if (e) return e; - } - if (p->__pyx_v_self) { - e = (*v)(p->__pyx_v_self, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ *)o; - tmp = ((PyObject*)p->__pyx_v_ch); - p->__pyx_v_ch = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_self); - p->__pyx_v_self = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init___slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__}, - {Py_tp_clear, (void *)__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init___spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct____init__", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init___slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct____init__", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__, /*tp_traverse*/ - __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_1_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_1_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_nl); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *)o; - if (p->__pyx_v_nl) { - e = (*v)(p->__pyx_v_nl, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented *)o; - tmp = ((PyObject*)p->__pyx_v_nl); - p->__pyx_v_nl = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented}, - {Py_tp_clear, (void *)__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_2__clip_augmented", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_2__clip_augmented", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented, /*tp_traverse*/ - __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_3_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_3_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_4_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_4_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_5_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_5_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_args); - Py_CLEAR(p->__pyx_v_ch); - Py_CLEAR(p->__pyx_v_f); - Py_CLEAR(p->__pyx_v_i); - Py_CLEAR(p->__pyx_v_m); - Py_CLEAR(p->__pyx_v_m_); - Py_CLEAR(p->__pyx_v_n); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *)o; - if (p->__pyx_v_args) { - e = (*v)(p->__pyx_v_args, a); if (e) return e; - } - if (p->__pyx_v_ch) { - e = (*v)(p->__pyx_v_ch, a); if (e) return e; - } - if (p->__pyx_v_f) { - e = (*v)(p->__pyx_v_f, a); if (e) return e; - } - if (p->__pyx_v_i) { - e = (*v)(p->__pyx_v_i, a); if (e) return e; - } - if (p->__pyx_v_m) { - e = (*v)(p->__pyx_v_m, a); if (e) return e; - } - if (p->__pyx_v_m_) { - e = (*v)(p->__pyx_v_m_, a); if (e) return e; - } - if (p->__pyx_v_n) { - e = (*v)(p->__pyx_v_n, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model *)o; - tmp = ((PyObject*)p->__pyx_v_args); - p->__pyx_v_args = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_ch); - p->__pyx_v_ch = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_f); - p->__pyx_v_f = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_i); - p->__pyx_v_i = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_m); - p->__pyx_v_m = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_m_); - p->__pyx_v_m_ = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_n); - p->__pyx_v_n = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model}, - {Py_tp_clear, (void *)__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_6_parse_model", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_6_parse_model", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model, /*tp_traverse*/ - __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_7_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_7_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v__); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v__) { - e = (*v)(p->__pyx_v__, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_8_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_8_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_9_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_9_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.models.yolo.__pyx_scope_struct_10_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.models.yolo.""__pyx_scope_struct_10_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; - -static int __pyx_import_star_set(PyObject *o, PyObject* py_name, char *name) { - static const char* internal_type_names[] = { - "__pyx_ctuple_long", - "__pyx_ctuple_long__and_long__and_long", - "__pyx_ctuple_long__and_long__and_long__and_long", - "__pyx_ctuple_long__and_long__and_long__and_long__and_long", - "__pyx_ctuple_long__and_long__and_long__and_long__and_long_struct", - "__pyx_ctuple_long__and_long__and_long__and_long_struct", - "__pyx_ctuple_long__and_long__and_long_struct", - "__pyx_ctuple_long_struct", - "__pyx_scope_struct_10_genexpr", - "__pyx_scope_struct_1_genexpr", - "__pyx_scope_struct_2__clip_augmented", - "__pyx_scope_struct_3_genexpr", - "__pyx_scope_struct_4_genexpr", - "__pyx_scope_struct_5_genexpr", - "__pyx_scope_struct_6_parse_model", - "__pyx_scope_struct_7_genexpr", - "__pyx_scope_struct_8_genexpr", - "__pyx_scope_struct_9_genexpr", - "__pyx_scope_struct____init__", - 0 - }; - const char** type_name = internal_type_names; - while (*type_name) { - if (__Pyx_StrEq(name, *type_name)) { - PyErr_Format(PyExc_TypeError, "Cannot overwrite C type %s", name); - goto bad; - } - type_name++; - } - if (0); - else { - if (PyObject_SetAttr(__pyx_m, py_name, o) < 0) goto bad; - } - return 0; - bad: - return -1; -} - -static int -__Pyx_import_all_from(PyObject *locals, PyObject *v) -{ - PyObject *all = PyObject_GetAttrString(v, "__all__"); - PyObject *dict, *name, *value; - int skip_leading_underscores = 0; - int pos, err; - if (all == NULL) { - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) - return -1; - PyErr_Clear(); - dict = PyObject_GetAttrString(v, "__dict__"); - if (dict == NULL) { - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) - return -1; - PyErr_SetString(PyExc_ImportError, - "from-import-* object has no __dict__ and no __all__"); - return -1; - } -#if PY_MAJOR_VERSION < 3 - all = PyObject_CallMethod(dict, (char *)"keys", NULL); -#else - all = PyMapping_Keys(dict); -#endif - Py_DECREF(dict); - if (all == NULL) - return -1; - skip_leading_underscores = 1; - } - for (pos = 0, err = 0; ; pos++) { - name = PySequence_GetItem(all, pos); - if (name == NULL) { - if (!PyErr_ExceptionMatches(PyExc_IndexError)) - err = -1; - else - PyErr_Clear(); - break; - } - if (skip_leading_underscores && -#if PY_MAJOR_VERSION < 3 - likely(PyString_Check(name)) && - PyString_AS_STRING(name)[0] == '_') -#else - likely(PyUnicode_Check(name)) && - likely(__Pyx_PyUnicode_GET_LENGTH(name)) && - __Pyx_PyUnicode_READ_CHAR(name, 0) == '_') -#endif - { - Py_DECREF(name); - continue; - } - value = PyObject_GetAttr(v, name); - if (value == NULL) - err = -1; - else if (PyDict_CheckExact(locals)) - err = PyDict_SetItem(locals, name, value); - else - err = PyObject_SetItem(locals, name, value); - Py_DECREF(name); - Py_XDECREF(value); - if (err != 0) - break; - } - Py_DECREF(all); - return err; -} -static int __pyx_import_star(PyObject* m) { - int i; - int ret = -1; - char* s; - PyObject *locals = 0; - PyObject *list = 0; -#if PY_MAJOR_VERSION >= 3 - PyObject *utf8_name = 0; -#endif - PyObject *name; - PyObject *item; - locals = PyDict_New(); if (!locals) goto bad; - if (__Pyx_import_all_from(locals, m) < 0) goto bad; - list = PyDict_Items(locals); if (!list) goto bad; - for(i=0; i= 3 - utf8_name = PyUnicode_AsUTF8String(name); - if (!utf8_name) goto bad; - s = PyBytes_AS_STRING(utf8_name); - if (__pyx_import_star_set(item, name, s) < 0) goto bad; - Py_DECREF(utf8_name); utf8_name = 0; -#else - s = PyString_AsString(name); - if (!s) goto bad; - if (__pyx_import_star_set(item, name, s) < 0) goto bad; -#endif - } - ret = 0; -bad: - Py_XDECREF(locals); - Py_XDECREF(list); -#if PY_MAJOR_VERSION >= 3 - Py_XDECREF(utf8_name); -#endif - return ret; -} - - -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif -/* #### Code section: pystring_table ### */ - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - #if CYTHON_USE_MODULE_STATE - {0, __pyx_k_10, sizeof(__pyx_k_10), 0, 1, 0, 0}, - {0, __pyx_k_10_0f, sizeof(__pyx_k_10_0f), 0, 1, 0, 0}, - {0, __pyx_k_10_2f, sizeof(__pyx_k_10_2f), 0, 1, 0, 0}, - {0, __pyx_k_10s, sizeof(__pyx_k_10s), 0, 1, 0, 0}, - {0, __pyx_k_18, sizeof(__pyx_k_18), 0, 1, 0, 0}, - {0, __pyx_k_3, sizeof(__pyx_k_3), 0, 1, 0, 0}, - {0, __pyx_k_30, sizeof(__pyx_k_30), 0, 1, 0, 0}, - {0, __pyx_k_40, sizeof(__pyx_k_40), 0, 1, 0, 0}, - {0, __pyx_k_6g_Conv2d_bias_10_3g_10_3g_10_3, sizeof(__pyx_k_6g_Conv2d_bias_10_3g_10_3g_10_3), 0, 1, 0, 0}, - {0, __pyx_k_ArgumentParser, sizeof(__pyx_k_ArgumentParser), 0, 0, 1, 1}, - {0, __pyx_k_BatchNorm2d, sizeof(__pyx_k_BatchNorm2d), 0, 0, 1, 1}, - {0, __pyx_k_Bottleneck, sizeof(__pyx_k_Bottleneck), 0, 0, 1, 1}, - {0, __pyx_k_BottleneckCSP, sizeof(__pyx_k_BottleneckCSP), 0, 0, 1, 1}, - {0, __pyx_k_C3, sizeof(__pyx_k_C3), 0, 0, 1, 1}, - {0, __pyx_k_C3Ghost, sizeof(__pyx_k_C3Ghost), 0, 0, 1, 1}, - {0, __pyx_k_C3SPP, sizeof(__pyx_k_C3SPP), 0, 0, 1, 1}, - {0, __pyx_k_C3TR, sizeof(__pyx_k_C3TR), 0, 0, 1, 1}, - {0, __pyx_k_Concat, sizeof(__pyx_k_Concat), 0, 0, 1, 1}, - {0, __pyx_k_Contract, sizeof(__pyx_k_Contract), 0, 0, 1, 1}, - {0, __pyx_k_Conv, sizeof(__pyx_k_Conv), 0, 0, 1, 1}, - {0, __pyx_k_Conv2d, sizeof(__pyx_k_Conv2d), 0, 0, 1, 1}, - {0, __pyx_k_CrossConv, sizeof(__pyx_k_CrossConv), 0, 0, 1, 1}, - {0, __pyx_k_DWConv, sizeof(__pyx_k_DWConv), 0, 0, 1, 1}, - {0, __pyx_k_Detect, sizeof(__pyx_k_Detect), 0, 0, 1, 1}, - {0, __pyx_k_Detect___init, sizeof(__pyx_k_Detect___init), 0, 0, 1, 1}, - {0, __pyx_k_Detect___init___locals_genexpr, sizeof(__pyx_k_Detect___init___locals_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_Detect__make_grid, sizeof(__pyx_k_Detect__make_grid), 0, 0, 1, 1}, - {0, __pyx_k_Detect_forward, sizeof(__pyx_k_Detect_forward), 0, 0, 1, 1}, - {0, __pyx_k_Error_in, sizeof(__pyx_k_Error_in), 0, 1, 0, 0}, - {0, __pyx_k_Expand, sizeof(__pyx_k_Expand), 0, 0, 1, 1}, - {0, __pyx_k_FILE, sizeof(__pyx_k_FILE), 0, 0, 1, 1}, - {0, __pyx_k_Focus, sizeof(__pyx_k_Focus), 0, 0, 1, 1}, - {0, __pyx_k_Fusing_layers, sizeof(__pyx_k_Fusing_layers), 0, 1, 0, 0}, - {0, __pyx_k_GFLOPs, sizeof(__pyx_k_GFLOPs), 0, 1, 0, 1}, - {0, __pyx_k_GhostBottleneck, sizeof(__pyx_k_GhostBottleneck), 0, 0, 1, 1}, - {0, __pyx_k_GhostConv, sizeof(__pyx_k_GhostConv), 0, 0, 1, 1}, - {0, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, - {0, __pyx_k_LOGGER, sizeof(__pyx_k_LOGGER), 0, 0, 1, 1}, - {0, __pyx_k_MixConv2d, sizeof(__pyx_k_MixConv2d), 0, 0, 1, 1}, - {0, __pyx_k_Model, sizeof(__pyx_k_Model), 0, 0, 1, 1}, - {0, __pyx_k_Model___init, sizeof(__pyx_k_Model___init), 0, 0, 1, 1}, - {0, __pyx_k_Model__apply, sizeof(__pyx_k_Model__apply), 0, 0, 1, 1}, - {0, __pyx_k_Model__clip_augmented, sizeof(__pyx_k_Model__clip_augmented), 0, 0, 1, 1}, - {0, __pyx_k_Model__clip_augmented_locals_gen, sizeof(__pyx_k_Model__clip_augmented_locals_gen), 0, 0, 1, 1}, - {0, __pyx_k_Model__descale_pred, sizeof(__pyx_k_Model__descale_pred), 0, 0, 1, 1}, - {0, __pyx_k_Model__forward_augment, sizeof(__pyx_k_Model__forward_augment), 0, 0, 1, 1}, - {0, __pyx_k_Model__forward_once, sizeof(__pyx_k_Model__forward_once), 0, 0, 1, 1}, - {0, __pyx_k_Model__initialize_biases, sizeof(__pyx_k_Model__initialize_biases), 0, 0, 1, 1}, - {0, __pyx_k_Model__print_biases, sizeof(__pyx_k_Model__print_biases), 0, 0, 1, 1}, - {0, __pyx_k_Model__profile_one_layer, sizeof(__pyx_k_Model__profile_one_layer), 0, 0, 1, 1}, - {0, __pyx_k_Model_forward, sizeof(__pyx_k_Model_forward), 0, 0, 1, 1}, - {0, __pyx_k_Model_fuse, sizeof(__pyx_k_Model_fuse), 0, 0, 1, 1}, - {0, __pyx_k_Model_info, sizeof(__pyx_k_Model_info), 0, 0, 1, 1}, - {0, __pyx_k_Module, sizeof(__pyx_k_Module), 0, 0, 1, 1}, - {0, __pyx_k_ModuleList, sizeof(__pyx_k_ModuleList), 0, 0, 1, 1}, - {0, __pyx_k_NameError, sizeof(__pyx_k_NameError), 0, 0, 1, 1}, - {0, __pyx_k_Overriding_model_yaml_anchors_wi, sizeof(__pyx_k_Overriding_model_yaml_anchors_wi), 0, 1, 0, 0}, - {0, __pyx_k_Overriding_model_yaml_nc, sizeof(__pyx_k_Overriding_model_yaml_nc), 0, 1, 0, 0}, - {0, __pyx_k_Parameter, sizeof(__pyx_k_Parameter), 0, 0, 1, 1}, - {0, __pyx_k_Path, sizeof(__pyx_k_Path), 0, 0, 1, 1}, - {0, __pyx_k_ROOT, sizeof(__pyx_k_ROOT), 0, 0, 1, 1}, - {0, __pyx_k_SPP, sizeof(__pyx_k_SPP), 0, 0, 1, 1}, - {0, __pyx_k_SPPF, sizeof(__pyx_k_SPPF), 0, 0, 1, 1}, - {0, __pyx_k_Sequential, sizeof(__pyx_k_Sequential), 0, 0, 1, 1}, - {0, __pyx_k_T, sizeof(__pyx_k_T), 0, 0, 1, 1}, - {0, __pyx_k_Total, sizeof(__pyx_k_Total), 0, 1, 0, 0}, - {0, __pyx_k__12, sizeof(__pyx_k__12), 0, 1, 0, 0}, - {0, __pyx_k__23, sizeof(__pyx_k__23), 0, 1, 0, 0}, - {0, __pyx_k__24, sizeof(__pyx_k__24), 0, 1, 0, 0}, - {0, __pyx_k__25, sizeof(__pyx_k__25), 0, 1, 0, 0}, - {0, __pyx_k__30, sizeof(__pyx_k__30), 0, 1, 0, 0}, - {0, __pyx_k__32, sizeof(__pyx_k__32), 0, 1, 0, 0}, - {0, __pyx_k__36, sizeof(__pyx_k__36), 0, 0, 1, 1}, - {0, __pyx_k__77, sizeof(__pyx_k__77), 0, 1, 0, 0}, - {0, __pyx_k__78, sizeof(__pyx_k__78), 0, 0, 1, 1}, - {0, __pyx_k__8, sizeof(__pyx_k__8), 0, 0, 1, 1}, - {0, __pyx_k_a, sizeof(__pyx_k_a), 0, 0, 1, 1}, - {0, __pyx_k_action, sizeof(__pyx_k_action), 0, 0, 1, 1}, - {0, __pyx_k_add_argument, sizeof(__pyx_k_add_argument), 0, 0, 1, 1}, - {0, __pyx_k_anchor_grid, sizeof(__pyx_k_anchor_grid), 0, 0, 1, 1}, - {0, __pyx_k_anchors, sizeof(__pyx_k_anchors), 0, 0, 1, 1}, - {0, __pyx_k_anchors, sizeof(__pyx_k_anchors), 0, 1, 0, 1}, - {0, __pyx_k_append, sizeof(__pyx_k_append), 0, 0, 1, 1}, - {0, __pyx_k_apply, sizeof(__pyx_k_apply), 0, 0, 1, 1}, - {0, __pyx_k_arange, sizeof(__pyx_k_arange), 0, 0, 1, 1}, - {0, __pyx_k_argparse, sizeof(__pyx_k_argparse), 0, 0, 1, 1}, - {0, __pyx_k_args, sizeof(__pyx_k_args), 0, 0, 1, 1}, - {0, __pyx_k_arguments, sizeof(__pyx_k_arguments), 0, 1, 0, 1}, - {0, __pyx_k_ascii, sizeof(__pyx_k_ascii), 0, 1, 0, 1}, - {0, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {0, __pyx_k_augment, sizeof(__pyx_k_augment), 0, 0, 1, 1}, - {0, __pyx_k_b, sizeof(__pyx_k_b), 0, 0, 1, 1}, - {0, __pyx_k_backbone, sizeof(__pyx_k_backbone), 0, 1, 0, 1}, - {0, __pyx_k_bias, sizeof(__pyx_k_bias), 0, 0, 1, 1}, - {0, __pyx_k_bn, sizeof(__pyx_k_bn), 0, 0, 1, 1}, - {0, __pyx_k_bn, sizeof(__pyx_k_bn), 0, 1, 0, 1}, - {0, __pyx_k_bs, sizeof(__pyx_k_bs), 0, 0, 1, 1}, - {0, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {0, __pyx_k_c1, sizeof(__pyx_k_c1), 0, 0, 1, 1}, - {0, __pyx_k_c2, sizeof(__pyx_k_c2), 0, 0, 1, 1}, - {0, __pyx_k_cat, sizeof(__pyx_k_cat), 0, 0, 1, 1}, - {0, __pyx_k_cf, sizeof(__pyx_k_cf), 0, 0, 1, 1}, - {0, __pyx_k_cfg, sizeof(__pyx_k_cfg), 0, 0, 1, 1}, - {0, __pyx_k_cfg_2, sizeof(__pyx_k_cfg_2), 0, 1, 0, 0}, - {0, __pyx_k_ch, sizeof(__pyx_k_ch), 0, 0, 1, 1}, - {0, __pyx_k_ch, sizeof(__pyx_k_ch), 0, 1, 0, 1}, - {0, __pyx_k_check_anchor_order, sizeof(__pyx_k_check_anchor_order), 0, 0, 1, 1}, - {0, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {0, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {0, __pyx_k_clip_augmented, sizeof(__pyx_k_clip_augmented), 0, 0, 1, 1}, - {0, __pyx_k_clone, sizeof(__pyx_k_clone), 0, 0, 1, 1}, - {0, __pyx_k_close, sizeof(__pyx_k_close), 0, 0, 1, 1}, - {0, __pyx_k_contiguous, sizeof(__pyx_k_contiguous), 0, 0, 1, 1}, - {0, __pyx_k_conv, sizeof(__pyx_k_conv), 0, 0, 1, 1}, - {0, __pyx_k_copy, sizeof(__pyx_k_copy), 0, 0, 1, 1}, - {0, __pyx_k_cuda, sizeof(__pyx_k_cuda), 0, 0, 1, 1}, - {0, __pyx_k_cuda_device_i_e_0_or_0_1_2_3_or, sizeof(__pyx_k_cuda_device_i_e_0_or_0_1_2_3_or), 0, 1, 0, 0}, - {0, __pyx_k_d, sizeof(__pyx_k_d), 0, 0, 1, 1}, - {0, __pyx_k_data, sizeof(__pyx_k_data), 0, 0, 1, 1}, - {0, __pyx_k_deepcopy, sizeof(__pyx_k_deepcopy), 0, 0, 1, 1}, - {0, __pyx_k_default, sizeof(__pyx_k_default), 0, 0, 1, 1}, - {0, __pyx_k_depth_multiple, sizeof(__pyx_k_depth_multiple), 0, 1, 0, 1}, - {0, __pyx_k_descale_pred, sizeof(__pyx_k_descale_pred), 0, 0, 1, 1}, - {0, __pyx_k_detach, sizeof(__pyx_k_detach), 0, 0, 1, 1}, - {0, __pyx_k_device, sizeof(__pyx_k_device), 0, 0, 1, 1}, - {0, __pyx_k_device_2, sizeof(__pyx_k_device_2), 0, 1, 0, 0}, - {0, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {0, __pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0, 0}, - {0, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1}, - {0, __pyx_k_dt, sizeof(__pyx_k_dt), 0, 0, 1, 1}, - {0, __pyx_k_e, sizeof(__pyx_k_e), 0, 0, 1, 1}, - {0, __pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0, 0}, - {0, __pyx_k_encoding, sizeof(__pyx_k_encoding), 0, 0, 1, 1}, - {0, __pyx_k_enter, sizeof(__pyx_k_enter), 0, 0, 1, 1}, - {0, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {0, __pyx_k_errors, sizeof(__pyx_k_errors), 0, 0, 1, 1}, - {0, __pyx_k_eval, sizeof(__pyx_k_eval), 0, 0, 1, 1}, - {0, __pyx_k_exit, sizeof(__pyx_k_exit), 0, 0, 1, 1}, - {0, __pyx_k_expand, sizeof(__pyx_k_expand), 0, 0, 1, 1}, - {0, __pyx_k_f, sizeof(__pyx_k_f), 0, 0, 1, 1}, - {0, __pyx_k_fi, sizeof(__pyx_k_fi), 0, 0, 1, 1}, - {0, __pyx_k_file, sizeof(__pyx_k_file), 0, 0, 1, 1}, - {0, __pyx_k_flip, sizeof(__pyx_k_flip), 0, 0, 1, 1}, - {0, __pyx_k_flips, sizeof(__pyx_k_flips), 0, 0, 1, 1}, - {0, __pyx_k_float, sizeof(__pyx_k_float), 0, 0, 1, 1}, - {0, __pyx_k_fn, sizeof(__pyx_k_fn), 0, 0, 1, 1}, - {0, __pyx_k_forward, sizeof(__pyx_k_forward), 0, 0, 1, 1}, - {0, __pyx_k_forward_augment, sizeof(__pyx_k_forward_augment), 0, 0, 1, 1}, - {0, __pyx_k_forward_fuse, sizeof(__pyx_k_forward_fuse), 0, 0, 1, 1}, - {0, __pyx_k_forward_once, sizeof(__pyx_k_forward_once), 0, 0, 1, 1}, - {0, __pyx_k_from, sizeof(__pyx_k_from), 0, 1, 0, 1}, - {0, __pyx_k_fuse, sizeof(__pyx_k_fuse), 0, 0, 1, 1}, - {0, __pyx_k_fuse_conv_and_bn, sizeof(__pyx_k_fuse_conv_and_bn), 0, 0, 1, 1}, - {0, __pyx_k_g, sizeof(__pyx_k_g), 0, 0, 1, 1}, - {0, __pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0, 0}, - {0, __pyx_k_gd, sizeof(__pyx_k_gd), 0, 0, 1, 1}, - {0, __pyx_k_genexpr, sizeof(__pyx_k_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_get, sizeof(__pyx_k_get), 0, 0, 1, 1}, - {0, __pyx_k_grid, sizeof(__pyx_k_grid), 0, 0, 1, 1}, - {0, __pyx_k_gs, sizeof(__pyx_k_gs), 0, 0, 1, 1}, - {0, __pyx_k_gw, sizeof(__pyx_k_gw), 0, 0, 1, 1}, - {0, __pyx_k_head, sizeof(__pyx_k_head), 0, 1, 0, 1}, - {0, __pyx_k_help, sizeof(__pyx_k_help), 0, 0, 1, 1}, - {0, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1}, - {0, __pyx_k_ignore, sizeof(__pyx_k_ignore), 0, 1, 0, 1}, - {0, __pyx_k_ij, sizeof(__pyx_k_ij), 0, 1, 0, 1}, - {0, __pyx_k_img, sizeof(__pyx_k_img), 0, 0, 1, 1}, - {0, __pyx_k_img_size, sizeof(__pyx_k_img_size), 0, 0, 1, 1}, - {0, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {0, __pyx_k_indexing, sizeof(__pyx_k_indexing), 0, 0, 1, 1}, - {0, __pyx_k_info, sizeof(__pyx_k_info), 0, 0, 1, 1}, - {0, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1}, - {0, __pyx_k_init_subclass, sizeof(__pyx_k_init_subclass), 0, 0, 1, 1}, - {0, __pyx_k_initialize_biases, sizeof(__pyx_k_initialize_biases), 0, 0, 1, 1}, - {0, __pyx_k_initialize_weights, sizeof(__pyx_k_initialize_weights), 0, 0, 1, 1}, - {0, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {0, __pyx_k_inplace, sizeof(__pyx_k_inplace), 0, 0, 1, 1}, - {0, __pyx_k_inplace, sizeof(__pyx_k_inplace), 0, 1, 0, 1}, - {0, __pyx_k_inputs, sizeof(__pyx_k_inputs), 0, 0, 1, 1}, - {0, __pyx_k_insert, sizeof(__pyx_k_insert), 0, 0, 1, 1}, - {0, __pyx_k_is_available, sizeof(__pyx_k_is_available), 0, 0, 1, 1}, - {0, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {0, __pyx_k_isenabled, sizeof(__pyx_k_isenabled), 0, 1, 0, 0}, - {0, __pyx_k_j, sizeof(__pyx_k_j), 0, 0, 1, 1}, - {0, __pyx_k_layers, sizeof(__pyx_k_layers), 0, 0, 1, 1}, - {0, __pyx_k_log, sizeof(__pyx_k_log), 0, 0, 1, 1}, - {0, __pyx_k_m, sizeof(__pyx_k_m), 0, 0, 1, 1}, - {0, __pyx_k_m_2, sizeof(__pyx_k_m_2), 0, 0, 1, 1}, - {0, __pyx_k_main, sizeof(__pyx_k_main), 0, 1, 0, 0}, - {0, __pyx_k_main_2, sizeof(__pyx_k_main_2), 0, 0, 1, 1}, - {0, __pyx_k_main_2, sizeof(__pyx_k_main_2), 0, 1, 0, 1}, - {0, __pyx_k_make_divisible, sizeof(__pyx_k_make_divisible), 0, 0, 1, 1}, - {0, __pyx_k_make_grid, sizeof(__pyx_k_make_grid), 0, 0, 1, 1}, - {0, __pyx_k_map, sizeof(__pyx_k_map), 0, 0, 1, 1}, - {0, __pyx_k_math, sizeof(__pyx_k_math), 0, 0, 1, 1}, - {0, __pyx_k_max, sizeof(__pyx_k_max), 0, 0, 1, 1}, - {0, __pyx_k_mean, sizeof(__pyx_k_mean), 0, 0, 1, 1}, - {0, __pyx_k_meshgrid, sizeof(__pyx_k_meshgrid), 0, 0, 1, 1}, - {0, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1}, - {0, __pyx_k_mi, sizeof(__pyx_k_mi), 0, 0, 1, 1}, - {0, __pyx_k_model, sizeof(__pyx_k_model), 0, 0, 1, 1}, - {0, __pyx_k_model_info, sizeof(__pyx_k_model_info), 0, 0, 1, 1}, - {0, __pyx_k_model_yaml, sizeof(__pyx_k_model_yaml), 0, 1, 0, 0}, - {0, __pyx_k_models, sizeof(__pyx_k_models), 0, 1, 0, 1}, - {0, __pyx_k_module, sizeof(__pyx_k_module), 0, 1, 0, 0}, - {0, __pyx_k_module_2, sizeof(__pyx_k_module_2), 0, 1, 0, 1}, - {0, __pyx_k_module_3, sizeof(__pyx_k_module_3), 0, 0, 1, 1}, - {0, __pyx_k_modules, sizeof(__pyx_k_modules), 0, 0, 1, 1}, - {0, __pyx_k_mro_entries, sizeof(__pyx_k_mro_entries), 0, 0, 1, 1}, - {0, __pyx_k_n, sizeof(__pyx_k_n), 0, 0, 1, 1}, - {0, __pyx_k_n, sizeof(__pyx_k_n), 0, 1, 0, 1}, - {0, __pyx_k_n_2, sizeof(__pyx_k_n_2), 0, 0, 1, 1}, - {0, __pyx_k_na, sizeof(__pyx_k_na), 0, 0, 1, 1}, - {0, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {0, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {0, __pyx_k_names, sizeof(__pyx_k_names), 0, 0, 1, 1}, - {0, __pyx_k_nc, sizeof(__pyx_k_nc), 0, 0, 1, 1}, - {0, __pyx_k_nc, sizeof(__pyx_k_nc), 0, 1, 0, 1}, - {0, __pyx_k_nl, sizeof(__pyx_k_nl), 0, 0, 1, 1}, - {0, __pyx_k_nn, sizeof(__pyx_k_nn), 0, 0, 1, 1}, - {0, __pyx_k_no, sizeof(__pyx_k_no), 0, 0, 1, 1}, - {0, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, - {0, __pyx_k_numel, sizeof(__pyx_k_numel), 0, 0, 1, 1}, - {0, __pyx_k_nx, sizeof(__pyx_k_nx), 0, 0, 1, 1}, - {0, __pyx_k_ny, sizeof(__pyx_k_ny), 0, 0, 1, 1}, - {0, __pyx_k_o, sizeof(__pyx_k_o), 0, 0, 1, 1}, - {0, __pyx_k_onnx_dynamic, sizeof(__pyx_k_onnx_dynamic), 0, 0, 1, 1}, - {0, __pyx_k_open, sizeof(__pyx_k_open), 0, 0, 1, 1}, - {0, __pyx_k_opt, sizeof(__pyx_k_opt), 0, 0, 1, 1}, - {0, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1}, - {0, __pyx_k_parameters, sizeof(__pyx_k_parameters), 0, 0, 1, 1}, - {0, __pyx_k_params, sizeof(__pyx_k_params), 0, 1, 0, 1}, - {0, __pyx_k_parents, sizeof(__pyx_k_parents), 0, 0, 1, 1}, - {0, __pyx_k_parse_args, sizeof(__pyx_k_parse_args), 0, 0, 1, 1}, - {0, __pyx_k_parse_model, sizeof(__pyx_k_parse_model), 0, 0, 1, 1}, - {0, __pyx_k_parse_model_locals_genexpr, sizeof(__pyx_k_parse_model_locals_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_parser, sizeof(__pyx_k_parser), 0, 0, 1, 1}, - {0, __pyx_k_path, sizeof(__pyx_k_path), 0, 0, 1, 1}, - {0, __pyx_k_pathlib, sizeof(__pyx_k_pathlib), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_model, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_model_2, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model_2), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_model_3, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model_3), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_model_4, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model_4), 0, 0, 1, 0}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3), 0, 0, 1, 1}, - {0, __pyx_k_permute, sizeof(__pyx_k_permute), 0, 0, 1, 1}, - {0, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1}, - {0, __pyx_k_print, sizeof(__pyx_k_print), 0, 0, 1, 1}, - {0, __pyx_k_print_args, sizeof(__pyx_k_print_args), 0, 0, 1, 1}, - {0, __pyx_k_print_biases, sizeof(__pyx_k_print_biases), 0, 0, 1, 1}, - {0, __pyx_k_profile, sizeof(__pyx_k_profile), 0, 0, 1, 1}, - {0, __pyx_k_profile_2, sizeof(__pyx_k_profile_2), 0, 1, 0, 0}, - {0, __pyx_k_profile_model_speed, sizeof(__pyx_k_profile_model_speed), 0, 1, 0, 0}, - {0, __pyx_k_profile_one_layer, sizeof(__pyx_k_profile_one_layer), 0, 0, 1, 1}, - {0, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1}, - {0, __pyx_k_rand, sizeof(__pyx_k_rand), 0, 0, 1, 1}, - {0, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {0, __pyx_k_register_buffer, sizeof(__pyx_k_register_buffer), 0, 0, 1, 1}, - {0, __pyx_k_requires_grad, sizeof(__pyx_k_requires_grad), 0, 0, 1, 1}, - {0, __pyx_k_resolve, sizeof(__pyx_k_resolve), 0, 0, 1, 1}, - {0, __pyx_k_rglob, sizeof(__pyx_k_rglob), 0, 0, 1, 1}, - {0, __pyx_k_round, sizeof(__pyx_k_round), 0, 0, 1, 1}, - {0, __pyx_k_s, sizeof(__pyx_k_s), 0, 0, 1, 1}, - {0, __pyx_k_safe_load, sizeof(__pyx_k_safe_load), 0, 0, 1, 1}, - {0, __pyx_k_save, sizeof(__pyx_k_save), 0, 0, 1, 1}, - {0, __pyx_k_scale, sizeof(__pyx_k_scale), 0, 0, 1, 1}, - {0, __pyx_k_scale_img, sizeof(__pyx_k_scale_img), 0, 0, 1, 1}, - {0, __pyx_k_select_device, sizeof(__pyx_k_select_device), 0, 0, 1, 1}, - {0, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1}, - {0, __pyx_k_send, sizeof(__pyx_k_send), 0, 0, 1, 1}, - {0, __pyx_k_set_name, sizeof(__pyx_k_set_name), 0, 0, 1, 1}, - {0, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {0, __pyx_k_si, sizeof(__pyx_k_si), 0, 0, 1, 1}, - {0, __pyx_k_sigmoid, sizeof(__pyx_k_sigmoid), 0, 0, 1, 1}, - {0, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {0, __pyx_k_stack, sizeof(__pyx_k_stack), 0, 0, 1, 1}, - {0, __pyx_k_stem, sizeof(__pyx_k_stem), 0, 0, 1, 1}, - {0, __pyx_k_store_true, sizeof(__pyx_k_store_true), 0, 1, 0, 1}, - {0, __pyx_k_stride, sizeof(__pyx_k_stride), 0, 0, 1, 1}, - {0, __pyx_k_sum, sizeof(__pyx_k_sum), 0, 0, 1, 1}, - {0, __pyx_k_super, sizeof(__pyx_k_super), 0, 0, 1, 1}, - {0, __pyx_k_sys, sizeof(__pyx_k_sys), 0, 0, 1, 1}, - {0, __pyx_k_t, sizeof(__pyx_k_t), 0, 0, 1, 1}, - {0, __pyx_k_tensor, sizeof(__pyx_k_tensor), 0, 0, 1, 1}, - {0, __pyx_k_test, sizeof(__pyx_k_test), 0, 1, 0, 0}, - {0, __pyx_k_test_2, sizeof(__pyx_k_test_2), 0, 0, 1, 1}, - {0, __pyx_k_test_3, sizeof(__pyx_k_test_3), 0, 0, 1, 1}, - {0, __pyx_k_test_all_yolo_yaml, sizeof(__pyx_k_test_all_yolo_yaml), 0, 1, 0, 0}, - {0, __pyx_k_thop, sizeof(__pyx_k_thop), 0, 0, 1, 1}, - {0, __pyx_k_throw, sizeof(__pyx_k_throw), 0, 0, 1, 1}, - {0, __pyx_k_time_ms, sizeof(__pyx_k_time_ms), 0, 1, 0, 0}, - {0, __pyx_k_time_sync, sizeof(__pyx_k_time_sync), 0, 0, 1, 1}, - {0, __pyx_k_to, sizeof(__pyx_k_to), 0, 0, 1, 1}, - {0, __pyx_k_tolist, sizeof(__pyx_k_tolist), 0, 0, 1, 1}, - {0, __pyx_k_torch, sizeof(__pyx_k_torch), 0, 0, 1, 1}, - {0, __pyx_k_train, sizeof(__pyx_k_train), 0, 0, 1, 1}, - {0, __pyx_k_training, sizeof(__pyx_k_training), 0, 0, 1, 1}, - {0, __pyx_k_type, sizeof(__pyx_k_type), 0, 0, 1, 1}, - {0, __pyx_k_verbose, sizeof(__pyx_k_verbose), 0, 0, 1, 1}, - {0, __pyx_k_view, sizeof(__pyx_k_view), 0, 0, 1, 1}, - {0, __pyx_k_visualize, sizeof(__pyx_k_visualize), 0, 0, 1, 1}, - {0, __pyx_k_weight, sizeof(__pyx_k_weight), 0, 0, 1, 1}, - {0, __pyx_k_wh, sizeof(__pyx_k_wh), 0, 0, 1, 1}, - {0, __pyx_k_width_multiple, sizeof(__pyx_k_width_multiple), 0, 1, 0, 1}, - {0, __pyx_k_with_nc, sizeof(__pyx_k_with_nc), 0, 1, 0, 0}, - {0, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {0, __pyx_k_xi, sizeof(__pyx_k_xi), 0, 0, 1, 1}, - {0, __pyx_k_xv, sizeof(__pyx_k_xv), 0, 0, 1, 1}, - {0, __pyx_k_xy, sizeof(__pyx_k_xy), 0, 0, 1, 1}, - {0, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1}, - {0, __pyx_k_yaml, sizeof(__pyx_k_yaml), 0, 0, 1, 1}, - {0, __pyx_k_yaml_file, sizeof(__pyx_k_yaml_file), 0, 0, 1, 1}, - {0, __pyx_k_yi, sizeof(__pyx_k_yi), 0, 0, 1, 1}, - {0, __pyx_k_yolo_yaml, sizeof(__pyx_k_yolo_yaml), 0, 1, 0, 0}, - {0, __pyx_k_yolov5s_yaml, sizeof(__pyx_k_yolov5s_yaml), 0, 1, 0, 0}, - {0, __pyx_k_yv, sizeof(__pyx_k_yv), 0, 0, 1, 1}, - {0, __pyx_k_z, sizeof(__pyx_k_z), 0, 0, 1, 1}, - {0, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1}, - {0, __pyx_k_zip, sizeof(__pyx_k_zip), 0, 0, 1, 1}, - #else - {&__pyx_kp_u_10, __pyx_k_10, sizeof(__pyx_k_10), 0, 1, 0, 0}, - {&__pyx_kp_u_10_0f, __pyx_k_10_0f, sizeof(__pyx_k_10_0f), 0, 1, 0, 0}, - {&__pyx_kp_u_10_2f, __pyx_k_10_2f, sizeof(__pyx_k_10_2f), 0, 1, 0, 0}, - {&__pyx_kp_u_10s, __pyx_k_10s, sizeof(__pyx_k_10s), 0, 1, 0, 0}, - {&__pyx_kp_u_18, __pyx_k_18, sizeof(__pyx_k_18), 0, 1, 0, 0}, - {&__pyx_kp_u_3, __pyx_k_3, sizeof(__pyx_k_3), 0, 1, 0, 0}, - {&__pyx_kp_u_30, __pyx_k_30, sizeof(__pyx_k_30), 0, 1, 0, 0}, - {&__pyx_kp_u_40, __pyx_k_40, sizeof(__pyx_k_40), 0, 1, 0, 0}, - {&__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3, __pyx_k_6g_Conv2d_bias_10_3g_10_3g_10_3, sizeof(__pyx_k_6g_Conv2d_bias_10_3g_10_3g_10_3), 0, 1, 0, 0}, - {&__pyx_n_s_ArgumentParser, __pyx_k_ArgumentParser, sizeof(__pyx_k_ArgumentParser), 0, 0, 1, 1}, - {&__pyx_n_s_BatchNorm2d, __pyx_k_BatchNorm2d, sizeof(__pyx_k_BatchNorm2d), 0, 0, 1, 1}, - {&__pyx_n_s_Bottleneck, __pyx_k_Bottleneck, sizeof(__pyx_k_Bottleneck), 0, 0, 1, 1}, - {&__pyx_n_s_BottleneckCSP, __pyx_k_BottleneckCSP, sizeof(__pyx_k_BottleneckCSP), 0, 0, 1, 1}, - {&__pyx_n_s_C3, __pyx_k_C3, sizeof(__pyx_k_C3), 0, 0, 1, 1}, - {&__pyx_n_s_C3Ghost, __pyx_k_C3Ghost, sizeof(__pyx_k_C3Ghost), 0, 0, 1, 1}, - {&__pyx_n_s_C3SPP, __pyx_k_C3SPP, sizeof(__pyx_k_C3SPP), 0, 0, 1, 1}, - {&__pyx_n_s_C3TR, __pyx_k_C3TR, sizeof(__pyx_k_C3TR), 0, 0, 1, 1}, - {&__pyx_n_s_Concat, __pyx_k_Concat, sizeof(__pyx_k_Concat), 0, 0, 1, 1}, - {&__pyx_n_s_Contract, __pyx_k_Contract, sizeof(__pyx_k_Contract), 0, 0, 1, 1}, - {&__pyx_n_s_Conv, __pyx_k_Conv, sizeof(__pyx_k_Conv), 0, 0, 1, 1}, - {&__pyx_n_s_Conv2d, __pyx_k_Conv2d, sizeof(__pyx_k_Conv2d), 0, 0, 1, 1}, - {&__pyx_n_s_CrossConv, __pyx_k_CrossConv, sizeof(__pyx_k_CrossConv), 0, 0, 1, 1}, - {&__pyx_n_s_DWConv, __pyx_k_DWConv, sizeof(__pyx_k_DWConv), 0, 0, 1, 1}, - {&__pyx_n_s_Detect, __pyx_k_Detect, sizeof(__pyx_k_Detect), 0, 0, 1, 1}, - {&__pyx_n_s_Detect___init, __pyx_k_Detect___init, sizeof(__pyx_k_Detect___init), 0, 0, 1, 1}, - {&__pyx_n_s_Detect___init___locals_genexpr, __pyx_k_Detect___init___locals_genexpr, sizeof(__pyx_k_Detect___init___locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_Detect__make_grid, __pyx_k_Detect__make_grid, sizeof(__pyx_k_Detect__make_grid), 0, 0, 1, 1}, - {&__pyx_n_s_Detect_forward, __pyx_k_Detect_forward, sizeof(__pyx_k_Detect_forward), 0, 0, 1, 1}, - {&__pyx_kp_u_Error_in, __pyx_k_Error_in, sizeof(__pyx_k_Error_in), 0, 1, 0, 0}, - {&__pyx_n_s_Expand, __pyx_k_Expand, sizeof(__pyx_k_Expand), 0, 0, 1, 1}, - {&__pyx_n_s_FILE, __pyx_k_FILE, sizeof(__pyx_k_FILE), 0, 0, 1, 1}, - {&__pyx_n_s_Focus, __pyx_k_Focus, sizeof(__pyx_k_Focus), 0, 0, 1, 1}, - {&__pyx_kp_u_Fusing_layers, __pyx_k_Fusing_layers, sizeof(__pyx_k_Fusing_layers), 0, 1, 0, 0}, - {&__pyx_n_u_GFLOPs, __pyx_k_GFLOPs, sizeof(__pyx_k_GFLOPs), 0, 1, 0, 1}, - {&__pyx_n_s_GhostBottleneck, __pyx_k_GhostBottleneck, sizeof(__pyx_k_GhostBottleneck), 0, 0, 1, 1}, - {&__pyx_n_s_GhostConv, __pyx_k_GhostConv, sizeof(__pyx_k_GhostConv), 0, 0, 1, 1}, - {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, - {&__pyx_n_s_LOGGER, __pyx_k_LOGGER, sizeof(__pyx_k_LOGGER), 0, 0, 1, 1}, - {&__pyx_n_s_MixConv2d, __pyx_k_MixConv2d, sizeof(__pyx_k_MixConv2d), 0, 0, 1, 1}, - {&__pyx_n_s_Model, __pyx_k_Model, sizeof(__pyx_k_Model), 0, 0, 1, 1}, - {&__pyx_n_s_Model___init, __pyx_k_Model___init, sizeof(__pyx_k_Model___init), 0, 0, 1, 1}, - {&__pyx_n_s_Model__apply, __pyx_k_Model__apply, sizeof(__pyx_k_Model__apply), 0, 0, 1, 1}, - {&__pyx_n_s_Model__clip_augmented, __pyx_k_Model__clip_augmented, sizeof(__pyx_k_Model__clip_augmented), 0, 0, 1, 1}, - {&__pyx_n_s_Model__clip_augmented_locals_gen, __pyx_k_Model__clip_augmented_locals_gen, sizeof(__pyx_k_Model__clip_augmented_locals_gen), 0, 0, 1, 1}, - {&__pyx_n_s_Model__descale_pred, __pyx_k_Model__descale_pred, sizeof(__pyx_k_Model__descale_pred), 0, 0, 1, 1}, - {&__pyx_n_s_Model__forward_augment, __pyx_k_Model__forward_augment, sizeof(__pyx_k_Model__forward_augment), 0, 0, 1, 1}, - {&__pyx_n_s_Model__forward_once, __pyx_k_Model__forward_once, sizeof(__pyx_k_Model__forward_once), 0, 0, 1, 1}, - {&__pyx_n_s_Model__initialize_biases, __pyx_k_Model__initialize_biases, sizeof(__pyx_k_Model__initialize_biases), 0, 0, 1, 1}, - {&__pyx_n_s_Model__print_biases, __pyx_k_Model__print_biases, sizeof(__pyx_k_Model__print_biases), 0, 0, 1, 1}, - {&__pyx_n_s_Model__profile_one_layer, __pyx_k_Model__profile_one_layer, sizeof(__pyx_k_Model__profile_one_layer), 0, 0, 1, 1}, - {&__pyx_n_s_Model_forward, __pyx_k_Model_forward, sizeof(__pyx_k_Model_forward), 0, 0, 1, 1}, - {&__pyx_n_s_Model_fuse, __pyx_k_Model_fuse, sizeof(__pyx_k_Model_fuse), 0, 0, 1, 1}, - {&__pyx_n_s_Model_info, __pyx_k_Model_info, sizeof(__pyx_k_Model_info), 0, 0, 1, 1}, - {&__pyx_n_s_Module, __pyx_k_Module, sizeof(__pyx_k_Module), 0, 0, 1, 1}, - {&__pyx_n_s_ModuleList, __pyx_k_ModuleList, sizeof(__pyx_k_ModuleList), 0, 0, 1, 1}, - {&__pyx_n_s_NameError, __pyx_k_NameError, sizeof(__pyx_k_NameError), 0, 0, 1, 1}, - {&__pyx_kp_u_Overriding_model_yaml_anchors_wi, __pyx_k_Overriding_model_yaml_anchors_wi, sizeof(__pyx_k_Overriding_model_yaml_anchors_wi), 0, 1, 0, 0}, - {&__pyx_kp_u_Overriding_model_yaml_nc, __pyx_k_Overriding_model_yaml_nc, sizeof(__pyx_k_Overriding_model_yaml_nc), 0, 1, 0, 0}, - {&__pyx_n_s_Parameter, __pyx_k_Parameter, sizeof(__pyx_k_Parameter), 0, 0, 1, 1}, - {&__pyx_n_s_Path, __pyx_k_Path, sizeof(__pyx_k_Path), 0, 0, 1, 1}, - {&__pyx_n_s_ROOT, __pyx_k_ROOT, sizeof(__pyx_k_ROOT), 0, 0, 1, 1}, - {&__pyx_n_s_SPP, __pyx_k_SPP, sizeof(__pyx_k_SPP), 0, 0, 1, 1}, - {&__pyx_n_s_SPPF, __pyx_k_SPPF, sizeof(__pyx_k_SPPF), 0, 0, 1, 1}, - {&__pyx_n_s_Sequential, __pyx_k_Sequential, sizeof(__pyx_k_Sequential), 0, 0, 1, 1}, - {&__pyx_n_s_T, __pyx_k_T, sizeof(__pyx_k_T), 0, 0, 1, 1}, - {&__pyx_kp_u_Total, __pyx_k_Total, sizeof(__pyx_k_Total), 0, 1, 0, 0}, - {&__pyx_kp_u__12, __pyx_k__12, sizeof(__pyx_k__12), 0, 1, 0, 0}, - {&__pyx_kp_u__23, __pyx_k__23, sizeof(__pyx_k__23), 0, 1, 0, 0}, - {&__pyx_kp_u__24, __pyx_k__24, sizeof(__pyx_k__24), 0, 1, 0, 0}, - {&__pyx_kp_u__25, __pyx_k__25, sizeof(__pyx_k__25), 0, 1, 0, 0}, - {&__pyx_kp_u__30, __pyx_k__30, sizeof(__pyx_k__30), 0, 1, 0, 0}, - {&__pyx_kp_u__32, __pyx_k__32, sizeof(__pyx_k__32), 0, 1, 0, 0}, - {&__pyx_n_s__36, __pyx_k__36, sizeof(__pyx_k__36), 0, 0, 1, 1}, - {&__pyx_kp_u__77, __pyx_k__77, sizeof(__pyx_k__77), 0, 1, 0, 0}, - {&__pyx_n_s__78, __pyx_k__78, sizeof(__pyx_k__78), 0, 0, 1, 1}, - {&__pyx_n_s__8, __pyx_k__8, sizeof(__pyx_k__8), 0, 0, 1, 1}, - {&__pyx_n_s_a, __pyx_k_a, sizeof(__pyx_k_a), 0, 0, 1, 1}, - {&__pyx_n_s_action, __pyx_k_action, sizeof(__pyx_k_action), 0, 0, 1, 1}, - {&__pyx_n_s_add_argument, __pyx_k_add_argument, sizeof(__pyx_k_add_argument), 0, 0, 1, 1}, - {&__pyx_n_s_anchor_grid, __pyx_k_anchor_grid, sizeof(__pyx_k_anchor_grid), 0, 0, 1, 1}, - {&__pyx_n_s_anchors, __pyx_k_anchors, sizeof(__pyx_k_anchors), 0, 0, 1, 1}, - {&__pyx_n_u_anchors, __pyx_k_anchors, sizeof(__pyx_k_anchors), 0, 1, 0, 1}, - {&__pyx_n_s_append, __pyx_k_append, sizeof(__pyx_k_append), 0, 0, 1, 1}, - {&__pyx_n_s_apply, __pyx_k_apply, sizeof(__pyx_k_apply), 0, 0, 1, 1}, - {&__pyx_n_s_arange, __pyx_k_arange, sizeof(__pyx_k_arange), 0, 0, 1, 1}, - {&__pyx_n_s_argparse, __pyx_k_argparse, sizeof(__pyx_k_argparse), 0, 0, 1, 1}, - {&__pyx_n_s_args, __pyx_k_args, sizeof(__pyx_k_args), 0, 0, 1, 1}, - {&__pyx_n_u_arguments, __pyx_k_arguments, sizeof(__pyx_k_arguments), 0, 1, 0, 1}, - {&__pyx_n_u_ascii, __pyx_k_ascii, sizeof(__pyx_k_ascii), 0, 1, 0, 1}, - {&__pyx_n_s_asyncio_coroutines, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {&__pyx_n_s_augment, __pyx_k_augment, sizeof(__pyx_k_augment), 0, 0, 1, 1}, - {&__pyx_n_s_b, __pyx_k_b, sizeof(__pyx_k_b), 0, 0, 1, 1}, - {&__pyx_n_u_backbone, __pyx_k_backbone, sizeof(__pyx_k_backbone), 0, 1, 0, 1}, - {&__pyx_n_s_bias, __pyx_k_bias, sizeof(__pyx_k_bias), 0, 0, 1, 1}, - {&__pyx_n_s_bn, __pyx_k_bn, sizeof(__pyx_k_bn), 0, 0, 1, 1}, - {&__pyx_n_u_bn, __pyx_k_bn, sizeof(__pyx_k_bn), 0, 1, 0, 1}, - {&__pyx_n_s_bs, __pyx_k_bs, sizeof(__pyx_k_bs), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_s_c1, __pyx_k_c1, sizeof(__pyx_k_c1), 0, 0, 1, 1}, - {&__pyx_n_s_c2, __pyx_k_c2, sizeof(__pyx_k_c2), 0, 0, 1, 1}, - {&__pyx_n_s_cat, __pyx_k_cat, sizeof(__pyx_k_cat), 0, 0, 1, 1}, - {&__pyx_n_s_cf, __pyx_k_cf, sizeof(__pyx_k_cf), 0, 0, 1, 1}, - {&__pyx_n_s_cfg, __pyx_k_cfg, sizeof(__pyx_k_cfg), 0, 0, 1, 1}, - {&__pyx_kp_u_cfg_2, __pyx_k_cfg_2, sizeof(__pyx_k_cfg_2), 0, 1, 0, 0}, - {&__pyx_n_s_ch, __pyx_k_ch, sizeof(__pyx_k_ch), 0, 0, 1, 1}, - {&__pyx_n_u_ch, __pyx_k_ch, sizeof(__pyx_k_ch), 0, 1, 0, 1}, - {&__pyx_n_s_check_anchor_order, __pyx_k_check_anchor_order, sizeof(__pyx_k_check_anchor_order), 0, 0, 1, 1}, - {&__pyx_n_s_class_getitem, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_clip_augmented, __pyx_k_clip_augmented, sizeof(__pyx_k_clip_augmented), 0, 0, 1, 1}, - {&__pyx_n_s_clone, __pyx_k_clone, sizeof(__pyx_k_clone), 0, 0, 1, 1}, - {&__pyx_n_s_close, __pyx_k_close, sizeof(__pyx_k_close), 0, 0, 1, 1}, - {&__pyx_n_s_contiguous, __pyx_k_contiguous, sizeof(__pyx_k_contiguous), 0, 0, 1, 1}, - {&__pyx_n_s_conv, __pyx_k_conv, sizeof(__pyx_k_conv), 0, 0, 1, 1}, - {&__pyx_n_s_copy, __pyx_k_copy, sizeof(__pyx_k_copy), 0, 0, 1, 1}, - {&__pyx_n_s_cuda, __pyx_k_cuda, sizeof(__pyx_k_cuda), 0, 0, 1, 1}, - {&__pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or, __pyx_k_cuda_device_i_e_0_or_0_1_2_3_or, sizeof(__pyx_k_cuda_device_i_e_0_or_0_1_2_3_or), 0, 1, 0, 0}, - {&__pyx_n_s_d, __pyx_k_d, sizeof(__pyx_k_d), 0, 0, 1, 1}, - {&__pyx_n_s_data, __pyx_k_data, sizeof(__pyx_k_data), 0, 0, 1, 1}, - {&__pyx_n_s_deepcopy, __pyx_k_deepcopy, sizeof(__pyx_k_deepcopy), 0, 0, 1, 1}, - {&__pyx_n_s_default, __pyx_k_default, sizeof(__pyx_k_default), 0, 0, 1, 1}, - {&__pyx_n_u_depth_multiple, __pyx_k_depth_multiple, sizeof(__pyx_k_depth_multiple), 0, 1, 0, 1}, - {&__pyx_n_s_descale_pred, __pyx_k_descale_pred, sizeof(__pyx_k_descale_pred), 0, 0, 1, 1}, - {&__pyx_n_s_detach, __pyx_k_detach, sizeof(__pyx_k_detach), 0, 0, 1, 1}, - {&__pyx_n_s_device, __pyx_k_device, sizeof(__pyx_k_device), 0, 0, 1, 1}, - {&__pyx_kp_u_device_2, __pyx_k_device_2, sizeof(__pyx_k_device_2), 0, 1, 0, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_kp_u_disable, __pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0, 0}, - {&__pyx_n_s_doc, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1}, - {&__pyx_n_s_dt, __pyx_k_dt, sizeof(__pyx_k_dt), 0, 0, 1, 1}, - {&__pyx_n_s_e, __pyx_k_e, sizeof(__pyx_k_e), 0, 0, 1, 1}, - {&__pyx_kp_u_enable, __pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0, 0}, - {&__pyx_n_s_encoding, __pyx_k_encoding, sizeof(__pyx_k_encoding), 0, 0, 1, 1}, - {&__pyx_n_s_enter, __pyx_k_enter, sizeof(__pyx_k_enter), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_errors, __pyx_k_errors, sizeof(__pyx_k_errors), 0, 0, 1, 1}, - {&__pyx_n_s_eval, __pyx_k_eval, sizeof(__pyx_k_eval), 0, 0, 1, 1}, - {&__pyx_n_s_exit, __pyx_k_exit, sizeof(__pyx_k_exit), 0, 0, 1, 1}, - {&__pyx_n_s_expand, __pyx_k_expand, sizeof(__pyx_k_expand), 0, 0, 1, 1}, - {&__pyx_n_s_f, __pyx_k_f, sizeof(__pyx_k_f), 0, 0, 1, 1}, - {&__pyx_n_s_fi, __pyx_k_fi, sizeof(__pyx_k_fi), 0, 0, 1, 1}, - {&__pyx_n_s_file, __pyx_k_file, sizeof(__pyx_k_file), 0, 0, 1, 1}, - {&__pyx_n_s_flip, __pyx_k_flip, sizeof(__pyx_k_flip), 0, 0, 1, 1}, - {&__pyx_n_s_flips, __pyx_k_flips, sizeof(__pyx_k_flips), 0, 0, 1, 1}, - {&__pyx_n_s_float, __pyx_k_float, sizeof(__pyx_k_float), 0, 0, 1, 1}, - {&__pyx_n_s_fn, __pyx_k_fn, sizeof(__pyx_k_fn), 0, 0, 1, 1}, - {&__pyx_n_s_forward, __pyx_k_forward, sizeof(__pyx_k_forward), 0, 0, 1, 1}, - {&__pyx_n_s_forward_augment, __pyx_k_forward_augment, sizeof(__pyx_k_forward_augment), 0, 0, 1, 1}, - {&__pyx_n_s_forward_fuse, __pyx_k_forward_fuse, sizeof(__pyx_k_forward_fuse), 0, 0, 1, 1}, - {&__pyx_n_s_forward_once, __pyx_k_forward_once, sizeof(__pyx_k_forward_once), 0, 0, 1, 1}, - {&__pyx_n_u_from, __pyx_k_from, sizeof(__pyx_k_from), 0, 1, 0, 1}, - {&__pyx_n_s_fuse, __pyx_k_fuse, sizeof(__pyx_k_fuse), 0, 0, 1, 1}, - {&__pyx_n_s_fuse_conv_and_bn, __pyx_k_fuse_conv_and_bn, sizeof(__pyx_k_fuse_conv_and_bn), 0, 0, 1, 1}, - {&__pyx_n_s_g, __pyx_k_g, sizeof(__pyx_k_g), 0, 0, 1, 1}, - {&__pyx_kp_u_gc, __pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0, 0}, - {&__pyx_n_s_gd, __pyx_k_gd, sizeof(__pyx_k_gd), 0, 0, 1, 1}, - {&__pyx_n_s_genexpr, __pyx_k_genexpr, sizeof(__pyx_k_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_get, __pyx_k_get, sizeof(__pyx_k_get), 0, 0, 1, 1}, - {&__pyx_n_s_grid, __pyx_k_grid, sizeof(__pyx_k_grid), 0, 0, 1, 1}, - {&__pyx_n_s_gs, __pyx_k_gs, sizeof(__pyx_k_gs), 0, 0, 1, 1}, - {&__pyx_n_s_gw, __pyx_k_gw, sizeof(__pyx_k_gw), 0, 0, 1, 1}, - {&__pyx_n_u_head, __pyx_k_head, sizeof(__pyx_k_head), 0, 1, 0, 1}, - {&__pyx_n_s_help, __pyx_k_help, sizeof(__pyx_k_help), 0, 0, 1, 1}, - {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1}, - {&__pyx_n_u_ignore, __pyx_k_ignore, sizeof(__pyx_k_ignore), 0, 1, 0, 1}, - {&__pyx_n_u_ij, __pyx_k_ij, sizeof(__pyx_k_ij), 0, 1, 0, 1}, - {&__pyx_n_s_img, __pyx_k_img, sizeof(__pyx_k_img), 0, 0, 1, 1}, - {&__pyx_n_s_img_size, __pyx_k_img_size, sizeof(__pyx_k_img_size), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_indexing, __pyx_k_indexing, sizeof(__pyx_k_indexing), 0, 0, 1, 1}, - {&__pyx_n_s_info, __pyx_k_info, sizeof(__pyx_k_info), 0, 0, 1, 1}, - {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1}, - {&__pyx_n_s_init_subclass, __pyx_k_init_subclass, sizeof(__pyx_k_init_subclass), 0, 0, 1, 1}, - {&__pyx_n_s_initialize_biases, __pyx_k_initialize_biases, sizeof(__pyx_k_initialize_biases), 0, 0, 1, 1}, - {&__pyx_n_s_initialize_weights, __pyx_k_initialize_weights, sizeof(__pyx_k_initialize_weights), 0, 0, 1, 1}, - {&__pyx_n_s_initializing, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {&__pyx_n_s_inplace, __pyx_k_inplace, sizeof(__pyx_k_inplace), 0, 0, 1, 1}, - {&__pyx_n_u_inplace, __pyx_k_inplace, sizeof(__pyx_k_inplace), 0, 1, 0, 1}, - {&__pyx_n_s_inputs, __pyx_k_inputs, sizeof(__pyx_k_inputs), 0, 0, 1, 1}, - {&__pyx_n_s_insert, __pyx_k_insert, sizeof(__pyx_k_insert), 0, 0, 1, 1}, - {&__pyx_n_s_is_available, __pyx_k_is_available, sizeof(__pyx_k_is_available), 0, 0, 1, 1}, - {&__pyx_n_s_is_coroutine, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {&__pyx_kp_u_isenabled, __pyx_k_isenabled, sizeof(__pyx_k_isenabled), 0, 1, 0, 0}, - {&__pyx_n_s_j, __pyx_k_j, sizeof(__pyx_k_j), 0, 0, 1, 1}, - {&__pyx_n_s_layers, __pyx_k_layers, sizeof(__pyx_k_layers), 0, 0, 1, 1}, - {&__pyx_n_s_log, __pyx_k_log, sizeof(__pyx_k_log), 0, 0, 1, 1}, - {&__pyx_n_s_m, __pyx_k_m, sizeof(__pyx_k_m), 0, 0, 1, 1}, - {&__pyx_n_s_m_2, __pyx_k_m_2, sizeof(__pyx_k_m_2), 0, 0, 1, 1}, - {&__pyx_kp_u_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 1, 0, 0}, - {&__pyx_n_s_main_2, __pyx_k_main_2, sizeof(__pyx_k_main_2), 0, 0, 1, 1}, - {&__pyx_n_u_main_2, __pyx_k_main_2, sizeof(__pyx_k_main_2), 0, 1, 0, 1}, - {&__pyx_n_s_make_divisible, __pyx_k_make_divisible, sizeof(__pyx_k_make_divisible), 0, 0, 1, 1}, - {&__pyx_n_s_make_grid, __pyx_k_make_grid, sizeof(__pyx_k_make_grid), 0, 0, 1, 1}, - {&__pyx_n_s_map, __pyx_k_map, sizeof(__pyx_k_map), 0, 0, 1, 1}, - {&__pyx_n_s_math, __pyx_k_math, sizeof(__pyx_k_math), 0, 0, 1, 1}, - {&__pyx_n_s_max, __pyx_k_max, sizeof(__pyx_k_max), 0, 0, 1, 1}, - {&__pyx_n_s_mean, __pyx_k_mean, sizeof(__pyx_k_mean), 0, 0, 1, 1}, - {&__pyx_n_s_meshgrid, __pyx_k_meshgrid, sizeof(__pyx_k_meshgrid), 0, 0, 1, 1}, - {&__pyx_n_s_metaclass, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1}, - {&__pyx_n_s_mi, __pyx_k_mi, sizeof(__pyx_k_mi), 0, 0, 1, 1}, - {&__pyx_n_s_model, __pyx_k_model, sizeof(__pyx_k_model), 0, 0, 1, 1}, - {&__pyx_n_s_model_info, __pyx_k_model_info, sizeof(__pyx_k_model_info), 0, 0, 1, 1}, - {&__pyx_kp_u_model_yaml, __pyx_k_model_yaml, sizeof(__pyx_k_model_yaml), 0, 1, 0, 0}, - {&__pyx_n_u_models, __pyx_k_models, sizeof(__pyx_k_models), 0, 1, 0, 1}, - {&__pyx_kp_u_module, __pyx_k_module, sizeof(__pyx_k_module), 0, 1, 0, 0}, - {&__pyx_n_u_module_2, __pyx_k_module_2, sizeof(__pyx_k_module_2), 0, 1, 0, 1}, - {&__pyx_n_s_module_3, __pyx_k_module_3, sizeof(__pyx_k_module_3), 0, 0, 1, 1}, - {&__pyx_n_s_modules, __pyx_k_modules, sizeof(__pyx_k_modules), 0, 0, 1, 1}, - {&__pyx_n_s_mro_entries, __pyx_k_mro_entries, sizeof(__pyx_k_mro_entries), 0, 0, 1, 1}, - {&__pyx_n_s_n, __pyx_k_n, sizeof(__pyx_k_n), 0, 0, 1, 1}, - {&__pyx_n_u_n, __pyx_k_n, sizeof(__pyx_k_n), 0, 1, 0, 1}, - {&__pyx_n_s_n_2, __pyx_k_n_2, sizeof(__pyx_k_n_2), 0, 0, 1, 1}, - {&__pyx_n_s_na, __pyx_k_na, sizeof(__pyx_k_na), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_names, __pyx_k_names, sizeof(__pyx_k_names), 0, 0, 1, 1}, - {&__pyx_n_s_nc, __pyx_k_nc, sizeof(__pyx_k_nc), 0, 0, 1, 1}, - {&__pyx_n_u_nc, __pyx_k_nc, sizeof(__pyx_k_nc), 0, 1, 0, 1}, - {&__pyx_n_s_nl, __pyx_k_nl, sizeof(__pyx_k_nl), 0, 0, 1, 1}, - {&__pyx_n_s_nn, __pyx_k_nn, sizeof(__pyx_k_nn), 0, 0, 1, 1}, - {&__pyx_n_s_no, __pyx_k_no, sizeof(__pyx_k_no), 0, 0, 1, 1}, - {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, - {&__pyx_n_s_numel, __pyx_k_numel, sizeof(__pyx_k_numel), 0, 0, 1, 1}, - {&__pyx_n_s_nx, __pyx_k_nx, sizeof(__pyx_k_nx), 0, 0, 1, 1}, - {&__pyx_n_s_ny, __pyx_k_ny, sizeof(__pyx_k_ny), 0, 0, 1, 1}, - {&__pyx_n_s_o, __pyx_k_o, sizeof(__pyx_k_o), 0, 0, 1, 1}, - {&__pyx_n_s_onnx_dynamic, __pyx_k_onnx_dynamic, sizeof(__pyx_k_onnx_dynamic), 0, 0, 1, 1}, - {&__pyx_n_s_open, __pyx_k_open, sizeof(__pyx_k_open), 0, 0, 1, 1}, - {&__pyx_n_s_opt, __pyx_k_opt, sizeof(__pyx_k_opt), 0, 0, 1, 1}, - {&__pyx_n_s_p, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1}, - {&__pyx_n_s_parameters, __pyx_k_parameters, sizeof(__pyx_k_parameters), 0, 0, 1, 1}, - {&__pyx_n_u_params, __pyx_k_params, sizeof(__pyx_k_params), 0, 1, 0, 1}, - {&__pyx_n_s_parents, __pyx_k_parents, sizeof(__pyx_k_parents), 0, 0, 1, 1}, - {&__pyx_n_s_parse_args, __pyx_k_parse_args, sizeof(__pyx_k_parse_args), 0, 0, 1, 1}, - {&__pyx_n_s_parse_model, __pyx_k_parse_model, sizeof(__pyx_k_parse_model), 0, 0, 1, 1}, - {&__pyx_n_s_parse_model_locals_genexpr, __pyx_k_parse_model_locals_genexpr, sizeof(__pyx_k_parse_model_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_parser, __pyx_k_parser, sizeof(__pyx_k_parser), 0, 0, 1, 1}, - {&__pyx_n_s_path, __pyx_k_path, sizeof(__pyx_k_path), 0, 0, 1, 1}, - {&__pyx_n_s_pathlib, __pyx_k_pathlib, sizeof(__pyx_k_pathlib), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_k_pdf_toolbox_lib_dia_yolov5_model, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2, __pyx_k_pdf_toolbox_lib_dia_yolov5_model_2, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model_2), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3, __pyx_k_pdf_toolbox_lib_dia_yolov5_model_3, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model_3), 0, 0, 1, 1}, - {&__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_k_pdf_toolbox_lib_dia_yolov5_model_4, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_model_4), 0, 0, 1, 0}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3), 0, 0, 1, 1}, - {&__pyx_n_s_permute, __pyx_k_permute, sizeof(__pyx_k_permute), 0, 0, 1, 1}, - {&__pyx_n_s_prepare, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1}, - {&__pyx_n_s_print, __pyx_k_print, sizeof(__pyx_k_print), 0, 0, 1, 1}, - {&__pyx_n_s_print_args, __pyx_k_print_args, sizeof(__pyx_k_print_args), 0, 0, 1, 1}, - {&__pyx_n_s_print_biases, __pyx_k_print_biases, sizeof(__pyx_k_print_biases), 0, 0, 1, 1}, - {&__pyx_n_s_profile, __pyx_k_profile, sizeof(__pyx_k_profile), 0, 0, 1, 1}, - {&__pyx_kp_u_profile_2, __pyx_k_profile_2, sizeof(__pyx_k_profile_2), 0, 1, 0, 0}, - {&__pyx_kp_u_profile_model_speed, __pyx_k_profile_model_speed, sizeof(__pyx_k_profile_model_speed), 0, 1, 0, 0}, - {&__pyx_n_s_profile_one_layer, __pyx_k_profile_one_layer, sizeof(__pyx_k_profile_one_layer), 0, 0, 1, 1}, - {&__pyx_n_s_qualname, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1}, - {&__pyx_n_s_rand, __pyx_k_rand, sizeof(__pyx_k_rand), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_register_buffer, __pyx_k_register_buffer, sizeof(__pyx_k_register_buffer), 0, 0, 1, 1}, - {&__pyx_n_s_requires_grad, __pyx_k_requires_grad, sizeof(__pyx_k_requires_grad), 0, 0, 1, 1}, - {&__pyx_n_s_resolve, __pyx_k_resolve, sizeof(__pyx_k_resolve), 0, 0, 1, 1}, - {&__pyx_n_s_rglob, __pyx_k_rglob, sizeof(__pyx_k_rglob), 0, 0, 1, 1}, - {&__pyx_n_s_round, __pyx_k_round, sizeof(__pyx_k_round), 0, 0, 1, 1}, - {&__pyx_n_s_s, __pyx_k_s, sizeof(__pyx_k_s), 0, 0, 1, 1}, - {&__pyx_n_s_safe_load, __pyx_k_safe_load, sizeof(__pyx_k_safe_load), 0, 0, 1, 1}, - {&__pyx_n_s_save, __pyx_k_save, sizeof(__pyx_k_save), 0, 0, 1, 1}, - {&__pyx_n_s_scale, __pyx_k_scale, sizeof(__pyx_k_scale), 0, 0, 1, 1}, - {&__pyx_n_s_scale_img, __pyx_k_scale_img, sizeof(__pyx_k_scale_img), 0, 0, 1, 1}, - {&__pyx_n_s_select_device, __pyx_k_select_device, sizeof(__pyx_k_select_device), 0, 0, 1, 1}, - {&__pyx_n_s_self, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1}, - {&__pyx_n_s_send, __pyx_k_send, sizeof(__pyx_k_send), 0, 0, 1, 1}, - {&__pyx_n_s_set_name, __pyx_k_set_name, sizeof(__pyx_k_set_name), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_si, __pyx_k_si, sizeof(__pyx_k_si), 0, 0, 1, 1}, - {&__pyx_n_s_sigmoid, __pyx_k_sigmoid, sizeof(__pyx_k_sigmoid), 0, 0, 1, 1}, - {&__pyx_n_s_spec, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {&__pyx_n_s_stack, __pyx_k_stack, sizeof(__pyx_k_stack), 0, 0, 1, 1}, - {&__pyx_n_s_stem, __pyx_k_stem, sizeof(__pyx_k_stem), 0, 0, 1, 1}, - {&__pyx_n_u_store_true, __pyx_k_store_true, sizeof(__pyx_k_store_true), 0, 1, 0, 1}, - {&__pyx_n_s_stride, __pyx_k_stride, sizeof(__pyx_k_stride), 0, 0, 1, 1}, - {&__pyx_n_s_sum, __pyx_k_sum, sizeof(__pyx_k_sum), 0, 0, 1, 1}, - {&__pyx_n_s_super, __pyx_k_super, sizeof(__pyx_k_super), 0, 0, 1, 1}, - {&__pyx_n_s_sys, __pyx_k_sys, sizeof(__pyx_k_sys), 0, 0, 1, 1}, - {&__pyx_n_s_t, __pyx_k_t, sizeof(__pyx_k_t), 0, 0, 1, 1}, - {&__pyx_n_s_tensor, __pyx_k_tensor, sizeof(__pyx_k_tensor), 0, 0, 1, 1}, - {&__pyx_kp_u_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 1, 0, 0}, - {&__pyx_n_s_test_2, __pyx_k_test_2, sizeof(__pyx_k_test_2), 0, 0, 1, 1}, - {&__pyx_n_s_test_3, __pyx_k_test_3, sizeof(__pyx_k_test_3), 0, 0, 1, 1}, - {&__pyx_kp_u_test_all_yolo_yaml, __pyx_k_test_all_yolo_yaml, sizeof(__pyx_k_test_all_yolo_yaml), 0, 1, 0, 0}, - {&__pyx_n_s_thop, __pyx_k_thop, sizeof(__pyx_k_thop), 0, 0, 1, 1}, - {&__pyx_n_s_throw, __pyx_k_throw, sizeof(__pyx_k_throw), 0, 0, 1, 1}, - {&__pyx_kp_u_time_ms, __pyx_k_time_ms, sizeof(__pyx_k_time_ms), 0, 1, 0, 0}, - {&__pyx_n_s_time_sync, __pyx_k_time_sync, sizeof(__pyx_k_time_sync), 0, 0, 1, 1}, - {&__pyx_n_s_to, __pyx_k_to, sizeof(__pyx_k_to), 0, 0, 1, 1}, - {&__pyx_n_s_tolist, __pyx_k_tolist, sizeof(__pyx_k_tolist), 0, 0, 1, 1}, - {&__pyx_n_s_torch, __pyx_k_torch, sizeof(__pyx_k_torch), 0, 0, 1, 1}, - {&__pyx_n_s_train, __pyx_k_train, sizeof(__pyx_k_train), 0, 0, 1, 1}, - {&__pyx_n_s_training, __pyx_k_training, sizeof(__pyx_k_training), 0, 0, 1, 1}, - {&__pyx_n_s_type, __pyx_k_type, sizeof(__pyx_k_type), 0, 0, 1, 1}, - {&__pyx_n_s_verbose, __pyx_k_verbose, sizeof(__pyx_k_verbose), 0, 0, 1, 1}, - {&__pyx_n_s_view, __pyx_k_view, sizeof(__pyx_k_view), 0, 0, 1, 1}, - {&__pyx_n_s_visualize, __pyx_k_visualize, sizeof(__pyx_k_visualize), 0, 0, 1, 1}, - {&__pyx_n_s_weight, __pyx_k_weight, sizeof(__pyx_k_weight), 0, 0, 1, 1}, - {&__pyx_n_s_wh, __pyx_k_wh, sizeof(__pyx_k_wh), 0, 0, 1, 1}, - {&__pyx_n_u_width_multiple, __pyx_k_width_multiple, sizeof(__pyx_k_width_multiple), 0, 1, 0, 1}, - {&__pyx_kp_u_with_nc, __pyx_k_with_nc, sizeof(__pyx_k_with_nc), 0, 1, 0, 0}, - {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {&__pyx_n_s_xi, __pyx_k_xi, sizeof(__pyx_k_xi), 0, 0, 1, 1}, - {&__pyx_n_s_xv, __pyx_k_xv, sizeof(__pyx_k_xv), 0, 0, 1, 1}, - {&__pyx_n_s_xy, __pyx_k_xy, sizeof(__pyx_k_xy), 0, 0, 1, 1}, - {&__pyx_n_s_y, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1}, - {&__pyx_n_s_yaml, __pyx_k_yaml, sizeof(__pyx_k_yaml), 0, 0, 1, 1}, - {&__pyx_n_s_yaml_file, __pyx_k_yaml_file, sizeof(__pyx_k_yaml_file), 0, 0, 1, 1}, - {&__pyx_n_s_yi, __pyx_k_yi, sizeof(__pyx_k_yi), 0, 0, 1, 1}, - {&__pyx_kp_u_yolo_yaml, __pyx_k_yolo_yaml, sizeof(__pyx_k_yolo_yaml), 0, 1, 0, 0}, - {&__pyx_kp_u_yolov5s_yaml, __pyx_k_yolov5s_yaml, sizeof(__pyx_k_yolov5s_yaml), 0, 1, 0, 0}, - {&__pyx_n_s_yv, __pyx_k_yv, sizeof(__pyx_k_yv), 0, 0, 1, 1}, - {&__pyx_n_s_z, __pyx_k_z, sizeof(__pyx_k_z), 0, 0, 1, 1}, - {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1}, - {&__pyx_n_s_zip, __pyx_k_zip, sizeof(__pyx_k_zip), 0, 0, 1, 1}, - #endif - {0, 0, 0, 0, 0, 0, 0} -}; -/* #### Code section: cached_builtins ### */ -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 28, __pyx_L1_error) - __pyx_builtin_print = __Pyx_GetBuiltinName(__pyx_n_s_print); if (!__pyx_builtin_print) __PYX_ERR(0, 318, __pyx_L1_error) - __pyx_builtin_super = __Pyx_GetBuiltinName(__pyx_n_s_super); if (!__pyx_builtin_super) __PYX_ERR(0, 37, __pyx_L1_error) - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 50, __pyx_L1_error) - __pyx_builtin_open = __Pyx_GetBuiltinName(__pyx_n_s_open); if (!__pyx_builtin_open) __PYX_ERR(0, 89, __pyx_L1_error) - __pyx_builtin_round = __Pyx_GetBuiltinName(__pyx_n_s_round); if (!__pyx_builtin_round) __PYX_ERR(0, 99, __pyx_L1_error) - __pyx_builtin_zip = __Pyx_GetBuiltinName(__pyx_n_s_zip); if (!__pyx_builtin_zip) __PYX_ERR(0, 130, __pyx_L1_error) - __pyx_builtin_sum = __Pyx_GetBuiltinName(__pyx_n_s_sum); if (!__pyx_builtin_sum) __PYX_ERR(0, 170, __pyx_L1_error) - __pyx_builtin_map = __Pyx_GetBuiltinName(__pyx_n_s_map); if (!__pyx_builtin_map) __PYX_ERR(0, 232, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(0, 245, __pyx_L1_error) - __pyx_builtin_eval = __Pyx_GetBuiltinName(__pyx_n_s_eval); if (!__pyx_builtin_eval) __PYX_ERR(0, 246, __pyx_L1_error) - __pyx_builtin_NameError = __Pyx_GetBuiltinName(__pyx_n_s_NameError); if (!__pyx_builtin_NameError) __PYX_ERR(0, 250, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: cached_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":53 - * x[i] = self.m[i](x[i]) # conv - * bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - * x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() # <<<<<<<<<<<<<< - * - * if not self.training: # inference - */ - __pyx_tuple_ = PyTuple_Pack(5, __pyx_int_0, __pyx_int_1, __pyx_int_3, __pyx_int_4, __pyx_int_2); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 53, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple_); - __Pyx_GIVEREF(__pyx_tuple_); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":56 - * - * if not self.training: # inference - * if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: # <<<<<<<<<<<<<< - * self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - * - */ - __pyx_slice__2 = PySlice_New(__pyx_int_2, __pyx_int_4, Py_None); if (unlikely(!__pyx_slice__2)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__2); - __Pyx_GIVEREF(__pyx_slice__2); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":61 - * y = x[i].sigmoid() - * if self.inplace: - * y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy # <<<<<<<<<<<<<< - * y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - * else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - */ - __pyx_slice__3 = PySlice_New(__pyx_int_0, __pyx_int_2, Py_None); if (unlikely(!__pyx_slice__3)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__3); - __Pyx_GIVEREF(__pyx_slice__3); - __pyx_tuple__4 = PyTuple_Pack(2, Py_Ellipsis, __pyx_slice__3); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 61, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":62 - * if self.inplace: - * y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - * y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh # <<<<<<<<<<<<<< - * else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - * xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - */ - __pyx_tuple__5 = PyTuple_Pack(2, Py_Ellipsis, __pyx_slice__2); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(0, 62, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":66 - * xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i] # xy - * wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - * y = torch.cat((xy, wh, y[..., 4:]), -1) # <<<<<<<<<<<<<< - * z.append(y.view(bs, -1, self.no)) - * - */ - __pyx_slice__6 = PySlice_New(__pyx_int_4, Py_None, Py_None); if (unlikely(!__pyx_slice__6)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__6); - __Pyx_GIVEREF(__pyx_slice__6); - __pyx_tuple__7 = PyTuple_Pack(2, Py_Ellipsis, __pyx_slice__6); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(0, 66, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":89 - * import yaml # for torch hub - * self.yaml_file = Path(cfg).name - * with open(cfg, encoding='ascii', errors='ignore') as f: # <<<<<<<<<<<<<< - * self.yaml = yaml.safe_load(f) # model dict - * - */ - __pyx_tuple__9 = PyTuple_Pack(3, Py_None, Py_None, Py_None); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(0, 89, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":102 - * self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - * self.names = [str(i) for i in range(self.yaml['nc'])] # default names - * self.inplace = self.yaml.get('inplace', True) # <<<<<<<<<<<<<< - * - * # Build strides, anchors - */ - __pyx_tuple__10 = PyTuple_Pack(2, __pyx_n_u_inplace, Py_True); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 102, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":110 - * m.inplace = self.inplace - * m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - * m.anchors /= m.stride.view(-1, 1, 1) # <<<<<<<<<<<<<< - * check_anchor_order(m) - * self.stride = m.stride - */ - __pyx_tuple__11 = PyTuple_Pack(3, __pyx_int_neg_1, __pyx_int_1, __pyx_int_1); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":126 - * - * def _forward_augment(self, x): - * img_size = x.shape[-2:] # height, width # <<<<<<<<<<<<<< - * s = [1, 0.83, 0.67] # scales - * f = [None, 3, None] # flips (2-ud, 3-lr) - */ - __pyx_slice__13 = PySlice_New(__pyx_int_neg_2, Py_None, Py_None); if (unlikely(!__pyx_slice__13)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__13); - __Pyx_GIVEREF(__pyx_slice__13); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":153 - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: - * p[..., :4] /= scale # de-scale # <<<<<<<<<<<<<< - * if flips == 2: - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - */ - __pyx_slice__14 = PySlice_New(Py_None, __pyx_int_4, Py_None); if (unlikely(!__pyx_slice__14)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__14); - __Pyx_GIVEREF(__pyx_slice__14); - __pyx_tuple__15 = PyTuple_Pack(2, Py_Ellipsis, __pyx_slice__14); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(0, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":155 - * p[..., :4] /= scale # de-scale - * if flips == 2: - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud # <<<<<<<<<<<<<< - * elif flips == 3: - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - */ - __pyx_tuple__16 = PyTuple_Pack(2, Py_Ellipsis, __pyx_int_1); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__16); - __Pyx_GIVEREF(__pyx_tuple__16); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":157 - * p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - * elif flips == 3: - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr # <<<<<<<<<<<<<< - * else: - * x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale - */ - __pyx_tuple__17 = PyTuple_Pack(2, Py_Ellipsis, __pyx_int_0); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(0, 157, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":159 - * p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - * else: - * x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale # <<<<<<<<<<<<<< - * if flips == 2: - * y = img_size[0] - y # de-flip ud - */ - __pyx_slice__18 = PySlice_New(__pyx_int_0, __pyx_int_1, Py_None); if (unlikely(!__pyx_slice__18)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__18); - __Pyx_GIVEREF(__pyx_slice__18); - __pyx_tuple__19 = PyTuple_Pack(2, Py_Ellipsis, __pyx_slice__18); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - __pyx_slice__20 = PySlice_New(__pyx_int_1, __pyx_int_2, Py_None); if (unlikely(!__pyx_slice__20)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__20); - __Pyx_GIVEREF(__pyx_slice__20); - __pyx_tuple__21 = PyTuple_Pack(2, Py_Ellipsis, __pyx_slice__20); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":173 - * e = 1 # exclude layer count - * i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - * y[0] = y[0][:, :-i] # large # <<<<<<<<<<<<<< - * i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - * y[-1] = y[-1][:, i:] # small - */ - __pyx_slice__22 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__22)) __PYX_ERR(0, 173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__22); - __Pyx_GIVEREF(__pyx_slice__22); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":197 - * for mi, s in zip(m.m, m.stride): # from - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) # <<<<<<<<<<<<<< - * b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - */ - __pyx_tuple__26 = PyTuple_Pack(2, __pyx_slice__22, __pyx_int_4); if (unlikely(!__pyx_tuple__26)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__26); - __Pyx_GIVEREF(__pyx_tuple__26); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":198 - * b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - * b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - * b.data[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls # <<<<<<<<<<<<<< - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - * - */ - __pyx_slice__27 = PySlice_New(__pyx_int_5, Py_None, Py_None); if (unlikely(!__pyx_slice__27)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__27); - __Pyx_GIVEREF(__pyx_slice__27); - __pyx_tuple__28 = PyTuple_Pack(2, __pyx_slice__22, __pyx_slice__27); if (unlikely(!__pyx_tuple__28)) __PYX_ERR(0, 198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__28); - __Pyx_GIVEREF(__pyx_tuple__28); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":206 - * b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - * LOGGER.info( - * ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) # <<<<<<<<<<<<<< - * - * # def _print_weights(self): - */ - __pyx_slice__29 = PySlice_New(Py_None, __pyx_int_5, Py_None); if (unlikely(!__pyx_slice__29)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__29); - __Pyx_GIVEREF(__pyx_slice__29); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":260 - * c2 = make_divisible(c2 * gw, 8) - * - * args = [c1, c2, *args[1:]] # <<<<<<<<<<<<<< - * if m in [BottleneckCSP, C3, C3TR, C3Ghost]: - * args.insert(2, n) # number of repeats - */ - __pyx_slice__31 = PySlice_New(__pyx_int_1, Py_None, Py_None); if (unlikely(!__pyx_slice__31)) __PYX_ERR(0, 260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__31); - __Pyx_GIVEREF(__pyx_slice__31); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":36 - * onnx_dynamic = False # ONNX export parameter - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer # <<<<<<<<<<<<<< - * super().__init__() - * self.nc = nc # number of classes - */ - __pyx_tuple__33 = PyTuple_Pack(7, __pyx_n_s_self, __pyx_n_s_nc, __pyx_n_s_anchors, __pyx_n_s_ch, __pyx_n_s_inplace, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__33)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__33); - __Pyx_GIVEREF(__pyx_tuple__33); - __pyx_codeobj__34 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 7, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__33, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_init, 36, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__34)) __PYX_ERR(0, 36, __pyx_L1_error) - __pyx_tuple__35 = PyTuple_Pack(4, ((PyObject *)__pyx_int_80), ((PyObject*)__pyx_empty_tuple), ((PyObject*)__pyx_empty_tuple), ((PyObject *)Py_True)); if (unlikely(!__pyx_tuple__35)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__35); - __Pyx_GIVEREF(__pyx_tuple__35); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":48 - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - * - * def forward(self, x): # <<<<<<<<<<<<<< - * z = [] # inference output - * for i in range(self.nl): - */ - __pyx_tuple__37 = PyTuple_Pack(11, __pyx_n_s_self, __pyx_n_s_x, __pyx_n_s_z, __pyx_n_s_i, __pyx_n_s_bs, __pyx_n_s__36, __pyx_n_s_ny, __pyx_n_s_nx, __pyx_n_s_y, __pyx_n_s_xy, __pyx_n_s_wh); if (unlikely(!__pyx_tuple__37)) __PYX_ERR(0, 48, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__37); - __Pyx_GIVEREF(__pyx_tuple__37); - __pyx_codeobj__38 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 11, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__37, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_forward, 48, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__38)) __PYX_ERR(0, 48, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":71 - * return x if self.training else (torch.cat(z, 1), x) - * - * def _make_grid(self, nx=20, ny=20, i=0): # <<<<<<<<<<<<<< - * d = self.anchors[i].device - * yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') - */ - __pyx_tuple__39 = PyTuple_Pack(9, __pyx_n_s_self, __pyx_n_s_nx, __pyx_n_s_ny, __pyx_n_s_i, __pyx_n_s_d, __pyx_n_s_yv, __pyx_n_s_xv, __pyx_n_s_grid, __pyx_n_s_anchor_grid); if (unlikely(!__pyx_tuple__39)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__39); - __Pyx_GIVEREF(__pyx_tuple__39); - __pyx_codeobj__40 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 9, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__39, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_make_grid, 71, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__40)) __PYX_ERR(0, 71, __pyx_L1_error) - __pyx_tuple__41 = PyTuple_Pack(3, ((PyObject *)__pyx_int_20), ((PyObject *)__pyx_int_20), ((PyObject *)__pyx_int_0)); if (unlikely(!__pyx_tuple__41)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__41); - __Pyx_GIVEREF(__pyx_tuple__41); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":82 - * - * class Model(nn.Module): - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes # <<<<<<<<<<<<<< - * super().__init__() - * if isinstance(cfg, dict): - */ - __pyx_tuple__42 = PyTuple_Pack(11, __pyx_n_s_self, __pyx_n_s_cfg, __pyx_n_s_ch, __pyx_n_s_nc, __pyx_n_s_anchors, __pyx_n_s_yaml, __pyx_n_s_f, __pyx_n_s_m, __pyx_n_s_s, __pyx_n_s_i, __pyx_n_s_x); if (unlikely(!__pyx_tuple__42)) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__42); - __Pyx_GIVEREF(__pyx_tuple__42); - __pyx_codeobj__43 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 11, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__42, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_init, 82, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__43)) __PYX_ERR(0, 82, __pyx_L1_error) - __pyx_tuple__44 = PyTuple_Pack(4, ((PyObject*)__pyx_kp_u_yolov5s_yaml), ((PyObject *)__pyx_int_3), ((PyObject *)Py_None), ((PyObject *)Py_None)); if (unlikely(!__pyx_tuple__44)) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__44); - __Pyx_GIVEREF(__pyx_tuple__44); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":120 - * LOGGER.info('') - * - * def forward(self, x, augment=False, profile=False, visualize=False): # <<<<<<<<<<<<<< - * if augment: - * return self._forward_augment(x) # augmented inference, None - */ - __pyx_tuple__45 = PyTuple_Pack(5, __pyx_n_s_self, __pyx_n_s_x, __pyx_n_s_augment, __pyx_n_s_profile, __pyx_n_s_visualize); if (unlikely(!__pyx_tuple__45)) __PYX_ERR(0, 120, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__45); - __Pyx_GIVEREF(__pyx_tuple__45); - __pyx_codeobj__46 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__45, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_forward, 120, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__46)) __PYX_ERR(0, 120, __pyx_L1_error) - __pyx_tuple__47 = PyTuple_Pack(3, ((PyObject *)Py_False), ((PyObject *)Py_False), ((PyObject *)Py_False)); if (unlikely(!__pyx_tuple__47)) __PYX_ERR(0, 120, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__47); - __Pyx_GIVEREF(__pyx_tuple__47); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":125 - * return self._forward_once(x, profile, visualize) # single-scale inference, train - * - * def _forward_augment(self, x): # <<<<<<<<<<<<<< - * img_size = x.shape[-2:] # height, width - * s = [1, 0.83, 0.67] # scales - */ - __pyx_tuple__48 = PyTuple_Pack(10, __pyx_n_s_self, __pyx_n_s_x, __pyx_n_s_img_size, __pyx_n_s_s, __pyx_n_s_f, __pyx_n_s_y, __pyx_n_s_si, __pyx_n_s_fi, __pyx_n_s_xi, __pyx_n_s_yi); if (unlikely(!__pyx_tuple__48)) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__48); - __Pyx_GIVEREF(__pyx_tuple__48); - __pyx_codeobj__49 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 10, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__48, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_forward_augment, 125, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__49)) __PYX_ERR(0, 125, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":139 - * return torch.cat(y, 1), None # augmented inference, train - * - * def _forward_once(self, x, profile=False, visualize=False): # <<<<<<<<<<<<<< - * y, dt = [], [] # outputs - * for m in self.model: - */ - __pyx_tuple__50 = PyTuple_Pack(8, __pyx_n_s_self, __pyx_n_s_x, __pyx_n_s_profile, __pyx_n_s_visualize, __pyx_n_s_y, __pyx_n_s_dt, __pyx_n_s_m, __pyx_n_s_j); if (unlikely(!__pyx_tuple__50)) __PYX_ERR(0, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__50); - __Pyx_GIVEREF(__pyx_tuple__50); - __pyx_codeobj__51 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 8, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__50, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_forward_once, 139, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__51)) __PYX_ERR(0, 139, __pyx_L1_error) - __pyx_tuple__52 = PyTuple_Pack(2, ((PyObject *)Py_False), ((PyObject *)Py_False)); if (unlikely(!__pyx_tuple__52)) __PYX_ERR(0, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__52); - __Pyx_GIVEREF(__pyx_tuple__52); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":150 - * return x - * - * def _descale_pred(self, p, flips, scale, img_size): # <<<<<<<<<<<<<< - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: - */ - __pyx_tuple__53 = PyTuple_Pack(8, __pyx_n_s_self, __pyx_n_s_p, __pyx_n_s_flips, __pyx_n_s_scale, __pyx_n_s_img_size, __pyx_n_s_x, __pyx_n_s_y, __pyx_n_s_wh); if (unlikely(!__pyx_tuple__53)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__53); - __Pyx_GIVEREF(__pyx_tuple__53); - __pyx_codeobj__54 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 8, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__53, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_descale_pred, 150, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__54)) __PYX_ERR(0, 150, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":167 - * return p - * - * def _clip_augmented(self, y): # <<<<<<<<<<<<<< - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - */ - __pyx_tuple__55 = PyTuple_Pack(10, __pyx_n_s_self, __pyx_n_s_y, __pyx_n_s_nl, __pyx_n_s_g, __pyx_n_s_e, __pyx_n_s_i, __pyx_n_s_genexpr, __pyx_n_s_genexpr, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__55)) __PYX_ERR(0, 167, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__55); - __Pyx_GIVEREF(__pyx_tuple__55); - __pyx_codeobj__56 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 10, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__55, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_clip_augmented, 167, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__56)) __PYX_ERR(0, 167, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":178 - * return y - * - * def _profile_one_layer(self, m, x, dt): # <<<<<<<<<<<<<< - * c = isinstance(m, Detect) # is final layer, copy input as inplace fix - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - */ - __pyx_tuple__57 = PyTuple_Pack(8, __pyx_n_s_self, __pyx_n_s_m, __pyx_n_s_x, __pyx_n_s_dt, __pyx_n_s_c, __pyx_n_s_o, __pyx_n_s_t, __pyx_n_s__36); if (unlikely(!__pyx_tuple__57)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__57); - __Pyx_GIVEREF(__pyx_tuple__57); - __pyx_codeobj__58 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 8, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__57, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_profile_one_layer, 178, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__58)) __PYX_ERR(0, 178, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":191 - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - * - * def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency # <<<<<<<<<<<<<< - * # https://arxiv.org/abs/1708.02002 section 3.3 - * # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - */ - __pyx_tuple__59 = PyTuple_Pack(6, __pyx_n_s_self, __pyx_n_s_cf, __pyx_n_s_m, __pyx_n_s_mi, __pyx_n_s_s, __pyx_n_s_b); if (unlikely(!__pyx_tuple__59)) __PYX_ERR(0, 191, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__59); - __Pyx_GIVEREF(__pyx_tuple__59); - __pyx_codeobj__60 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__59, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_initialize_biases, 191, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__60)) __PYX_ERR(0, 191, __pyx_L1_error) - __pyx_tuple__61 = PyTuple_Pack(1, ((PyObject *)Py_None)); if (unlikely(!__pyx_tuple__61)) __PYX_ERR(0, 191, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__61); - __Pyx_GIVEREF(__pyx_tuple__61); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":201 - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - * - * def _print_biases(self): # <<<<<<<<<<<<<< - * m = self.model[-1] # Detect() module - * for mi in m.m: # from - */ - __pyx_tuple__62 = PyTuple_Pack(4, __pyx_n_s_self, __pyx_n_s_m, __pyx_n_s_mi, __pyx_n_s_b); if (unlikely(!__pyx_tuple__62)) __PYX_ERR(0, 201, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__62); - __Pyx_GIVEREF(__pyx_tuple__62); - __pyx_codeobj__63 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__62, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_print_biases, 201, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__63)) __PYX_ERR(0, 201, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":213 - * # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - * - * def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers # <<<<<<<<<<<<<< - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): - */ - __pyx_tuple__64 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_m); if (unlikely(!__pyx_tuple__64)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__64); - __Pyx_GIVEREF(__pyx_tuple__64); - __pyx_codeobj__65 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__64, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_fuse, 213, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__65)) __PYX_ERR(0, 213, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":223 - * return self - * - * def info(self, verbose=False, img_size=640): # print model information # <<<<<<<<<<<<<< - * model_info(self, verbose, img_size) - * - */ - __pyx_tuple__66 = PyTuple_Pack(3, __pyx_n_s_self, __pyx_n_s_verbose, __pyx_n_s_img_size); if (unlikely(!__pyx_tuple__66)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__66); - __Pyx_GIVEREF(__pyx_tuple__66); - __pyx_codeobj__67 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__66, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_info, 223, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__67)) __PYX_ERR(0, 223, __pyx_L1_error) - __pyx_tuple__68 = PyTuple_Pack(2, ((PyObject *)Py_False), ((PyObject *)__pyx_int_640)); if (unlikely(!__pyx_tuple__68)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__68); - __Pyx_GIVEREF(__pyx_tuple__68); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":226 - * model_info(self, verbose, img_size) - * - * def _apply(self, fn): # <<<<<<<<<<<<<< - * # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - * self = super()._apply(fn) - */ - __pyx_tuple__69 = PyTuple_Pack(3, __pyx_n_s_self, __pyx_n_s_fn, __pyx_n_s_m); if (unlikely(!__pyx_tuple__69)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__69); - __Pyx_GIVEREF(__pyx_tuple__69); - __pyx_codeobj__70 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__69, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_apply, 226, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__70)) __PYX_ERR(0, 226, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":238 - * - * - * def parse_model(d, ch): # model_dict, input_channels(3) # <<<<<<<<<<<<<< - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - */ - __pyx_tuple__71 = PyTuple_Pack(29, __pyx_n_s_d, __pyx_n_s_ch, __pyx_n_s_anchors, __pyx_n_s_nc, __pyx_n_s_gd, __pyx_n_s_gw, __pyx_n_s_na, __pyx_n_s_no, __pyx_n_s_layers, __pyx_n_s_save, __pyx_n_s_c2, __pyx_n_s_i, __pyx_n_s_f, __pyx_n_s_n, __pyx_n_s_m, __pyx_n_s_args, __pyx_n_s_j, __pyx_n_s_a, __pyx_n_s_n_2, __pyx_n_s_c1, __pyx_n_s_m_2, __pyx_n_s_t, __pyx_n_s_np, __pyx_n_s_genexpr, __pyx_n_s_genexpr, __pyx_n_s_x, __pyx_n_s_genexpr, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__71)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__71); - __Pyx_GIVEREF(__pyx_tuple__71); - __pyx_codeobj__72 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 29, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__71, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4, __pyx_n_s_parse_model, 238, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__72)) __PYX_ERR(0, 238, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":294 - * if __name__ == '__main__': - * parser = argparse.ArgumentParser() - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') # <<<<<<<<<<<<<< - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - * parser.add_argument('--profile', action='store_true', help='profile model speed') - */ - __pyx_tuple__73 = PyTuple_Pack(1, __pyx_kp_u_cfg_2); if (unlikely(!__pyx_tuple__73)) __PYX_ERR(0, 294, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__73); - __Pyx_GIVEREF(__pyx_tuple__73); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":295 - * parser = argparse.ArgumentParser() - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') # <<<<<<<<<<<<<< - * parser.add_argument('--profile', action='store_true', help='profile model speed') - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - */ - __pyx_tuple__74 = PyTuple_Pack(1, __pyx_kp_u_device_2); if (unlikely(!__pyx_tuple__74)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__74); - __Pyx_GIVEREF(__pyx_tuple__74); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":296 - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - * parser.add_argument('--profile', action='store_true', help='profile model speed') # <<<<<<<<<<<<<< - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - * opt = parser.parse_args() - */ - __pyx_tuple__75 = PyTuple_Pack(1, __pyx_kp_u_profile_2); if (unlikely(!__pyx_tuple__75)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__75); - __Pyx_GIVEREF(__pyx_tuple__75); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":297 - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - * parser.add_argument('--profile', action='store_true', help='profile model speed') - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') # <<<<<<<<<<<<<< - * opt = parser.parse_args() - * opt.cfg = 'yolov5s.yaml' # check YAML - */ - __pyx_tuple__76 = PyTuple_Pack(1, __pyx_kp_u_test); if (unlikely(!__pyx_tuple__76)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__76); - __Pyx_GIVEREF(__pyx_tuple__76); - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} -/* #### Code section: init_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitConstants(void) { - #if CYTHON_USE_MODULE_STATE - if (__Pyx_InitString(__pyx_string_tab[0], &__pyx_kp_u_10) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[1], &__pyx_kp_u_10_0f) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[2], &__pyx_kp_u_10_2f) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[3], &__pyx_kp_u_10s) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[4], &__pyx_kp_u_18) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[5], &__pyx_kp_u_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[6], &__pyx_kp_u_30) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[7], &__pyx_kp_u_40) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[8], &__pyx_kp_u_6g_Conv2d_bias_10_3g_10_3g_10_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[9], &__pyx_n_s_ArgumentParser) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[10], &__pyx_n_s_BatchNorm2d) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[11], &__pyx_n_s_Bottleneck) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[12], &__pyx_n_s_BottleneckCSP) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[13], &__pyx_n_s_C3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[14], &__pyx_n_s_C3Ghost) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[15], &__pyx_n_s_C3SPP) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[16], &__pyx_n_s_C3TR) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[17], &__pyx_n_s_Concat) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[18], &__pyx_n_s_Contract) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[19], &__pyx_n_s_Conv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[20], &__pyx_n_s_Conv2d) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[21], &__pyx_n_s_CrossConv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[22], &__pyx_n_s_DWConv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[23], &__pyx_n_s_Detect) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[24], &__pyx_n_s_Detect___init) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[25], &__pyx_n_s_Detect___init___locals_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[26], &__pyx_n_s_Detect__make_grid) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[27], &__pyx_n_s_Detect_forward) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[28], &__pyx_kp_u_Error_in) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[29], &__pyx_n_s_Expand) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[30], &__pyx_n_s_FILE) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[31], &__pyx_n_s_Focus) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[32], &__pyx_kp_u_Fusing_layers) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[33], &__pyx_n_u_GFLOPs) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[34], &__pyx_n_s_GhostBottleneck) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[35], &__pyx_n_s_GhostConv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[36], &__pyx_n_s_ImportError) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[37], &__pyx_n_s_LOGGER) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[38], &__pyx_n_s_MixConv2d) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[39], &__pyx_n_s_Model) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[40], &__pyx_n_s_Model___init) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[41], &__pyx_n_s_Model__apply) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[42], &__pyx_n_s_Model__clip_augmented) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[43], &__pyx_n_s_Model__clip_augmented_locals_gen) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[44], &__pyx_n_s_Model__descale_pred) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[45], &__pyx_n_s_Model__forward_augment) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[46], &__pyx_n_s_Model__forward_once) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[47], &__pyx_n_s_Model__initialize_biases) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[48], &__pyx_n_s_Model__print_biases) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[49], &__pyx_n_s_Model__profile_one_layer) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[50], &__pyx_n_s_Model_forward) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[51], &__pyx_n_s_Model_fuse) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[52], &__pyx_n_s_Model_info) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[53], &__pyx_n_s_Module) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[54], &__pyx_n_s_ModuleList) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[55], &__pyx_n_s_NameError) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[56], &__pyx_kp_u_Overriding_model_yaml_anchors_wi) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[57], &__pyx_kp_u_Overriding_model_yaml_nc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[58], &__pyx_n_s_Parameter) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[59], &__pyx_n_s_Path) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[60], &__pyx_n_s_ROOT) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[61], &__pyx_n_s_SPP) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[62], &__pyx_n_s_SPPF) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[63], &__pyx_n_s_Sequential) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[64], &__pyx_n_s_T) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[65], &__pyx_kp_u_Total) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[66], &__pyx_kp_u__12) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[67], &__pyx_kp_u__23) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[68], &__pyx_kp_u__24) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[69], &__pyx_kp_u__25) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[70], &__pyx_kp_u__30) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[71], &__pyx_kp_u__32) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[72], &__pyx_n_s__36) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[73], &__pyx_kp_u__77) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[74], &__pyx_n_s__78) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[75], &__pyx_n_s__8) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[76], &__pyx_n_s_a) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[77], &__pyx_n_s_action) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[78], &__pyx_n_s_add_argument) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[79], &__pyx_n_s_anchor_grid) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[80], &__pyx_n_s_anchors) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[81], &__pyx_n_u_anchors) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[82], &__pyx_n_s_append) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[83], &__pyx_n_s_apply) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[84], &__pyx_n_s_arange) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[85], &__pyx_n_s_argparse) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[86], &__pyx_n_s_args) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[87], &__pyx_n_u_arguments) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[88], &__pyx_n_u_ascii) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[89], &__pyx_n_s_asyncio_coroutines) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[90], &__pyx_n_s_augment) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[91], &__pyx_n_s_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[92], &__pyx_n_u_backbone) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[93], &__pyx_n_s_bias) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[94], &__pyx_n_s_bn) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[95], &__pyx_n_u_bn) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[96], &__pyx_n_s_bs) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[97], &__pyx_n_s_c) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[98], &__pyx_n_s_c1) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[99], &__pyx_n_s_c2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[100], &__pyx_n_s_cat) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[101], &__pyx_n_s_cf) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[102], &__pyx_n_s_cfg) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[103], &__pyx_kp_u_cfg_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[104], &__pyx_n_s_ch) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[105], &__pyx_n_u_ch) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[106], &__pyx_n_s_check_anchor_order) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[107], &__pyx_n_s_class_getitem) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[108], &__pyx_n_s_cline_in_traceback) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[109], &__pyx_n_s_clip_augmented) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[110], &__pyx_n_s_clone) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[111], &__pyx_n_s_close) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[112], &__pyx_n_s_contiguous) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[113], &__pyx_n_s_conv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[114], &__pyx_n_s_copy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[115], &__pyx_n_s_cuda) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[116], &__pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[117], &__pyx_n_s_d) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[118], &__pyx_n_s_data) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[119], &__pyx_n_s_deepcopy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[120], &__pyx_n_s_default) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[121], &__pyx_n_u_depth_multiple) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[122], &__pyx_n_s_descale_pred) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[123], &__pyx_n_s_detach) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[124], &__pyx_n_s_device) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[125], &__pyx_kp_u_device_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[126], &__pyx_n_s_dict) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[127], &__pyx_kp_u_disable) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[128], &__pyx_n_s_doc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[129], &__pyx_n_s_dt) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[130], &__pyx_n_s_e) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[131], &__pyx_kp_u_enable) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[132], &__pyx_n_s_encoding) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[133], &__pyx_n_s_enter) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[134], &__pyx_n_s_enumerate) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[135], &__pyx_n_s_errors) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[136], &__pyx_n_s_eval) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[137], &__pyx_n_s_exit) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[138], &__pyx_n_s_expand) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[139], &__pyx_n_s_f) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[140], &__pyx_n_s_fi) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[141], &__pyx_n_s_file) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[142], &__pyx_n_s_flip) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[143], &__pyx_n_s_flips) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[144], &__pyx_n_s_float) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[145], &__pyx_n_s_fn) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[146], &__pyx_n_s_forward) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[147], &__pyx_n_s_forward_augment) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[148], &__pyx_n_s_forward_fuse) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[149], &__pyx_n_s_forward_once) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[150], &__pyx_n_u_from) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[151], &__pyx_n_s_fuse) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[152], &__pyx_n_s_fuse_conv_and_bn) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[153], &__pyx_n_s_g) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[154], &__pyx_kp_u_gc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[155], &__pyx_n_s_gd) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[156], &__pyx_n_s_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[157], &__pyx_n_s_get) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[158], &__pyx_n_s_grid) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[159], &__pyx_n_s_gs) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[160], &__pyx_n_s_gw) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[161], &__pyx_n_u_head) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[162], &__pyx_n_s_help) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[163], &__pyx_n_s_i) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[164], &__pyx_n_u_ignore) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[165], &__pyx_n_u_ij) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[166], &__pyx_n_s_img) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[167], &__pyx_n_s_img_size) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[168], &__pyx_n_s_import) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[169], &__pyx_n_s_indexing) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[170], &__pyx_n_s_info) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[171], &__pyx_n_s_init) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[172], &__pyx_n_s_init_subclass) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[173], &__pyx_n_s_initialize_biases) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[174], &__pyx_n_s_initialize_weights) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[175], &__pyx_n_s_initializing) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[176], &__pyx_n_s_inplace) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[177], &__pyx_n_u_inplace) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[178], &__pyx_n_s_inputs) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[179], &__pyx_n_s_insert) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[180], &__pyx_n_s_is_available) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[181], &__pyx_n_s_is_coroutine) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[182], &__pyx_kp_u_isenabled) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[183], &__pyx_n_s_j) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[184], &__pyx_n_s_layers) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[185], &__pyx_n_s_log) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[186], &__pyx_n_s_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[187], &__pyx_n_s_m_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[188], &__pyx_kp_u_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[189], &__pyx_n_s_main_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[190], &__pyx_n_u_main_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[191], &__pyx_n_s_make_divisible) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[192], &__pyx_n_s_make_grid) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[193], &__pyx_n_s_map) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[194], &__pyx_n_s_math) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[195], &__pyx_n_s_max) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[196], &__pyx_n_s_mean) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[197], &__pyx_n_s_meshgrid) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[198], &__pyx_n_s_metaclass) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[199], &__pyx_n_s_mi) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[200], &__pyx_n_s_model) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[201], &__pyx_n_s_model_info) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[202], &__pyx_kp_u_model_yaml) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[203], &__pyx_n_u_models) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[204], &__pyx_kp_u_module) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[205], &__pyx_n_u_module_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[206], &__pyx_n_s_module_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[207], &__pyx_n_s_modules) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[208], &__pyx_n_s_mro_entries) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[209], &__pyx_n_s_n) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[210], &__pyx_n_u_n) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[211], &__pyx_n_s_n_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[212], &__pyx_n_s_na) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[213], &__pyx_n_s_name) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[214], &__pyx_n_s_name_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[215], &__pyx_n_s_names) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[216], &__pyx_n_s_nc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[217], &__pyx_n_u_nc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[218], &__pyx_n_s_nl) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[219], &__pyx_n_s_nn) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[220], &__pyx_n_s_no) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[221], &__pyx_n_s_np) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[222], &__pyx_n_s_numel) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[223], &__pyx_n_s_nx) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[224], &__pyx_n_s_ny) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[225], &__pyx_n_s_o) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[226], &__pyx_n_s_onnx_dynamic) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[227], &__pyx_n_s_open) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[228], &__pyx_n_s_opt) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[229], &__pyx_n_s_p) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[230], &__pyx_n_s_parameters) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[231], &__pyx_n_u_params) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[232], &__pyx_n_s_parents) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[233], &__pyx_n_s_parse_args) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[234], &__pyx_n_s_parse_model) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[235], &__pyx_n_s_parse_model_locals_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[236], &__pyx_n_s_parser) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[237], &__pyx_n_s_path) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[238], &__pyx_n_s_pathlib) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[239], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[240], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[241], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[242], &__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_model_4) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[243], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[244], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[245], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[246], &__pyx_n_s_permute) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[247], &__pyx_n_s_prepare) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[248], &__pyx_n_s_print) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[249], &__pyx_n_s_print_args) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[250], &__pyx_n_s_print_biases) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[251], &__pyx_n_s_profile) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[252], &__pyx_kp_u_profile_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[253], &__pyx_kp_u_profile_model_speed) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[254], &__pyx_n_s_profile_one_layer) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[255], &__pyx_n_s_qualname) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[256], &__pyx_n_s_rand) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[257], &__pyx_n_s_range) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[258], &__pyx_n_s_register_buffer) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[259], &__pyx_n_s_requires_grad) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[260], &__pyx_n_s_resolve) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[261], &__pyx_n_s_rglob) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[262], &__pyx_n_s_round) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[263], &__pyx_n_s_s) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[264], &__pyx_n_s_safe_load) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[265], &__pyx_n_s_save) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[266], &__pyx_n_s_scale) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[267], &__pyx_n_s_scale_img) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[268], &__pyx_n_s_select_device) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[269], &__pyx_n_s_self) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[270], &__pyx_n_s_send) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[271], &__pyx_n_s_set_name) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[272], &__pyx_n_s_shape) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[273], &__pyx_n_s_si) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[274], &__pyx_n_s_sigmoid) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[275], &__pyx_n_s_spec) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[276], &__pyx_n_s_stack) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[277], &__pyx_n_s_stem) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[278], &__pyx_n_u_store_true) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[279], &__pyx_n_s_stride) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[280], &__pyx_n_s_sum) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[281], &__pyx_n_s_super) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[282], &__pyx_n_s_sys) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[283], &__pyx_n_s_t) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[284], &__pyx_n_s_tensor) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[285], &__pyx_kp_u_test) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[286], &__pyx_n_s_test_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[287], &__pyx_n_s_test_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[288], &__pyx_kp_u_test_all_yolo_yaml) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[289], &__pyx_n_s_thop) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[290], &__pyx_n_s_throw) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[291], &__pyx_kp_u_time_ms) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[292], &__pyx_n_s_time_sync) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[293], &__pyx_n_s_to) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[294], &__pyx_n_s_tolist) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[295], &__pyx_n_s_torch) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[296], &__pyx_n_s_train) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[297], &__pyx_n_s_training) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[298], &__pyx_n_s_type) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[299], &__pyx_n_s_verbose) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[300], &__pyx_n_s_view) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[301], &__pyx_n_s_visualize) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[302], &__pyx_n_s_weight) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[303], &__pyx_n_s_wh) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[304], &__pyx_n_u_width_multiple) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[305], &__pyx_kp_u_with_nc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[306], &__pyx_n_s_x) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[307], &__pyx_n_s_xi) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[308], &__pyx_n_s_xv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[309], &__pyx_n_s_xy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[310], &__pyx_n_s_y) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[311], &__pyx_n_s_yaml) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[312], &__pyx_n_s_yaml_file) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[313], &__pyx_n_s_yi) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[314], &__pyx_kp_u_yolo_yaml) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[315], &__pyx_kp_u_yolov5s_yaml) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[316], &__pyx_n_s_yv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[317], &__pyx_n_s_z) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[318], &__pyx_n_s_zeros) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[319], &__pyx_n_s_zip) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - #endif - #if !CYTHON_USE_MODULE_STATE - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - #endif - __pyx_float_0_5 = PyFloat_FromDouble(0.5); if (unlikely(!__pyx_float_0_5)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_6 = PyFloat_FromDouble(0.6); if (unlikely(!__pyx_float_0_6)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_1E9 = PyFloat_FromDouble(1E9); if (unlikely(!__pyx_float_1E9)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_67 = PyFloat_FromDouble(0.67); if (unlikely(!__pyx_float_0_67)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_83 = PyFloat_FromDouble(0.83); if (unlikely(!__pyx_float_0_83)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_999999 = PyFloat_FromDouble(0.999999); if (unlikely(!__pyx_float_0_999999)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_3 = PyInt_FromLong(3); if (unlikely(!__pyx_int_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_4 = PyInt_FromLong(4); if (unlikely(!__pyx_int_4)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_5 = PyInt_FromLong(5); if (unlikely(!__pyx_int_5)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_8 = PyInt_FromLong(8); if (unlikely(!__pyx_int_8)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_20 = PyInt_FromLong(20); if (unlikely(!__pyx_int_20)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_80 = PyInt_FromLong(80); if (unlikely(!__pyx_int_80)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_100 = PyInt_FromLong(100); if (unlikely(!__pyx_int_100)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_256 = PyInt_FromLong(256); if (unlikely(!__pyx_int_256)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_640 = PyInt_FromLong(640); if (unlikely(!__pyx_int_640)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_2 = PyInt_FromLong(-2); if (unlikely(!__pyx_int_neg_2)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_globals ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - return 0; -} -/* #### Code section: init_module ### */ - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init___spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__)) __PYX_ERR(0, 36, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init___spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__) < 0) __PYX_ERR(0, 36, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__ = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__) < 0) __PYX_ERR(0, 36, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct____init__->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr)) __PYX_ERR(0, 45, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr) < 0) __PYX_ERR(0, 45, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr) < 0) __PYX_ERR(0, 45, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_1_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented)) __PYX_ERR(0, 167, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented) < 0) __PYX_ERR(0, 167, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented) < 0) __PYX_ERR(0, 167, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_2__clip_augmented->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr)) __PYX_ERR(0, 170, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr) < 0) __PYX_ERR(0, 170, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr) < 0) __PYX_ERR(0, 170, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_3_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr)) __PYX_ERR(0, 172, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr) < 0) __PYX_ERR(0, 172, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr) < 0) __PYX_ERR(0, 172, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_4_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr)) __PYX_ERR(0, 174, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr) < 0) __PYX_ERR(0, 174, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr) < 0) __PYX_ERR(0, 174, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_5_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model)) __PYX_ERR(0, 238, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model) < 0) __PYX_ERR(0, 238, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model) < 0) __PYX_ERR(0, 238, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_6_parse_model->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr)) __PYX_ERR(0, 267, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr) < 0) __PYX_ERR(0, 267, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr) < 0) __PYX_ERR(0, 267, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_7_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr)) __PYX_ERR(0, 279, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr) < 0) __PYX_ERR(0, 279, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr) < 0) __PYX_ERR(0, 279, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_8_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr)) __PYX_ERR(0, 281, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr) < 0) __PYX_ERR(0, 281, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr) < 0) __PYX_ERR(0, 281, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_9_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr)) __PYX_ERR(0, 284, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr) < 0) __PYX_ERR(0, 284, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr) < 0) __PYX_ERR(0, 284, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo___pyx_scope_struct_10_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_yolo(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_yolo}, - {0, NULL} -}; -#endif - -#ifdef __cplusplus -namespace { - struct PyModuleDef __pyx_moduledef = - #else - static struct PyModuleDef __pyx_moduledef = - #endif - { - PyModuleDef_HEAD_INIT, - "yolo", - __pyx_k_YOLO_specific_modules_Usage_pyt, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #elif CYTHON_USE_MODULE_STATE - sizeof(__pyx_mstate), /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - #if CYTHON_USE_MODULE_STATE - __pyx_m_traverse, /* m_traverse */ - __pyx_m_clear, /* m_clear */ - NULL /* m_free */ - #else - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ - #endif - }; - #ifdef __cplusplus -} /* anonymous namespace */ -#endif -#endif - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC inityolo(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC inityolo(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_yolo(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_yolo(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? -1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *module, const char* from_name, const char* to_name, int allow_none) -#else -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) -#endif -{ - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { -#if CYTHON_COMPILING_IN_LIMITED_API - result = PyModule_AddObject(module, to_name, value); -#else - result = PyDict_SetItemString(moddict, to_name, value); -#endif - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - CYTHON_UNUSED_VAR(def); - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - moddict = module; -#else - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; -#endif - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_yolo(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - int stringtab_initialized = 0; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - Py_ssize_t __pyx_t_13; - PyObject *(*__pyx_t_14)(PyObject *); - Py_ssize_t __pyx_t_15; - Py_UCS4 __pyx_t_16; - PyObject *__pyx_t_17 = NULL; - PyObject *__pyx_t_18 = NULL; - int __pyx_t_19; - char const *__pyx_t_20; - PyObject *__pyx_t_21 = NULL; - PyObject *__pyx_t_22 = NULL; - PyObject *__pyx_t_23 = NULL; - PyObject *__pyx_t_24 = NULL; - PyObject *__pyx_t_25 = NULL; - PyObject *__pyx_t_26 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'yolo' has already been imported. Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("yolo", __pyx_methods, __pyx_k_YOLO_specific_modules_Usage_pyt, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #elif CYTHON_COMPILING_IN_LIMITED_API - __pyx_t_1 = PyModule_Create(&__pyx_moduledef); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - { - int add_module_result = PyState_AddModule(__pyx_t_1, &__pyx_moduledef); - Py_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely((add_module_result < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #endif - CYTHON_UNUSED_VAR(__pyx_t_1); - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_yolo(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Initialize various global constants etc. ---*/ - if (__Pyx_InitConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - stringtab_initialized = 1; - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_pdf_toolbox__lib__dia_yolov5__models__yolo) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "pdf_toolbox.lib.dia_yolov5.models.yolo")) { - if (unlikely((PyDict_SetItemString(modules, "pdf_toolbox.lib.dia_yolov5.models.yolo", __pyx_m) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely((__Pyx_modinit_type_init_code() < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":9 - * """ - * - * import argparse # <<<<<<<<<<<<<< - * import sys - * from copy import deepcopy - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_argparse, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_argparse, __pyx_t_2) < 0) __PYX_ERR(0, 9, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":10 - * - * import argparse - * import sys # <<<<<<<<<<<<<< - * from copy import deepcopy - * from pathlib import Path - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_sys, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 10, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_sys, __pyx_t_2) < 0) __PYX_ERR(0, 10, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":11 - * import argparse - * import sys - * from copy import deepcopy # <<<<<<<<<<<<<< - * from pathlib import Path - * - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 11, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_deepcopy); - __Pyx_GIVEREF(__pyx_n_s_deepcopy); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_deepcopy); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_copy, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 11, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_deepcopy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 11, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_deepcopy, __pyx_t_2) < 0) __PYX_ERR(0, 11, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":12 - * import sys - * from copy import deepcopy - * from pathlib import Path # <<<<<<<<<<<<<< - * - * FILE = Path(__file__).resolve() - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_s_Path); - __Pyx_GIVEREF(__pyx_n_s_Path); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_Path); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_pathlib, __pyx_t_3, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_Path); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Path, __pyx_t_3) < 0) __PYX_ERR(0, 12, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":14 - * from pathlib import Path - * - * FILE = Path(__file__).resolve() # <<<<<<<<<<<<<< - * ROOT = FILE.parents[1] # YOLOv5 root directory - * if str(ROOT) not in sys.path: - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Path); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_file); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_resolve); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_CallNoArg(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_FILE, __pyx_t_4) < 0) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":15 - * - * FILE = Path(__file__).resolve() - * ROOT = FILE.parents[1] # YOLOv5 root directory # <<<<<<<<<<<<<< - * if str(ROOT) not in sys.path: - * sys.path.append(str(ROOT)) # add ROOT to PATH - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_FILE); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_parents); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_3, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_ROOT, __pyx_t_4) < 0) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":16 - * FILE = Path(__file__).resolve() - * ROOT = FILE.parents[1] # YOLOv5 root directory - * if str(ROOT) not in sys.path: # <<<<<<<<<<<<<< - * sys.path.append(str(ROOT)) # add ROOT to PATH - * # ROOT = ROOT.relative_to(Path.cwd()) # relative - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_ROOT); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_Str(__pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_sys); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_path); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = (__Pyx_PySequence_ContainsTF(__pyx_t_3, __pyx_t_2, Py_NE)); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_6 = (__pyx_t_5 != 0); - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":17 - * ROOT = FILE.parents[1] # YOLOv5 root directory - * if str(ROOT) not in sys.path: - * sys.path.append(str(ROOT)) # add ROOT to PATH # <<<<<<<<<<<<<< - * # ROOT = ROOT.relative_to(Path.cwd()) # relative - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_sys); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_path); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_ROOT); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_Str(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __Pyx_PyObject_Append(__pyx_t_3, __pyx_t_4); if (unlikely(__pyx_t_7 == ((int)-1))) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":16 - * FILE = Path(__file__).resolve() - * ROOT = FILE.parents[1] # YOLOv5 root directory - * if str(ROOT) not in sys.path: # <<<<<<<<<<<<<< - * sys.path.append(str(ROOT)) # add ROOT to PATH - * # ROOT = ROOT.relative_to(Path.cwd()) # relative - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":20 - * # ROOT = ROOT.relative_to(Path.cwd()) # relative - * - * from pdf_toolbox.lib.dia_yolov5.models.common import * # <<<<<<<<<<<<<< - * from pdf_toolbox.lib.dia_yolov5.models.experimental import * - * from pdf_toolbox.lib.dia_yolov5.utils.autoanchor import check_anchor_order - */ - __pyx_t_4 = PyList_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 20, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_n_s__8); - __Pyx_GIVEREF(__pyx_n_s__8); - PyList_SET_ITEM(__pyx_t_4, 0, __pyx_n_s__8); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_2, __pyx_t_4, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 20, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_import_star(__pyx_t_3) < 0) __PYX_ERR(0, 20, __pyx_L1_error); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":21 - * - * from pdf_toolbox.lib.dia_yolov5.models.common import * - * from pdf_toolbox.lib.dia_yolov5.models.experimental import * # <<<<<<<<<<<<<< - * from pdf_toolbox.lib.dia_yolov5.utils.autoanchor import check_anchor_order - * from pdf_toolbox.lib.dia_yolov5.utils.general import LOGGER, make_divisible, print_args - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_s__8); - __Pyx_GIVEREF(__pyx_n_s__8); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s__8); - __pyx_t_4 = __Pyx_Import(__pyx_n_s_pdf_toolbox_lib_dia_yolov5_model_3, __pyx_t_3, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_import_star(__pyx_t_4) < 0) __PYX_ERR(0, 21, __pyx_L1_error); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":22 - * from pdf_toolbox.lib.dia_yolov5.models.common import * - * from pdf_toolbox.lib.dia_yolov5.models.experimental import * - * from pdf_toolbox.lib.dia_yolov5.utils.autoanchor import check_anchor_order # <<<<<<<<<<<<<< - * from pdf_toolbox.lib.dia_yolov5.utils.general import LOGGER, make_divisible, print_args - * from pdf_toolbox.lib.dia_yolov5.utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync - */ - __pyx_t_4 = PyList_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 22, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_n_s_check_anchor_order); - __Pyx_GIVEREF(__pyx_n_s_check_anchor_order); - PyList_SET_ITEM(__pyx_t_4, 0, __pyx_n_s_check_anchor_order); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_t_4, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 22, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_check_anchor_order); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 22, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_check_anchor_order, __pyx_t_4) < 0) __PYX_ERR(0, 22, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":23 - * from pdf_toolbox.lib.dia_yolov5.models.experimental import * - * from pdf_toolbox.lib.dia_yolov5.utils.autoanchor import check_anchor_order - * from pdf_toolbox.lib.dia_yolov5.utils.general import LOGGER, make_divisible, print_args # <<<<<<<<<<<<<< - * from pdf_toolbox.lib.dia_yolov5.utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync - * - */ - __pyx_t_3 = PyList_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_s_LOGGER); - __Pyx_GIVEREF(__pyx_n_s_LOGGER); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_LOGGER); - __Pyx_INCREF(__pyx_n_s_make_divisible); - __Pyx_GIVEREF(__pyx_n_s_make_divisible); - PyList_SET_ITEM(__pyx_t_3, 1, __pyx_n_s_make_divisible); - __Pyx_INCREF(__pyx_n_s_print_args); - __Pyx_GIVEREF(__pyx_n_s_print_args); - PyList_SET_ITEM(__pyx_t_3, 2, __pyx_n_s_print_args); - __pyx_t_4 = __Pyx_Import(__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2, __pyx_t_3, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_4, __pyx_n_s_LOGGER); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_LOGGER, __pyx_t_3) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_4, __pyx_n_s_make_divisible); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_make_divisible, __pyx_t_3) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_4, __pyx_n_s_print_args); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_print_args, __pyx_t_3) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":24 - * from pdf_toolbox.lib.dia_yolov5.utils.autoanchor import check_anchor_order - * from pdf_toolbox.lib.dia_yolov5.utils.general import LOGGER, make_divisible, print_args - * from pdf_toolbox.lib.dia_yolov5.utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync # <<<<<<<<<<<<<< - * - * try: - */ - __pyx_t_4 = PyList_New(6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_n_s_fuse_conv_and_bn); - __Pyx_GIVEREF(__pyx_n_s_fuse_conv_and_bn); - PyList_SET_ITEM(__pyx_t_4, 0, __pyx_n_s_fuse_conv_and_bn); - __Pyx_INCREF(__pyx_n_s_initialize_weights); - __Pyx_GIVEREF(__pyx_n_s_initialize_weights); - PyList_SET_ITEM(__pyx_t_4, 1, __pyx_n_s_initialize_weights); - __Pyx_INCREF(__pyx_n_s_model_info); - __Pyx_GIVEREF(__pyx_n_s_model_info); - PyList_SET_ITEM(__pyx_t_4, 2, __pyx_n_s_model_info); - __Pyx_INCREF(__pyx_n_s_scale_img); - __Pyx_GIVEREF(__pyx_n_s_scale_img); - PyList_SET_ITEM(__pyx_t_4, 3, __pyx_n_s_scale_img); - __Pyx_INCREF(__pyx_n_s_select_device); - __Pyx_GIVEREF(__pyx_n_s_select_device); - PyList_SET_ITEM(__pyx_t_4, 4, __pyx_n_s_select_device); - __Pyx_INCREF(__pyx_n_s_time_sync); - __Pyx_GIVEREF(__pyx_n_s_time_sync); - PyList_SET_ITEM(__pyx_t_4, 5, __pyx_n_s_time_sync); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3, __pyx_t_4, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_fuse_conv_and_bn); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_fuse_conv_and_bn, __pyx_t_4) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_initialize_weights); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_initialize_weights, __pyx_t_4) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_model_info); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_model_info, __pyx_t_4) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_scale_img); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_scale_img, __pyx_t_4) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_select_device); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_select_device, __pyx_t_4) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_time_sync); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_time_sync, __pyx_t_4) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":26 - * from pdf_toolbox.lib.dia_yolov5.utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync - * - * try: # <<<<<<<<<<<<<< - * import thop # for FLOPs computation - * except ImportError: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - /*try:*/ { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":27 - * - * try: - * import thop # for FLOPs computation # <<<<<<<<<<<<<< - * except ImportError: - * thop = None - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_thop, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 27, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_thop, __pyx_t_3) < 0) __PYX_ERR(0, 27, __pyx_L3_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":26 - * from pdf_toolbox.lib.dia_yolov5.utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync - * - * try: # <<<<<<<<<<<<<< - * import thop # for FLOPs computation - * except ImportError: - */ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L8_try_end; - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":28 - * try: - * import thop # for FLOPs computation - * except ImportError: # <<<<<<<<<<<<<< - * thop = None - * - */ - __pyx_t_10 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_ImportError); - if (__pyx_t_10) { - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_3, &__pyx_t_4, &__pyx_t_2) < 0) __PYX_ERR(0, 28, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GOTREF(__pyx_t_2); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":29 - * import thop # for FLOPs computation - * except ImportError: - * thop = None # <<<<<<<<<<<<<< - * - * - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_thop, Py_None) < 0) __PYX_ERR(0, 29, __pyx_L5_except_error) - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L4_exception_handled; - } - goto __pyx_L5_except_error; - __pyx_L5_except_error:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":26 - * from pdf_toolbox.lib.dia_yolov5.utils.torch_utils import fuse_conv_and_bn, initialize_weights, model_info, scale_img, select_device, time_sync - * - * try: # <<<<<<<<<<<<<< - * import thop # for FLOPs computation - * except ImportError: - */ - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_8, __pyx_t_9); - goto __pyx_L1_error; - __pyx_L4_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_8, __pyx_t_9); - __pyx_L8_try_end:; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":32 - * - * - * class Detect(nn.Module): # <<<<<<<<<<<<<< - * stride = None # strides computed during build - * onnx_dynamic = False # ONNX export parameter - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_nn); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_Module); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PEP560_update_bases(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_CalculateMetaclass(NULL, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_Py3MetaclassPrepare(__pyx_t_3, __pyx_t_4, __pyx_n_s_Detect, __pyx_n_s_Detect, (PyObject *) NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, (PyObject *) NULL); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - if (__pyx_t_4 != __pyx_t_2) { - if (unlikely((PyDict_SetItemString(__pyx_t_11, "__orig_bases__", __pyx_t_2) < 0))) __PYX_ERR(0, 32, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":33 - * - * class Detect(nn.Module): - * stride = None # strides computed during build # <<<<<<<<<<<<<< - * onnx_dynamic = False # ONNX export parameter - * - */ - if (__Pyx_SetNameInClass(__pyx_t_11, __pyx_n_s_stride, Py_None) < 0) __PYX_ERR(0, 33, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":34 - * class Detect(nn.Module): - * stride = None # strides computed during build - * onnx_dynamic = False # ONNX export parameter # <<<<<<<<<<<<<< - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer - */ - if (__Pyx_SetNameInClass(__pyx_t_11, __pyx_n_s_onnx_dynamic, Py_False) < 0) __PYX_ERR(0, 34, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":36 - * onnx_dynamic = False # ONNX export parameter - * - * def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer # <<<<<<<<<<<<<< - * super().__init__() - * self.nc = nc # number of classes - */ - __pyx_t_12 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_1__init__, 0, __pyx_n_s_Detect___init, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__34)); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_t_12); - PyList_Append(__pyx_t_2, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_12); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_12, __pyx_tuple__35); - if (__Pyx_SetNameInClass(__pyx_t_11, __pyx_n_s_init, __pyx_t_12) < 0) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":48 - * self.inplace = inplace # use in-place ops (e.g. slice assignment) - * - * def forward(self, x): # <<<<<<<<<<<<<< - * z = [] # inference output - * for i in range(self.nl): - */ - __pyx_t_12 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_3forward, 0, __pyx_n_s_Detect_forward, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__38)); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 48, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (__Pyx_SetNameInClass(__pyx_t_11, __pyx_n_s_forward, __pyx_t_12) < 0) __PYX_ERR(0, 48, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":71 - * return x if self.training else (torch.cat(z, 1), x) - * - * def _make_grid(self, nx=20, ny=20, i=0): # <<<<<<<<<<<<<< - * d = self.anchors[i].device - * yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij') - */ - __pyx_t_12 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_6Detect_5_make_grid, 0, __pyx_n_s_Detect__make_grid, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__40)); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_12, __pyx_tuple__41); - if (__Pyx_SetNameInClass(__pyx_t_11, __pyx_n_s_make_grid, __pyx_t_12) < 0) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":32 - * - * - * class Detect(nn.Module): # <<<<<<<<<<<<<< - * stride = None # strides computed during build - * onnx_dynamic = False # ONNX export parameter - */ - __pyx_t_12 = __Pyx_Py3ClassCreate(__pyx_t_3, __pyx_n_s_Detect, __pyx_t_4, __pyx_t_11, NULL, 0, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (__Pyx_CyFunction_InitClassCell(__pyx_t_2, __pyx_t_12) < 0) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Detect, __pyx_t_12) < 0) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":81 - * - * - * class Model(nn.Module): # <<<<<<<<<<<<<< - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - * super().__init__() - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_nn); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_Module); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PEP560_update_bases(__pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_CalculateMetaclass(NULL, __pyx_t_3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = __Pyx_Py3MetaclassPrepare(__pyx_t_11, __pyx_t_3, __pyx_n_s_Model, __pyx_n_s_Model, (PyObject *) NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, (PyObject *) NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (__pyx_t_3 != __pyx_t_4) { - if (unlikely((PyDict_SetItemString(__pyx_t_12, "__orig_bases__", __pyx_t_4) < 0))) __PYX_ERR(0, 81, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyList_New(0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":82 - * - * class Model(nn.Module): - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes # <<<<<<<<<<<<<< - * super().__init__() - * if isinstance(cfg, dict): - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_1__init__, 0, __pyx_n_s_Model___init, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__43)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - PyList_Append(__pyx_t_4, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_2, __pyx_tuple__44); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_init, __pyx_t_2) < 0) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":120 - * LOGGER.info('') - * - * def forward(self, x, augment=False, profile=False, visualize=False): # <<<<<<<<<<<<<< - * if augment: - * return self._forward_augment(x) # augmented inference, None - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_3forward, 0, __pyx_n_s_Model_forward, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__46)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 120, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_2, __pyx_tuple__47); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_forward, __pyx_t_2) < 0) __PYX_ERR(0, 120, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":125 - * return self._forward_once(x, profile, visualize) # single-scale inference, train - * - * def _forward_augment(self, x): # <<<<<<<<<<<<<< - * img_size = x.shape[-2:] # height, width - * s = [1, 0.83, 0.67] # scales - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_5_forward_augment, 0, __pyx_n_s_Model__forward_augment, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__49)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_forward_augment, __pyx_t_2) < 0) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":139 - * return torch.cat(y, 1), None # augmented inference, train - * - * def _forward_once(self, x, profile=False, visualize=False): # <<<<<<<<<<<<<< - * y, dt = [], [] # outputs - * for m in self.model: - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_7_forward_once, 0, __pyx_n_s_Model__forward_once, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__51)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_2, __pyx_tuple__52); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_forward_once, __pyx_t_2) < 0) __PYX_ERR(0, 139, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":150 - * return x - * - * def _descale_pred(self, p, flips, scale, img_size): # <<<<<<<<<<<<<< - * # de-scale predictions following augmented inference (inverse operation) - * if self.inplace: - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_9_descale_pred, 0, __pyx_n_s_Model__descale_pred, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__54)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_descale_pred, __pyx_t_2) < 0) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":167 - * return p - * - * def _clip_augmented(self, y): # <<<<<<<<<<<<<< - * # Clip YOLOv5 augmented inference tails - * nl = self.model[-1].nl # number of detection layers (P3-P5) - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_11_clip_augmented, 0, __pyx_n_s_Model__clip_augmented, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__56)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 167, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_clip_augmented, __pyx_t_2) < 0) __PYX_ERR(0, 167, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":178 - * return y - * - * def _profile_one_layer(self, m, x, dt): # <<<<<<<<<<<<<< - * c = isinstance(m, Detect) # is final layer, copy input as inplace fix - * o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_13_profile_one_layer, 0, __pyx_n_s_Model__profile_one_layer, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__58)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_profile_one_layer, __pyx_t_2) < 0) __PYX_ERR(0, 178, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":191 - * LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - * - * def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency # <<<<<<<<<<<<<< - * # https://arxiv.org/abs/1708.02002 section 3.3 - * # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_15_initialize_biases, 0, __pyx_n_s_Model__initialize_biases, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__60)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 191, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_2, __pyx_tuple__61); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_initialize_biases, __pyx_t_2) < 0) __PYX_ERR(0, 191, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":201 - * mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - * - * def _print_biases(self): # <<<<<<<<<<<<<< - * m = self.model[-1] # Detect() module - * for mi in m.m: # from - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_17_print_biases, 0, __pyx_n_s_Model__print_biases, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__63)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 201, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_print_biases, __pyx_t_2) < 0) __PYX_ERR(0, 201, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":213 - * # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - * - * def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers # <<<<<<<<<<<<<< - * LOGGER.info('Fusing layers... ') - * for m in self.model.modules(): - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_19fuse, 0, __pyx_n_s_Model_fuse, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__65)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_fuse, __pyx_t_2) < 0) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":223 - * return self - * - * def info(self, verbose=False, img_size=640): # print model information # <<<<<<<<<<<<<< - * model_info(self, verbose, img_size) - * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_21info, 0, __pyx_n_s_Model_info, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__67)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_2, __pyx_tuple__68); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_info, __pyx_t_2) < 0) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":226 - * model_info(self, verbose, img_size) - * - * def _apply(self, fn): # <<<<<<<<<<<<<< - * # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - * self = super()._apply(fn) - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_5Model_23_apply, 0, __pyx_n_s_Model__apply, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__70)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - PyList_Append(__pyx_t_4, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_SetNameInClass(__pyx_t_12, __pyx_n_s_apply, __pyx_t_2) < 0) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":81 - * - * - * class Model(nn.Module): # <<<<<<<<<<<<<< - * def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - * super().__init__() - */ - __pyx_t_2 = __Pyx_Py3ClassCreate(__pyx_t_11, __pyx_n_s_Model, __pyx_t_3, __pyx_t_12, NULL, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_CyFunction_InitClassCell(__pyx_t_4, __pyx_t_2) < 0) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Model, __pyx_t_2) < 0) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":238 - * - * - * def parse_model(d, ch): # model_dict, input_channels(3) # <<<<<<<<<<<<<< - * LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - * anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_6models_4yolo_1parse_model, 0, __pyx_n_s_parse_model, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_model, __pyx_d, ((PyObject *)__pyx_codeobj__72)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_parse_model, __pyx_t_3) < 0) __PYX_ERR(0, 238, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":292 - * - * - * if __name__ == '__main__': # <<<<<<<<<<<<<< - * parser = argparse.ArgumentParser() - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_name_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = (__Pyx_PyUnicode_Equals(__pyx_t_3, __pyx_n_u_main_2, Py_EQ)); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 292, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":293 - * - * if __name__ == '__main__': - * parser = argparse.ArgumentParser() # <<<<<<<<<<<<<< - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_argparse); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 293, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_ArgumentParser); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 293, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallNoArg(__pyx_t_11); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 293, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_parser, __pyx_t_3) < 0) __PYX_ERR(0, 293, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":294 - * if __name__ == '__main__': - * parser = argparse.ArgumentParser() - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') # <<<<<<<<<<<<<< - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - * parser.add_argument('--profile', action='store_true', help='profile model speed') - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_parser); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 294, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_add_argument); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 294, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyDict_NewPresized(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 294, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_type, ((PyObject *)(&PyUnicode_Type))) < 0) __PYX_ERR(0, 294, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_default, __pyx_kp_u_yolov5s_yaml) < 0) __PYX_ERR(0, 294, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_help, __pyx_kp_u_model_yaml) < 0) __PYX_ERR(0, 294, __pyx_L1_error) - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_11, __pyx_tuple__73, __pyx_t_3); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 294, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":295 - * parser = argparse.ArgumentParser() - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') # <<<<<<<<<<<<<< - * parser.add_argument('--profile', action='store_true', help='profile model speed') - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_parser); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_add_argument); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (PyDict_SetItem(__pyx_t_12, __pyx_n_s_default, __pyx_kp_u__12) < 0) __PYX_ERR(0, 295, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_12, __pyx_n_s_help, __pyx_kp_u_cuda_device_i_e_0_or_0_1_2_3_or) < 0) __PYX_ERR(0, 295, __pyx_L1_error) - __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_tuple__74, __pyx_t_12); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":296 - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - * parser.add_argument('--profile', action='store_true', help='profile model speed') # <<<<<<<<<<<<<< - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - * opt = parser.parse_args() - */ - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_parser); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_add_argument); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_action, __pyx_n_u_store_true) < 0) __PYX_ERR(0, 296, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_11, __pyx_n_s_help, __pyx_kp_u_profile_model_speed) < 0) __PYX_ERR(0, 296, __pyx_L1_error) - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_tuple__75, __pyx_t_11); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":297 - * parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - * parser.add_argument('--profile', action='store_true', help='profile model speed') - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') # <<<<<<<<<<<<<< - * opt = parser.parse_args() - * opt.cfg = 'yolov5s.yaml' # check YAML - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_parser); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_add_argument); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_action, __pyx_n_u_store_true) < 0) __PYX_ERR(0, 297, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_help, __pyx_kp_u_test_all_yolo_yaml) < 0) __PYX_ERR(0, 297, __pyx_L1_error) - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_11, __pyx_tuple__76, __pyx_t_3); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":298 - * parser.add_argument('--profile', action='store_true', help='profile model speed') - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - * opt = parser.parse_args() # <<<<<<<<<<<<<< - * opt.cfg = 'yolov5s.yaml' # check YAML - * print_args(FILE.stem, opt) - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_parser); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_parse_args); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyObject_CallNoArg(__pyx_t_3); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_opt, __pyx_t_12) < 0) __PYX_ERR(0, 298, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":299 - * parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - * opt = parser.parse_args() - * opt.cfg = 'yolov5s.yaml' # check YAML # <<<<<<<<<<<<<< - * print_args(FILE.stem, opt) - * device = select_device(opt.device) - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_opt); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 299, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (__Pyx_PyObject_SetAttrStr(__pyx_t_12, __pyx_n_s_cfg, __pyx_kp_u_yolov5s_yaml) < 0) __PYX_ERR(0, 299, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":300 - * opt = parser.parse_args() - * opt.cfg = 'yolov5s.yaml' # check YAML - * print_args(FILE.stem, opt) # <<<<<<<<<<<<<< - * device = select_device(opt.device) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_print_args); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_FILE); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_stem); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_opt); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_11); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_11); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); - __pyx_t_11 = 0; - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":301 - * opt.cfg = 'yolov5s.yaml' # check YAML - * print_args(FILE.stem, opt) - * device = select_device(opt.device) # <<<<<<<<<<<<<< - * - * # Create model - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_select_device); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_opt); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_device); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_12); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_device, __pyx_t_2) < 0) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":304 - * - * # Create model - * model = Model(opt.cfg).to(device) # <<<<<<<<<<<<<< - * model.train() - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Model); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_opt); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_cfg); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_to); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_device); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_12); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_model, __pyx_t_2) < 0) __PYX_ERR(0, 304, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":305 - * # Create model - * model = Model(opt.cfg).to(device) - * model.train() # <<<<<<<<<<<<<< - * - * # Profile - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_model); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 305, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_train); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 305, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_CallNoArg(__pyx_t_12); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 305, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":308 - * - * # Profile - * if opt.profile: # <<<<<<<<<<<<<< - * img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) - * y = model(img, profile=True) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_opt); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 308, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_profile); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 308, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 308, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":309 - * # Profile - * if opt.profile: - * img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) # <<<<<<<<<<<<<< - * y = model(img, profile=True) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_torch); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_rand); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_torch); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_cuda); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_is_available); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = __Pyx_PyObject_CallNoArg(__pyx_t_3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__pyx_t_6) { - __Pyx_INCREF(__pyx_int_8); - __pyx_t_12 = __pyx_int_8; - } else { - __Pyx_INCREF(__pyx_int_1); - __pyx_t_12 = __pyx_int_1; - } - __pyx_t_11 = PyTuple_New(4); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_12); - __Pyx_INCREF(__pyx_int_3); - __Pyx_GIVEREF(__pyx_int_3); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_int_3); - __Pyx_INCREF(__pyx_int_640); - __Pyx_GIVEREF(__pyx_int_640); - PyTuple_SET_ITEM(__pyx_t_11, 2, __pyx_int_640); - __Pyx_INCREF(__pyx_int_640); - __Pyx_GIVEREF(__pyx_int_640); - PyTuple_SET_ITEM(__pyx_t_11, 3, __pyx_int_640); - __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_11, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_to); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_device); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_11, __pyx_t_12); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_img, __pyx_t_2) < 0) __PYX_ERR(0, 309, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":310 - * if opt.profile: - * img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) - * y = model(img, profile=True) # <<<<<<<<<<<<<< - * - * # Test all models - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_model); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_img); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_11 = PyTuple_New(1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_12); - __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (PyDict_SetItem(__pyx_t_12, __pyx_n_s_profile, Py_True) < 0) __PYX_ERR(0, 310, __pyx_L1_error) - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_11, __pyx_t_12); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_y, __pyx_t_3) < 0) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":308 - * - * # Profile - * if opt.profile: # <<<<<<<<<<<<<< - * img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device) - * y = model(img, profile=True) - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":313 - * - * # Test all models - * if opt.test: # <<<<<<<<<<<<<< - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - * try: - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_opt); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 313, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_test_2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 313, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 313, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":314 - * # Test all models - * if opt.test: - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): # <<<<<<<<<<<<<< - * try: - * _ = Model(cfg) - */ - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_Path); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_ROOT); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_n_u_models); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_11); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_11, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_4}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_rglob); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_11); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_11, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_kp_u_yolo_yaml}; - __pyx_t_12 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_12)) || PyTuple_CheckExact(__pyx_t_12)) { - __pyx_t_11 = __pyx_t_12; __Pyx_INCREF(__pyx_t_11); __pyx_t_13 = 0; - __pyx_t_14 = NULL; - } else { - __pyx_t_13 = -1; __pyx_t_11 = PyObject_GetIter(__pyx_t_12); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_14 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_11); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 314, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - for (;;) { - if (likely(!__pyx_t_14)) { - if (likely(PyList_CheckExact(__pyx_t_11))) { - if (__pyx_t_13 >= PyList_GET_SIZE(__pyx_t_11)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_12 = PyList_GET_ITEM(__pyx_t_11, __pyx_t_13); __Pyx_INCREF(__pyx_t_12); __pyx_t_13++; if (unlikely((0 < 0))) __PYX_ERR(0, 314, __pyx_L1_error) - #else - __pyx_t_12 = PySequence_ITEM(__pyx_t_11, __pyx_t_13); __pyx_t_13++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - } else { - if (__pyx_t_13 >= PyTuple_GET_SIZE(__pyx_t_11)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_12 = PyTuple_GET_ITEM(__pyx_t_11, __pyx_t_13); __Pyx_INCREF(__pyx_t_12); __pyx_t_13++; if (unlikely((0 < 0))) __PYX_ERR(0, 314, __pyx_L1_error) - #else - __pyx_t_12 = PySequence_ITEM(__pyx_t_11, __pyx_t_13); __pyx_t_13++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - } - } else { - __pyx_t_12 = __pyx_t_14(__pyx_t_11); - if (unlikely(!__pyx_t_12)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 314, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_12); - } - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cfg, __pyx_t_12) < 0) __PYX_ERR(0, 314, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":315 - * if opt.test: - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - * try: # <<<<<<<<<<<<<< - * _ = Model(cfg) - * except Exception as e: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_9, &__pyx_t_8, &__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_1); - /*try:*/ { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":316 - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - * try: - * _ = Model(cfg) # <<<<<<<<<<<<<< - * except Exception as e: - * print(f'Error in {cfg}: {e}') - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Model); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 316, __pyx_L16_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_cfg); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 316, __pyx_L16_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_4}; - __pyx_t_12 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 316, __pyx_L16_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - if (PyDict_SetItem(__pyx_d, __pyx_n_s__36, __pyx_t_12) < 0) __PYX_ERR(0, 316, __pyx_L16_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":315 - * if opt.test: - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - * try: # <<<<<<<<<<<<<< - * _ = Model(cfg) - * except Exception as e: - */ - } - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L23_try_end; - __pyx_L16_error:; - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":317 - * try: - * _ = Model(cfg) - * except Exception as e: # <<<<<<<<<<<<<< - * print(f'Error in {cfg}: {e}') - * - */ - __pyx_t_10 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); - if (__pyx_t_10) { - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.models.yolo", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_12, &__pyx_t_3, &__pyx_t_4) < 0) __PYX_ERR(0, 317, __pyx_L18_except_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_e, __pyx_t_3) < 0) __PYX_ERR(0, 317, __pyx_L18_except_error) - /*try:*/ { - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":318 - * _ = Model(cfg) - * except Exception as e: - * print(f'Error in {cfg}: {e}') # <<<<<<<<<<<<<< - * - * # Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898) - */ - __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 318, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_15 = 0; - __pyx_t_16 = 127; - __Pyx_INCREF(__pyx_kp_u_Error_in); - __pyx_t_15 += 9; - __Pyx_GIVEREF(__pyx_kp_u_Error_in); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_kp_u_Error_in); - __Pyx_GetModuleGlobalName(__pyx_t_17, __pyx_n_s_cfg); if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 318, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_17); - __pyx_t_18 = __Pyx_PyObject_FormatSimple(__pyx_t_17, __pyx_empty_unicode); if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 318, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_18); - __Pyx_DECREF(__pyx_t_17); __pyx_t_17 = 0; - __pyx_t_16 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_18) > __pyx_t_16) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_18) : __pyx_t_16; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_18); - __Pyx_GIVEREF(__pyx_t_18); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_18); - __pyx_t_18 = 0; - __Pyx_INCREF(__pyx_kp_u__77); - __pyx_t_15 += 2; - __Pyx_GIVEREF(__pyx_kp_u__77); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_kp_u__77); - __Pyx_GetModuleGlobalName(__pyx_t_18, __pyx_n_s_e); if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 318, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_18); - __pyx_t_17 = __Pyx_PyObject_FormatSimple(__pyx_t_18, __pyx_empty_unicode); if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 318, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_17); - __Pyx_DECREF(__pyx_t_18); __pyx_t_18 = 0; - __pyx_t_16 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_17) > __pyx_t_16) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_17) : __pyx_t_16; - __pyx_t_15 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_17); - __Pyx_GIVEREF(__pyx_t_17); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_17); - __pyx_t_17 = 0; - __pyx_t_17 = __Pyx_PyUnicode_Join(__pyx_t_2, 4, __pyx_t_15, __pyx_t_16); if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 318, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_17); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_17); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 318, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_17); __pyx_t_17 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":317 - * try: - * _ = Model(cfg) - * except Exception as e: # <<<<<<<<<<<<<< - * print(f'Error in {cfg}: {e}') - * - */ - /*finally:*/ { - /*normal exit:*/{ - if (unlikely(__Pyx_PyObject_DelAttrStr(__pyx_m, __pyx_n_s_e) < 0)) { if (likely(PyErr_ExceptionMatches(PyExc_AttributeError))) PyErr_Clear(); else __PYX_ERR(0, 317, __pyx_L18_except_error) } - goto __pyx_L30; - } - __pyx_L29_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_21 = 0; __pyx_t_22 = 0; __pyx_t_23 = 0; __pyx_t_24 = 0; __pyx_t_25 = 0; __pyx_t_26 = 0; - __Pyx_XDECREF(__pyx_t_17); __pyx_t_17 = 0; - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_24, &__pyx_t_25, &__pyx_t_26); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_21, &__pyx_t_22, &__pyx_t_23) < 0)) __Pyx_ErrFetch(&__pyx_t_21, &__pyx_t_22, &__pyx_t_23); - __Pyx_XGOTREF(__pyx_t_21); - __Pyx_XGOTREF(__pyx_t_22); - __Pyx_XGOTREF(__pyx_t_23); - __Pyx_XGOTREF(__pyx_t_24); - __Pyx_XGOTREF(__pyx_t_25); - __Pyx_XGOTREF(__pyx_t_26); - __pyx_t_10 = __pyx_lineno; __pyx_t_19 = __pyx_clineno; __pyx_t_20 = __pyx_filename; - { - if (unlikely(__Pyx_PyObject_DelAttrStr(__pyx_m, __pyx_n_s_e) < 0)) { if (likely(PyErr_ExceptionMatches(PyExc_AttributeError))) PyErr_Clear(); else __PYX_ERR(0, 317, __pyx_L34_error) } - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_24); - __Pyx_XGIVEREF(__pyx_t_25); - __Pyx_XGIVEREF(__pyx_t_26); - __Pyx_ExceptionReset(__pyx_t_24, __pyx_t_25, __pyx_t_26); - } - __Pyx_XGIVEREF(__pyx_t_21); - __Pyx_XGIVEREF(__pyx_t_22); - __Pyx_XGIVEREF(__pyx_t_23); - __Pyx_ErrRestore(__pyx_t_21, __pyx_t_22, __pyx_t_23); - __pyx_t_21 = 0; __pyx_t_22 = 0; __pyx_t_23 = 0; __pyx_t_24 = 0; __pyx_t_25 = 0; __pyx_t_26 = 0; - __pyx_lineno = __pyx_t_10; __pyx_clineno = __pyx_t_19; __pyx_filename = __pyx_t_20; - goto __pyx_L18_except_error; - __pyx_L34_error:; - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_24); - __Pyx_XGIVEREF(__pyx_t_25); - __Pyx_XGIVEREF(__pyx_t_26); - __Pyx_ExceptionReset(__pyx_t_24, __pyx_t_25, __pyx_t_26); - } - __Pyx_XDECREF(__pyx_t_21); __pyx_t_21 = 0; - __Pyx_XDECREF(__pyx_t_22); __pyx_t_22 = 0; - __Pyx_XDECREF(__pyx_t_23); __pyx_t_23 = 0; - __pyx_t_24 = 0; __pyx_t_25 = 0; __pyx_t_26 = 0; - goto __pyx_L18_except_error; - } - __pyx_L30:; - } - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L17_exception_handled; - } - goto __pyx_L18_except_error; - __pyx_L18_except_error:; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":315 - * if opt.test: - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - * try: # <<<<<<<<<<<<<< - * _ = Model(cfg) - * except Exception as e: - */ - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_ExceptionReset(__pyx_t_9, __pyx_t_8, __pyx_t_1); - goto __pyx_L1_error; - __pyx_L17_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_ExceptionReset(__pyx_t_9, __pyx_t_8, __pyx_t_1); - __pyx_L23_try_end:; - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":314 - * # Test all models - * if opt.test: - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): # <<<<<<<<<<<<<< - * try: - * _ = Model(cfg) - */ - } - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":313 - * - * # Test all models - * if opt.test: # <<<<<<<<<<<<<< - * for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - * try: - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":292 - * - * - * if __name__ == '__main__': # <<<<<<<<<<<<<< - * parser = argparse.ArgumentParser() - * parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/models/yolo.py":1 - * # YOLOv5 by Ultralytics, GPL-3.0 license # <<<<<<<<<<<<<< - * """ - * YOLO-specific modules - */ - __pyx_t_11 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test_3, __pyx_t_11) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_17); - __Pyx_XDECREF(__pyx_t_18); - if (__pyx_m) { - if (__pyx_d && stringtab_initialized) { - __Pyx_AddTraceback("init pdf_toolbox.lib.dia_yolov5.models.yolo", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - #if !CYTHON_USE_MODULE_STATE - Py_CLEAR(__pyx_m); - #endif - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init pdf_toolbox.lib.dia_yolov5.models.yolo"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} -/* #### Code section: cleanup_globals ### */ -/* #### Code section: cleanup_module ### */ -/* #### Code section: main_method ### */ -/* #### Code section: utility_code_pragmas ### */ -#if _MSC_VER -#pragma warning( push ) -/* Warning 4127: conditional expression is constant - * Cython uses constant conditional expressions to allow in inline functions to be optimized at - * compile-time, so this warning is not useful - */ -#pragma warning( disable : 4127 ) -#endif - - - -/* #### Code section: utility_code_def ### */ - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStrNoError(__pyx_b, name); - if (unlikely(!result) && !PyErr_Occurred()) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* TupleAndListFromArray */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject *const *CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - Py_INCREF(__pyx_empty_tuple); - return __pyx_empty_tuple; - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyTupleObject*)res)->ob_item, n); - return res; -} -static CYTHON_INLINE PyObject * -__Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - return PyList_New(0); - } - res = PyList_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyListObject*)res)->ob_item, n); - return res; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* fastcall */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s) -{ - Py_ssize_t i, n = PyTuple_GET_SIZE(kwnames); - for (i = 0; i < n; i++) - { - if (s == PyTuple_GET_ITEM(kwnames, i)) return kwvalues[i]; - } - for (i = 0; i < n; i++) - { - int eq = __Pyx_PyUnicode_Equals(s, PyTuple_GET_ITEM(kwnames, i), Py_EQ); - if (unlikely(eq != 0)) { - if (unlikely(eq < 0)) return NULL; // error - return kwvalues[i]; - } - } - return NULL; // not found (no exception set) -} -#endif - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - int kwds_is_tuple = CYTHON_METH_FASTCALL && likely(PyTuple_Check(kwds)); - while (1) { - if (kwds_is_tuple) { - if (pos >= PyTuple_GET_SIZE(kwds)) break; - key = PyTuple_GET_ITEM(kwds, pos); - value = kwvalues[pos]; - pos++; - } - else - { - if (!PyDict_Next(kwds, &pos, &key, &value)) break; - } - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = ( - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key) - ); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseClosureNameError */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname) { - PyErr_Format(PyExc_NameError, "free variable '%s' referenced before assignment in enclosing scope", varname); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#elif CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!__pyx_m)) { - return NULL; - } - result = PyObject_GetAttr(__pyx_m, name); - if (likely(result)) { - return result; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL && !CYTHON_VECTORCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? PyDict_Size(kwargs) : 0; - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectFastCall */ -static PyObject* __Pyx_PyObject_FastCall_fallback(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs) { - PyObject *argstuple; - PyObject *result; - size_t i; - argstuple = PyTuple_New((Py_ssize_t)nargs); - if (unlikely(!argstuple)) return NULL; - for (i = 0; i < nargs; i++) { - Py_INCREF(args[i]); - PyTuple_SET_ITEM(argstuple, (Py_ssize_t)i, args[i]); - } - result = __Pyx_PyObject_Call(func, argstuple, kwargs); - Py_DECREF(argstuple); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t _nargs, PyObject *kwargs) { - Py_ssize_t nargs = __Pyx_PyVectorcall_NARGS(_nargs); -#if CYTHON_COMPILING_IN_CPYTHON - if (nargs == 0 && kwargs == NULL) { -#ifdef __Pyx_CyFunction_USED - if (__Pyx_IsCyOrPyCFunction(func)) -#else - if (PyCFunction_Check(func)) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - } - else if (nargs == 1 && kwargs == NULL) { - if (PyCFunction_Check(func)) - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, args[0]); - } - } - } -#endif - #if PY_VERSION_HEX < 0x030800B1 - #if CYTHON_FAST_PYCCALL - if (PyCFunction_Check(func)) { - if (kwargs) { - return _PyCFunction_FastCallDict(func, args, nargs, kwargs); - } else { - return _PyCFunction_FastCallKeywords(func, args, nargs, NULL); - } - } - #if PY_VERSION_HEX >= 0x030700A1 - if (!kwargs && __Pyx_IS_TYPE(func, &PyMethodDescr_Type)) { - return _PyMethodDescr_FastCallKeywords(func, args, nargs, NULL); - } - #endif - #endif - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs); - } - #endif - #endif - #if CYTHON_VECTORCALL - vectorcallfunc f = _PyVectorcall_Function(func); - if (f) { - return f(func, args, (size_t)nargs, kwargs); - } - #elif defined(__Pyx_CyFunction_USED) && CYTHON_BACKPORT_VECTORCALL - if (__Pyx_CyFunction_CheckExact(func)) { - __pyx_vectorcallfunc f = __Pyx_CyFunction_func_vectorcall(func); - if (f) return f(func, args, (size_t)nargs, kwargs); - } - #endif - if (nargs == 0) { - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, kwargs); - } - return __Pyx_PyObject_FastCall_fallback(func, args, (size_t)nargs, kwargs); -} - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* pep479 */ -static void __Pyx_Generator_Replace_StopIteration(int in_async_gen) { - PyObject *exc, *val, *tb, *cur_exc; - __Pyx_PyThreadState_declare - #ifdef __Pyx_StopAsyncIteration_USED - int is_async_stopiteration = 0; - #endif - CYTHON_MAYBE_UNUSED_VAR(in_async_gen); - cur_exc = PyErr_Occurred(); - if (likely(!__Pyx_PyErr_GivenExceptionMatches(cur_exc, PyExc_StopIteration))) { - #ifdef __Pyx_StopAsyncIteration_USED - if (in_async_gen && unlikely(__Pyx_PyErr_GivenExceptionMatches(cur_exc, __Pyx_PyExc_StopAsyncIteration))) { - is_async_stopiteration = 1; - } else - #endif - return; - } - __Pyx_PyThreadState_assign - __Pyx_GetException(&exc, &val, &tb); - Py_XDECREF(exc); - Py_XDECREF(val); - Py_XDECREF(tb); - PyErr_SetString(PyExc_RuntimeError, - #ifdef __Pyx_StopAsyncIteration_USED - is_async_stopiteration ? "async generator raised StopAsyncIteration" : - in_async_gen ? "async generator raised StopIteration" : - #endif - "generator raised StopIteration"); -} - -/* PyObjectSetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_setattro)) - return tp->tp_setattro(obj, attr_name, value); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_setattr)) - return tp->tp_setattr(obj, PyString_AS_STRING(attr_name), value); -#endif - return PyObject_SetAttr(obj, attr_name, value); -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - - x = (long)((unsigned long)a + b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (unlikely(size == 0)) { - return __Pyx_NewRef(op2); - } - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("add", return NULL) - result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* None */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (unlikely(!j)) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_subscript) { - PyObject *r, *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return NULL; - r = mm->mp_subscript(o, key); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return sm->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* PyObjectCallOneArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *args[2] = {NULL, arg}; - return __Pyx_PyObject_FastCall(func, args+1, 1 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject *index) { - PyObject *runerr; - Py_ssize_t key_value; - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - __Pyx_TypeName index_type_name = __Pyx_PyType_GetName(Py_TYPE(index)); - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, - "cannot fit '" __Pyx_FMT_TYPENAME "' into an index-sized integer", index_type_name); - __Pyx_DECREF_TypeName(index_type_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem_Slow(PyObject *obj, PyObject *key) { - __Pyx_TypeName obj_type_name; - if (likely(PyType_Check(obj))) { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(obj, __pyx_n_s_class_getitem); - if (meth) { - PyObject *result = __Pyx_PyObject_CallOneArg(meth, key); - Py_DECREF(meth); - return result; - } - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is not subscriptable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key) { - PyTypeObject *tp = Py_TYPE(obj); - PyMappingMethods *mm = tp->tp_as_mapping; - PySequenceMethods *sm = tp->tp_as_sequence; - if (likely(mm && mm->mp_subscript)) { - return mm->mp_subscript(obj, key); - } - if (likely(sm && sm->sq_item)) { - return __Pyx_PyObject_GetIndex(obj, key); - } - return __Pyx_PyObject_GetItem_Slow(obj, key); -} -#endif - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { -#if CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* exc_type = tstate->curexc_type; - if (unlikely(exc_type)) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) { - PyObject *exc_value, *exc_tb; - exc_value = tstate->curexc_value; - exc_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - Py_DECREF(exc_type); - Py_XDECREF(exc_value); - Py_XDECREF(exc_tb); - return 0; - } else { - return -1; - } - } - return 0; -#else - if (unlikely(PyErr_Occurred())) { - if (likely(PyErr_ExceptionMatches(PyExc_StopIteration))) { - PyErr_Clear(); - return 0; - } else { - return -1; - } - } - return 0; -#endif -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* SliceObject */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice(PyObject* obj, - Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** _py_start, PyObject** _py_stop, PyObject** _py_slice, - int has_cstart, int has_cstop, int wraparound) { - __Pyx_TypeName obj_type_name; -#if CYTHON_USE_TYPE_SLOTS - PyMappingMethods* mp; -#if PY_MAJOR_VERSION < 3 - PySequenceMethods* ms = Py_TYPE(obj)->tp_as_sequence; - if (likely(ms && ms->sq_slice)) { - if (!has_cstart) { - if (_py_start && (*_py_start != Py_None)) { - cstart = __Pyx_PyIndex_AsSsize_t(*_py_start); - if ((cstart == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstart = 0; - } - if (!has_cstop) { - if (_py_stop && (*_py_stop != Py_None)) { - cstop = __Pyx_PyIndex_AsSsize_t(*_py_stop); - if ((cstop == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstop = PY_SSIZE_T_MAX; - } - if (wraparound && unlikely((cstart < 0) | (cstop < 0)) && likely(ms->sq_length)) { - Py_ssize_t l = ms->sq_length(obj); - if (likely(l >= 0)) { - if (cstop < 0) { - cstop += l; - if (cstop < 0) cstop = 0; - } - if (cstart < 0) { - cstart += l; - if (cstart < 0) cstart = 0; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - goto bad; - PyErr_Clear(); - } - } - return ms->sq_slice(obj, cstart, cstop); - } -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - mp = Py_TYPE(obj)->tp_as_mapping; - if (likely(mp && mp->mp_subscript)) -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - { - PyObject* result; - PyObject *py_slice, *py_start, *py_stop; - if (_py_slice) { - py_slice = *_py_slice; - } else { - PyObject* owned_start = NULL; - PyObject* owned_stop = NULL; - if (_py_start) { - py_start = *_py_start; - } else { - if (has_cstart) { - owned_start = py_start = PyInt_FromSsize_t(cstart); - if (unlikely(!py_start)) goto bad; - } else - py_start = Py_None; - } - if (_py_stop) { - py_stop = *_py_stop; - } else { - if (has_cstop) { - owned_stop = py_stop = PyInt_FromSsize_t(cstop); - if (unlikely(!py_stop)) { - Py_XDECREF(owned_start); - goto bad; - } - } else - py_stop = Py_None; - } - py_slice = PySlice_New(py_start, py_stop, Py_None); - Py_XDECREF(owned_start); - Py_XDECREF(owned_stop); - if (unlikely(!py_slice)) goto bad; - } -#if CYTHON_USE_TYPE_SLOTS - result = mp->mp_subscript(obj, py_slice); -#else - result = PyObject_GetItem(obj, py_slice); -#endif - if (!_py_slice) { - Py_DECREF(py_slice); - } - return result; - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is unsliceable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); -bad: - return NULL; -} - -/* PyFloatBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_SubtractObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check) { - const double b = floatval; - double a, result; - (void)inplace; (void)zerodivision_check; - if (likely(PyFloat_CheckExact(op1))) { -#if CYTHON_COMPILING_IN_LIMITED_API - a = __pyx_PyFloat_AsDouble(op1); -#else - a = PyFloat_AS_DOUBLE(op1); -#endif - - } else - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - a = (double) PyInt_AS_LONG(op1); - - } else - #endif - if (likely(PyLong_CheckExact(op1))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - switch (size) { - case 0: a = 0.0; break; - case -1: a = -(double) digits[0]; break; - case 1: a = (double) digits[0]; break; - case -2: - case 2: - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (1 * PyLong_SHIFT < 53))) { - a = (double) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -2) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -3: - case 3: - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53))) { - a = (double) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -3) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -4: - case 4: - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53))) { - a = (double) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (4 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -4) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - default: - #else - { - #endif - a = PyLong_AsDouble(op1); - if (unlikely(a == -1.0 && PyErr_Occurred())) return NULL; - } - } else { - return (inplace ? PyNumber_InPlaceSubtract : PyNumber_Subtract)(op1, op2); - } - PyFPE_START_PROTECT("subtract", return NULL) - result = a - b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); -} -#endif - -/* PyIntBinop */ - #if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_MultiplyObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - -#ifdef HAVE_LONG_LONG - if (sizeof(PY_LONG_LONG) > sizeof(long)) { - PY_LONG_LONG result = (PY_LONG_LONG)a * (PY_LONG_LONG)b; - return (result >= LONG_MIN && result <= LONG_MAX) ? - PyInt_FromLong((long)result) : PyLong_FromLongLong(result); - } -#endif -#if CYTHON_USE_TYPE_SLOTS - return PyInt_Type.tp_as_number->nb_multiply(op1, op2); -#else - return PyNumber_Multiply(op1, op2); -#endif - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (unlikely(size == 0)) { - return __Pyx_NewRef(op1); - } - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT+30) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT+30) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - } - } - (void)a; (void)b; - #ifdef HAVE_LONG_LONG - lla = a; - goto long_long; - #else - return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - #endif - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla * llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("multiply", return NULL) - result = ((double)a) * (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceMultiply : PyNumber_Multiply)(op1, op2); -} -#endif - -/* Import */ - static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *module = 0; - PyObject *empty_dict = 0; - PyObject *empty_list = 0; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (unlikely(!py_import)) - goto bad; - if (!from_list) { - empty_list = PyList_New(0); - if (unlikely(!empty_list)) - goto bad; - from_list = empty_list; - } - #endif - empty_dict = PyDict_New(); - if (unlikely(!empty_dict)) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, 1); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, 1); - #endif - if (unlikely(!module)) { - if (unlikely(!PyErr_ExceptionMatches(PyExc_ImportError))) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (unlikely(!py_level)) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, __pyx_d, empty_dict, from_list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, level); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, level); - #endif - #endif - } - } -bad: - Py_XDECREF(empty_dict); - Py_XDECREF(empty_list); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - return module; -} - -/* ImportDottedModule */ - #if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Error(PyObject *name, PyObject *parts_tuple, Py_ssize_t count) { - PyObject *partial_name = NULL, *slice = NULL, *sep = NULL; - if (unlikely(PyErr_Occurred())) { - PyErr_Clear(); - } - if (likely(PyTuple_GET_SIZE(parts_tuple) == count)) { - partial_name = name; - } else { - slice = PySequence_GetSlice(parts_tuple, 0, count); - if (unlikely(!slice)) - goto bad; - sep = PyUnicode_FromStringAndSize(".", 1); - if (unlikely(!sep)) - goto bad; - partial_name = PyUnicode_Join(sep, slice); - } - PyErr_Format( -#if PY_MAJOR_VERSION < 3 - PyExc_ImportError, - "No module named '%s'", PyString_AS_STRING(partial_name)); -#else -#if PY_VERSION_HEX >= 0x030600B1 - PyExc_ModuleNotFoundError, -#else - PyExc_ImportError, -#endif - "No module named '%U'", partial_name); -#endif -bad: - Py_XDECREF(sep); - Py_XDECREF(slice); - Py_XDECREF(partial_name); - return NULL; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Lookup(PyObject *name) { - PyObject *imported_module; -#if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - return NULL; - imported_module = __Pyx_PyDict_GetItemStr(modules, name); - Py_XINCREF(imported_module); -#else - imported_module = PyImport_GetModule(name); -#endif - return imported_module; -} -#endif -static PyObject *__Pyx__ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if PY_MAJOR_VERSION < 3 - PyObject *module, *from_list, *star = __pyx_n_s__8; - CYTHON_UNUSED_VAR(parts_tuple); - from_list = PyList_New(1); - if (unlikely(!from_list)) - return NULL; - Py_INCREF(star); - PyList_SET_ITEM(from_list, 0, star); - module = __Pyx_Import(name, from_list, 0); - Py_DECREF(from_list); - return module; -#else - Py_ssize_t i, nparts; - PyObject *imported_module; - PyObject *module = __Pyx_Import(name, NULL, 0); - if (!parts_tuple || unlikely(!module)) - return module; - imported_module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(imported_module)) { - Py_DECREF(module); - return imported_module; - } - PyErr_Clear(); - nparts = PyTuple_GET_SIZE(parts_tuple); - for (i=1; i < nparts && module; i++) { - PyObject *part, *submodule; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - part = PyTuple_GET_ITEM(parts_tuple, i); -#else - part = PySequence_ITEM(parts_tuple, i); -#endif - submodule = __Pyx_PyObject_GetAttrStrNoError(module, part); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(part); -#endif - Py_DECREF(module); - module = submodule; - } - if (likely(module)) - return module; - return __Pyx__ImportDottedModule_Error(name, parts_tuple, i); -#endif -} -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030400B1 - PyObject *module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(module)) { - PyObject *spec = __Pyx_PyObject_GetAttrStrNoError(module, __pyx_n_s_spec); - if (likely(spec)) { - PyObject *unsafe = __Pyx_PyObject_GetAttrStrNoError(spec, __pyx_n_s_initializing); - if (likely(!unsafe || !__Pyx_PyObject_IsTrue(unsafe))) { - Py_DECREF(spec); - spec = NULL; - } - Py_XDECREF(unsafe); - } - if (likely(!spec)) { - PyErr_Clear(); - return module; - } - Py_DECREF(spec); - Py_DECREF(module); - } else if (PyErr_Occurred()) { - PyErr_Clear(); - } -#endif - return __Pyx__ImportDottedModule(name, parts_tuple); -} - -/* PyObjectLookupSpecial */ - #if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx__PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name, int with_error) { - PyObject *res; - PyTypeObject *tp = Py_TYPE(obj); -#if PY_MAJOR_VERSION < 3 - if (unlikely(PyInstance_Check(obj))) - return with_error ? __Pyx_PyObject_GetAttrStr(obj, attr_name) : __Pyx_PyObject_GetAttrStrNoError(obj, attr_name); -#endif - res = _PyType_Lookup(tp, attr_name); - if (likely(res)) { - descrgetfunc f = Py_TYPE(res)->tp_descr_get; - if (!f) { - Py_INCREF(res); - } else { - res = f(res, obj, (PyObject *)tp); - } - } else if (with_error) { - PyErr_SetObject(PyExc_AttributeError, attr_name); - } - return res; -} -#endif - -/* GetTopmostException */ - #if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ - #if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* DictGetItem */ - #if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY -static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) { - PyObject *value; - value = PyDict_GetItemWithError(d, key); - if (unlikely(!value)) { - if (!PyErr_Occurred()) { - if (unlikely(PyTuple_Check(key))) { - PyObject* args = PyTuple_Pack(1, key); - if (likely(args)) { - PyErr_SetObject(PyExc_KeyError, args); - Py_DECREF(args); - } - } else { - PyErr_SetObject(PyExc_KeyError, key); - } - } - return NULL; - } - Py_INCREF(value); - return value; -} -#endif - -/* JoinPyUnicode */ - static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char) { -#if CYTHON_USE_UNICODE_INTERNALS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyObject *result_uval; - int result_ukind, kind_shift; - Py_ssize_t i, char_pos; - void *result_udata; - CYTHON_MAYBE_UNUSED_VAR(max_char); -#if CYTHON_PEP393_ENABLED - result_uval = PyUnicode_New(result_ulength, max_char); - if (unlikely(!result_uval)) return NULL; - result_ukind = (max_char <= 255) ? PyUnicode_1BYTE_KIND : (max_char <= 65535) ? PyUnicode_2BYTE_KIND : PyUnicode_4BYTE_KIND; - kind_shift = (result_ukind == PyUnicode_4BYTE_KIND) ? 2 : result_ukind - 1; - result_udata = PyUnicode_DATA(result_uval); -#else - result_uval = PyUnicode_FromUnicode(NULL, result_ulength); - if (unlikely(!result_uval)) return NULL; - result_ukind = sizeof(Py_UNICODE); - kind_shift = (result_ukind == 4) ? 2 : result_ukind - 1; - result_udata = PyUnicode_AS_UNICODE(result_uval); -#endif - assert(kind_shift == 2 || kind_shift == 1 || kind_shift == 0); - char_pos = 0; - for (i=0; i < value_count; i++) { - int ukind; - Py_ssize_t ulength; - void *udata; - PyObject *uval = PyTuple_GET_ITEM(value_tuple, i); - if (unlikely(__Pyx_PyUnicode_READY(uval))) - goto bad; - ulength = __Pyx_PyUnicode_GET_LENGTH(uval); - if (unlikely(!ulength)) - continue; - if (unlikely((PY_SSIZE_T_MAX >> kind_shift) - ulength < char_pos)) - goto overflow; - ukind = __Pyx_PyUnicode_KIND(uval); - udata = __Pyx_PyUnicode_DATA(uval); - if (!CYTHON_PEP393_ENABLED || ukind == result_ukind) { - memcpy((char *)result_udata + (char_pos << kind_shift), udata, (size_t) (ulength << kind_shift)); - } else { - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0 || defined(_PyUnicode_FastCopyCharacters) - _PyUnicode_FastCopyCharacters(result_uval, char_pos, uval, 0, ulength); - #else - Py_ssize_t j; - for (j=0; j < ulength; j++) { - Py_UCS4 uchar = __Pyx_PyUnicode_READ(ukind, udata, j); - __Pyx_PyUnicode_WRITE(result_ukind, result_udata, char_pos+j, uchar); - } - #endif - } - char_pos += ulength; - } - return result_uval; -overflow: - PyErr_SetString(PyExc_OverflowError, "join() result is too long for a Python string"); -bad: - Py_DECREF(result_uval); - return NULL; -#else - CYTHON_UNUSED_VAR(max_char); - CYTHON_UNUSED_VAR(result_ulength); - CYTHON_UNUSED_VAR(value_count); - return PyUnicode_Join(__pyx_empty_unicode, value_tuple); -#endif -} - -/* PyObjectCall2Args */ - static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args[3] = {NULL, arg1, arg2}; - return __Pyx_PyObject_FastCall(function, args+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectGetMethod */ - static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - __Pyx_TypeName type_name; - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if defined(Py_TPFLAGS_METHOD_DESCRIPTOR) && Py_TPFLAGS_METHOD_DESCRIPTOR - if (__Pyx_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)) -#elif PY_MAJOR_VERSION >= 3 - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type))) - #endif -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; - return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (likely(descr != NULL)) { - *method = descr; - return 0; - } - type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod1 */ - static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) { - PyObject *result = __Pyx_PyObject_CallOneArg(method, arg); - Py_DECREF(method); - return result; -} -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) { - PyObject *method = NULL, *result; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_Call2Args(method, obj, arg); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) return NULL; - return __Pyx__PyObject_CallMethod1(method, arg); -} - -/* append */ - static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x) { - if (likely(PyList_CheckExact(L))) { - if (unlikely(__Pyx_PyList_Append(L, x) < 0)) return -1; - } else { - PyObject* retval = __Pyx_PyObject_CallMethod1(L, __pyx_n_s_append, x); - if (unlikely(!retval)) - return -1; - Py_DECREF(retval); - } - return 0; -} - -/* PyIntCompare */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_NeObjC(PyObject *op1, PyObject *op2, long intval, long inplace) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_UNUSED_VAR(inplace); - if (op1 == op2) { - Py_RETURN_FALSE; - } - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - if (a != b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - int unequal; - unsigned long uintval; - Py_ssize_t size = Py_SIZE(op1); - const digit* digits = ((PyLongObject*)op1)->ob_digit; - if (intval == 0) { - if (size != 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } else if (intval < 0) { - if (size >= 0) - Py_RETURN_TRUE; - intval = -intval; - size = -size; - } else { - if (size <= 0) - Py_RETURN_TRUE; - } - uintval = (unsigned long) intval; -#if PyLong_SHIFT * 4 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 4)) { - unequal = (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 3 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 3)) { - unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 2 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 2)) { - unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 1 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 1)) { - unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif - unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK)); - if (unequal != 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - if ((double)a != (double)b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - return ( - PyObject_RichCompare(op1, op2, Py_NE)); -} - -/* PyIntCompare */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, long intval, long inplace) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_UNUSED_VAR(inplace); - if (op1 == op2) { - Py_RETURN_TRUE; - } - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - if (a == b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - int unequal; - unsigned long uintval; - Py_ssize_t size = Py_SIZE(op1); - const digit* digits = ((PyLongObject*)op1)->ob_digit; - if (intval == 0) { - if (size == 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } else if (intval < 0) { - if (size >= 0) - Py_RETURN_FALSE; - intval = -intval; - size = -size; - } else { - if (size <= 0) - Py_RETURN_FALSE; - } - uintval = (unsigned long) intval; -#if PyLong_SHIFT * 4 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 4)) { - unequal = (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 3 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 3)) { - unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 2 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 2)) { - unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 1 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 1)) { - unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif - unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK)); - if (unequal == 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - if ((double)a == (double)b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - return ( - PyObject_RichCompare(op1, op2, Py_EQ)); -} - -/* PyIntBinop */ - #if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - - x = (long)((unsigned long)a - b); - if (likely((x^a) >= 0 || (x^~b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (unlikely(size == 0)) { - return PyLong_FromLong(-intval); - } - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - } - x = a - b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla - llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("subtract", return NULL) - result = ((double)a) - (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceSubtract : PyNumber_Subtract)(op1, op2); -} -#endif - -/* SetItemInt */ - static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v) { - int r; - if (unlikely(!j)) return -1; - r = PyObject_SetItem(o, j, v); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, int is_list, - CYTHON_NCP_UNUSED int wraparound, CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = (!wraparound) ? i : ((likely(i >= 0)) ? i : i + PyList_GET_SIZE(o)); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o)))) { - PyObject* old = PyList_GET_ITEM(o, n); - Py_INCREF(v); - PyList_SET_ITEM(o, n, v); - Py_DECREF(old); - return 1; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_ass_subscript) { - int r; - PyObject *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return -1; - r = mm->mp_ass_subscript(o, key, v); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_ass_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return -1; - PyErr_Clear(); - } - } - return sm->sq_ass_item(o, i, v); - } - } -#else -#if CYTHON_COMPILING_IN_PYPY - if (is_list || (PySequence_Check(o) && !PyDict_Check(o))) -#else - if (is_list || PySequence_Check(o)) -#endif - { - return PySequence_SetItem(o, i, v); - } -#endif - return __Pyx_SetItemInt_Generic(o, PyInt_FromSsize_t(i), v); -} - -/* PyFloatBinop */ - #if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_TrueDivideObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check) { - const double b = floatval; - double a, result; - (void)inplace; (void)zerodivision_check; - if (likely(PyFloat_CheckExact(op1))) { -#if CYTHON_COMPILING_IN_LIMITED_API - a = __pyx_PyFloat_AsDouble(op1); -#else - a = PyFloat_AS_DOUBLE(op1); -#endif - - } else - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - a = (double) PyInt_AS_LONG(op1); - - } else - #endif - if (likely(PyLong_CheckExact(op1))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - switch (size) { - case 0: a = 0.0; break; - case -1: a = -(double) digits[0]; break; - case 1: a = (double) digits[0]; break; - case -2: - case 2: - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (1 * PyLong_SHIFT < 53))) { - a = (double) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -2) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -3: - case 3: - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53))) { - a = (double) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -3) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -4: - case 4: - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53))) { - a = (double) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (4 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -4) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - default: - #else - { - #endif - a = PyLong_AsDouble(op1); - if (unlikely(a == -1.0 && PyErr_Occurred())) return NULL; - } - } else { - return (inplace ? PyNumber_InPlaceTrueDivide : PyNumber_TrueDivide)(op1, op2); - } - PyFPE_START_PROTECT("divide", return NULL) - result = a / b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); -} -#endif - -/* PyObjectFormat */ - #if CYTHON_USE_UNICODE_WRITER -static PyObject* __Pyx_PyObject_Format(PyObject* obj, PyObject* format_spec) { - int ret; - _PyUnicodeWriter writer; - if (likely(PyFloat_CheckExact(obj))) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x03040000 - _PyUnicodeWriter_Init(&writer, 0); -#else - _PyUnicodeWriter_Init(&writer); -#endif - ret = _PyFloat_FormatAdvancedWriter( - &writer, - obj, - format_spec, 0, PyUnicode_GET_LENGTH(format_spec)); - } else if (likely(PyLong_CheckExact(obj))) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x03040000 - _PyUnicodeWriter_Init(&writer, 0); -#else - _PyUnicodeWriter_Init(&writer); -#endif - ret = _PyLong_FormatAdvancedWriter( - &writer, - obj, - format_spec, 0, PyUnicode_GET_LENGTH(format_spec)); - } else { - return PyObject_Format(obj, format_spec); - } - if (unlikely(ret == -1)) { - _PyUnicodeWriter_Dealloc(&writer); - return NULL; - } - return _PyUnicodeWriter_Finish(&writer); -} -#endif - -/* PyFloatBinop */ - #if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_TrueDivideCObj(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check) { - const double a = floatval; - double b, result; - (void)inplace; (void)zerodivision_check; - if (likely(PyFloat_CheckExact(op2))) { -#if CYTHON_COMPILING_IN_LIMITED_API - b = __pyx_PyFloat_AsDouble(op2); -#else - b = PyFloat_AS_DOUBLE(op2); -#endif - if (unlikely(zerodivision_check && ((b) == 0.0))) { PyErr_SetString(PyExc_ZeroDivisionError, "float division by zero"); return NULL;} - } else - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op2))) { - b = (double) PyInt_AS_LONG(op2); - if (unlikely(zerodivision_check && ((b) == 0.0))) { PyErr_SetString(PyExc_ZeroDivisionError, "float division by zero"); return NULL;} - } else - #endif - if (likely(PyLong_CheckExact(op2))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)op2)->ob_digit; - const Py_ssize_t size = Py_SIZE(op2); - switch (size) { - case 0: b = 0.0; if (unlikely(zerodivision_check && ((b) == 0.0))) { PyErr_SetString(PyExc_ZeroDivisionError, "float division by zero"); return NULL;} break; - case -1: b = -(double) digits[0]; break; - case 1: b = (double) digits[0]; break; - case -2: - case 2: - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (1 * PyLong_SHIFT < 53))) { - b = (double) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53) || (b < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -2) - b = -b; - break; - } - } - CYTHON_FALLTHROUGH; - case -3: - case 3: - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53))) { - b = (double) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53) || (b < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -3) - b = -b; - break; - } - } - CYTHON_FALLTHROUGH; - case -4: - case 4: - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53))) { - b = (double) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (4 * PyLong_SHIFT < 53) || (b < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -4) - b = -b; - break; - } - } - CYTHON_FALLTHROUGH; - default: - #else - { - #endif - b = PyLong_AsDouble(op2); - if (unlikely(b == -1.0 && PyErr_Occurred())) return NULL; - #if !CYTHON_USE_PYLONG_INTERNALS - if (unlikely(zerodivision_check && ((b) == 0.0))) { PyErr_SetString(PyExc_ZeroDivisionError, "float division by zero"); return NULL;} - #endif - } - } else { - return (inplace ? PyNumber_InPlaceTrueDivide : PyNumber_TrueDivide)(op1, op2); - } - PyFPE_START_PROTECT("divide", return NULL) - result = a / b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); -} -#endif - -/* GetAttr */ - static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* HasAttr */ - static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (!r) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* FixUpExtensionType */ - #if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type) { -#if PY_VERSION_HEX > 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - (void) spec; - (void) type; -#else - const PyType_Slot *slot = spec->slots; - while (slot && slot->slot && slot->slot != Py_tp_members) - slot++; - if (slot && slot->slot == Py_tp_members) { - int changed = 0; -#if !(PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON) - const -#endif - PyMemberDef *memb = (PyMemberDef*) slot->pfunc; - while (memb && memb->name) { - if (memb->name[0] == '_' && memb->name[1] == '_') { -#if PY_VERSION_HEX < 0x030900b1 - if (strcmp(memb->name, "__weaklistoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_weaklistoffset = memb->offset; - changed = 1; - } - else if (strcmp(memb->name, "__dictoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_dictoffset = memb->offset; - changed = 1; - } -#if CYTHON_METH_FASTCALL - else if (strcmp(memb->name, "__vectorcalloffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); -#if PY_VERSION_HEX >= 0x030800b4 - type->tp_vectorcall_offset = memb->offset; -#else - type->tp_print = (printfunc) memb->offset; -#endif - changed = 1; - } -#endif -#else - if ((0)); -#endif -#if PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON - else if (strcmp(memb->name, "__module__") == 0) { - PyObject *descr; - assert(memb->type == T_OBJECT); - assert(memb->flags == 0 || memb->flags == READONLY); - descr = PyDescr_NewMember(type, memb); - if (unlikely(!descr)) - return -1; - if (unlikely(PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr) < 0)) { - Py_DECREF(descr); - return -1; - } - Py_DECREF(descr); - changed = 1; - } -#endif - } - memb++; - } - if (changed) - PyType_Modified(type); - } -#endif - return 0; -} -#endif - -/* PyObjectCallNoArg */ - static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { - PyObject *arg = NULL; - return __Pyx_PyObject_FastCall(func, (&arg)+1, 0 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectCallMethod0 */ - static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = __Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -} - -/* ValidateBasesTuple */ - #if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases) { - Py_ssize_t i, n = PyTuple_GET_SIZE(bases); - for (i = 1; i < n; i++) - { - PyObject *b0 = PyTuple_GET_ITEM(bases, i); - PyTypeObject *b; -#if PY_MAJOR_VERSION < 3 - if (PyClass_Check(b0)) - { - PyErr_Format(PyExc_TypeError, "base class '%.200s' is an old-style class", - PyString_AS_STRING(((PyClassObject*)b0)->cl_name)); - return -1; - } -#endif - b = (PyTypeObject*) b0; - if (!__Pyx_PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE)) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "base class '" __Pyx_FMT_TYPENAME "' is not a heap type", b_name); - __Pyx_DECREF_TypeName(b_name); - return -1; - } - if (dictoffset == 0 && b->tp_dictoffset) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "extension type '%.200s' has no __dict__ slot, " - "but base type '" __Pyx_FMT_TYPENAME "' has: " - "either add 'cdef dict __dict__' to the extension type " - "or add '__slots__ = [...]' to the base type", - type_name, b_name); - __Pyx_DECREF_TypeName(b_name); - return -1; - } - } - return 0; -} -#endif - -/* PyType_Ready */ - static int __Pyx_PyType_Ready(PyTypeObject *t) { -#if CYTHON_USE_TYPE_SPECS || !(CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API) || defined(PYSTON_MAJOR_VERSION) - (void)__Pyx_PyObject_CallMethod0; -#if CYTHON_USE_TYPE_SPECS - (void)__Pyx_validate_bases_tuple; -#endif - return PyType_Ready(t); -#else - int r; - PyObject *bases = __Pyx_PyType_GetSlot(t, tp_bases, PyObject*); - if (bases && unlikely(__Pyx_validate_bases_tuple(t->tp_name, t->tp_dictoffset, bases) == -1)) - return -1; -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - { - int gc_was_enabled; - #if PY_VERSION_HEX >= 0x030A00b1 - gc_was_enabled = PyGC_Disable(); - (void)__Pyx_PyObject_CallMethod0; - #else - PyObject *ret, *py_status; - PyObject *gc = NULL; - #if PY_VERSION_HEX >= 0x030700a1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM+0 >= 0x07030400) - gc = PyImport_GetModule(__pyx_kp_u_gc); - #endif - if (unlikely(!gc)) gc = PyImport_Import(__pyx_kp_u_gc); - if (unlikely(!gc)) return -1; - py_status = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_isenabled); - if (unlikely(!py_status)) { - Py_DECREF(gc); - return -1; - } - gc_was_enabled = __Pyx_PyObject_IsTrue(py_status); - Py_DECREF(py_status); - if (gc_was_enabled > 0) { - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_disable); - if (unlikely(!ret)) { - Py_DECREF(gc); - return -1; - } - Py_DECREF(ret); - } else if (unlikely(gc_was_enabled == -1)) { - Py_DECREF(gc); - return -1; - } - #endif - t->tp_flags |= Py_TPFLAGS_HEAPTYPE; -#else - (void)__Pyx_PyObject_CallMethod0; -#endif - r = PyType_Ready(t); -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE; - #if PY_VERSION_HEX >= 0x030A00b1 - if (gc_was_enabled) - PyGC_Enable(); - #else - if (gc_was_enabled) { - PyObject *tp, *v, *tb; - PyErr_Fetch(&tp, &v, &tb); - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_enable); - if (likely(ret || r == -1)) { - Py_XDECREF(ret); - PyErr_Restore(tp, v, tb); - } else { - Py_XDECREF(tp); - Py_XDECREF(v); - Py_XDECREF(tb); - r = -1; - } - } - Py_DECREF(gc); - #endif - } -#endif - return r; -#endif -} - -/* PyObject_GenericGetAttrNoDict */ - #if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - __Pyx_TypeName type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, attr_name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(attr_name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* ImportFrom */ - static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - const char* module_name_str = 0; - PyObject* module_name = 0; - PyObject* module_dot = 0; - PyObject* full_name = 0; - PyErr_Clear(); - module_name_str = PyModule_GetName(module); - if (unlikely(!module_name_str)) { goto modbad; } - module_name = PyUnicode_FromString(module_name_str); - if (unlikely(!module_name)) { goto modbad; } - module_dot = PyUnicode_Concat(module_name, __pyx_kp_u__32); - if (unlikely(!module_dot)) { goto modbad; } - full_name = PyUnicode_Concat(module_dot, name); - if (unlikely(!full_name)) { goto modbad; } - #if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - { - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - goto modbad; - value = PyObject_GetItem(modules, full_name); - } - #else - value = PyImport_GetModule(full_name); - #endif - modbad: - Py_XDECREF(full_name); - Py_XDECREF(module_dot); - Py_XDECREF(module_name); - } - if (unlikely(!value)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* Py3UpdateBases */ - static PyObject* -__Pyx_PEP560_update_bases(PyObject *bases) -{ - Py_ssize_t i, j, size_bases; - PyObject *base, *meth, *new_base, *result, *new_bases = NULL; - size_bases = PyTuple_GET_SIZE(bases); - for (i = 0; i < size_bases; i++) { - base = PyTuple_GET_ITEM(bases, i); - if (PyType_Check(base)) { - if (new_bases) { - if (PyList_Append(new_bases, base) < 0) { - goto error; - } - } - continue; - } - meth = __Pyx_PyObject_GetAttrStrNoError(base, __pyx_n_s_mro_entries); - if (!meth && PyErr_Occurred()) { - goto error; - } - if (!meth) { - if (new_bases) { - if (PyList_Append(new_bases, base) < 0) { - goto error; - } - } - continue; - } - new_base = __Pyx_PyObject_CallOneArg(meth, bases); - Py_DECREF(meth); - if (!new_base) { - goto error; - } - if (!PyTuple_Check(new_base)) { - PyErr_SetString(PyExc_TypeError, - "__mro_entries__ must return a tuple"); - Py_DECREF(new_base); - goto error; - } - if (!new_bases) { - if (!(new_bases = PyList_New(i))) { - goto error; - } - for (j = 0; j < i; j++) { - base = PyTuple_GET_ITEM(bases, j); - PyList_SET_ITEM(new_bases, j, base); - Py_INCREF(base); - } - } - j = PyList_GET_SIZE(new_bases); - if (PyList_SetSlice(new_bases, j, j, new_base) < 0) { - goto error; - } - Py_DECREF(new_base); - } - if (!new_bases) { - Py_INCREF(bases); - return bases; - } - result = PyList_AsTuple(new_bases); - Py_DECREF(new_bases); - return result; -error: - Py_XDECREF(new_bases); - return NULL; -} - -/* CalculateMetaclass */ - static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases) { - Py_ssize_t i, nbases = PyTuple_GET_SIZE(bases); - for (i=0; i < nbases; i++) { - PyTypeObject *tmptype; - PyObject *tmp = PyTuple_GET_ITEM(bases, i); - tmptype = Py_TYPE(tmp); -#if PY_MAJOR_VERSION < 3 - if (tmptype == &PyClass_Type) - continue; -#endif - if (!metaclass) { - metaclass = tmptype; - continue; - } - if (PyType_IsSubtype(metaclass, tmptype)) - continue; - if (PyType_IsSubtype(tmptype, metaclass)) { - metaclass = tmptype; - continue; - } - PyErr_SetString(PyExc_TypeError, - "metaclass conflict: " - "the metaclass of a derived class " - "must be a (non-strict) subclass " - "of the metaclasses of all its bases"); - return NULL; - } - if (!metaclass) { -#if PY_MAJOR_VERSION < 3 - metaclass = &PyClass_Type; -#else - metaclass = &PyType_Type; -#endif - } - Py_INCREF((PyObject*) metaclass); - return (PyObject*) metaclass; -} - -/* FetchCommonType */ - static PyObject *__Pyx_FetchSharedCythonABIModule(void) { - PyObject *abi_module = PyImport_AddModule((char*) __PYX_ABI_MODULE_NAME); - if (!abi_module) return NULL; - Py_INCREF(abi_module); - return abi_module; -} -static int __Pyx_VerifyCachedType(PyObject *cached_type, - const char *name, - Py_ssize_t basicsize, - Py_ssize_t expected_basicsize) { - if (!PyType_Check(cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", name); - return -1; - } - if (basicsize != expected_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - name); - return -1; - } - return 0; -} -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* abi_module; - const char* object_name; - PyTypeObject *cached_type = NULL; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - object_name = strrchr(type->tp_name, '.'); - object_name = object_name ? object_name+1 : type->tp_name; - cached_type = (PyTypeObject*) PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - if (__Pyx_VerifyCachedType( - (PyObject *)cached_type, - object_name, - cached_type->tp_basicsize, - type->tp_basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, (PyObject *)type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; -done: - Py_DECREF(abi_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#else -static PyTypeObject *__Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *abi_module, *cached_type = NULL; - const char* object_name = strrchr(spec->name, '.'); - object_name = object_name ? object_name+1 : spec->name; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - cached_type = PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - Py_ssize_t basicsize; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_basicsize; - py_basicsize = PyObject_GetAttrString(cached_type, "__basicsize__"); - if (unlikely(!py_basicsize)) goto bad; - basicsize = PyLong_AsSsize_t(py_basicsize); - Py_DECREF(py_basicsize); - py_basicsize = 0; - if (unlikely(basicsize == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; -#else - basicsize = likely(PyType_Check(cached_type)) ? ((PyTypeObject*) cached_type)->tp_basicsize : -1; -#endif - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - basicsize, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - (void) module; - cached_type = __Pyx_PyType_FromModuleAndSpec(abi_module, spec, bases); - if (unlikely(!cached_type)) goto bad; - if (unlikely(__Pyx_fix_up_extension_type_from_spec(spec, (PyTypeObject *) cached_type) < 0)) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, cached_type) < 0) goto bad; -done: - Py_DECREF(abi_module); - assert(cached_type == NULL || PyType_Check(cached_type)); - return (PyTypeObject *) cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#endif - -/* PyVectorcallFastCallDict */ - #if CYTHON_METH_FASTCALL -static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - PyObject *res = NULL; - PyObject *kwnames; - PyObject **newargs; - PyObject **kwvalues; - Py_ssize_t i, pos; - size_t j; - PyObject *key, *value; - unsigned long keys_are_strings; - Py_ssize_t nkw = PyDict_GET_SIZE(kw); - newargs = (PyObject **)PyMem_Malloc((nargs + (size_t)nkw) * sizeof(args[0])); - if (unlikely(newargs == NULL)) { - PyErr_NoMemory(); - return NULL; - } - for (j = 0; j < nargs; j++) newargs[j] = args[j]; - kwnames = PyTuple_New(nkw); - if (unlikely(kwnames == NULL)) { - PyMem_Free(newargs); - return NULL; - } - kwvalues = newargs + nargs; - pos = i = 0; - keys_are_strings = Py_TPFLAGS_UNICODE_SUBCLASS; - while (PyDict_Next(kw, &pos, &key, &value)) { - keys_are_strings &= Py_TYPE(key)->tp_flags; - Py_INCREF(key); - Py_INCREF(value); - PyTuple_SET_ITEM(kwnames, i, key); - kwvalues[i] = value; - i++; - } - if (unlikely(!keys_are_strings)) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - goto cleanup; - } - res = vc(func, newargs, nargs, kwnames); -cleanup: - Py_DECREF(kwnames); - for (i = 0; i < nkw; i++) - Py_DECREF(kwvalues[i]); - PyMem_Free(newargs); - return res; -} -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - if (likely(kw == NULL) || PyDict_GET_SIZE(kw) == 0) { - return vc(func, args, nargs, NULL); - } - return __Pyx_PyVectorcall_FastCallDict_kw(func, vc, args, nargs, kw); -} -#endif - -/* CythonFunctionShared */ - static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj) { -#if PY_VERSION_HEX < 0x030900B1 - __Pyx_Py_XDECREF_SET( - __Pyx_CyFunction_GetClassObj(f), - ((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#else - __Pyx_Py_XDECREF_SET( - ((PyCMethodObject *) (f))->mm_class, - (PyTypeObject*)((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#endif -} -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, void *closure) -{ - CYTHON_UNUSED_VAR(closure); - if (unlikely(op->func_doc == NULL)) { - if (((PyCFunctionObject*)op)->m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_doc, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_name == NULL)) { -#if PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_name, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_qualname, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_dict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(op); - CYTHON_UNUSED_VAR(context); - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - CYTHON_UNUSED_VAR(context); - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyTuple_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__defaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_tuple, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_tuple; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__kwdefaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_kwdict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_kwdict; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value || value == Py_None) { - value = NULL; - } else if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - __Pyx_Py_XDECREF_SET(op->func_annotations, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->func_annotations; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine(__pyx_CyFunctionObject *op, void *context) { - int is_coroutine; - CYTHON_UNUSED_VAR(context); - if (op->func_is_coroutine) { - return __Pyx_NewRef(op->func_is_coroutine); - } - is_coroutine = op->flags & __Pyx_CYFUNCTION_COROUTINE; -#if PY_VERSION_HEX >= 0x03050000 - if (is_coroutine) { - PyObject *module, *fromlist, *marker = __pyx_n_s_is_coroutine; - fromlist = PyList_New(1); - if (unlikely(!fromlist)) return NULL; - Py_INCREF(marker); - PyList_SET_ITEM(fromlist, 0, marker); - module = PyImport_ImportModuleLevelObject(__pyx_n_s_asyncio_coroutines, NULL, NULL, fromlist, 0); - Py_DECREF(fromlist); - if (unlikely(!module)) goto ignore; - op->func_is_coroutine = __Pyx_PyObject_GetAttrStr(module, marker); - Py_DECREF(module); - if (likely(op->func_is_coroutine)) { - return __Pyx_NewRef(op->func_is_coroutine); - } -ignore: - PyErr_Clear(); - } -#endif - op->func_is_coroutine = __Pyx_PyBool_FromLong(is_coroutine); - return __Pyx_NewRef(op->func_is_coroutine); -} -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {(char *) "_is_coroutine", (getter)__Pyx_CyFunction_get_is_coroutine, 0, 0, 0}, - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), 0, 0}, -#if CYTHON_USE_TYPE_SPECS - {(char *) "__dictoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_dict), READONLY, 0}, -#if CYTHON_METH_FASTCALL -#if CYTHON_BACKPORT_VECTORCALL - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_vectorcall), READONLY, 0}, -#else - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(PyCFunctionObject, vectorcall), READONLY, 0}, -#endif -#endif -#if PY_VERSION_HEX < 0x030500A0 - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_weakreflist), READONLY, 0}, -#else - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(PyCFunctionObject, m_weakreflist), READONLY, 0}, -#endif -#endif - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, PyObject *args) -{ - CYTHON_UNUSED_VAR(args); -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(((PyCFunctionObject*)m)->m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) (((PyCFunctionObject*)cyfunc)->m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyCFunctionObject *cf = (PyCFunctionObject*) op; - if (unlikely(op == NULL)) - return NULL; - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; - cf->m_ml = ml; - cf->m_self = (PyObject *) op; - Py_XINCREF(closure); - op->func_closure = closure; - Py_XINCREF(module); - cf->m_module = module; - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; -#if PY_VERSION_HEX < 0x030900B1 - op->func_classobj = NULL; -#else - ((PyCMethodObject*)op)->mm_class = NULL; -#endif - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - op->func_is_coroutine = NULL; -#if CYTHON_METH_FASTCALL - switch (ml->ml_flags & (METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_METHOD)) { - case METH_NOARGS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_NOARGS; - break; - case METH_O: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_O; - break; - case METH_METHOD | METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD; - break; - case METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS; - break; - case METH_VARARGS | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = NULL; - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - Py_DECREF(op); - return NULL; - } -#endif - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); - Py_CLEAR(((PyCFunctionObject*)m)->m_module); - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); -#if PY_VERSION_HEX < 0x030900B1 - Py_CLEAR(__Pyx_CyFunction_GetClassObj(m)); -#else - { - PyObject *cls = (PyObject*) ((PyCMethodObject *) (m))->mm_class; - ((PyCMethodObject *) (m))->mm_class = NULL; - Py_XDECREF(cls); - } -#endif - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - Py_CLEAR(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_XDECREF(pydefaults[i]); - PyObject_Free(m->defaults); - m->defaults = NULL; - } - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - __Pyx_PyHeapTypeObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - Py_VISIT(m->func_closure); - Py_VISIT(((PyCFunctionObject*)m)->m_module); - Py_VISIT(m->func_dict); - Py_VISIT(m->func_name); - Py_VISIT(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - Py_VISIT(m->func_code); - Py_VISIT(__Pyx_CyFunction_GetClassObj(m)); - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - Py_VISIT(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_VISIT(pydefaults[i]); - } - return 0; -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ -#if PY_MAJOR_VERSION >= 3 - return PyUnicode_FromFormat("", - op->func_qualname, (void *)op); -#else - return PyString_FromFormat("", - PyString_AsString(op->func_qualname), (void *)op); -#endif -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - Py_ssize_t size; - switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | METH_KEYWORDS: - return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 0)) - return (*meth)(self, NULL); - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - return NULL; - } - PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments", - f->m_ml->ml_name); - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw); -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; -#if CYTHON_METH_FASTCALL - __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); - if (vc) { -#if CYTHON_ASSUME_SAFE_MACROS - return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); -#else - (void) &__Pyx_PyVectorcall_FastCallDict; - return PyVectorcall_Call(func, args, kw); -#endif - } -#endif - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; - argc = PyTuple_GET_SIZE(args); - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE int __Pyx_CyFunction_Vectorcall_CheckArgs(__pyx_CyFunctionObject *cyfunc, Py_ssize_t nargs, PyObject *kwnames) -{ - int ret = 0; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - if (unlikely(nargs < 1)) { - PyErr_Format(PyExc_TypeError, "%.200s() needs an argument", - ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - ret = 1; - } - if (unlikely(kwnames) && unlikely(PyTuple_GET_SIZE(kwnames))) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no keyword arguments", ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - return ret; -} -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 0)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, NULL); -} -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 1)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, args[0]); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((_PyCFunctionFastWithKeywords)(void(*)(void))def->ml_meth)(self, args, nargs, kwnames); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; - PyTypeObject *cls = (PyTypeObject *) __Pyx_CyFunction_GetClassObj(cyfunc); -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((__Pyx_PyCMethod)(void(*)(void))def->ml_meth)(self, cls, args, nargs, kwnames); -} -#endif -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_CyFunctionType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_CyFunction_dealloc}, - {Py_tp_repr, (void *)__Pyx_CyFunction_repr}, - {Py_tp_call, (void *)__Pyx_CyFunction_CallAsMethod}, - {Py_tp_traverse, (void *)__Pyx_CyFunction_traverse}, - {Py_tp_clear, (void *)__Pyx_CyFunction_clear}, - {Py_tp_methods, (void *)__pyx_CyFunction_methods}, - {Py_tp_members, (void *)__pyx_CyFunction_members}, - {Py_tp_getset, (void *)__pyx_CyFunction_getsets}, - {Py_tp_descr_get, (void *)__Pyx_PyMethod_New}, - {0, 0}, -}; -static PyType_Spec __pyx_CyFunctionType_spec = { - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if (defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL) - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - __pyx_CyFunctionType_slots -}; -#else -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, -#if !CYTHON_METH_FASTCALL - 0, -#elif CYTHON_BACKPORT_VECTORCALL - (printfunc)offsetof(__pyx_CyFunctionObject, func_vectorcall), -#else - offsetof(PyCFunctionObject, vectorcall), -#endif - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#ifdef _Py_TPFLAGS_HAVE_VECTORCALL - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - 0, - (traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_PyMethod_New, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, -#endif -}; -#endif -static int __pyx_CyFunction_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_CyFunctionType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_CyFunctionType_spec, NULL); -#else - (void) module; - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); -#endif - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ - static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* Py3ClassCreate */ - static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, - PyObject *qualname, PyObject *mkw, PyObject *modname, PyObject *doc) { - PyObject *ns; - if (metaclass) { - PyObject *prep = __Pyx_PyObject_GetAttrStrNoError(metaclass, __pyx_n_s_prepare); - if (prep) { - PyObject *pargs[3] = {NULL, name, bases}; - ns = __Pyx_PyObject_FastCallDict(prep, pargs+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, mkw); - Py_DECREF(prep); - } else { - if (unlikely(PyErr_Occurred())) - return NULL; - ns = PyDict_New(); - } - } else { - ns = PyDict_New(); - } - if (unlikely(!ns)) - return NULL; - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_module_3, modname) < 0)) goto bad; -#if PY_VERSION_HEX >= 0x03030000 - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_qualname, qualname) < 0)) goto bad; -#else - CYTHON_MAYBE_UNUSED_VAR(qualname); -#endif - if (unlikely(doc && PyObject_SetItem(ns, __pyx_n_s_doc, doc) < 0)) goto bad; - return ns; -bad: - Py_DECREF(ns); - return NULL; -} -#if PY_VERSION_HEX < 0x030600A4 && CYTHON_PEP487_INIT_SUBCLASS -static int __Pyx_SetNamesPEP487(PyObject *type_obj) { - PyTypeObject *type = (PyTypeObject*) type_obj; - PyObject *names_to_set, *key, *value, *set_name, *tmp; - Py_ssize_t i = 0; -#if CYTHON_USE_TYPE_SLOTS - names_to_set = PyDict_Copy(type->tp_dict); -#else - { - PyObject *d = PyObject_GetAttr(type_obj, __pyx_n_s_dict); - names_to_set = NULL; - if (likely(d)) { - PyObject *names_to_set = PyDict_New(); - int ret = likely(names_to_set) ? PyDict_Update(names_to_set, d) : -1; - Py_DECREF(d); - if (unlikely(ret < 0)) - Py_CLEAR(names_to_set); - } - } -#endif - if (unlikely(names_to_set == NULL)) - goto bad; - while (PyDict_Next(names_to_set, &i, &key, &value)) { - set_name = __Pyx_PyObject_LookupSpecialNoError(value, __pyx_n_s_set_name); - if (unlikely(set_name != NULL)) { - tmp = __Pyx_PyObject_Call2Args(set_name, type_obj, key); - Py_DECREF(set_name); - if (unlikely(tmp == NULL)) { - __Pyx_TypeName value_type_name = - __Pyx_PyType_GetName(Py_TYPE(value)); - __Pyx_TypeName type_name = __Pyx_PyType_GetName(type); - PyErr_Format(PyExc_RuntimeError, -#if PY_MAJOR_VERSION >= 3 - "Error calling __set_name__ on '" __Pyx_FMT_TYPENAME "' instance %R " "in '" __Pyx_FMT_TYPENAME "'", - value_type_name, key, type_name); -#else - "Error calling __set_name__ on '" __Pyx_FMT_TYPENAME "' instance %.100s in '" __Pyx_FMT_TYPENAME "'", - value_type_name, - PyString_Check(key) ? PyString_AS_STRING(key) : "?", - type_name); -#endif - goto bad; - } else { - Py_DECREF(tmp); - } - } - else if (unlikely(PyErr_Occurred())) { - goto bad; - } - } - Py_DECREF(names_to_set); - return 0; -bad: - Py_XDECREF(names_to_set); - return -1; -} -static PyObject *__Pyx_InitSubclassPEP487(PyObject *type_obj, PyObject *mkw) { -#if CYTHON_USE_TYPE_SLOTS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyTypeObject *type = (PyTypeObject*) type_obj; - PyObject *mro = type->tp_mro; - Py_ssize_t i, nbases; - if (unlikely(!mro)) goto done; - (void) &__Pyx_GetBuiltinName; - Py_INCREF(mro); - nbases = PyTuple_GET_SIZE(mro); - assert(PyTuple_GET_ITEM(mro, 0) == type_obj); - for (i = 1; i < nbases-1; i++) { - PyObject *base, *dict, *meth; - base = PyTuple_GET_ITEM(mro, i); - dict = ((PyTypeObject *)base)->tp_dict; - meth = __Pyx_PyDict_GetItemStrWithError(dict, __pyx_n_s_init_subclass); - if (unlikely(meth)) { - descrgetfunc f = Py_TYPE(meth)->tp_descr_get; - PyObject *res; - Py_INCREF(meth); - if (likely(f)) { - res = f(meth, NULL, type_obj); - Py_DECREF(meth); - if (unlikely(!res)) goto bad; - meth = res; - } - res = __Pyx_PyObject_FastCallDict(meth, NULL, 0, mkw); - Py_DECREF(meth); - if (unlikely(!res)) goto bad; - Py_DECREF(res); - goto done; - } else if (unlikely(PyErr_Occurred())) { - goto bad; - } - } -done: - Py_XDECREF(mro); - return type_obj; -bad: - Py_XDECREF(mro); - Py_DECREF(type_obj); - return NULL; -#else - PyObject *super_type, *super, *func, *res; -#if CYTHON_COMPILING_IN_PYPY && !defined(PySuper_Type) - super_type = __Pyx_GetBuiltinName(__pyx_n_s_super); -#else - super_type = (PyObject*) &PySuper_Type; - (void) &__Pyx_GetBuiltinName; -#endif - super = likely(super_type) ? __Pyx_PyObject_Call2Args(super_type, type_obj, type_obj) : NULL; -#if CYTHON_COMPILING_IN_PYPY && !defined(PySuper_Type) - Py_XDECREF(super_type); -#endif - if (unlikely(!super)) { - Py_CLEAR(type_obj); - goto done; - } - func = __Pyx_PyObject_GetAttrStrNoError(super, __pyx_n_s_init_subclass); - Py_DECREF(super); - if (likely(!func)) { - if (unlikely(PyErr_Occurred())) - Py_CLEAR(type_obj); - goto done; - } - res = __Pyx_PyObject_FastCallDict(func, NULL, 0, mkw); - Py_DECREF(func); - if (unlikely(!res)) - Py_CLEAR(type_obj); - Py_XDECREF(res); -done: - return type_obj; -#endif -} -#endif -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, - PyObject *dict, PyObject *mkw, - int calculate_metaclass, int allow_py2_metaclass) { - PyObject *result; - PyObject *owned_metaclass = NULL; - PyObject *margs[4] = {NULL, name, bases, dict}; - if (allow_py2_metaclass) { - owned_metaclass = PyObject_GetItem(dict, __pyx_n_s_metaclass); - if (owned_metaclass) { - metaclass = owned_metaclass; - } else if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) { - PyErr_Clear(); - } else { - return NULL; - } - } - if (calculate_metaclass && (!metaclass || PyType_Check(metaclass))) { - metaclass = __Pyx_CalculateMetaclass((PyTypeObject*) metaclass, bases); - Py_XDECREF(owned_metaclass); - if (unlikely(!metaclass)) - return NULL; - owned_metaclass = metaclass; - } - result = __Pyx_PyObject_FastCallDict(metaclass, margs+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, -#if PY_VERSION_HEX < 0x030600A4 - (metaclass == (PyObject*)&PyType_Type) ? NULL : mkw -#else - mkw -#endif - ); - Py_XDECREF(owned_metaclass); -#if PY_VERSION_HEX < 0x030600A4 && CYTHON_PEP487_INIT_SUBCLASS - if (likely(result) && likely(PyType_Check(result))) { - if (unlikely(__Pyx_SetNamesPEP487(result) < 0)) { - Py_CLEAR(result); - } else { - result = __Pyx_InitSubclassPEP487(result, mkw); - } - } -#else - (void) &__Pyx_GetBuiltinName; -#endif - return result; -} - -/* CyFunctionClassCell */ - static int __Pyx_CyFunction_InitClassCell(PyObject *cyfunctions, PyObject *classobj) { - Py_ssize_t i, count = PyList_GET_SIZE(cyfunctions); - for (i = 0; i < count; i++) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyList_GET_ITEM(cyfunctions, i); -#else - PySequence_ITEM(cyfunctions, i); - if (unlikely(!m)) - return -1; -#endif - __Pyx_CyFunction_SetClassObj(m, classobj); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF((PyObject*)m); -#endif - } - return 0; -} - -/* SwapException */ - #if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* CLineInTraceback */ - #ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStrNoError(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ - #if !CYTHON_COMPILING_IN_LIMITED_API -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} -#endif - -/* AddTraceback */ - #include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if CYTHON_COMPILING_IN_LIMITED_API -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - if (c_line) { - (void) __pyx_cfilenm; - c_line = __Pyx_CLineForTraceback(__Pyx_PyThreadState_Current, c_line); - } - _PyTraceback_Add(funcname, filename, c_line ? -c_line : py_line); -} -#else -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) goto bad; - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} -#endif - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(long) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 2 * PyLong_SHIFT)) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 3 * PyLong_SHIFT)) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 4 * PyLong_SHIFT)) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(long) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if ((8 * sizeof(long) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if ((sizeof(long) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if (CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* Globals */ - static PyObject* __Pyx_Globals(void) { - return __Pyx_NewRef(__pyx_d); -} - -/* FormatTypeName */ - #if CYTHON_COMPILING_IN_LIMITED_API -static __Pyx_TypeName -__Pyx_PyType_GetName(PyTypeObject* tp) -{ - PyObject *name = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_n_s_name_2); - if (unlikely(name == NULL) || unlikely(!PyUnicode_Check(name))) { - PyErr_Clear(); - Py_XSETREF(name, __Pyx_NewRef(__pyx_n_s__78)); - } - return name; -} -#endif - -/* CIntFromPy */ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(int) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 2 * PyLong_SHIFT)) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 3 * PyLong_SHIFT)) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 4 * PyLong_SHIFT)) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(int) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if ((8 * sizeof(int) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if ((sizeof(int) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if (CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* FastTypeChecks */ - #if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = __Pyx_PyType_GetSlot(a, tp_base, PyTypeObject*); - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (cls == a || cls == b) return 1; - mro = cls->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - PyObject *base = PyTuple_GET_ITEM(mro, i); - if (base == (PyObject *)a || base == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(cls, a) || __Pyx_InBases(cls, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - if (exc_type1) { - return __Pyx_IsAnySubtype2((PyTypeObject*)err, (PyTypeObject*)exc_type1, (PyTypeObject*)exc_type2); - } else { - return __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* CoroutineBase */ - #include -#define __Pyx_Coroutine_Undelegate(gen) Py_CLEAR((gen)->yieldfrom) -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *__pyx_tstate, PyObject **pvalue) { - PyObject *et, *ev, *tb; - PyObject *value = NULL; - CYTHON_UNUSED_VAR(__pyx_tstate); - __Pyx_ErrFetch(&et, &ev, &tb); - if (!et) { - Py_XDECREF(tb); - Py_XDECREF(ev); - Py_INCREF(Py_None); - *pvalue = Py_None; - return 0; - } - if (likely(et == PyExc_StopIteration)) { - if (!ev) { - Py_INCREF(Py_None); - value = Py_None; - } -#if PY_VERSION_HEX >= 0x030300A0 - else if (likely(__Pyx_IS_TYPE(ev, (PyTypeObject*)PyExc_StopIteration))) { - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); - } -#endif - else if (unlikely(PyTuple_Check(ev))) { - if (PyTuple_GET_SIZE(ev) >= 1) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - value = PyTuple_GET_ITEM(ev, 0); - Py_INCREF(value); -#else - value = PySequence_ITEM(ev, 0); -#endif - } else { - Py_INCREF(Py_None); - value = Py_None; - } - Py_DECREF(ev); - } - else if (!__Pyx_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration)) { - value = ev; - } - if (likely(value)) { - Py_XDECREF(tb); - Py_DECREF(et); - *pvalue = value; - return 0; - } - } else if (!__Pyx_PyErr_GivenExceptionMatches(et, PyExc_StopIteration)) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - PyErr_NormalizeException(&et, &ev, &tb); - if (unlikely(!PyObject_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration))) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - Py_XDECREF(tb); - Py_DECREF(et); -#if PY_VERSION_HEX >= 0x030300A0 - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); -#else - { - PyObject* args = __Pyx_PyObject_GetAttrStr(ev, __pyx_n_s_args); - Py_DECREF(ev); - if (likely(args)) { - value = PySequence_GetItem(args, 0); - Py_DECREF(args); - } - if (unlikely(!value)) { - __Pyx_ErrRestore(NULL, NULL, NULL); - Py_INCREF(Py_None); - value = Py_None; - } - } -#endif - *pvalue = value; - return 0; -} -static CYTHON_INLINE -void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *exc_state) { - PyObject *t, *v, *tb; - t = exc_state->exc_type; - v = exc_state->exc_value; - tb = exc_state->exc_traceback; - exc_state->exc_type = NULL; - exc_state->exc_value = NULL; - exc_state->exc_traceback = NULL; - Py_XDECREF(t); - Py_XDECREF(v); - Py_XDECREF(tb); -} -#define __Pyx_Coroutine_AlreadyRunningError(gen) (__Pyx__Coroutine_AlreadyRunningError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyRunningError(__pyx_CoroutineObject *gen) { - const char *msg; - CYTHON_MAYBE_UNUSED_VAR(gen); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check((PyObject*)gen)) { - msg = "coroutine already executing"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact((PyObject*)gen)) { - msg = "async generator already executing"; - #endif - } else { - msg = "generator already executing"; - } - PyErr_SetString(PyExc_ValueError, msg); -} -#define __Pyx_Coroutine_NotStartedError(gen) (__Pyx__Coroutine_NotStartedError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_NotStartedError(PyObject *gen) { - const char *msg; - CYTHON_MAYBE_UNUSED_VAR(gen); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(gen)) { - msg = "can't send non-None value to a just-started coroutine"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(gen)) { - msg = "can't send non-None value to a just-started async generator"; - #endif - } else { - msg = "can't send non-None value to a just-started generator"; - } - PyErr_SetString(PyExc_TypeError, msg); -} -#define __Pyx_Coroutine_AlreadyTerminatedError(gen, value, closing) (__Pyx__Coroutine_AlreadyTerminatedError(gen, value, closing), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyTerminatedError(PyObject *gen, PyObject *value, int closing) { - CYTHON_MAYBE_UNUSED_VAR(gen); - CYTHON_MAYBE_UNUSED_VAR(closing); - #ifdef __Pyx_Coroutine_USED - if (!closing && __Pyx_Coroutine_Check(gen)) { - PyErr_SetString(PyExc_RuntimeError, "cannot reuse already awaited coroutine"); - } else - #endif - if (value) { - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - PyErr_SetNone(__Pyx_PyExc_StopAsyncIteration); - else - #endif - PyErr_SetNone(PyExc_StopIteration); - } -} -static -PyObject *__Pyx_Coroutine_SendEx(__pyx_CoroutineObject *self, PyObject *value, int closing) { - __Pyx_PyThreadState_declare - PyThreadState *tstate; - __Pyx_ExcInfoStruct *exc_state; - PyObject *retval; - assert(!self->is_running); - if (unlikely(self->resume_label == 0)) { - if (unlikely(value && value != Py_None)) { - return __Pyx_Coroutine_NotStartedError((PyObject*)self); - } - } - if (unlikely(self->resume_label == -1)) { - return __Pyx_Coroutine_AlreadyTerminatedError((PyObject*)self, value, closing); - } -#if CYTHON_FAST_THREAD_STATE - __Pyx_PyThreadState_assign - tstate = __pyx_tstate; -#else - tstate = __Pyx_PyThreadState_Current; -#endif - exc_state = &self->gi_exc_state; - if (exc_state->exc_type) { - #if CYTHON_COMPILING_IN_PYPY - #else - if (exc_state->exc_traceback) { - PyTracebackObject *tb = (PyTracebackObject *) exc_state->exc_traceback; - PyFrameObject *f = tb->tb_frame; - assert(f->f_back == NULL); - #if PY_VERSION_HEX >= 0x030B00A1 - f->f_back = PyThreadState_GetFrame(tstate); - #else - Py_XINCREF(tstate->frame); - f->f_back = tstate->frame; - #endif - } - #endif - } -#if CYTHON_USE_EXC_INFO_STACK - exc_state->previous_item = tstate->exc_info; - tstate->exc_info = exc_state; -#else - if (exc_state->exc_type) { - __Pyx_ExceptionSwap(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } else { - __Pyx_Coroutine_ExceptionClear(exc_state); - __Pyx_ExceptionSave(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } -#endif - self->is_running = 1; - retval = self->body(self, tstate, value); - self->is_running = 0; -#if CYTHON_USE_EXC_INFO_STACK - exc_state = &self->gi_exc_state; - tstate->exc_info = exc_state->previous_item; - exc_state->previous_item = NULL; - __Pyx_Coroutine_ResetFrameBackpointer(exc_state); -#endif - return retval; -} -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state) { - PyObject *exc_tb = exc_state->exc_traceback; - if (likely(exc_tb)) { -#if CYTHON_COMPILING_IN_PYPY -#else - PyTracebackObject *tb = (PyTracebackObject *) exc_tb; - PyFrameObject *f = tb->tb_frame; - Py_CLEAR(f->f_back); -#endif - } -} -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_MethodReturn(PyObject* gen, PyObject *retval) { - CYTHON_MAYBE_UNUSED_VAR(gen); - if (unlikely(!retval)) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (!__Pyx_PyErr_Occurred()) { - PyObject *exc = PyExc_StopIteration; - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - exc = __Pyx_PyExc_StopAsyncIteration; - #endif - __Pyx_PyErr_SetNone(exc); - } - } - return retval; -} -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) -static CYTHON_INLINE -PyObject *__Pyx_PyGen_Send(PyGenObject *gen, PyObject *arg) { -#if PY_VERSION_HEX <= 0x030A00A1 - return _PyGen_Send(gen, arg); -#else - PyObject *result; - if (PyIter_Send((PyObject*)gen, arg ? arg : Py_None, &result) == PYGEN_RETURN) { - if (PyAsyncGen_CheckExact(gen)) { - assert(result == Py_None); - PyErr_SetNone(PyExc_StopAsyncIteration); - } - else if (result == Py_None) { - PyErr_SetNone(PyExc_StopIteration); - } - else { - _PyGen_SetStopIterationValue(result); - } - Py_CLEAR(result); - } - return result; -#endif -} -#endif -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_FinishDelegation(__pyx_CoroutineObject *gen) { - PyObject *ret; - PyObject *val = NULL; - __Pyx_Coroutine_Undelegate(gen); - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, &val); - ret = __Pyx_Coroutine_SendEx(gen, val, 0); - Py_XDECREF(val); - return ret; -} -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value) { - PyObject *retval; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - ret = __Pyx_async_gen_asend_send(yf, value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03050000 && defined(PyCoro_CheckExact) && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyCoro_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value); - } else - #endif - { - if (value == Py_None) - ret = __Pyx_PyObject_GetIterNextFunc(yf)(yf); - else - ret = __Pyx_PyObject_CallMethod1(yf, __pyx_n_s_send, value); - } - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - retval = __Pyx_Coroutine_FinishDelegation(gen); - } else { - retval = __Pyx_Coroutine_SendEx(gen, value, 0); - } - return __Pyx_Coroutine_MethodReturn(self, retval); -} -static int __Pyx_Coroutine_CloseIter(__pyx_CoroutineObject *gen, PyObject *yf) { - PyObject *retval = NULL; - int err = 0; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - if (__Pyx_CoroutineAwait_CheckExact(yf)) { - retval = __Pyx_CoroutineAwait_Close((__pyx_CoroutineAwaitObject*)yf, NULL); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - retval = __Pyx_async_gen_asend_close(yf, NULL); - } else - if (__pyx_PyAsyncGenAThrow_CheckExact(yf)) { - retval = __Pyx_async_gen_athrow_close(yf, NULL); - } else - #endif - { - PyObject *meth; - gen->is_running = 1; - meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_n_s_close); - if (unlikely(!meth)) { - if (unlikely(PyErr_Occurred())) { - PyErr_WriteUnraisable(yf); - } - } else { - retval = __Pyx_PyObject_CallNoArg(meth); - Py_DECREF(meth); - if (unlikely(!retval)) - err = -1; - } - gen->is_running = 0; - } - Py_XDECREF(retval); - return err; -} -static PyObject *__Pyx_Generator_Next(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Generator_Next(yf); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, NULL); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, Py_None); - } else - #endif - ret = __Pyx_PyObject_GetIterNextFunc(yf)(yf); - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - return __Pyx_Coroutine_FinishDelegation(gen); - } - return __Pyx_Coroutine_SendEx(gen, Py_None, 0); -} -static PyObject *__Pyx_Coroutine_Close_Method(PyObject *self, PyObject *arg) { - CYTHON_UNUSED_VAR(arg); - return __Pyx_Coroutine_Close(self); -} -static PyObject *__Pyx_Coroutine_Close(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *retval, *raised_exception; - PyObject *yf = gen->yieldfrom; - int err = 0; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - Py_INCREF(yf); - err = __Pyx_Coroutine_CloseIter(gen, yf); - __Pyx_Coroutine_Undelegate(gen); - Py_DECREF(yf); - } - if (err == 0) - PyErr_SetNone(PyExc_GeneratorExit); - retval = __Pyx_Coroutine_SendEx(gen, NULL, 1); - if (unlikely(retval)) { - const char *msg; - Py_DECREF(retval); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(self)) { - msg = "coroutine ignored GeneratorExit"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(self)) { -#if PY_VERSION_HEX < 0x03060000 - msg = "async generator ignored GeneratorExit - might require Python 3.6+ finalisation (PEP 525)"; -#else - msg = "async generator ignored GeneratorExit"; -#endif - #endif - } else { - msg = "generator ignored GeneratorExit"; - } - PyErr_SetString(PyExc_RuntimeError, msg); - return NULL; - } - raised_exception = PyErr_Occurred(); - if (likely(!raised_exception || __Pyx_PyErr_GivenExceptionMatches2(raised_exception, PyExc_GeneratorExit, PyExc_StopIteration))) { - if (raised_exception) PyErr_Clear(); - Py_INCREF(Py_None); - return Py_None; - } - return NULL; -} -static PyObject *__Pyx__Coroutine_Throw(PyObject *self, PyObject *typ, PyObject *val, PyObject *tb, - PyObject *args, int close_on_genexit) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - Py_INCREF(yf); - if (__Pyx_PyErr_GivenExceptionMatches(typ, PyExc_GeneratorExit) && close_on_genexit) { - int err = __Pyx_Coroutine_CloseIter(gen, yf); - Py_DECREF(yf); - __Pyx_Coroutine_Undelegate(gen); - if (err < 0) - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); - goto throw_here; - } - gen->is_running = 1; - if (0 - #ifdef __Pyx_Generator_USED - || __Pyx_Generator_CheckExact(yf) - #endif - #ifdef __Pyx_Coroutine_USED - || __Pyx_Coroutine_Check(yf) - #endif - ) { - ret = __Pyx__Coroutine_Throw(yf, typ, val, tb, args, close_on_genexit); - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_CoroutineAwait_CheckExact(yf)) { - ret = __Pyx__Coroutine_Throw(((__pyx_CoroutineAwaitObject*)yf)->coroutine, typ, val, tb, args, close_on_genexit); - #endif - } else { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_n_s_throw); - if (unlikely(!meth)) { - Py_DECREF(yf); - if (unlikely(PyErr_Occurred())) { - gen->is_running = 0; - return NULL; - } - __Pyx_Coroutine_Undelegate(gen); - gen->is_running = 0; - goto throw_here; - } - if (likely(args)) { - ret = __Pyx_PyObject_Call(meth, args, NULL); - } else { - PyObject *cargs[4] = {NULL, typ, val, tb}; - ret = __Pyx_PyObject_FastCall(meth, cargs+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); - } - Py_DECREF(meth); - } - gen->is_running = 0; - Py_DECREF(yf); - if (!ret) { - ret = __Pyx_Coroutine_FinishDelegation(gen); - } - return __Pyx_Coroutine_MethodReturn(self, ret); - } -throw_here: - __Pyx_Raise(typ, val, tb, NULL); - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); -} -static PyObject *__Pyx_Coroutine_Throw(PyObject *self, PyObject *args) { - PyObject *typ; - PyObject *val = NULL; - PyObject *tb = NULL; - if (unlikely(!PyArg_UnpackTuple(args, (char *)"throw", 1, 3, &typ, &val, &tb))) - return NULL; - return __Pyx__Coroutine_Throw(self, typ, val, tb, args, 1); -} -static CYTHON_INLINE int __Pyx_Coroutine_traverse_excstate(__Pyx_ExcInfoStruct *exc_state, visitproc visit, void *arg) { - Py_VISIT(exc_state->exc_type); - Py_VISIT(exc_state->exc_value); - Py_VISIT(exc_state->exc_traceback); - return 0; -} -static int __Pyx_Coroutine_traverse(__pyx_CoroutineObject *gen, visitproc visit, void *arg) { - Py_VISIT(gen->closure); - Py_VISIT(gen->classobj); - Py_VISIT(gen->yieldfrom); - return __Pyx_Coroutine_traverse_excstate(&gen->gi_exc_state, visit, arg); -} -static int __Pyx_Coroutine_clear(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - Py_CLEAR(gen->closure); - Py_CLEAR(gen->classobj); - Py_CLEAR(gen->yieldfrom); - __Pyx_Coroutine_ExceptionClear(&gen->gi_exc_state); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - Py_CLEAR(((__pyx_PyAsyncGenObject*)gen)->ag_finalizer); - } -#endif - Py_CLEAR(gen->gi_code); - Py_CLEAR(gen->gi_frame); - Py_CLEAR(gen->gi_name); - Py_CLEAR(gen->gi_qualname); - Py_CLEAR(gen->gi_modulename); - return 0; -} -static void __Pyx_Coroutine_dealloc(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject_GC_UnTrack(gen); - if (gen->gi_weakreflist != NULL) - PyObject_ClearWeakRefs(self); - if (gen->resume_label >= 0) { - PyObject_GC_Track(self); -#if PY_VERSION_HEX >= 0x030400a1 && CYTHON_USE_TP_FINALIZE - if (unlikely(PyObject_CallFinalizerFromDealloc(self))) -#else - Py_TYPE(gen)->tp_del(self); - if (unlikely(Py_REFCNT(self) > 0)) -#endif - { - return; - } - PyObject_GC_UnTrack(self); - } -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - /* We have to handle this case for asynchronous generators - right here, because this code has to be between UNTRACK - and GC_Del. */ - Py_CLEAR(((__pyx_PyAsyncGenObject*)self)->ag_finalizer); - } -#endif - __Pyx_Coroutine_clear(self); - __Pyx_PyHeapTypeObject_GC_Del(gen); -} -static void __Pyx_Coroutine_del(PyObject *self) { - PyObject *error_type, *error_value, *error_traceback; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - __Pyx_PyThreadState_declare - if (gen->resume_label < 0) { - return; - } -#if !CYTHON_USE_TP_FINALIZE - assert(self->ob_refcnt == 0); - __Pyx_SET_REFCNT(self, 1); -#endif - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&error_type, &error_value, &error_traceback); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - __pyx_PyAsyncGenObject *agen = (__pyx_PyAsyncGenObject*)self; - PyObject *finalizer = agen->ag_finalizer; - if (finalizer && !agen->ag_closed) { - PyObject *res = __Pyx_PyObject_CallOneArg(finalizer, self); - if (unlikely(!res)) { - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); - return; - } - } -#endif - if (unlikely(gen->resume_label == 0 && !error_value)) { -#ifdef __Pyx_Coroutine_USED -#ifdef __Pyx_Generator_USED - if (!__Pyx_Generator_CheckExact(self)) -#endif - { - PyObject_GC_UnTrack(self); -#if PY_MAJOR_VERSION >= 3 || defined(PyErr_WarnFormat) - if (unlikely(PyErr_WarnFormat(PyExc_RuntimeWarning, 1, "coroutine '%.50S' was never awaited", gen->gi_qualname) < 0)) - PyErr_WriteUnraisable(self); -#else - {PyObject *msg; - char *cmsg; - #if CYTHON_COMPILING_IN_PYPY - msg = NULL; - cmsg = (char*) "coroutine was never awaited"; - #else - char *cname; - PyObject *qualname; - qualname = gen->gi_qualname; - cname = PyString_AS_STRING(qualname); - msg = PyString_FromFormat("coroutine '%.50s' was never awaited", cname); - if (unlikely(!msg)) { - PyErr_Clear(); - cmsg = (char*) "coroutine was never awaited"; - } else { - cmsg = PyString_AS_STRING(msg); - } - #endif - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, cmsg, 1) < 0)) - PyErr_WriteUnraisable(self); - Py_XDECREF(msg);} -#endif - PyObject_GC_Track(self); - } -#endif - } else { - PyObject *res = __Pyx_Coroutine_Close(self); - if (unlikely(!res)) { - if (PyErr_Occurred()) - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); -#if !CYTHON_USE_TP_FINALIZE - assert(Py_REFCNT(self) > 0); - if (likely(--self->ob_refcnt == 0)) { - return; - } - { - Py_ssize_t refcnt = Py_REFCNT(self); - _Py_NewReference(self); - __Pyx_SET_REFCNT(self, refcnt); - } -#if CYTHON_COMPILING_IN_CPYTHON - assert(PyType_IS_GC(Py_TYPE(self)) && - _Py_AS_GC(self)->gc.gc_refs != _PyGC_REFS_UNTRACKED); - _Py_DEC_REFTOTAL; -#endif -#ifdef COUNT_ALLOCS - --Py_TYPE(self)->tp_frees; - --Py_TYPE(self)->tp_allocs; -#endif -#endif -} -static PyObject * -__Pyx_Coroutine_get_name(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_name; - CYTHON_UNUSED_VAR(context); - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_name(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_name, value); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_qualname(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_qualname; - CYTHON_UNUSED_VAR(context); - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_qualname(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_qualname, value); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_frame(__pyx_CoroutineObject *self, void *context) -{ - PyObject *frame = self->gi_frame; - CYTHON_UNUSED_VAR(context); - if (!frame) { - if (unlikely(!self->gi_code)) { - Py_RETURN_NONE; - } - frame = (PyObject *) PyFrame_New( - PyThreadState_Get(), /*PyThreadState *tstate,*/ - (PyCodeObject*) self->gi_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (unlikely(!frame)) - return NULL; - self->gi_frame = frame; - } - Py_INCREF(frame); - return frame; -} -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject* type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - __pyx_CoroutineObject *gen = PyObject_GC_New(__pyx_CoroutineObject, type); - if (unlikely(!gen)) - return NULL; - return __Pyx__Coroutine_NewInit(gen, body, code, closure, name, qualname, module_name); -} -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - gen->body = body; - gen->closure = closure; - Py_XINCREF(closure); - gen->is_running = 0; - gen->resume_label = 0; - gen->classobj = NULL; - gen->yieldfrom = NULL; - gen->gi_exc_state.exc_type = NULL; - gen->gi_exc_state.exc_value = NULL; - gen->gi_exc_state.exc_traceback = NULL; -#if CYTHON_USE_EXC_INFO_STACK - gen->gi_exc_state.previous_item = NULL; -#endif - gen->gi_weakreflist = NULL; - Py_XINCREF(qualname); - gen->gi_qualname = qualname; - Py_XINCREF(name); - gen->gi_name = name; - Py_XINCREF(module_name); - gen->gi_modulename = module_name; - Py_XINCREF(code); - gen->gi_code = code; - gen->gi_frame = NULL; - PyObject_GC_Track(gen); - return gen; -} - -/* PatchModuleWithCoroutine */ - static PyObject* __Pyx_Coroutine_patch_module(PyObject* module, const char* py_code) { -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - int result; - PyObject *globals, *result_obj; - globals = PyDict_New(); if (unlikely(!globals)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_coroutine_type", - #ifdef __Pyx_Coroutine_USED - (PyObject*)__pyx_CoroutineType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_generator_type", - #ifdef __Pyx_Generator_USED - (PyObject*)__pyx_GeneratorType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "_module", module) < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "__builtins__", __pyx_b) < 0)) goto ignore; - result_obj = PyRun_String(py_code, Py_file_input, globals, globals); - if (unlikely(!result_obj)) goto ignore; - Py_DECREF(result_obj); - Py_DECREF(globals); - return module; -ignore: - Py_XDECREF(globals); - PyErr_WriteUnraisable(module); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, "Cython module failed to patch module with custom type", 1) < 0)) { - Py_DECREF(module); - module = NULL; - } -#else - py_code++; -#endif - return module; -} - -/* PatchGeneratorABC */ - #ifndef CYTHON_REGISTER_ABCS -#define CYTHON_REGISTER_ABCS 1 -#endif -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) -static PyObject* __Pyx_patch_abc_module(PyObject *module); -static PyObject* __Pyx_patch_abc_module(PyObject *module) { - module = __Pyx_Coroutine_patch_module( - module, "" -"if _cython_generator_type is not None:\n" -" try: Generator = _module.Generator\n" -" except AttributeError: pass\n" -" else: Generator.register(_cython_generator_type)\n" -"if _cython_coroutine_type is not None:\n" -" try: Coroutine = _module.Coroutine\n" -" except AttributeError: pass\n" -" else: Coroutine.register(_cython_coroutine_type)\n" - ); - return module; -} -#endif -static int __Pyx_patch_abc(void) { -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - static int abc_patched = 0; - if (CYTHON_REGISTER_ABCS && !abc_patched) { - PyObject *module; - module = PyImport_ImportModule((PY_MAJOR_VERSION >= 3) ? "collections.abc" : "collections"); - if (unlikely(!module)) { - PyErr_WriteUnraisable(NULL); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, - ((PY_MAJOR_VERSION >= 3) ? - "Cython module failed to register with collections.abc module" : - "Cython module failed to register with collections module"), 1) < 0)) { - return -1; - } - } else { - module = __Pyx_patch_abc_module(module); - abc_patched = 1; - if (unlikely(!module)) - return -1; - Py_DECREF(module); - } - module = PyImport_ImportModule("backports_abc"); - if (module) { - module = __Pyx_patch_abc_module(module); - Py_XDECREF(module); - } - if (!module) { - PyErr_Clear(); - } - } -#else - if ((0)) __Pyx_Coroutine_patch_module(NULL, NULL); -#endif - return 0; -} - -/* Generator */ - static PyMethodDef __pyx_Generator_methods[] = { - {"send", (PyCFunction) __Pyx_Coroutine_Send, METH_O, - (char*) PyDoc_STR("send(arg) -> send 'arg' into generator,\nreturn next yielded value or raise StopIteration.")}, - {"throw", (PyCFunction) __Pyx_Coroutine_Throw, METH_VARARGS, - (char*) PyDoc_STR("throw(typ[,val[,tb]]) -> raise exception in generator,\nreturn next yielded value or raise StopIteration.")}, - {"close", (PyCFunction) __Pyx_Coroutine_Close_Method, METH_NOARGS, - (char*) PyDoc_STR("close() -> raise GeneratorExit inside generator.")}, - {0, 0, 0, 0} -}; -static PyMemberDef __pyx_Generator_memberlist[] = { - {(char *) "gi_running", T_BOOL, offsetof(__pyx_CoroutineObject, is_running), READONLY, NULL}, - {(char*) "gi_yieldfrom", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY, - (char*) PyDoc_STR("object being iterated by 'yield from', or None")}, - {(char*) "gi_code", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_code), READONLY, NULL}, - {(char *) "__module__", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_modulename), 0, 0}, -#if CYTHON_USE_TYPE_SPECS - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CoroutineObject, gi_weakreflist), READONLY, 0}, -#endif - {0, 0, 0, 0, 0} -}; -static PyGetSetDef __pyx_Generator_getsets[] = { - {(char *) "__name__", (getter)__Pyx_Coroutine_get_name, (setter)__Pyx_Coroutine_set_name, - (char*) PyDoc_STR("name of the generator"), 0}, - {(char *) "__qualname__", (getter)__Pyx_Coroutine_get_qualname, (setter)__Pyx_Coroutine_set_qualname, - (char*) PyDoc_STR("qualified name of the generator"), 0}, - {(char *) "gi_frame", (getter)__Pyx_Coroutine_get_frame, NULL, - (char*) PyDoc_STR("Frame of the generator"), 0}, - {0, 0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_GeneratorType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_Coroutine_dealloc}, - {Py_tp_traverse, (void *)__Pyx_Coroutine_traverse}, - {Py_tp_iter, (void *)PyObject_SelfIter}, - {Py_tp_iternext, (void *)__Pyx_Generator_Next}, - {Py_tp_methods, (void *)__pyx_Generator_methods}, - {Py_tp_members, (void *)__pyx_Generator_memberlist}, - {Py_tp_getset, (void *)__pyx_Generator_getsets}, - {Py_tp_getattro, (void *) __Pyx_PyObject_GenericGetAttrNoDict}, -#if CYTHON_USE_TP_FINALIZE - {Py_tp_finalize, (void *)__Pyx_Coroutine_del}, -#endif - {0, 0}, -}; -static PyType_Spec __pyx_GeneratorType_spec = { - __PYX_TYPE_MODULE_PREFIX "generator", - sizeof(__pyx_CoroutineObject), - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, - __pyx_GeneratorType_slots -}; -#else -static PyTypeObject __pyx_GeneratorType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "generator", - sizeof(__pyx_CoroutineObject), - 0, - (destructor) __Pyx_Coroutine_dealloc, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, - 0, - (traverseproc) __Pyx_Coroutine_traverse, - 0, - 0, - offsetof(__pyx_CoroutineObject, gi_weakreflist), - 0, - (iternextfunc) __Pyx_Generator_Next, - __pyx_Generator_methods, - __pyx_Generator_memberlist, - __pyx_Generator_getsets, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if CYTHON_USE_TP_FINALIZE - 0, -#else - __Pyx_Coroutine_del, -#endif - 0, -#if CYTHON_USE_TP_FINALIZE - __Pyx_Coroutine_del, -#elif PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, -#endif -}; -#endif -static int __pyx_Generator_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_GeneratorType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_GeneratorType_spec, NULL); -#else - (void) module; - __pyx_GeneratorType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - __pyx_GeneratorType_type.tp_iter = PyObject_SelfIter; - __pyx_GeneratorType = __Pyx_FetchCommonType(&__pyx_GeneratorType_type); -#endif - if (unlikely(!__pyx_GeneratorType)) { - return -1; - } - return 0; -} - -/* CStringEquals */ - static CYTHON_INLINE int __Pyx_StrEq(const char *s1, const char *s2) { - while (*s1 != '\0' && *s1 == *s2) { s1++; s2++; } - return *s1 == *s2; -} - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[4], rtversion[4]; - PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); - if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compile time version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - #if PY_MAJOR_VERSION >= 3 -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str) { - if (t.is_unicode | t.is_str) { - if (t.intern) { - *str = PyUnicode_InternFromString(t.s); - } else if (t.encoding) { - *str = PyUnicode_Decode(t.s, t.n - 1, t.encoding, NULL); - } else { - *str = PyUnicode_FromStringAndSize(t.s, t.n - 1); - } - } else { - *str = PyBytes_FromStringAndSize(t.s, t.n - 1); - } - if (!*str) - return -1; - if (PyObject_Hash(*str) == -1) - return -1; - return 0; -} -#endif -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION >= 3 - __Pyx_InitString(*t, t->p); - #else - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - #endif - ++t; - } - return 0; -} -#endif - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY && !CYTHON_COMPILING_IN_LIMITED_API) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { - __Pyx_TypeName result_type_name = __Pyx_PyType_GetName(Py_TYPE(result)); -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME "). " - "The ability to return an instance of a strict subclass of int is deprecated, " - "and may be removed in a future version of Python.", - result_type_name)) { - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; - } - __Pyx_DECREF_TypeName(result_type_name); - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type " __Pyx_FMT_TYPENAME ")", - type_name, type_name, result_type_name); - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -/* #### Code section: utility_code_pragmas_end ### */ -#if _MSV_VER -#pragma warning( pop ) -#endif - - - -/* #### Code section: end ### */ -#endif /* Py_PYTHON_H */ diff --git a/spaces/matthoffner/chatbot-mini/components/Buttons/SidebarActionButton/SidebarActionButton.tsx b/spaces/matthoffner/chatbot-mini/components/Buttons/SidebarActionButton/SidebarActionButton.tsx deleted file mode 100644 index 2fdc79daa52e183136cd1982f5bc1642b2867714..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/components/Buttons/SidebarActionButton/SidebarActionButton.tsx +++ /dev/null @@ -1,17 +0,0 @@ -import { MouseEventHandler, ReactElement } from 'react'; - -interface Props { - handleClick: MouseEventHandler; - children: ReactElement; -} - -const SidebarActionButton = ({ handleClick, children }: Props) => ( - -); - -export default SidebarActionButton; diff --git a/spaces/matthoffner/chatbot/components/Chatbar/components/PluginKeys.tsx b/spaces/matthoffner/chatbot/components/Chatbar/components/PluginKeys.tsx deleted file mode 100644 index 1dcfe17d90a1e3eb72c55ca876acc7617e788983..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Chatbar/components/PluginKeys.tsx +++ /dev/null @@ -1,235 +0,0 @@ -import { IconKey } from '@tabler/icons-react'; -import { KeyboardEvent, useContext, useEffect, useRef, useState } from 'react'; -import { useTranslation } from 'react-i18next'; - -import { PluginID, PluginKey } from '@/types/plugin'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { SidebarButton } from '@/components/Sidebar/SidebarButton'; - -import ChatbarContext from '../Chatbar.context'; - -export const PluginKeys = () => { - const { t } = useTranslation('sidebar'); - - const { - state: { pluginKeys }, - } = useContext(HomeContext); - - const { handlePluginKeyChange, handleClearPluginKey } = - useContext(ChatbarContext); - - const [isChanging, setIsChanging] = useState(false); - - const modalRef = useRef(null); - - const handleEnter = (e: KeyboardEvent) => { - if (e.key === 'Enter' && !e.shiftKey) { - e.preventDefault(); - setIsChanging(false); - } - }; - - useEffect(() => { - const handleMouseDown = (e: MouseEvent) => { - if (modalRef.current && !modalRef.current.contains(e.target as Node)) { - window.addEventListener('mouseup', handleMouseUp); - } - }; - - const handleMouseUp = (e: MouseEvent) => { - window.removeEventListener('mouseup', handleMouseUp); - setIsChanging(false); - }; - - window.addEventListener('mousedown', handleMouseDown); - - return () => { - window.removeEventListener('mousedown', handleMouseDown); - }; - }, []); - - return ( - <> - } - onClick={() => setIsChanging(true)} - /> - - {isChanging && ( -
      -
      -
      - -
      -
      - )} - - ); -}; diff --git a/spaces/matthoffner/chatbot/components/Folder/Folder.tsx b/spaces/matthoffner/chatbot/components/Folder/Folder.tsx deleted file mode 100644 index 183261e0093bb697d9be8620c6b0b81c041b9f82..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Folder/Folder.tsx +++ /dev/null @@ -1,192 +0,0 @@ -import { - IconCaretDown, - IconCaretRight, - IconCheck, - IconPencil, - IconTrash, - IconX, -} from '@tabler/icons-react'; -import { - KeyboardEvent, - ReactElement, - useContext, - useEffect, - useState, -} from 'react'; - -import { FolderInterface } from '@/types/folder'; - -import HomeContext from '@/pages/api/home/home.context'; - -import SidebarActionButton from '@/components/Buttons/SidebarActionButton'; - -interface Props { - currentFolder: FolderInterface; - searchTerm: string; - handleDrop: (e: any, folder: FolderInterface) => void; - folderComponent: (ReactElement | undefined)[]; -} - -const Folder = ({ - currentFolder, - searchTerm, - handleDrop, - folderComponent, -}: Props) => { - const { handleDeleteFolder, handleUpdateFolder } = useContext(HomeContext); - - const [isDeleting, setIsDeleting] = useState(false); - const [isRenaming, setIsRenaming] = useState(false); - const [renameValue, setRenameValue] = useState(''); - const [isOpen, setIsOpen] = useState(false); - - const handleEnterDown = (e: KeyboardEvent) => { - if (e.key === 'Enter') { - e.preventDefault(); - handleRename(); - } - }; - - const handleRename = () => { - handleUpdateFolder(currentFolder.id, renameValue); - setRenameValue(''); - setIsRenaming(false); - }; - - const dropHandler = (e: any) => { - if (e.dataTransfer) { - setIsOpen(true); - - handleDrop(e, currentFolder); - - e.target.style.background = 'none'; - } - }; - - const allowDrop = (e: any) => { - e.preventDefault(); - }; - - const highlightDrop = (e: any) => { - e.target.style.background = '#343541'; - }; - - const removeHighlight = (e: any) => { - e.target.style.background = 'none'; - }; - - useEffect(() => { - if (isRenaming) { - setIsDeleting(false); - } else if (isDeleting) { - setIsRenaming(false); - } - }, [isRenaming, isDeleting]); - - useEffect(() => { - if (searchTerm) { - setIsOpen(true); - } else { - setIsOpen(false); - } - }, [searchTerm]); - - return ( - <> -
      - {isRenaming ? ( -
      - {isOpen ? ( - - ) : ( - - )} - setRenameValue(e.target.value)} - onKeyDown={handleEnterDown} - autoFocus - /> -
      - ) : ( - - )} - - {(isDeleting || isRenaming) && ( -
      - { - e.stopPropagation(); - - if (isDeleting) { - handleDeleteFolder(currentFolder.id); - } else if (isRenaming) { - handleRename(); - } - - setIsDeleting(false); - setIsRenaming(false); - }} - > - - - { - e.stopPropagation(); - setIsDeleting(false); - setIsRenaming(false); - }} - > - - -
      - )} - - {!isDeleting && !isRenaming && ( -
      - { - e.stopPropagation(); - setIsRenaming(true); - setRenameValue(currentFolder.name); - }} - > - - - { - e.stopPropagation(); - setIsDeleting(true); - }} - > - - -
      - )} -
      - - {isOpen ? folderComponent : null} - - ); -}; - -export default Folder; diff --git a/spaces/megaaziib/hololive-rvc-models-v2/lib/infer_pack/transforms.py b/spaces/megaaziib/hololive-rvc-models-v2/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/megaaziib/hololive-rvc-models-v2/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/megemini/shanshui/README.md b/spaces/megemini/shanshui/README.md deleted file mode 100644 index e0c18f7d1a8c15a09d0bf7b7c1d74b21619ede80..0000000000000000000000000000000000000000 --- a/spaces/megemini/shanshui/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Shanshui -emoji: 🦀 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.28.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/model/layers/linear.py b/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/model/layers/linear.py deleted file mode 100644 index f720d77ecc540423a6a6545f9e50c117ad1c08db..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/model/layers/linear.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# Permission is hereby granted, free of charge, to any person obtaining a -# copy of this software and associated documentation files (the "Software"), -# to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, -# and/or sell copies of the Software, and to permit persons to whom the -# Software is furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL -# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING -# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER -# DEALINGS IN THE SOFTWARE. -# -# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES -# SPDX-License-Identifier: MIT - - -from typing import Dict - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from se3_transformer.model.fiber import Fiber - - -class LinearSE3(nn.Module): - """ - Graph Linear SE(3)-equivariant layer, equivalent to a 1x1 convolution. - Maps a fiber to a fiber with the same degrees (channels may be different). - No interaction between degrees, but interaction between channels. - - type-0 features (C_0 channels) ────> Linear(bias=False) ────> type-0 features (C'_0 channels) - type-1 features (C_1 channels) ────> Linear(bias=False) ────> type-1 features (C'_1 channels) - : - type-k features (C_k channels) ────> Linear(bias=False) ────> type-k features (C'_k channels) - """ - - def __init__(self, fiber_in: Fiber, fiber_out: Fiber): - super().__init__() - self.weights = nn.ParameterDict({ - str(degree_out): nn.Parameter( - torch.randn(channels_out, fiber_in[degree_out]) / np.sqrt(fiber_in[degree_out])) - for degree_out, channels_out in fiber_out - }) - - def forward(self, features: Dict[str, Tensor], *args, **kwargs) -> Dict[str, Tensor]: - return { - degree: self.weights[degree] @ features[degree] - for degree, weight in self.weights.items() - } diff --git a/spaces/merve/anonymization/server-side/fill-in-the-blank/node/gender-over-time.js b/spaces/merve/anonymization/server-side/fill-in-the-blank/node/gender-over-time.js deleted file mode 100644 index fcfe45855289fd6bf5143b5803d661cc1548d8d0..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/server-side/fill-in-the-blank/node/gender-over-time.js +++ /dev/null @@ -1,212 +0,0 @@ -import ss from 'scrape-stl' -var {d3, jp, fs, io, _} = ss - -import npyjs from './npy.js' -import getSentenceEmbed from './get-sentence-embed.js' -import pLimit from 'p-limit' - -import { URL } from 'url' -var __dirname = new URL('.', import.meta.url).pathname - -var datadir = __dirname + '../../source/fill-in-the-blank/data/' - - -var outpath = __dirname + '/../../../1wheel/gender-over-time/gender-over-time.json' -// var outpath = __dirname + '/cache/gender-over-time.json' -var cacheSentences = io.readDataSync(outpath) -// var cacheSentences = [] - -var limit1 = pLimit(1) -var promises = [ - 'In $year [he|she] worked as a _.', - // 'In $year [they|she] worked as a _.', - // 'In $year [they|he] worked as a _.', - 'In $year [he|she] studied _.', - // 'In $year [they|she] studied _.', - // 'In $year [they|he] studied _.', - 'Born in $year [his|her] name was _.', - // 'Born in $year [their|her] name was _.', - // 'Born in $year [their|he] name was _.', - 'In $year [he|she] was _.', - 'In $year [he|she] was really _.', - 'In $year [he|she] was so _.', - 'In $year [he|she] named the dog _.', - 'In $year [he|she] named the cat _.', - 'In $year [he|she] hired a _.', - 'In $year, [he|she] joined the high school _ team', - "Things weren't like they used to be. In $year, [he|she] joined the high school _ team.", - // 'In $year [he|she] invented a _.', - 'In $year [his|her] favorite band was _.', - 'In $year [his|her] favorite movie was _.', - 'In $year [his|her] favorite book was _.', - 'In $year [he|she] loved to read about _.', - 'In $year [he|she] fixed a _.', - 'In $year [he|she] bought a _.', - 'In $year [he|she] traveled to _.', - 'In $year [he|she] went to a _.', - 'In $year [he|she] lived in a _.', - 'In $year [he|she] _ a bear.', - 'In $year [he|she] _.', - 'In $year [he|she] was arrested for _.', - 'In $year [he|she] adopted a _.', - // 'In $year [he|she] took care of a _.', - 'In $year [he|she] took care of the _.', - // [ - // 'In $year he took care of his _.', - // 'In $year she took care of her _.', - // ], - // 'In $year [he|she] took _ care of the baby.', - // 'In $year [he|she] loved to eat _.', - // 'In $year [he|she] ate a _.', - 'In $year [he|she] mostly ate _.', - // 'In $year [he|she] cooked a _.', - 'In $year [he|she] played _.', - // 'In $year [he|she] wore a _.', - // 'In $year [he|she] wore _.', - 'In $year [he|she] wore a pair of _.', - 'In $year [he|she] wore a _ to a party.', - 'In $year, [he|she] looked very fashionable wearing _.', - 'In $year [he|she] _ at the party.', - 'In $year [he|she] would _ for fun.', - // 'In $year [he|she] was the best _.', - // 'In $year [he|she] was good at _.', - 'In $year [he|she] was bad at _.', - 'In $year [his|her] favorite color was _.', - 'In $year [he|she] was one of the best _ in the world.', - // '[He|She] worked as a _ in $year', - // '[He|She] studied _ in $year', - // 'Born in $year [He|She] was named _.', - // 'It was $year and [he|she] loved to _.', - // [ - // 'In $year he loved his _.', - // 'In $year she loved her _.', - // ], - // [ - // 'In $year he traved to his _.', - // 'In $year she traved to her _.', - // ], - // [ - // 'In $year he traved with his _.', - // 'In $year she traved with her _.', - // ], - [ - 'In $year he married his _.', - 'In $year she married her _.', - ], - // [ - // 'In $year he helped his _.', - // 'In $year she helped her _.', - // ], - // [ - // 'In $year he loved to play with his _.', - // 'In $year she loved to play with her _.', - // ], - // [ - // 'In $year his favorite toy was his _.', - // 'In $year her favorite toy was her _.', - // ], - // [ - // "In $year the girl's favorite toy was her _.", - // "In $year the boy's favorite toy was his _.", - // ], - [ - 'In $year his favorite toy was the _.', - 'In $year her favorite toy was the _.', - - ], - // [ - // 'In $year he named his dog _.', - // 'In $year she named her dog _.', - // ], - // [ - // 'In $year he named his baby _.', - // 'In $year she named her baby _.', - // ], - // [ - // 'In $year he named his kid _.', - // 'In $year she named her kid _.', - // ], - -].slice(0, 1000).map(d => limit1(() => parseSentence(d))) - -var sentences = await Promise.all(promises) - - -io.writeDataSync(outpath, sentences) - -async function parseSentence(sentence){ - var m = cacheSentences.find(d => d.sentence + '' == sentence + '') - if (m){ - return m - } - console.log(sentence + '') - - if (sentence.length == 2){ - var s0 = sentence[0].replace('_', '[MASK]') - var s1 = sentence[1].replace('_', '[MASK]') - } else { - var start = sentence.split('[')[0] - var end = sentence.split(']')[1] - var [t0, t1] = sentence.split('[')[1].split(']')[0].split('|') - var s0 = (start + t0 + end).replace('_', '[MASK]') - var s1 = (start + t1 + end).replace('_', '[MASK]') - } - - async function fetchYear(year){ - var e0 = await getSentenceEmbed('embed', s0.replace('$year', year)) - var e1 = await getSentenceEmbed('embed', s1.replace('$year', year)) - - return {year, e0, e1} - } - - var limit = pLimit(10) - var promises = d3.range(1850, 2040, 1).map(d => limit(() => fetchYear(d))) - var years = await Promise.all(promises) - - - var vocab = io.readDataSync(datadir + 'processed_vocab.json') - - var token2index = Object.fromEntries(vocab.map((d, i) => [d, i])) - - var tidy = [] - years.forEach(({year, e0, e1}) => { - e0.forEach((v0, i) => { - var v1 = e1[i] - var dif = v0 - v1 - tidy.push({year, i, v0, v1, dif}) - }) - }) - - // tidy = [{i: 0, v0: .123, v1: .838}, {i: 0, v0: 322, v1: 144}, ...] - var byToken = jp.nestBy(tidy, d => d.i) - byToken.forEach(d => { - d.mean0 = d3.mean(d, d => d.v0) - d.mean1 = d3.mean(d, d => d.v1) - }) - - _.sortBy(byToken, d => -d.mean0).forEach((d, i) => d.i0 = i) - _.sortBy(byToken, d => -d.mean1).forEach((d, i) => d.i1 = i) - - var topTokens = _.sortBy(byToken, d => Math.min(d.i0, d.i1)).slice(0, 150) - - topTokens.forEach(d => { - // printTop(d.index) - delete d.v0 - delete d.v1 - delete d.i0 - delete d.i1 - d.index = +d.key - }) - - function printTop(index){ - // console.log(' ') - // console.log(vocab[index]) - byToken.filter(d => d.index == index)[0].forEach(({year, dif}) => { - console.log({year, dif}) - }) - } - - return {sentence, t0, t1, topTokens} -} - - diff --git a/spaces/merve/dataset-worldviews/public/data-leak/script.js b/spaces/merve/dataset-worldviews/public/data-leak/script.js deleted file mode 100644 index 16e45229aac271f5fb29b638c14822725a392865..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/data-leak/script.js +++ /dev/null @@ -1,296 +0,0 @@ -console.clear() - -var isMobile = innerWidth < 1000 -d3.select('body').classed('is-mobile', isMobile) - -var colors = ['#FDE100', '#EE2737' ] -var colors = ['#FDE100', '#8e068e' ] -// var colors = ['#2979FF', '#FF6D00'] -// var colors = ['#2979FF', '#FDD835'] -// var colors = ['#f1a340', '#998ec3' ] - -var color2dark = { - '#FDE100': d3.color('#FDE100').darker(.2), - '#8e068e': d3.color('#8e068e').darker(2), -} - -var colorScale = d3.interpolate(colors[0], colors[1]) - -var s = d3.select('#field-grass').node().offsetWidth/120 - -var width = 120*s -var height = Math.floor(75*s) - -var cs = 20 -var cells = d3.cross( - d3.range(0, width + cs, cs), - d3.range(0, height + cs, cs)) - - - -globalPlayers = decoratePlayers(players0) -globalPlayersH = decoratePlayers(playersleaklow) - -function decoratePlayers(rawPlayers){ - var players = rawPlayers.map(d => d.map(d => d*s)) - players.forEach((d, i) => { - d.color = i < 11 ? colors[0] : colors[1] - d.isRed = i < 11 ? 1 : 0 - d.i = i - }) - - players.renderFns = [] - players.renderAll = () => players.renderFns.forEach(d => d()) - - return players -} - -var playerOptions0 = [players1, players2, players0] -var playerOptions1 = [playersleaklow, playersleakhigh] - -// addPlayAnimation(globalPlayers, '#field-grass', playerOptions0, 'mouseenter') -addPlayAnimation(globalPlayers, '#player-button', playerOptions0) -addPlayAnimation(globalPlayersH, '#high-button', playerOptions1, 'click', true) - -function addPlayAnimation(players, selStr, playerOptions, eventStr='click', loop=false){ - if (loop) { - window.loopInterval = d3.interval(playAnimation, 2500) - } - if (selStr) { - d3.selectAll(selStr).on(eventStr, function() { - if (loop) window.loopInterval.stop() // stop looping if the higher-or-lower button is pressed - playAnimation() - }) - } - - var curPlayerIndex = 0 - function playAnimation(){ - curPlayerIndex++ - curPlayerIndex = curPlayerIndex % playerOptions.length - - var nextPlayers = playerOptions[curPlayerIndex] - .map(d => d.map(d => d*s)) - - var interpolates = players - .map((d, i) => d3.interpolate(d, nextPlayers[i])) - - var dur = 1000 - if (playerOptions.animationTimer) playerOptions.animationTimer.stop() - playerOptions.animationTimer = d3.timer(time => { - var t = d3.clamp(0, time/dur, 1) - - interpolates.forEach((interpolate, i) => { - var [x, y] = interpolate(t) - - players[i][0] = x - players[i][1] = y - }) - - players.renderAll(t) - - if (t == 1) playerOptions.animationTimer.stop() - }) - } -} - -function stopAnimations(){ - if (playerOptions0.animationTimer) playerOptions0.animationTimer.stop() - if (playerOptions1.animationTimer) playerOptions1.animationTimer.stop() -} - - -function initField(name){ - var marginBottom = 30 - var marginTop = 35 - var sel = d3.select('#field-' + name).html('').classed('field', true) - .st({marginBottom: marginBottom, marginTop: marginTop}) - - window.c = d3.conventions({ - sel, - margin: {top: 0, left: 0, right: 0, bottom: 0}, - width, - height, - layers: 'dcs' - }) - - var [divSel, ctx, svg] = c.layers - - c.svg = c.svg.append('g').translate([.5, .5]) - - var isRegression = name.includes('regression') - var isVisiblePoints = name != 'playerless' - - var pointName = isRegression || name == 'scatter' ? ' People' : ' Players' - var buttonSel = sel.append('div.button') - .st({top: pointName == ' People' ? 28 : -8, right: -8, position: 'absolute', background: '#fff'}) - .text((isVisiblePoints ? 'Hide' : 'Show') + pointName) - .on('click', () => { - isVisiblePoints = !isVisiblePoints - buttonSel.text((isVisiblePoints ? 'Hide' : 'Show') + pointName) - playerSel.st({opacity: isVisiblePoints ? 1 : 0}) - textSel.st({opacity: isVisiblePoints ? 1 : 0}) - }) - - if (name == 'grass'){ - c.svg.append('rect').at({width, height, fill: '#34A853'}) - divSel.append('div.pointer').append('div') - } - - var roundNum = d => isNaN(d) ? d : Math.round(d) - var chalkSel = c.svg.append('g') - chalkSel.append('path.white') - .at({d: ['M', Math.round(width/2), 0, 'V', height].map(roundNum).join(' '),}) - chalkSel.append('circle.white') - .at({r: 10*s}).translate([width/2, height/2]) - chalkSel.append('path.white') - .at({d: ['M', 0, (75 - 44)/2*s, 'h', 18*s, 'v', 44*s, 'H', 0].map(roundNum).join(' '),}) - chalkSel.append('path.white') - .at({d: ['M', width, (75 - 44)/2*s, 'h', -18*s, 'v', 44*s, 'H', width].map(roundNum).join(' '),}) - - var drag = d3.drag() - .on('drag', function(d){ - stopAnimations() - if (name === 'regression-leak') { - window.loopInterval.stop() - } - - d[0] = Math.round(Math.max(0, Math.min(width, d3.event.x))) - d[1] = Math.round(Math.max(0, Math.min(height, d3.event.y))) - - players.renderAll() - }) - .subject(function(d){ return {x: d[0], y: d[1]} }) - - - var players = name == 'regression-leak' ? globalPlayersH : globalPlayers - - if (isRegression){ - var byColor = d3.nestBy(players, d => d.color) - var regressionSel = c.svg.appendMany('path', byColor) - .at({stroke: d => color2dark[d.key], strokeWidth: 3.5, strokeDasharray: '4 4'}) - .each(function(d){ d.sel = d3.select(this) }) - } - - var bgPlayerSel = c.svg.appendMany('circle.player', players) - .at({r: 15, fill: d => d.color, opacity: 0}) - .translate(d => d) - .call(drag) - - var playerSel = c.svg.appendMany('circle.player', players) - .at({r: 5, fill: d => d.color, opacity: isVisiblePoints ? 1 : 0}) - .translate(d => d) - .call(drag) - - var textSel = c.svg.appendMany('text.chart-title', name == 'playerless' ? [players[0], players[20]] : [players[0]]) - .text(name == 'regression-leak' || name == 'scatter' ? 'New Hire' : name == 'playerless' ? 'Goalie' : '') - .st({pointerEvent: 'none'}) - .at({dy: '.33em', opacity: isVisiblePoints ? 1 : 0, dx: (d, i) => i ? -8 : 8, textAnchor: (d, i) => i ? 'end' : 'start'}) - - if (name == 'scatter' || isRegression){ - sel.st({marginBottom: marginBottom + 70}) - sel.insert('div.axis.chart-title', ':first-child') - .html(` - Men's - and - Women's - Salaries`) - .st({marginBottom: 10, fontSize: 16}) - - c.x.domain([0, 20]) - c.y.domain([40000, 90000]) - - c.xAxis.ticks(5) - c.yAxis.ticks(5).tickFormat(d => { - var rv = d3.format(',')(d).replace('9', '$9') - if (isMobile){ - rv = rv.replace(',000', 'k').replace('40k', '') - } - - return rv - }) - - - - chalkSel.selectAll('*').remove() - chalkSel.appendMany('path.white', c.x.ticks(5)) - .at({d: d => ['M', Math.round(c.x(d)), '0 V ', c.height].join(' ')}) - - chalkSel.appendMany('path.white', c.y.ticks(5)) - .at({d: d => ['M 0', Math.round(c.y(d)), 'H', c.width].join(' ')}) - - d3.drawAxis(c) - c.svg.selectAll('.axis').lower() - if (isMobile){ - c.svg.selectAll('.y text') - .translate([35, 10]) - .st({fill: name == 'scatter' ? '#000' : ''}) - - c.svg.selectAll('.x text').filter(d => d == 20).at({textAnchor: 'end'}) - c.svg.selectAll('.x text').filter(d => d == 0).at({textAnchor: 'start'}) - } - - - c.svg.select('.x').append('text.chart-title') - .text('Years at Company →') - .translate([c.width/2, 43]) - .at({textAnchor: 'middle'}) - } - - - - render() - players.renderFns.push(render) - function render(){ - renderSVG() - if (name != 'grass' && !isRegression)renderCanvas() - if (isRegression) renderRegression() - } - - function renderSVG(){ - if (playerSel){ - playerSel.translate(d => d) - bgPlayerSel.translate(d => d) - textSel.translate(d => d) - } - } - - function renderCanvas(){ - cells.forEach(d => { - players.forEach(p => { - var dx = p[0] - d[0] - cs/2 - var dy = p[1] - d[1] - cs/2 - - // p.dist = Math.sqrt(dx*dx + dy*dy) - // p.dist = dx*dx + dy*dy - p.dist = Math.pow(dx*dx + dy*dy, 1.5) + .00001 - p.weight = 1/p.dist - - return p.dist - }) - - var sum = d3.sum(players, d => d.isRed*d.weight) - var wsum = d3.sum(players, d => d.weight) - - ctx.fillStyle = colorScale(1 - sum/wsum) - - ctx.fillRect(d[0], d[1], cs, cs) - }) - } - - function renderRegression(){ - byColor.forEach(d => { - var l = ss.linearRegressionLine(ss.linearRegression(d)) - - var x0 = 0 - var x1 = c.width - - d.sel.at({d: `M ${x0} ${l(x0)} L ${x1} ${l(x1)}`}) - }) - } -} - -'grass prediction playerless scatter regression regression-leak' - .split(' ') - .forEach(initField) - - diff --git a/spaces/merve/owlv2/app.py b/spaces/merve/owlv2/app.py deleted file mode 100644 index caad325feb8e40b38e9934d575705a527286048c..0000000000000000000000000000000000000000 --- a/spaces/merve/owlv2/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import torch -import gradio as gr -from transformers import Owlv2Processor, Owlv2ForObjectDetection - - -# Use GPU if available -if torch.cuda.is_available(): - device = torch.device("cuda") -else: - device = torch.device("cpu") - -model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble").to(device) -processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble") - - -def query_image(img, text_queries, score_threshold): - text_queries = text_queries - text_queries = text_queries.split(",") - - size = max(img.shape[:2]) - target_sizes = torch.Tensor([[size, size]]) - inputs = processor(text=text_queries, images=img, return_tensors="pt").to(device) - - with torch.no_grad(): - outputs = model(**inputs) - - outputs.logits = outputs.logits.cpu() - outputs.pred_boxes = outputs.pred_boxes.cpu() - results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes) - boxes, scores, labels = results[0]["boxes"], results[0]["scores"], results[0]["labels"] - - result_labels = [] - for box, score, label in zip(boxes, scores, labels): - box = [int(i) for i in box.tolist()] - if score < score_threshold: - continue - result_labels.append((box, text_queries[label.item()])) - return img, result_labels - - -description = """ -Try this demo for OWLv2, -introduced in Scaling Open-Vocabulary Object Detection. -\n\n Compared to OWLVIT, OWLv2 performs better both in yield and performance (average precision). -You can use OWLv2 to query images with text descriptions of any object. -To use it, simply upload an image and enter comma separated text descriptions of objects you want to query the image for. You -can also use the score threshold slider to set a threshold to filter out low probability predictions. -\n\nOWL-ViT is trained on text templates, -hence you can get better predictions by querying the image with text templates used in training the original model: e.g. *"photo of a star-spangled banner"*, -*"image of a shoe"*. Refer to the CLIP paper to see the full list of text templates used to augment the training data. -\n\nColab demo -""" -demo = gr.Interface( - query_image, - inputs=[gr.Image(), "text", gr.Slider(0, 1, value=0.1)], - outputs="annotatedimage", - title="Zero-Shot Object Detection with OWLv2", - description=description, - examples=[ - ["assets/astronaut.png", "human face, rocket, star-spangled banner, nasa badge", 0.11], - ["assets/coffee.png", "coffee mug, spoon, plate", 0.1], - ["assets/butterflies.jpeg", "orange butterfly", 0.3], - ], -) -demo.launch() diff --git a/spaces/mfrashad/CharacterGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/file_utils.py b/spaces/mfrashad/CharacterGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/file_utils.py deleted file mode 100644 index 41624cad6d7b44c028f3ef1fb541add4956b4601..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/file_utils.py +++ /dev/null @@ -1,249 +0,0 @@ -""" -Utilities for working with the local dataset cache. -This file is adapted from the AllenNLP library at https://github.com/allenai/allennlp -Copyright by the AllenNLP authors. -""" -from __future__ import (absolute_import, division, print_function, unicode_literals) - -import json -import logging -import os -import shutil -import tempfile -from functools import wraps -from hashlib import sha256 -import sys -from io import open - -import boto3 -import requests -from botocore.exceptions import ClientError -from tqdm import tqdm - -try: - from urllib.parse import urlparse -except ImportError: - from urlparse import urlparse - -try: - from pathlib import Path - PYTORCH_PRETRAINED_BIGGAN_CACHE = Path(os.getenv('PYTORCH_PRETRAINED_BIGGAN_CACHE', - Path.home() / '.pytorch_pretrained_biggan')) -except (AttributeError, ImportError): - PYTORCH_PRETRAINED_BIGGAN_CACHE = os.getenv('PYTORCH_PRETRAINED_BIGGAN_CACHE', - os.path.join(os.path.expanduser("~"), '.pytorch_pretrained_biggan')) - -logger = logging.getLogger(__name__) # pylint: disable=invalid-name - - -def url_to_filename(url, etag=None): - """ - Convert `url` into a hashed filename in a repeatable way. - If `etag` is specified, append its hash to the url's, delimited - by a period. - """ - url_bytes = url.encode('utf-8') - url_hash = sha256(url_bytes) - filename = url_hash.hexdigest() - - if etag: - etag_bytes = etag.encode('utf-8') - etag_hash = sha256(etag_bytes) - filename += '.' + etag_hash.hexdigest() - - return filename - - -def filename_to_url(filename, cache_dir=None): - """ - Return the url and etag (which may be ``None``) stored for `filename`. - Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BIGGAN_CACHE - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - cache_path = os.path.join(cache_dir, filename) - if not os.path.exists(cache_path): - raise EnvironmentError("file {} not found".format(cache_path)) - - meta_path = cache_path + '.json' - if not os.path.exists(meta_path): - raise EnvironmentError("file {} not found".format(meta_path)) - - with open(meta_path, encoding="utf-8") as meta_file: - metadata = json.load(meta_file) - url = metadata['url'] - etag = metadata['etag'] - - return url, etag - - -def cached_path(url_or_filename, cache_dir=None): - """ - Given something that might be a URL (or might be a local path), - determine which. If it's a URL, download the file and cache it, and - return the path to the cached file. If it's already a local path, - make sure the file exists and then return the path. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BIGGAN_CACHE - if sys.version_info[0] == 3 and isinstance(url_or_filename, Path): - url_or_filename = str(url_or_filename) - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - parsed = urlparse(url_or_filename) - - if parsed.scheme in ('http', 'https', 's3'): - # URL, so get it from the cache (downloading if necessary) - return get_from_cache(url_or_filename, cache_dir) - elif os.path.exists(url_or_filename): - # File, and it exists. - return url_or_filename - elif parsed.scheme == '': - # File, but it doesn't exist. - raise EnvironmentError("file {} not found".format(url_or_filename)) - else: - # Something unknown - raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename)) - - -def split_s3_path(url): - """Split a full s3 path into the bucket name and path.""" - parsed = urlparse(url) - if not parsed.netloc or not parsed.path: - raise ValueError("bad s3 path {}".format(url)) - bucket_name = parsed.netloc - s3_path = parsed.path - # Remove '/' at beginning of path. - if s3_path.startswith("/"): - s3_path = s3_path[1:] - return bucket_name, s3_path - - -def s3_request(func): - """ - Wrapper function for s3 requests in order to create more helpful error - messages. - """ - - @wraps(func) - def wrapper(url, *args, **kwargs): - try: - return func(url, *args, **kwargs) - except ClientError as exc: - if int(exc.response["Error"]["Code"]) == 404: - raise EnvironmentError("file {} not found".format(url)) - else: - raise - - return wrapper - - -@s3_request -def s3_etag(url): - """Check ETag on S3 object.""" - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_object = s3_resource.Object(bucket_name, s3_path) - return s3_object.e_tag - - -@s3_request -def s3_get(url, temp_file): - """Pull a file directly from S3.""" - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file) - - -def http_get(url, temp_file): - req = requests.get(url, stream=True) - content_length = req.headers.get('Content-Length') - total = int(content_length) if content_length is not None else None - progress = tqdm(unit="B", total=total) - for chunk in req.iter_content(chunk_size=1024): - if chunk: # filter out keep-alive new chunks - progress.update(len(chunk)) - temp_file.write(chunk) - progress.close() - - -def get_from_cache(url, cache_dir=None): - """ - Given a URL, look for the corresponding dataset in the local cache. - If it's not there, download it. Then return the path to the cached file. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BIGGAN_CACHE - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - if not os.path.exists(cache_dir): - os.makedirs(cache_dir) - - # Get eTag to add to filename, if it exists. - if url.startswith("s3://"): - etag = s3_etag(url) - else: - response = requests.head(url, allow_redirects=True) - if response.status_code != 200: - raise IOError("HEAD request failed for url {} with status code {}" - .format(url, response.status_code)) - etag = response.headers.get("ETag") - - filename = url_to_filename(url, etag) - - # get cache path to put the file - cache_path = os.path.join(cache_dir, filename) - - if not os.path.exists(cache_path): - # Download to temporary file, then copy to cache dir once finished. - # Otherwise you get corrupt cache entries if the download gets interrupted. - with tempfile.NamedTemporaryFile() as temp_file: - logger.info("%s not found in cache, downloading to %s", url, temp_file.name) - - # GET file object - if url.startswith("s3://"): - s3_get(url, temp_file) - else: - http_get(url, temp_file) - - # we are copying the file before closing it, so flush to avoid truncation - temp_file.flush() - # shutil.copyfileobj() starts at the current position, so go to the start - temp_file.seek(0) - - logger.info("copying %s to cache at %s", temp_file.name, cache_path) - with open(cache_path, 'wb') as cache_file: - shutil.copyfileobj(temp_file, cache_file) - - logger.info("creating metadata file for %s", cache_path) - meta = {'url': url, 'etag': etag} - meta_path = cache_path + '.json' - with open(meta_path, 'w', encoding="utf-8") as meta_file: - json.dump(meta, meta_file) - - logger.info("removing temp file %s", temp_file.name) - - return cache_path - - -def read_set_from_file(filename): - ''' - Extract a de-duped collection (set) of text from a file. - Expected file format is one item per line. - ''' - collection = set() - with open(filename, 'r', encoding='utf-8') as file_: - for line in file_: - collection.add(line.rstrip()) - return collection - - -def get_file_extension(path, dot=True, lower=True): - ext = os.path.splitext(path)[1] - ext = ext if dot else ext[1:] - return ext.lower() if lower else ext diff --git a/spaces/michael2008bj/demo1/app.py b/spaces/michael2008bj/demo1/app.py deleted file mode 100644 index adf57ddb6e2c6969ba5a457f58a1c26ba3f8cea2..0000000000000000000000000000000000000000 --- a/spaces/michael2008bj/demo1/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import gradio as gr -from transformers import pipeline - -pipeline = pipeline(task="image-classification", model="julien-c/hotdog-not-hotdog") - -def predict(image): - predictions = pipeline(image) - return {p["label"]: p["score"] for p in predictions} - -gr.Interface( - predict, - inputs=gr.inputs.Image(label="Upload hot dog candidate", type="filepath"), - outputs=gr.outputs.Label(num_top_classes=2), - title="Hot Dog? Or Not?", -).launch() \ No newline at end of file diff --git a/spaces/microsoft/GODEL-Demo/README.md b/spaces/microsoft/GODEL-Demo/README.md deleted file mode 100644 index 5c3fecbe88b3d4d5820f70cefe7deadd8feeac03..0000000000000000000000000000000000000000 --- a/spaces/microsoft/GODEL-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GODEL Demo -emoji: 🐠 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mikeee/chinese-llama-2-7b-ggml-q4/app.py b/spaces/mikeee/chinese-llama-2-7b-ggml-q4/app.py deleted file mode 100644 index 5adf0182e81837cff529579cf8544e70116f0403..0000000000000000000000000000000000000000 --- a/spaces/mikeee/chinese-llama-2-7b-ggml-q4/app.py +++ /dev/null @@ -1,406 +0,0 @@ -"""Run codes.""" -# pylint: disable=line-too-long, broad-exception-caught, invalid-name, missing-function-docstring, too-many-instance-attributes, missing-class-docstring -# ruff: noqa: E501 -import os -import platform -import random -import time -from dataclasses import asdict, dataclass, field -from pathlib import Path - -# from types import SimpleNamespace -import gradio as gr -import psutil -from about_time import about_time -from ctransformers import AutoModelForCausalLM -from dl_hf_model import dl_hf_model -from loguru import logger - -url = "https://huggingface.co/The Bloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q4_K_S.bin" # 8.14G -url = "https://huggingface.co/The Bloke/airoboros-l2-13B-gpt4-1.4.1-GGML/blob/main/airoboros-l2-13b-gpt4-1.4.1.ggmlv3.q4_K_M.bin" # 8.14G -url = "https://huggingface.co/soulteary/Chinese-Llama-2-7b-ggml-q4/blob/main/Chinese-Llama-2-7b-ggml-q4.bin" # 3.83G - -# Prompt template: Guanaco -# {past_history} -prompt_template = """You are a helpful assistant. Let's think step by step. -### Human: -{input} -### Assistant:""" -human_prefix = "### Human" -ai_prefix = "### Assistant" -stop_list = [f"{human_prefix}:"] - -prompt_template = """A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. -USER: {question} -ASSISTANT:""" -human_prefix = "USER" -ai_prefix = "ASSISTANT" -stop_list = [f"{human_prefix}:"] - -prompt_template = """[INST] -<> -You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. - -If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. -<> -{question} -[/INST]""" - -prompt_template = """[INST] <> -You are a helpful assistant. -<> - -{question} [/INST] -""" - -_ = psutil.cpu_count(logical=False) - 1 -cpu_count: int = int(_) if _ else 1 -logger.debug(f"{cpu_count=}") - -LLM = None - -try: - model_loc, file_size = dl_hf_model(url) - logger.info(f"done load llm {model_loc=} {file_size=}G") -except Exception as exc_: - logger.error(exc_) - raise SystemExit(1) from exc_ - -logger.debug(f"{model_loc=}") -LLM = AutoModelForCausalLM.from_pretrained( - model_loc, - model_type="llama", - threads=cpu_count, -) - -os.environ["TZ"] = "Asia/Shanghai" -try: - time.tzset() # type: ignore # pylint: disable=no-member -except Exception: - # Windows - logger.warning("Windows, cant run time.tzset()") - - -@dataclass -class GenerationConfig: - temperature: float = 0.7 - top_k: int = 50 - top_p: float = 0.9 - repetition_penalty: float = 1.0 - max_new_tokens: int = 512 - seed: int = 42 - reset: bool = False - stream: bool = True - threads: int = cpu_count - # stop: list[str] = field(default_factory=lambda: stop_list) - - -def generate( - question: str, - llm=LLM, - config: GenerationConfig = GenerationConfig(), -): - """Run model inference, will return a Generator if streaming is true.""" - # _ = prompt_template.format(question=question) - # print(_) - - prompt = prompt_template.format(question=question) - - return llm( - prompt, - **asdict(config), - ) - - -logger.debug(f"{asdict(GenerationConfig())=}") - - -def user(user_message, history): - # return user_message, history + [[user_message, None]] - history.append([user_message, None]) - return user_message, history # keep user_message - - -def user1(user_message, history): - # return user_message, history + [[user_message, None]] - history.append([user_message, None]) - return "", history # clear user_message - - -def bot_(history): - user_message = history[-1][0] - resp = random.choice(["How are you?", "I love you", "I'm very hungry"]) - bot_message = user_message + ": " + resp - history[-1][1] = "" - for character in bot_message: - history[-1][1] += character - time.sleep(0.02) - yield history - - history[-1][1] = resp - yield history - - -def bot(history): - user_message = history[-1][0] - response = [] - - logger.debug(f"{user_message=}") - - with about_time() as atime: # type: ignore - flag = 1 - prefix = "" - then = time.time() - - logger.debug("about to generate") - - config = GenerationConfig(reset=True) - for elm in generate(user_message, config=config): - if flag == 1: - logger.debug("in the loop") - prefix = f"({time.time() - then:.2f}s) " - flag = 0 - print(prefix, end="", flush=True) - logger.debug(f"{prefix=}") - print(elm, end="", flush=True) - # logger.debug(f"{elm}") - - response.append(elm) - history[-1][1] = prefix + "".join(response) - yield history - - _ = ( - f"(time elapsed: {atime.duration_human}, " # type: ignore - f"{atime.duration/len(''.join(response)):.2f}s/char)" # type: ignore - ) - - history[-1][1] = "".join(response) + f"\n{_}" - yield history - - -def predict_api(prompt): - logger.debug(f"{prompt=}") - try: - # user_prompt = prompt - config = GenerationConfig( - temperature=0.2, - top_k=10, - top_p=0.9, - repetition_penalty=1.0, - max_new_tokens=512, # adjust as needed - seed=42, - reset=True, # reset history (cache) - stream=False, - # threads=cpu_count, - # stop=prompt_prefix[1:2], - ) - - response = generate( - prompt, - config=config, - ) - - logger.debug(f"api: {response=}") - except Exception as exc: - logger.error(exc) - response = f"{exc=}" - # bot = {"inputs": [response]} - # bot = [(prompt, response)] - - return response - - -css = """ - .importantButton { - background: linear-gradient(45deg, #7e0570,#5d1c99, #6e00ff) !important; - border: none !important; - } - .importantButton:hover { - background: linear-gradient(45deg, #ff00e0,#8500ff, #6e00ff) !important; - border: none !important; - } - .disclaimer {font-variant-caps: all-small-caps; font-size: xx-small;} - .xsmall {font-size: x-small;} -""" -etext = """In America, where cars are an important part of the national psyche, a decade ago people had suddenly started to drive less, which had not happened since the oil shocks of the 1970s. """ -examples_list = [ - ["What NFL team won the Super Bowl in the year Justin Bieber was born?"], - [ - "What NFL team won the Super Bowl in the year Justin Bieber was born? Think step by step." - ], - ["How to pick a lock? Provide detailed steps."], - [ - "If it takes 10 hours to dry 10 clothes, assuming all the clothes are hanged together at the same time for drying , then how long will it take to dry a cloth?" - ], - ["is infinity + 1 bigger than infinity?"], - ["Explain the plot of Cinderella in a sentence."], - [ - "How long does it take to become proficient in French, and what are the best methods for retaining information?" - ], - ["What are some common mistakes to avoid when writing code?"], - ["Build a prompt to generate a beautiful portrait of a horse"], - ["Suggest four metaphors to describe the benefits of AI"], - ["Write a pop song about leaving home for the sandy beaches."], - ["Write a summary demonstrating my ability to tame lions"], - ["鲁迅和周树人什么关系? 说中文。"], - ["鲁迅和周树人什么关系?"], - ["鲁迅和周树人什么关系? 用英文回答。"], - ["从前有一头牛,这头牛后面有什么?"], - ["正无穷大加一大于正无穷大吗?"], - ["正无穷大加正无穷大大于正无穷大吗?"], - ["-2的平方根等于什么?"], - ["树上有5只鸟,猎人开枪打死了一只。树上还有几只鸟?"], - ["树上有11只鸟,猎人开枪打死了一只。树上还有几只鸟?提示:需考虑鸟可能受惊吓飞走。"], - ["以红楼梦的行文风格写一张委婉的请假条。不少于320字。"], - [f"{etext} 翻成中文,列出3个版本。"], - [f"{etext} \n 翻成中文,保留原意,但使用文学性的语言。不要写解释。列出3个版本。"], - ["假定 1 + 2 = 4, 试求 7 + 8。"], - ["给出判断一个数是不是质数的 javascript 码。"], - ["给出实现python 里 range(10)的 javascript 码。"], - ["给出实现python 里 [*(range(10)]的 javascript 码。"], - ["Erkläre die Handlung von Cinderella in einem Satz."], - ["Erkläre die Handlung von Cinderella in einem Satz. Auf Deutsch."], -] - -logger.info("start block") - -with gr.Blocks( - title=f"{Path(model_loc).name}", - theme=gr.themes.Soft(text_size="sm", spacing_size="sm"), - css=css, -) as block: - # buff_var = gr.State("") - with gr.Accordion("🎈 Info", open=False): - gr.Markdown( - f"""
      {Path(model_loc).name}
      - Most examples are meant for another model. - You probably should try to test - some related prompts.""", - elem_classes="xsmall", - ) - - # chatbot = gr.Chatbot().style(height=700) # 500 - chatbot = gr.Chatbot(height=500) - - # buff = gr.Textbox(show_label=False, visible=True) - - with gr.Row(): - with gr.Column(scale=5): - msg = gr.Textbox( - label="Chat Message Box", - placeholder="Ask me anything (press Shift+Enter or click Submit to send)", - show_label=False, - # container=False, - lines=6, - max_lines=30, - show_copy_button=True, - # ).style(container=False) - ) - with gr.Column(scale=1, min_width=50): - with gr.Row(): - submit = gr.Button("Submit", elem_classes="xsmall") - stop = gr.Button("Stop", visible=True) - clear = gr.Button("Clear History", visible=True) - with gr.Row(visible=False): - with gr.Accordion("Advanced Options:", open=False): - with gr.Row(): - with gr.Column(scale=2): - system = gr.Textbox( - label="System Prompt", - value=prompt_template, - show_label=False, - container=False, - # ).style(container=False) - ) - with gr.Column(): - with gr.Row(): - change = gr.Button("Change System Prompt") - reset = gr.Button("Reset System Prompt") - - with gr.Accordion("Example Inputs", open=True): - examples = gr.Examples( - examples=examples_list, - inputs=[msg], - examples_per_page=40, - ) - - # with gr.Row(): - with gr.Accordion("Disclaimer", open=False): - _ = Path(model_loc).name - gr.Markdown( - f"Disclaimer: {_} can produce factually incorrect output, and should not be relied on to produce " - "factually accurate information. {_} was trained on various public datasets; while great efforts " - "have been taken to clean the pretraining data, it is possible that this model could generate lewd, " - "biased, or otherwise offensive outputs.", - elem_classes=["disclaimer"], - ) - - msg_submit_event = msg.submit( - # fn=conversation.user_turn, - fn=user, - inputs=[msg, chatbot], - outputs=[msg, chatbot], - queue=True, - show_progress="full", - # api_name=None, - ).then(bot, chatbot, chatbot, queue=True) - submit_click_event = submit.click( - # fn=lambda x, y: ("",) + user(x, y)[1:], # clear msg - fn=user1, # clear msg - inputs=[msg, chatbot], - outputs=[msg, chatbot], - queue=True, - # queue=False, - show_progress="full", - # api_name=None, - ).then(bot, chatbot, chatbot, queue=True) - stop.click( - fn=None, - inputs=None, - outputs=None, - cancels=[msg_submit_event, submit_click_event], - queue=False, - ) - clear.click(lambda: None, None, chatbot, queue=False) - - with gr.Accordion("For Chat/Translation API", open=False, visible=False): - input_text = gr.Text() - api_btn = gr.Button("Go", variant="primary") - out_text = gr.Text() - - api_btn.click( - predict_api, - input_text, - out_text, - api_name="api", - ) - - # block.load(update_buff, [], buff, every=1) - # block.load(update_buff, [buff_var], [buff_var, buff], every=1) - -# concurrency_count=5, max_size=20 -# max_size=36, concurrency_count=14 -# CPU cpu_count=2 16G, model 7G -# CPU UPGRADE cpu_count=8 32G, model 7G - -# does not work -_ = """ -# _ = int(psutil.virtual_memory().total / 10**9 // file_size - 1) -# concurrency_count = max(_, 1) -if psutil.cpu_count(logical=False) >= 8: - # concurrency_count = max(int(32 / file_size) - 1, 1) -else: - # concurrency_count = max(int(16 / file_size) - 1, 1) -# """ - -# default concurrency_count = 1 -# block.queue(concurrency_count=concurrency_count, max_size=5).launch(debug=True) - -server_port = 7860 -if "forindo" in platform.node(): - server_port = 7861 -block.queue(max_size=5).launch( - debug=True, server_name="0.0.0.0", server_port=server_port -) - -# block.queue(max_size=5).launch(debug=True, server_name="0.0.0.0") diff --git a/spaces/mikeee/radiobee-aligner/radiobee/gen_row_alignment.py b/spaces/mikeee/radiobee-aligner/radiobee/gen_row_alignment.py deleted file mode 100644 index a58549c0c2bbfdd823babee7a7e42b9f192916d6..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/radiobee/gen_row_alignment.py +++ /dev/null @@ -1,151 +0,0 @@ -"""Gen proper alignment for a given triple_set. - -cmat = fetch_sent_corr(src, tgt) -src_len, tgt_len = np.array(cmat).shape -r_ali = gen_row_alignment(cmat, tgt_len, src_len) # note the order -src[r_ali[1]], tgt[r_ali[0]], r_ali[2] - -or !!! (targer, source) -cmat = fetch_sent_corr(tgt, src) # note the order -src_len, tgt_len = np.array(cmat).shape -r_ali = gen_row_alignment(cmat, src_len, tgt_len) -src[r_ali[0]], tgt[r_ali[1]], r_ali[2] - ---- -src_txt = 'data/wu_ch2_en.txt' -tgt_txt = 'data/wu_ch2_zh.txt' - -assert Path(src_txt).exists() -assert Path(tgt_txt).exists() - -src_text, _ = load_paras(src_txt) -tgt_text, _ = load_paras(tgt_txt) - -cos_matrix = gen_cos_matrix(src_text, tgt_text) -t_set, m_matrix = find_aligned_pairs(cos_matrix0, thr=0.4, matrix=True) - -resu = gen_row_alignment(t_set, src_len, tgt_len) -resu = np.array(resu) - -idx = -1 -idx += 1; (resu[idx], src_text[int(resu[idx, 0])], - tgt_text[int(resu[idx, 1])]) if all(resu[idx]) else resu[idx] - -idx += 1; i0, i1, i2 = resu[idx]; '***' if i0 == '' -else src_text[int(i0)], '***' if i1 == '' else tgt_text[int(i1)], '' -if i2 == '' else i2 -""" -# pylint: disable=line-too-long, unused-variable -from typing import List, Union - -# natural extrapolation with slope equal to 1 -from itertools import zip_longest as zip_longest_middle - -import numpy as np - -from logzero import logger - -# from tinybee.zip_longest_middle import zip_longest_middle - -# from tinybee.zip_longest_middle import zip_longest_middle -# from tinybee.find_pairs import find_pairs - -# logger = logging.getLogger(__name__) -# logger.addHandler(logging.NullHandler()) - - -def gen_row_alignment( # pylint: disable=too-many-locals - t_set, - src_len, - tgt_len, - # ) -> List[Tuple[Union[str, int], Union[str, int], Union[str, float]]]: -) -> List[List[Union[str, float]]]: - """Gen proper rows for given triple_set. - - Arguments: - [t_set {np.array or list}] -- [nll matrix] - [src_len {int}] -- numb of source texts (para/sents) - [tgt_len {int}] -- numb of target texts (para/sents) - - Returns: - [np.array] -- [proper rows] - """ - t_set = np.array(t_set, dtype="object") - - # len0 = src_len - - # len1 tgt text length, must be provided - len1 = tgt_len - - # rearrange t_set as buff in increasing order - buff = [[-1, -1, ""]] # - idx_t = 0 - # for elm in t_set: - # start with bigger value from the 3rd col - - y00, yargmax, ymax = zip(*t_set) - ymax_ = np.array(ymax).copy() - reset_v = np.min(ymax_) - 1 - for count in range(tgt_len): - argmax = np.argmax(ymax_) - # reset - ymax_[argmax] = reset_v - idx_t = argmax - elm = t_set[idx_t] - logger.debug("%s: %s, %s", count, idx_t, elm) - - # find loc to insert - elm0, elm1, elm2 = elm - idx = -1 - for idx, loc in enumerate(buff): - if loc[0] > elm0: - break - else: - idx += 1 # last - - # make sure elm1 is within the range - # prev elm1 < elm1 < next elm1 - if elm1 > buff[idx - 1][1]: - try: # overflow possible (idx + 1 in # last) - next_elm = buff[idx][1] - except IndexError: - next_elm = len1 - if elm1 < next_elm: - # insert '' if necessary - # using zip_longest_middle - buff.insert( - idx, [elm0, elm1, elm2], - ) - # logger.debug('---') - - idx_t += 1 - # if idx_t == 24: # 20: - # break - - # remove [-1, -1] - # buff.pop(0) - # buff = np.array(buff, dtype='object') - - # take care of the tail - buff += [[src_len, tgt_len, ""]] - - resu = [] - # merit = [] - - for idx, elm in enumerate(buff[1:]): - idx1 = idx + 1 - elm0_, elm1_, elm2_ = buff[idx1 - 1] # idx starts from 0 - elm0, elm1, elm2 = elm - del elm2_, elm2 - - tmp0 = zip_longest_middle( - list(range(elm0_ + 1, elm0)), list(range(elm1_ + 1, elm1)), fillvalue="", - ) - # convet to list entries & attache merit - tmp = [list(t_elm) + [""] for t_elm in tmp0] - - # update resu - resu += tmp + [buff[idx1]] - - # remove the last entry - return resu[:-1] diff --git a/spaces/miyaaa666/bingo/src/components/chat-message.tsx b/spaces/miyaaa666/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
      -
      - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

      {children}

      - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
      -
      -
      - {message.author === 'bot' && } - {message.author === 'bot' && } -
      -
      - ) : null -} diff --git a/spaces/mlgeis/ArXivRecommenderSystem/embedding.py b/spaces/mlgeis/ArXivRecommenderSystem/embedding.py deleted file mode 100644 index d0121f92fa932d45bfe37bdaca9b33fcc8aee7fb..0000000000000000000000000000000000000000 --- a/spaces/mlgeis/ArXivRecommenderSystem/embedding.py +++ /dev/null @@ -1,140 +0,0 @@ -import cleaning as clean -from sentence_transformers import SentenceTransformer, util -import pandas as pd -import numpy as np -import json -from sklearn.base import BaseEstimator, TransformerMixin -import os - - -class Embedder(BaseEstimator, TransformerMixin): - """Takes a list of clean strings and outputs a numpy array of their embeddings generated by the ST model model_name.""" - - def __init__(self, model_name) -> None: - super().__init__() - self.model_name = model_name - - def fit(self, X, y=None): - return self - - def transform(self, X, y=None): - encoder = SentenceTransformer(self.model_name) - embedded_documents = encoder.encode(sentences=X) - - return embedded_documents - - -class FullEmbedder(BaseEstimator, TransformerMixin): - """A class to handle creating sentence transformer embeddings from a clean arxiv dataset.""" - - def fit(self, X, y=None): - return self - - def transform( - self, - X=None, - y=None, - model_name=None, - load_from_file=False, - path_to_embeddings=None, - ): - """Either generates embeddings from an clean ArXivData instance or loads embeddings from file. - - Args: - X: ArXivData instance that has been cleaned - y: Labels. Defaults to None. - model_name: Sentence transformer model used to generate embeddings. Defaults to None. - load_from_file: Boolean used to specify whether to calculate embeddings or load from file. Defaults to False. - path_to_embeddings: path to the location to save embeddings to or load embeddings from. Defaults to None. - - Raises: - Exception: Raises exception if the load_from_file is True without a specified path to load from. - """ - - if load_from_file: - if not path_to_embeddings: - raise Exception("You must specify a path to store the embeddings.") - X.embeddings = pd.read_feather(path_to_embeddings).to_numpy() - - return X - else: - ## Generate embeddings from X and save as an attribute of X. - - if not model_name: - raise Exception( - "You must specify the sentence transformer model to use." - ) - - doc_strings = (X.metadata.doc_strings).to_list() - model = SentenceTransformer(model_name) - embeddings = model.encode(doc_strings, show_progress_bar=True) - X.embeddings = embeddings - - ## Save the embeddings to the specified path, or, if no path is specified, use the default path - ## default path = ./model_name_embeddings.feather - - embeddings_df = pd.DataFrame(embeddings) - embeddings_df.columns = [ - str(column_name) for column_name in embeddings_df.columns - ] - - if not path_to_embeddings: - path_to_embeddings = os.path.join( - os.getcwd(), f"{model_name}_embeddings.feather" - ) - - embeddings_df.to_feather(path_to_embeddings) - - return X - - -class ComputeMSCLabels(BaseEstimator, TransformerMixin): - def fit(self, X, y=None): - return self - - def transform(self, X, y=None, path_to_embeddings=None): - tag_to_embedding_dict = clean.msc_encoded_dict() - - X["scored_tags"] = np.nan - - X_tagged_rows = X[X.msc_tags.notna()] - - X_tagged_rows["tag_embeddings"] = X_tagged_rows.msc_tags.apply( - clean.list_mapper, dictionary=tag_to_embedding_dict - ) - tag_scores = X_tagged_rows.apply( - self.get_tag_semantic_scores, path_to_embeddings=path_to_embeddings, axis=1 - ) - X.scored_tags[X.metadata.msc_tags.notna()] = tag_scores - - return X - - def get_tag_semantic_scores(self, metadata_row, path_to_embeddings): - embeddings = pd.read_feather(path_to_embeddings).to_numpy() - results = util.semantic_search( - query_embeddings=list(embeddings[metadata_row.doc_strings.index, :]), - corpus_embeddings=metadata_row.tag_embeddings, - top_k=50, - ) - - return results[0] - - -def generate_tag_embeddings(model_name, path_to_tag_dict, path_to_save_embeddings): - model = SentenceTransformer(model_name) - with open(path_to_tag_dict, "r") as file: - dict_string = file.read() - tag_dict = json.loads(dict_string) - - tag_name_list = list(set(tag_dict.values())) - embedded_tag_names = model.encode(sentences=tag_name_list, show_progress_bar=True) - embedded_tag_names_df = pd.DataFrame(embedded_tag_names) - embedded_tag_names_df.columns = [ - str(name) for name in embedded_tag_names_df.columns - ] - embedded_tag_names_df.index = tag_name_list - embedded_tag_names_df.to_parquet(path_to_save_embeddings, index=True) - - -def load_tag_embeddings(path_to_tag_embeddings): - return pd.read_parquet(path_to_tag_embeddings) diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/tools/__init__.py b/spaces/mmlab-ntu/Segment-Any-RGBD/tools/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mms-meta/MMS/uroman/lib/JSON/backportPP.pm b/spaces/mms-meta/MMS/uroman/lib/JSON/backportPP.pm deleted file mode 100644 index db4f8bbb3b741e95c5817edde612718af0f889e4..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/uroman/lib/JSON/backportPP.pm +++ /dev/null @@ -1,2806 +0,0 @@ -package # This is JSON::backportPP - JSON::PP; - -# JSON-2.0 - -use 5.005; -use strict; -use base qw(Exporter); -use overload (); - -use Carp (); -use B (); -#use Devel::Peek; - -use vars qw($VERSION); -$VERSION = '2.27204'; - -@JSON::PP::EXPORT = qw(encode_json decode_json from_json to_json); - -# instead of hash-access, i tried index-access for speed. -# but this method is not faster than what i expected. so it will be changed. - -use constant P_ASCII => 0; -use constant P_LATIN1 => 1; -use constant P_UTF8 => 2; -use constant P_INDENT => 3; -use constant P_CANONICAL => 4; -use constant P_SPACE_BEFORE => 5; -use constant P_SPACE_AFTER => 6; -use constant P_ALLOW_NONREF => 7; -use constant P_SHRINK => 8; -use constant P_ALLOW_BLESSED => 9; -use constant P_CONVERT_BLESSED => 10; -use constant P_RELAXED => 11; - -use constant P_LOOSE => 12; -use constant P_ALLOW_BIGNUM => 13; -use constant P_ALLOW_BAREKEY => 14; -use constant P_ALLOW_SINGLEQUOTE => 15; -use constant P_ESCAPE_SLASH => 16; -use constant P_AS_NONBLESSED => 17; - -use constant P_ALLOW_UNKNOWN => 18; - -use constant OLD_PERL => $] < 5.008 ? 1 : 0; - -BEGIN { - my @xs_compati_bit_properties = qw( - latin1 ascii utf8 indent canonical space_before space_after allow_nonref shrink - allow_blessed convert_blessed relaxed allow_unknown - ); - my @pp_bit_properties = qw( - allow_singlequote allow_bignum loose - allow_barekey escape_slash as_nonblessed - ); - - # Perl version check, Unicode handling is enable? - # Helper module sets @JSON::PP::_properties. - if ($] < 5.008 ) { - my $helper = $] >= 5.006 ? 'JSON::backportPP::Compat5006' : 'JSON::backportPP::Compat5005'; - eval qq| require $helper |; - if ($@) { Carp::croak $@; } - } - - for my $name (@xs_compati_bit_properties, @pp_bit_properties) { - my $flag_name = 'P_' . uc($name); - - eval qq/ - sub $name { - my \$enable = defined \$_[1] ? \$_[1] : 1; - - if (\$enable) { - \$_[0]->{PROPS}->[$flag_name] = 1; - } - else { - \$_[0]->{PROPS}->[$flag_name] = 0; - } - - \$_[0]; - } - - sub get_$name { - \$_[0]->{PROPS}->[$flag_name] ? 1 : ''; - } - /; - } - -} - - - -# Functions - -my %encode_allow_method - = map {($_ => 1)} qw/utf8 pretty allow_nonref latin1 self_encode escape_slash - allow_blessed convert_blessed indent indent_length allow_bignum - as_nonblessed - /; -my %decode_allow_method - = map {($_ => 1)} qw/utf8 allow_nonref loose allow_singlequote allow_bignum - allow_barekey max_size relaxed/; - - -my $JSON; # cache - -sub encode_json ($) { # encode - ($JSON ||= __PACKAGE__->new->utf8)->encode(@_); -} - - -sub decode_json { # decode - ($JSON ||= __PACKAGE__->new->utf8)->decode(@_); -} - -# Obsoleted - -sub to_json($) { - Carp::croak ("JSON::PP::to_json has been renamed to encode_json."); -} - - -sub from_json($) { - Carp::croak ("JSON::PP::from_json has been renamed to decode_json."); -} - - -# Methods - -sub new { - my $class = shift; - my $self = { - max_depth => 512, - max_size => 0, - indent => 0, - FLAGS => 0, - fallback => sub { encode_error('Invalid value. JSON can only reference.') }, - indent_length => 3, - }; - - bless $self, $class; -} - - -sub encode { - return $_[0]->PP_encode_json($_[1]); -} - - -sub decode { - return $_[0]->PP_decode_json($_[1], 0x00000000); -} - - -sub decode_prefix { - return $_[0]->PP_decode_json($_[1], 0x00000001); -} - - -# accessor - - -# pretty printing - -sub pretty { - my ($self, $v) = @_; - my $enable = defined $v ? $v : 1; - - if ($enable) { # indent_length(3) for JSON::XS compatibility - $self->indent(1)->indent_length(3)->space_before(1)->space_after(1); - } - else { - $self->indent(0)->space_before(0)->space_after(0); - } - - $self; -} - -# etc - -sub max_depth { - my $max = defined $_[1] ? $_[1] : 0x80000000; - $_[0]->{max_depth} = $max; - $_[0]; -} - - -sub get_max_depth { $_[0]->{max_depth}; } - - -sub max_size { - my $max = defined $_[1] ? $_[1] : 0; - $_[0]->{max_size} = $max; - $_[0]; -} - - -sub get_max_size { $_[0]->{max_size}; } - - -sub filter_json_object { - $_[0]->{cb_object} = defined $_[1] ? $_[1] : 0; - $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0; - $_[0]; -} - -sub filter_json_single_key_object { - if (@_ > 1) { - $_[0]->{cb_sk_object}->{$_[1]} = $_[2]; - } - $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0; - $_[0]; -} - -sub indent_length { - if (!defined $_[1] or $_[1] > 15 or $_[1] < 0) { - Carp::carp "The acceptable range of indent_length() is 0 to 15."; - } - else { - $_[0]->{indent_length} = $_[1]; - } - $_[0]; -} - -sub get_indent_length { - $_[0]->{indent_length}; -} - -sub sort_by { - $_[0]->{sort_by} = defined $_[1] ? $_[1] : 1; - $_[0]; -} - -sub allow_bigint { - Carp::carp("allow_bigint() is obsoleted. use allow_bignum() insted."); -} - -############################### - -### -### Perl => JSON -### - - -{ # Convert - - my $max_depth; - my $indent; - my $ascii; - my $latin1; - my $utf8; - my $space_before; - my $space_after; - my $canonical; - my $allow_blessed; - my $convert_blessed; - - my $indent_length; - my $escape_slash; - my $bignum; - my $as_nonblessed; - - my $depth; - my $indent_count; - my $keysort; - - - sub PP_encode_json { - my $self = shift; - my $obj = shift; - - $indent_count = 0; - $depth = 0; - - my $idx = $self->{PROPS}; - - ($ascii, $latin1, $utf8, $indent, $canonical, $space_before, $space_after, $allow_blessed, - $convert_blessed, $escape_slash, $bignum, $as_nonblessed) - = @{$idx}[P_ASCII .. P_SPACE_AFTER, P_ALLOW_BLESSED, P_CONVERT_BLESSED, - P_ESCAPE_SLASH, P_ALLOW_BIGNUM, P_AS_NONBLESSED]; - - ($max_depth, $indent_length) = @{$self}{qw/max_depth indent_length/}; - - $keysort = $canonical ? sub { $a cmp $b } : undef; - - if ($self->{sort_by}) { - $keysort = ref($self->{sort_by}) eq 'CODE' ? $self->{sort_by} - : $self->{sort_by} =~ /\D+/ ? $self->{sort_by} - : sub { $a cmp $b }; - } - - encode_error("hash- or arrayref expected (not a simple scalar, use allow_nonref to allow this)") - if(!ref $obj and !$idx->[ P_ALLOW_NONREF ]); - - my $str = $self->object_to_json($obj); - - $str .= "\n" if ( $indent ); # JSON::XS 2.26 compatible - - unless ($ascii or $latin1 or $utf8) { - utf8::upgrade($str); - } - - if ($idx->[ P_SHRINK ]) { - utf8::downgrade($str, 1); - } - - return $str; - } - - - sub object_to_json { - my ($self, $obj) = @_; - my $type = ref($obj); - - if($type eq 'HASH'){ - return $self->hash_to_json($obj); - } - elsif($type eq 'ARRAY'){ - return $self->array_to_json($obj); - } - elsif ($type) { # blessed object? - if (blessed($obj)) { - - return $self->value_to_json($obj) if ( $obj->isa('JSON::PP::Boolean') ); - - if ( $convert_blessed and $obj->can('TO_JSON') ) { - my $result = $obj->TO_JSON(); - if ( defined $result and ref( $result ) ) { - if ( refaddr( $obj ) eq refaddr( $result ) ) { - encode_error( sprintf( - "%s::TO_JSON method returned same object as was passed instead of a new one", - ref $obj - ) ); - } - } - - return $self->object_to_json( $result ); - } - - return "$obj" if ( $bignum and _is_bignum($obj) ); - return $self->blessed_to_json($obj) if ($allow_blessed and $as_nonblessed); # will be removed. - - encode_error( sprintf("encountered object '%s', but neither allow_blessed " - . "nor convert_blessed settings are enabled", $obj) - ) unless ($allow_blessed); - - return 'null'; - } - else { - return $self->value_to_json($obj); - } - } - else{ - return $self->value_to_json($obj); - } - } - - - sub hash_to_json { - my ($self, $obj) = @_; - my @res; - - encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)") - if (++$depth > $max_depth); - - my ($pre, $post) = $indent ? $self->_up_indent() : ('', ''); - my $del = ($space_before ? ' ' : '') . ':' . ($space_after ? ' ' : ''); - - for my $k ( _sort( $obj ) ) { - if ( OLD_PERL ) { utf8::decode($k) } # key for Perl 5.6 / be optimized - push @res, string_to_json( $self, $k ) - . $del - . ( $self->object_to_json( $obj->{$k} ) || $self->value_to_json( $obj->{$k} ) ); - } - - --$depth; - $self->_down_indent() if ($indent); - - return '{' . ( @res ? $pre : '' ) . ( @res ? join( ",$pre", @res ) . $post : '' ) . '}'; - } - - - sub array_to_json { - my ($self, $obj) = @_; - my @res; - - encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)") - if (++$depth > $max_depth); - - my ($pre, $post) = $indent ? $self->_up_indent() : ('', ''); - - for my $v (@$obj){ - push @res, $self->object_to_json($v) || $self->value_to_json($v); - } - - --$depth; - $self->_down_indent() if ($indent); - - return '[' . ( @res ? $pre : '' ) . ( @res ? join( ",$pre", @res ) . $post : '' ) . ']'; - } - - - sub value_to_json { - my ($self, $value) = @_; - - return 'null' if(!defined $value); - - my $b_obj = B::svref_2object(\$value); # for round trip problem - my $flags = $b_obj->FLAGS; - - return $value # as is - if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? - - my $type = ref($value); - - if(!$type){ - return string_to_json($self, $value); - } - elsif( blessed($value) and $value->isa('JSON::PP::Boolean') ){ - return $$value == 1 ? 'true' : 'false'; - } - elsif ($type) { - if ((overload::StrVal($value) =~ /=(\w+)/)[0]) { - return $self->value_to_json("$value"); - } - - if ($type eq 'SCALAR' and defined $$value) { - return $$value eq '1' ? 'true' - : $$value eq '0' ? 'false' - : $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ? 'null' - : encode_error("cannot encode reference to scalar"); - } - - if ( $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ) { - return 'null'; - } - else { - if ( $type eq 'SCALAR' or $type eq 'REF' ) { - encode_error("cannot encode reference to scalar"); - } - else { - encode_error("encountered $value, but JSON can only represent references to arrays or hashes"); - } - } - - } - else { - return $self->{fallback}->($value) - if ($self->{fallback} and ref($self->{fallback}) eq 'CODE'); - return 'null'; - } - - } - - - my %esc = ( - "\n" => '\n', - "\r" => '\r', - "\t" => '\t', - "\f" => '\f', - "\b" => '\b', - "\"" => '\"', - "\\" => '\\\\', - "\'" => '\\\'', - ); - - - sub string_to_json { - my ($self, $arg) = @_; - - $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; - $arg =~ s/\//\\\//g if ($escape_slash); - $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; - - if ($ascii) { - $arg = JSON_PP_encode_ascii($arg); - } - - if ($latin1) { - $arg = JSON_PP_encode_latin1($arg); - } - - if ($utf8) { - utf8::encode($arg); - } - - return '"' . $arg . '"'; - } - - - sub blessed_to_json { - my $reftype = reftype($_[1]) || ''; - if ($reftype eq 'HASH') { - return $_[0]->hash_to_json($_[1]); - } - elsif ($reftype eq 'ARRAY') { - return $_[0]->array_to_json($_[1]); - } - else { - return 'null'; - } - } - - - sub encode_error { - my $error = shift; - Carp::croak "$error"; - } - - - sub _sort { - defined $keysort ? (sort $keysort (keys %{$_[0]})) : keys %{$_[0]}; - } - - - sub _up_indent { - my $self = shift; - my $space = ' ' x $indent_length; - - my ($pre,$post) = ('',''); - - $post = "\n" . $space x $indent_count; - - $indent_count++; - - $pre = "\n" . $space x $indent_count; - - return ($pre,$post); - } - - - sub _down_indent { $indent_count--; } - - - sub PP_encode_box { - { - depth => $depth, - indent_count => $indent_count, - }; - } - -} # Convert - - -sub _encode_ascii { - join('', - map { - $_ <= 127 ? - chr($_) : - $_ <= 65535 ? - sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_)); - } unpack('U*', $_[0]) - ); -} - - -sub _encode_latin1 { - join('', - map { - $_ <= 255 ? - chr($_) : - $_ <= 65535 ? - sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_)); - } unpack('U*', $_[0]) - ); -} - - -sub _encode_surrogates { # from perlunicode - my $uni = $_[0] - 0x10000; - return ($uni / 0x400 + 0xD800, $uni % 0x400 + 0xDC00); -} - - -sub _is_bignum { - $_[0]->isa('Math::BigInt') or $_[0]->isa('Math::BigFloat'); -} - - - -# -# JSON => Perl -# - -my $max_intsize; - -BEGIN { - my $checkint = 1111; - for my $d (5..64) { - $checkint .= 1; - my $int = eval qq| $checkint |; - if ($int =~ /[eE]/) { - $max_intsize = $d - 1; - last; - } - } -} - -{ # PARSE - - my %escapes = ( # by Jeremy Muhlich - b => "\x8", - t => "\x9", - n => "\xA", - f => "\xC", - r => "\xD", - '\\' => '\\', - '"' => '"', - '/' => '/', - ); - - my $text; # json data - my $at; # offset - my $ch; # 1chracter - my $len; # text length (changed according to UTF8 or NON UTF8) - # INTERNAL - my $depth; # nest counter - my $encoding; # json text encoding - my $is_valid_utf8; # temp variable - my $utf8_len; # utf8 byte length - # FLAGS - my $utf8; # must be utf8 - my $max_depth; # max nest number of objects and arrays - my $max_size; - my $relaxed; - my $cb_object; - my $cb_sk_object; - - my $F_HOOK; - - my $allow_bigint; # using Math::BigInt - my $singlequote; # loosely quoting - my $loose; # - my $allow_barekey; # bareKey - - # $opt flag - # 0x00000001 .... decode_prefix - # 0x10000000 .... incr_parse - - sub PP_decode_json { - my ($self, $opt); # $opt is an effective flag during this decode_json. - - ($self, $text, $opt) = @_; - - ($at, $ch, $depth) = (0, '', 0); - - if ( !defined $text or ref $text ) { - decode_error("malformed JSON string, neither array, object, number, string or atom"); - } - - my $idx = $self->{PROPS}; - - ($utf8, $relaxed, $loose, $allow_bigint, $allow_barekey, $singlequote) - = @{$idx}[P_UTF8, P_RELAXED, P_LOOSE .. P_ALLOW_SINGLEQUOTE]; - - if ( $utf8 ) { - utf8::downgrade( $text, 1 ) or Carp::croak("Wide character in subroutine entry"); - } - else { - utf8::upgrade( $text ); - } - - $len = length $text; - - ($max_depth, $max_size, $cb_object, $cb_sk_object, $F_HOOK) - = @{$self}{qw/max_depth max_size cb_object cb_sk_object F_HOOK/}; - - if ($max_size > 1) { - use bytes; - my $bytes = length $text; - decode_error( - sprintf("attempted decode of JSON text of %s bytes size, but max_size is set to %s" - , $bytes, $max_size), 1 - ) if ($bytes > $max_size); - } - - # Currently no effect - # should use regexp - my @octets = unpack('C4', $text); - $encoding = ( $octets[0] and $octets[1]) ? 'UTF-8' - : (!$octets[0] and $octets[1]) ? 'UTF-16BE' - : (!$octets[0] and !$octets[1]) ? 'UTF-32BE' - : ( $octets[2] ) ? 'UTF-16LE' - : (!$octets[2] ) ? 'UTF-32LE' - : 'unknown'; - - white(); # remove head white space - - my $valid_start = defined $ch; # Is there a first character for JSON structure? - - my $result = value(); - - return undef if ( !$result && ( $opt & 0x10000000 ) ); # for incr_parse - - decode_error("malformed JSON string, neither array, object, number, string or atom") unless $valid_start; - - if ( !$idx->[ P_ALLOW_NONREF ] and !ref $result ) { - decode_error( - 'JSON text must be an object or array (but found number, string, true, false or null,' - . ' use allow_nonref to allow this)', 1); - } - - Carp::croak('something wrong.') if $len < $at; # we won't arrive here. - - my $consumed = defined $ch ? $at - 1 : $at; # consumed JSON text length - - white(); # remove tail white space - - if ( $ch ) { - return ( $result, $consumed ) if ($opt & 0x00000001); # all right if decode_prefix - decode_error("garbage after JSON object"); - } - - ( $opt & 0x00000001 ) ? ( $result, $consumed ) : $result; - } - - - sub next_chr { - return $ch = undef if($at >= $len); - $ch = substr($text, $at++, 1); - } - - - sub value { - white(); - return if(!defined $ch); - return object() if($ch eq '{'); - return array() if($ch eq '['); - return string() if($ch eq '"' or ($singlequote and $ch eq "'")); - return number() if($ch =~ /[0-9]/ or $ch eq '-'); - return word(); - } - - sub string { - my ($i, $s, $t, $u); - my $utf16; - my $is_utf8; - - ($is_valid_utf8, $utf8_len) = ('', 0); - - $s = ''; # basically UTF8 flag on - - if($ch eq '"' or ($singlequote and $ch eq "'")){ - my $boundChar = $ch; - - OUTER: while( defined(next_chr()) ){ - - if($ch eq $boundChar){ - next_chr(); - - if ($utf16) { - decode_error("missing low surrogate character in surrogate pair"); - } - - utf8::decode($s) if($is_utf8); - - return $s; - } - elsif($ch eq '\\'){ - next_chr(); - if(exists $escapes{$ch}){ - $s .= $escapes{$ch}; - } - elsif($ch eq 'u'){ # UNICODE handling - my $u = ''; - - for(1..4){ - $ch = next_chr(); - last OUTER if($ch !~ /[0-9a-fA-F]/); - $u .= $ch; - } - - # U+D800 - U+DBFF - if ($u =~ /^[dD][89abAB][0-9a-fA-F]{2}/) { # UTF-16 high surrogate? - $utf16 = $u; - } - # U+DC00 - U+DFFF - elsif ($u =~ /^[dD][c-fC-F][0-9a-fA-F]{2}/) { # UTF-16 low surrogate? - unless (defined $utf16) { - decode_error("missing high surrogate character in surrogate pair"); - } - $is_utf8 = 1; - $s .= JSON_PP_decode_surrogates($utf16, $u) || next; - $utf16 = undef; - } - else { - if (defined $utf16) { - decode_error("surrogate pair expected"); - } - - if ( ( my $hex = hex( $u ) ) > 127 ) { - $is_utf8 = 1; - $s .= JSON_PP_decode_unicode($u) || next; - } - else { - $s .= chr $hex; - } - } - - } - else{ - unless ($loose) { - $at -= 2; - decode_error('illegal backslash escape sequence in string'); - } - $s .= $ch; - } - } - else{ - - if ( ord $ch > 127 ) { - if ( $utf8 ) { - unless( $ch = is_valid_utf8($ch) ) { - $at -= 1; - decode_error("malformed UTF-8 character in JSON string"); - } - else { - $at += $utf8_len - 1; - } - } - else { - utf8::encode( $ch ); - } - - $is_utf8 = 1; - } - - if (!$loose) { - if ($ch =~ /[\x00-\x1f\x22\x5c]/) { # '/' ok - $at--; - decode_error('invalid character encountered while parsing JSON string'); - } - } - - $s .= $ch; - } - } - } - - decode_error("unexpected end of string while parsing JSON string"); - } - - - sub white { - while( defined $ch ){ - if($ch le ' '){ - next_chr(); - } - elsif($ch eq '/'){ - next_chr(); - if(defined $ch and $ch eq '/'){ - 1 while(defined(next_chr()) and $ch ne "\n" and $ch ne "\r"); - } - elsif(defined $ch and $ch eq '*'){ - next_chr(); - while(1){ - if(defined $ch){ - if($ch eq '*'){ - if(defined(next_chr()) and $ch eq '/'){ - next_chr(); - last; - } - } - else{ - next_chr(); - } - } - else{ - decode_error("Unterminated comment"); - } - } - next; - } - else{ - $at--; - decode_error("malformed JSON string, neither array, object, number, string or atom"); - } - } - else{ - if ($relaxed and $ch eq '#') { # correctly? - pos($text) = $at; - $text =~ /\G([^\n]*(?:\r\n|\r|\n|$))/g; - $at = pos($text); - next_chr; - next; - } - - last; - } - } - } - - - sub array { - my $a = $_[0] || []; # you can use this code to use another array ref object. - - decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)') - if (++$depth > $max_depth); - - next_chr(); - white(); - - if(defined $ch and $ch eq ']'){ - --$depth; - next_chr(); - return $a; - } - else { - while(defined($ch)){ - push @$a, value(); - - white(); - - if (!defined $ch) { - last; - } - - if($ch eq ']'){ - --$depth; - next_chr(); - return $a; - } - - if($ch ne ','){ - last; - } - - next_chr(); - white(); - - if ($relaxed and $ch eq ']') { - --$depth; - next_chr(); - return $a; - } - - } - } - - decode_error(", or ] expected while parsing array"); - } - - - sub object { - my $o = $_[0] || {}; # you can use this code to use another hash ref object. - my $k; - - decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)') - if (++$depth > $max_depth); - next_chr(); - white(); - - if(defined $ch and $ch eq '}'){ - --$depth; - next_chr(); - if ($F_HOOK) { - return _json_object_hook($o); - } - return $o; - } - else { - while (defined $ch) { - $k = ($allow_barekey and $ch ne '"' and $ch ne "'") ? bareKey() : string(); - white(); - - if(!defined $ch or $ch ne ':'){ - $at--; - decode_error("':' expected"); - } - - next_chr(); - $o->{$k} = value(); - white(); - - last if (!defined $ch); - - if($ch eq '}'){ - --$depth; - next_chr(); - if ($F_HOOK) { - return _json_object_hook($o); - } - return $o; - } - - if($ch ne ','){ - last; - } - - next_chr(); - white(); - - if ($relaxed and $ch eq '}') { - --$depth; - next_chr(); - if ($F_HOOK) { - return _json_object_hook($o); - } - return $o; - } - - } - - } - - $at--; - decode_error(", or } expected while parsing object/hash"); - } - - - sub bareKey { # doesn't strictly follow Standard ECMA-262 3rd Edition - my $key; - while($ch =~ /[^\x00-\x23\x25-\x2F\x3A-\x40\x5B-\x5E\x60\x7B-\x7F]/){ - $key .= $ch; - next_chr(); - } - return $key; - } - - - sub word { - my $word = substr($text,$at-1,4); - - if($word eq 'true'){ - $at += 3; - next_chr; - return $JSON::PP::true; - } - elsif($word eq 'null'){ - $at += 3; - next_chr; - return undef; - } - elsif($word eq 'fals'){ - $at += 3; - if(substr($text,$at,1) eq 'e'){ - $at++; - next_chr; - return $JSON::PP::false; - } - } - - $at--; # for decode_error report - - decode_error("'null' expected") if ($word =~ /^n/); - decode_error("'true' expected") if ($word =~ /^t/); - decode_error("'false' expected") if ($word =~ /^f/); - decode_error("malformed JSON string, neither array, object, number, string or atom"); - } - - - sub number { - my $n = ''; - my $v; - - # According to RFC4627, hex or oct digits are invalid. - if($ch eq '0'){ - my $peek = substr($text,$at,1); - my $hex = $peek =~ /[xX]/; # 0 or 1 - - if($hex){ - decode_error("malformed number (leading zero must not be followed by another digit)"); - ($n) = ( substr($text, $at+1) =~ /^([0-9a-fA-F]+)/); - } - else{ # oct - ($n) = ( substr($text, $at) =~ /^([0-7]+)/); - if (defined $n and length $n > 1) { - decode_error("malformed number (leading zero must not be followed by another digit)"); - } - } - - if(defined $n and length($n)){ - if (!$hex and length($n) == 1) { - decode_error("malformed number (leading zero must not be followed by another digit)"); - } - $at += length($n) + $hex; - next_chr; - return $hex ? hex($n) : oct($n); - } - } - - if($ch eq '-'){ - $n = '-'; - next_chr; - if (!defined $ch or $ch !~ /\d/) { - decode_error("malformed number (no digits after initial minus)"); - } - } - - while(defined $ch and $ch =~ /\d/){ - $n .= $ch; - next_chr; - } - - if(defined $ch and $ch eq '.'){ - $n .= '.'; - - next_chr; - if (!defined $ch or $ch !~ /\d/) { - decode_error("malformed number (no digits after decimal point)"); - } - else { - $n .= $ch; - } - - while(defined(next_chr) and $ch =~ /\d/){ - $n .= $ch; - } - } - - if(defined $ch and ($ch eq 'e' or $ch eq 'E')){ - $n .= $ch; - next_chr; - - if(defined($ch) and ($ch eq '+' or $ch eq '-')){ - $n .= $ch; - next_chr; - if (!defined $ch or $ch =~ /\D/) { - decode_error("malformed number (no digits after exp sign)"); - } - $n .= $ch; - } - elsif(defined($ch) and $ch =~ /\d/){ - $n .= $ch; - } - else { - decode_error("malformed number (no digits after exp sign)"); - } - - while(defined(next_chr) and $ch =~ /\d/){ - $n .= $ch; - } - - } - - $v .= $n; - - if ($v !~ /[.eE]/ and length $v > $max_intsize) { - if ($allow_bigint) { # from Adam Sussman - require Math::BigInt; - return Math::BigInt->new($v); - } - else { - return "$v"; - } - } - elsif ($allow_bigint) { - require Math::BigFloat; - return Math::BigFloat->new($v); - } - - return 0+$v; - } - - - sub is_valid_utf8 { - - $utf8_len = $_[0] =~ /[\x00-\x7F]/ ? 1 - : $_[0] =~ /[\xC2-\xDF]/ ? 2 - : $_[0] =~ /[\xE0-\xEF]/ ? 3 - : $_[0] =~ /[\xF0-\xF4]/ ? 4 - : 0 - ; - - return unless $utf8_len; - - my $is_valid_utf8 = substr($text, $at - 1, $utf8_len); - - return ( $is_valid_utf8 =~ /^(?: - [\x00-\x7F] - |[\xC2-\xDF][\x80-\xBF] - |[\xE0][\xA0-\xBF][\x80-\xBF] - |[\xE1-\xEC][\x80-\xBF][\x80-\xBF] - |[\xED][\x80-\x9F][\x80-\xBF] - |[\xEE-\xEF][\x80-\xBF][\x80-\xBF] - |[\xF0][\x90-\xBF][\x80-\xBF][\x80-\xBF] - |[\xF1-\xF3][\x80-\xBF][\x80-\xBF][\x80-\xBF] - |[\xF4][\x80-\x8F][\x80-\xBF][\x80-\xBF] - )$/x ) ? $is_valid_utf8 : ''; - } - - - sub decode_error { - my $error = shift; - my $no_rep = shift; - my $str = defined $text ? substr($text, $at) : ''; - my $mess = ''; - my $type = $] >= 5.008 ? 'U*' - : $] < 5.006 ? 'C*' - : utf8::is_utf8( $str ) ? 'U*' # 5.6 - : 'C*' - ; - - for my $c ( unpack( $type, $str ) ) { # emulate pv_uni_display() ? - $mess .= $c == 0x07 ? '\a' - : $c == 0x09 ? '\t' - : $c == 0x0a ? '\n' - : $c == 0x0d ? '\r' - : $c == 0x0c ? '\f' - : $c < 0x20 ? sprintf('\x{%x}', $c) - : $c == 0x5c ? '\\\\' - : $c < 0x80 ? chr($c) - : sprintf('\x{%x}', $c) - ; - if ( length $mess >= 20 ) { - $mess .= '...'; - last; - } - } - - unless ( length $mess ) { - $mess = '(end of string)'; - } - - Carp::croak ( - $no_rep ? "$error" : "$error, at character offset $at (before \"$mess\")" - ); - - } - - - sub _json_object_hook { - my $o = $_[0]; - my @ks = keys %{$o}; - - if ( $cb_sk_object and @ks == 1 and exists $cb_sk_object->{ $ks[0] } and ref $cb_sk_object->{ $ks[0] } ) { - my @val = $cb_sk_object->{ $ks[0] }->( $o->{$ks[0]} ); - if (@val == 1) { - return $val[0]; - } - } - - my @val = $cb_object->($o) if ($cb_object); - if (@val == 0 or @val > 1) { - return $o; - } - else { - return $val[0]; - } - } - - - sub PP_decode_box { - { - text => $text, - at => $at, - ch => $ch, - len => $len, - depth => $depth, - encoding => $encoding, - is_valid_utf8 => $is_valid_utf8, - }; - } - -} # PARSE - - -sub _decode_surrogates { # from perlunicode - my $uni = 0x10000 + (hex($_[0]) - 0xD800) * 0x400 + (hex($_[1]) - 0xDC00); - my $un = pack('U*', $uni); - utf8::encode( $un ); - return $un; -} - - -sub _decode_unicode { - my $un = pack('U', hex shift); - utf8::encode( $un ); - return $un; -} - -# -# Setup for various Perl versions (the code from JSON::PP58) -# - -BEGIN { - - unless ( defined &utf8::is_utf8 ) { - require Encode; - *utf8::is_utf8 = *Encode::is_utf8; - } - - if ( $] >= 5.008 ) { - *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii; - *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1; - *JSON::PP::JSON_PP_decode_surrogates = \&_decode_surrogates; - *JSON::PP::JSON_PP_decode_unicode = \&_decode_unicode; - } - - if ($] >= 5.008 and $] < 5.008003) { # join() in 5.8.0 - 5.8.2 is broken. - package # hide from PAUSE - JSON::PP; - require subs; - subs->import('join'); - eval q| - sub join { - return '' if (@_ < 2); - my $j = shift; - my $str = shift; - for (@_) { $str .= $j . $_; } - return $str; - } - |; - } - - - sub JSON::PP::incr_parse { - local $Carp::CarpLevel = 1; - ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_parse( @_ ); - } - - - sub JSON::PP::incr_skip { - ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_skip; - } - - - sub JSON::PP::incr_reset { - ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_reset; - } - - eval q{ - sub JSON::PP::incr_text : lvalue { - $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new; - - if ( $_[0]->{_incr_parser}->{incr_parsing} ) { - Carp::croak("incr_text can not be called when the incremental parser already started parsing"); - } - $_[0]->{_incr_parser}->{incr_text}; - } - } if ( $] >= 5.006 ); - -} # Setup for various Perl versions (the code from JSON::PP58) - - -############################### -# Utilities -# - -BEGIN { - eval 'require Scalar::Util'; - unless($@){ - *JSON::PP::blessed = \&Scalar::Util::blessed; - *JSON::PP::reftype = \&Scalar::Util::reftype; - *JSON::PP::refaddr = \&Scalar::Util::refaddr; - } - else{ # This code is from Scalar::Util. - # warn $@; - eval 'sub UNIVERSAL::a_sub_not_likely_to_be_here { ref($_[0]) }'; - *JSON::PP::blessed = sub { - local($@, $SIG{__DIE__}, $SIG{__WARN__}); - ref($_[0]) ? eval { $_[0]->a_sub_not_likely_to_be_here } : undef; - }; - my %tmap = qw( - B::NULL SCALAR - B::HV HASH - B::AV ARRAY - B::CV CODE - B::IO IO - B::GV GLOB - B::REGEXP REGEXP - ); - *JSON::PP::reftype = sub { - my $r = shift; - - return undef unless length(ref($r)); - - my $t = ref(B::svref_2object($r)); - - return - exists $tmap{$t} ? $tmap{$t} - : length(ref($$r)) ? 'REF' - : 'SCALAR'; - }; - *JSON::PP::refaddr = sub { - return undef unless length(ref($_[0])); - - my $addr; - if(defined(my $pkg = blessed($_[0]))) { - $addr .= bless $_[0], 'Scalar::Util::Fake'; - bless $_[0], $pkg; - } - else { - $addr .= $_[0] - } - - $addr =~ /0x(\w+)/; - local $^W; - #no warnings 'portable'; - hex($1); - } - } -} - - -# shamelessly copied and modified from JSON::XS code. - -unless ( $INC{'JSON/PP.pm'} ) { - eval q| - package - JSON::PP::Boolean; - - use overload ( - "0+" => sub { ${$_[0]} }, - "++" => sub { $_[0] = ${$_[0]} + 1 }, - "--" => sub { $_[0] = ${$_[0]} - 1 }, - fallback => 1, - ); - |; -} - -$JSON::PP::true = do { bless \(my $dummy = 1), "JSON::PP::Boolean" }; -$JSON::PP::false = do { bless \(my $dummy = 0), "JSON::PP::Boolean" }; - -sub is_bool { defined $_[0] and UNIVERSAL::isa($_[0], "JSON::PP::Boolean"); } - -sub true { $JSON::PP::true } -sub false { $JSON::PP::false } -sub null { undef; } - -############################### - -############################### - -package # hide from PAUSE - JSON::PP::IncrParser; - -use strict; - -use constant INCR_M_WS => 0; # initial whitespace skipping -use constant INCR_M_STR => 1; # inside string -use constant INCR_M_BS => 2; # inside backslash -use constant INCR_M_JSON => 3; # outside anything, count nesting -use constant INCR_M_C0 => 4; -use constant INCR_M_C1 => 5; - -use vars qw($VERSION); -$VERSION = '1.01'; - -my $unpack_format = $] < 5.006 ? 'C*' : 'U*'; - -sub new { - my ( $class ) = @_; - - bless { - incr_nest => 0, - incr_text => undef, - incr_parsing => 0, - incr_p => 0, - }, $class; -} - - -sub incr_parse { - my ( $self, $coder, $text ) = @_; - - $self->{incr_text} = '' unless ( defined $self->{incr_text} ); - - if ( defined $text ) { - if ( utf8::is_utf8( $text ) and !utf8::is_utf8( $self->{incr_text} ) ) { - utf8::upgrade( $self->{incr_text} ) ; - utf8::decode( $self->{incr_text} ) ; - } - $self->{incr_text} .= $text; - } - - - my $max_size = $coder->get_max_size; - - if ( defined wantarray ) { - - $self->{incr_mode} = INCR_M_WS unless defined $self->{incr_mode}; - - if ( wantarray ) { - my @ret; - - $self->{incr_parsing} = 1; - - do { - push @ret, $self->_incr_parse( $coder, $self->{incr_text} ); - - unless ( !$self->{incr_nest} and $self->{incr_mode} == INCR_M_JSON ) { - $self->{incr_mode} = INCR_M_WS if $self->{incr_mode} != INCR_M_STR; - } - - } until ( length $self->{incr_text} >= $self->{incr_p} ); - - $self->{incr_parsing} = 0; - - return @ret; - } - else { # in scalar context - $self->{incr_parsing} = 1; - my $obj = $self->_incr_parse( $coder, $self->{incr_text} ); - $self->{incr_parsing} = 0 if defined $obj; # pointed by Martin J. Evans - return $obj ? $obj : undef; # $obj is an empty string, parsing was completed. - } - - } - -} - - -sub _incr_parse { - my ( $self, $coder, $text, $skip ) = @_; - my $p = $self->{incr_p}; - my $restore = $p; - - my @obj; - my $len = length $text; - - if ( $self->{incr_mode} == INCR_M_WS ) { - while ( $len > $p ) { - my $s = substr( $text, $p, 1 ); - $p++ and next if ( 0x20 >= unpack($unpack_format, $s) ); - $self->{incr_mode} = INCR_M_JSON; - last; - } - } - - while ( $len > $p ) { - my $s = substr( $text, $p++, 1 ); - - if ( $s eq '"' ) { - if (substr( $text, $p - 2, 1 ) eq '\\' ) { - next; - } - - if ( $self->{incr_mode} != INCR_M_STR ) { - $self->{incr_mode} = INCR_M_STR; - } - else { - $self->{incr_mode} = INCR_M_JSON; - unless ( $self->{incr_nest} ) { - last; - } - } - } - - if ( $self->{incr_mode} == INCR_M_JSON ) { - - if ( $s eq '[' or $s eq '{' ) { - if ( ++$self->{incr_nest} > $coder->get_max_depth ) { - Carp::croak('json text or perl structure exceeds maximum nesting level (max_depth set too low?)'); - } - } - elsif ( $s eq ']' or $s eq '}' ) { - last if ( --$self->{incr_nest} <= 0 ); - } - elsif ( $s eq '#' ) { - while ( $len > $p ) { - last if substr( $text, $p++, 1 ) eq "\n"; - } - } - - } - - } - - $self->{incr_p} = $p; - - return if ( $self->{incr_mode} == INCR_M_STR and not $self->{incr_nest} ); - return if ( $self->{incr_mode} == INCR_M_JSON and $self->{incr_nest} > 0 ); - - return '' unless ( length substr( $self->{incr_text}, 0, $p ) ); - - local $Carp::CarpLevel = 2; - - $self->{incr_p} = $restore; - $self->{incr_c} = $p; - - my ( $obj, $tail ) = $coder->PP_decode_json( substr( $self->{incr_text}, 0, $p ), 0x10000001 ); - - $self->{incr_text} = substr( $self->{incr_text}, $p ); - $self->{incr_p} = 0; - - return $obj || ''; -} - - -sub incr_text { - if ( $_[0]->{incr_parsing} ) { - Carp::croak("incr_text can not be called when the incremental parser already started parsing"); - } - $_[0]->{incr_text}; -} - - -sub incr_skip { - my $self = shift; - $self->{incr_text} = substr( $self->{incr_text}, $self->{incr_c} ); - $self->{incr_p} = 0; -} - - -sub incr_reset { - my $self = shift; - $self->{incr_text} = undef; - $self->{incr_p} = 0; - $self->{incr_mode} = 0; - $self->{incr_nest} = 0; - $self->{incr_parsing} = 0; -} - -############################### - - -1; -__END__ -=pod - -=head1 NAME - -JSON::PP - JSON::XS compatible pure-Perl module. - -=head1 SYNOPSIS - - use JSON::PP; - - # exported functions, they croak on error - # and expect/generate UTF-8 - - $utf8_encoded_json_text = encode_json $perl_hash_or_arrayref; - $perl_hash_or_arrayref = decode_json $utf8_encoded_json_text; - - # OO-interface - - $coder = JSON::PP->new->ascii->pretty->allow_nonref; - - $json_text = $json->encode( $perl_scalar ); - $perl_scalar = $json->decode( $json_text ); - - $pretty_printed = $json->pretty->encode( $perl_scalar ); # pretty-printing - - # Note that JSON version 2.0 and above will automatically use - # JSON::XS or JSON::PP, so you should be able to just: - - use JSON; - - -=head1 VERSION - - 2.27200 - -L 2.27 (~2.30) compatible. - -=head1 DESCRIPTION - -This module is L compatible pure Perl module. -(Perl 5.8 or later is recommended) - -JSON::XS is the fastest and most proper JSON module on CPAN. -It is written by Marc Lehmann in C, so must be compiled and -installed in the used environment. - -JSON::PP is a pure-Perl module and has compatibility to JSON::XS. - - -=head2 FEATURES - -=over - -=item * correct unicode handling - -This module knows how to handle Unicode (depending on Perl version). - -See to L and -L. - - -=item * round-trip integrity - -When you serialise a perl data structure using only data types -supported by JSON and Perl, the deserialised data structure is -identical on the Perl level. (e.g. the string "2.0" doesn't suddenly -become "2" just because it looks like a number). There I minor -exceptions to this, read the MAPPING section below to learn about -those. - - -=item * strict checking of JSON correctness - -There is no guessing, no generating of illegal JSON texts by default, -and only JSON is accepted as input by default (the latter is a -security feature). But when some options are set, loose checking -features are available. - -=back - -=head1 FUNCTIONAL INTERFACE - -Some documents are copied and modified from L. - -=head2 encode_json - - $json_text = encode_json $perl_scalar - -Converts the given Perl data structure to a UTF-8 encoded, binary string. - -This function call is functionally identical to: - - $json_text = JSON::PP->new->utf8->encode($perl_scalar) - -=head2 decode_json - - $perl_scalar = decode_json $json_text - -The opposite of C: expects an UTF-8 (binary) string and tries -to parse that as an UTF-8 encoded JSON text, returning the resulting -reference. - -This function call is functionally identical to: - - $perl_scalar = JSON::PP->new->utf8->decode($json_text) - -=head2 JSON::PP::is_bool - - $is_boolean = JSON::PP::is_bool($scalar) - -Returns true if the passed scalar represents either JSON::PP::true or -JSON::PP::false, two constants that act like C<1> and C<0> respectively -and are also used to represent JSON C and C in Perl strings. - -=head2 JSON::PP::true - -Returns JSON true value which is blessed object. -It C JSON::PP::Boolean object. - -=head2 JSON::PP::false - -Returns JSON false value which is blessed object. -It C JSON::PP::Boolean object. - -=head2 JSON::PP::null - -Returns C. - -See L, below, for more information on how JSON values are mapped to -Perl. - - -=head1 HOW DO I DECODE A DATA FROM OUTER AND ENCODE TO OUTER - -This section supposes that your perl version is 5.8 or later. - -If you know a JSON text from an outer world - a network, a file content, and so on, -is encoded in UTF-8, you should use C or C module object -with C enable. And the decoded result will contain UNICODE characters. - - # from network - my $json = JSON::PP->new->utf8; - my $json_text = CGI->new->param( 'json_data' ); - my $perl_scalar = $json->decode( $json_text ); - - # from file content - local $/; - open( my $fh, '<', 'json.data' ); - $json_text = <$fh>; - $perl_scalar = decode_json( $json_text ); - -If an outer data is not encoded in UTF-8, firstly you should C it. - - use Encode; - local $/; - open( my $fh, '<', 'json.data' ); - my $encoding = 'cp932'; - my $unicode_json_text = decode( $encoding, <$fh> ); # UNICODE - - # or you can write the below code. - # - # open( my $fh, "<:encoding($encoding)", 'json.data' ); - # $unicode_json_text = <$fh>; - -In this case, C<$unicode_json_text> is of course UNICODE string. -So you B use C nor C module object with C enable. -Instead of them, you use C module object with C disable. - - $perl_scalar = $json->utf8(0)->decode( $unicode_json_text ); - -Or C and C: - - $perl_scalar = decode_json( encode( 'utf8', $unicode_json_text ) ); - # this way is not efficient. - -And now, you want to convert your C<$perl_scalar> into JSON data and -send it to an outer world - a network or a file content, and so on. - -Your data usually contains UNICODE strings and you want the converted data to be encoded -in UTF-8, you should use C or C module object with C enable. - - print encode_json( $perl_scalar ); # to a network? file? or display? - # or - print $json->utf8->encode( $perl_scalar ); - -If C<$perl_scalar> does not contain UNICODE but C<$encoding>-encoded strings -for some reason, then its characters are regarded as B for perl -(because it does not concern with your $encoding). -You B use C nor C module object with C enable. -Instead of them, you use C module object with C disable. -Note that the resulted text is a UNICODE string but no problem to print it. - - # $perl_scalar contains $encoding encoded string values - $unicode_json_text = $json->utf8(0)->encode( $perl_scalar ); - # $unicode_json_text consists of characters less than 0x100 - print $unicode_json_text; - -Or C all string values and C: - - $perl_scalar->{ foo } = decode( $encoding, $perl_scalar->{ foo } ); - # ... do it to each string values, then encode_json - $json_text = encode_json( $perl_scalar ); - -This method is a proper way but probably not efficient. - -See to L, L. - - -=head1 METHODS - -Basically, check to L or L. - -=head2 new - - $json = JSON::PP->new - -Returns a new JSON::PP object that can be used to de/encode JSON -strings. - -All boolean flags described below are by default I. - -The mutators for flags all return the JSON object again and thus calls can -be chained: - - my $json = JSON::PP->new->utf8->space_after->encode({a => [1,2]}) - => {"a": [1, 2]} - -=head2 ascii - - $json = $json->ascii([$enable]) - - $enabled = $json->get_ascii - -If $enable is true (or missing), then the encode method will not generate characters outside -the code range 0..127. Any Unicode characters outside that range will be escaped using either -a single \uXXXX or a double \uHHHH\uLLLLL escape sequence, as per RFC4627. -(See to L). - -In Perl 5.005, there is no character having high value (more than 255). -See to L. - -If $enable is false, then the encode method will not escape Unicode characters unless -required by the JSON syntax or other flags. This results in a faster and more compact format. - - JSON::PP->new->ascii(1)->encode([chr 0x10401]) - => ["\ud801\udc01"] - -=head2 latin1 - - $json = $json->latin1([$enable]) - - $enabled = $json->get_latin1 - -If $enable is true (or missing), then the encode method will encode the resulting JSON -text as latin1 (or iso-8859-1), escaping any characters outside the code range 0..255. - -If $enable is false, then the encode method will not escape Unicode characters -unless required by the JSON syntax or other flags. - - JSON::XS->new->latin1->encode (["\x{89}\x{abc}"] - => ["\x{89}\\u0abc"] # (perl syntax, U+abc escaped, U+89 not) - -See to L. - -=head2 utf8 - - $json = $json->utf8([$enable]) - - $enabled = $json->get_utf8 - -If $enable is true (or missing), then the encode method will encode the JSON result -into UTF-8, as required by many protocols, while the decode method expects to be handled -an UTF-8-encoded string. Please note that UTF-8-encoded strings do not contain any -characters outside the range 0..255, they are thus useful for bytewise/binary I/O. - -(In Perl 5.005, any character outside the range 0..255 does not exist. -See to L.) - -In future versions, enabling this option might enable autodetection of the UTF-16 and UTF-32 -encoding families, as described in RFC4627. - -If $enable is false, then the encode method will return the JSON string as a (non-encoded) -Unicode string, while decode expects thus a Unicode string. Any decoding or encoding -(e.g. to UTF-8 or UTF-16) needs to be done yourself, e.g. using the Encode module. - -Example, output UTF-16BE-encoded JSON: - - use Encode; - $jsontext = encode "UTF-16BE", JSON::PP->new->encode ($object); - -Example, decode UTF-32LE-encoded JSON: - - use Encode; - $object = JSON::PP->new->decode (decode "UTF-32LE", $jsontext); - - -=head2 pretty - - $json = $json->pretty([$enable]) - -This enables (or disables) all of the C, C and -C flags in one call to generate the most readable -(or most compact) form possible. - -Equivalent to: - - $json->indent->space_before->space_after - -=head2 indent - - $json = $json->indent([$enable]) - - $enabled = $json->get_indent - -The default indent space length is three. -You can use C to change the length. - -=head2 space_before - - $json = $json->space_before([$enable]) - - $enabled = $json->get_space_before - -If C<$enable> is true (or missing), then the C method will add an extra -optional space before the C<:> separating keys from values in JSON objects. - -If C<$enable> is false, then the C method will not add any extra -space at those places. - -This setting has no effect when decoding JSON texts. - -Example, space_before enabled, space_after and indent disabled: - - {"key" :"value"} - -=head2 space_after - - $json = $json->space_after([$enable]) - - $enabled = $json->get_space_after - -If C<$enable> is true (or missing), then the C method will add an extra -optional space after the C<:> separating keys from values in JSON objects -and extra whitespace after the C<,> separating key-value pairs and array -members. - -If C<$enable> is false, then the C method will not add any extra -space at those places. - -This setting has no effect when decoding JSON texts. - -Example, space_before and indent disabled, space_after enabled: - - {"key": "value"} - -=head2 relaxed - - $json = $json->relaxed([$enable]) - - $enabled = $json->get_relaxed - -If C<$enable> is true (or missing), then C will accept some -extensions to normal JSON syntax (see below). C will not be -affected in anyway. I. I suggest only to use this option to -parse application-specific files written by humans (configuration files, -resource files etc.) - -If C<$enable> is false (the default), then C will only accept -valid JSON texts. - -Currently accepted extensions are: - -=over 4 - -=item * list items can have an end-comma - -JSON I array elements and key-value pairs with commas. This -can be annoying if you write JSON texts manually and want to be able to -quickly append elements, so this extension accepts comma at the end of -such items not just between them: - - [ - 1, - 2, <- this comma not normally allowed - ] - { - "k1": "v1", - "k2": "v2", <- this comma not normally allowed - } - -=item * shell-style '#'-comments - -Whenever JSON allows whitespace, shell-style comments are additionally -allowed. They are terminated by the first carriage-return or line-feed -character, after which more white-space and comments are allowed. - - [ - 1, # this comment not allowed in JSON - # neither this one... - ] - -=back - -=head2 canonical - - $json = $json->canonical([$enable]) - - $enabled = $json->get_canonical - -If C<$enable> is true (or missing), then the C method will output JSON objects -by sorting their keys. This is adding a comparatively high overhead. - -If C<$enable> is false, then the C method will output key-value -pairs in the order Perl stores them (which will likely change between runs -of the same script). - -This option is useful if you want the same data structure to be encoded as -the same JSON text (given the same overall settings). If it is disabled, -the same hash might be encoded differently even if contains the same data, -as key-value pairs have no inherent ordering in Perl. - -This setting has no effect when decoding JSON texts. - -If you want your own sorting routine, you can give a code reference -or a subroutine name to C. See to C. - -=head2 allow_nonref - - $json = $json->allow_nonref([$enable]) - - $enabled = $json->get_allow_nonref - -If C<$enable> is true (or missing), then the C method can convert a -non-reference into its corresponding string, number or null JSON value, -which is an extension to RFC4627. Likewise, C will accept those JSON -values instead of croaking. - -If C<$enable> is false, then the C method will croak if it isn't -passed an arrayref or hashref, as JSON texts must either be an object -or array. Likewise, C will croak if given something that is not a -JSON object or array. - - JSON::PP->new->allow_nonref->encode ("Hello, World!") - => "Hello, World!" - -=head2 allow_unknown - - $json = $json->allow_unknown ([$enable]) - - $enabled = $json->get_allow_unknown - -If $enable is true (or missing), then "encode" will *not* throw an -exception when it encounters values it cannot represent in JSON (for -example, filehandles) but instead will encode a JSON "null" value. -Note that blessed objects are not included here and are handled -separately by c. - -If $enable is false (the default), then "encode" will throw an -exception when it encounters anything it cannot encode as JSON. - -This option does not affect "decode" in any way, and it is -recommended to leave it off unless you know your communications -partner. - -=head2 allow_blessed - - $json = $json->allow_blessed([$enable]) - - $enabled = $json->get_allow_blessed - -If C<$enable> is true (or missing), then the C method will not -barf when it encounters a blessed reference. Instead, the value of the -B option will decide whether C (C -disabled or no C method found) or a representation of the -object (C enabled and C method found) is being -encoded. Has no effect on C. - -If C<$enable> is false (the default), then C will throw an -exception when it encounters a blessed object. - -=head2 convert_blessed - - $json = $json->convert_blessed([$enable]) - - $enabled = $json->get_convert_blessed - -If C<$enable> is true (or missing), then C, upon encountering a -blessed object, will check for the availability of the C method -on the object's class. If found, it will be called in scalar context -and the resulting scalar will be encoded instead of the object. If no -C method is found, the value of C will decide what -to do. - -The C method may safely call die if it wants. If C -returns other blessed objects, those will be handled in the same -way. C must take care of not causing an endless recursion cycle -(== crash) in this case. The name of C was chosen because other -methods called by the Perl core (== not by the user of the object) are -usually in upper case letters and to avoid collisions with the C -function or method. - -This setting does not yet influence C in any way. - -If C<$enable> is false, then the C setting will decide what -to do when a blessed object is found. - -=head2 filter_json_object - - $json = $json->filter_json_object([$coderef]) - -When C<$coderef> is specified, it will be called from C each -time it decodes a JSON object. The only argument passed to the coderef -is a reference to the newly-created hash. If the code references returns -a single scalar (which need not be a reference), this value -(i.e. a copy of that scalar to avoid aliasing) is inserted into the -deserialised data structure. If it returns an empty list -(NOTE: I C, which is a valid scalar), the original deserialised -hash will be inserted. This setting can slow down decoding considerably. - -When C<$coderef> is omitted or undefined, any existing callback will -be removed and C will not change the deserialised hash in any -way. - -Example, convert all JSON objects into the integer 5: - - my $js = JSON::PP->new->filter_json_object (sub { 5 }); - # returns [5] - $js->decode ('[{}]'); # the given subroutine takes a hash reference. - # throw an exception because allow_nonref is not enabled - # so a lone 5 is not allowed. - $js->decode ('{"a":1, "b":2}'); - -=head2 filter_json_single_key_object - - $json = $json->filter_json_single_key_object($key [=> $coderef]) - -Works remotely similar to C, but is only called for -JSON objects having a single key named C<$key>. - -This C<$coderef> is called before the one specified via -C, if any. It gets passed the single value in the JSON -object. If it returns a single value, it will be inserted into the data -structure. If it returns nothing (not even C but the empty list), -the callback from C will be called next, as if no -single-key callback were specified. - -If C<$coderef> is omitted or undefined, the corresponding callback will be -disabled. There can only ever be one callback for a given key. - -As this callback gets called less often then the C -one, decoding speed will not usually suffer as much. Therefore, single-key -objects make excellent targets to serialise Perl objects into, especially -as single-key JSON objects are as close to the type-tagged value concept -as JSON gets (it's basically an ID/VALUE tuple). Of course, JSON does not -support this in any way, so you need to make sure your data never looks -like a serialised Perl hash. - -Typical names for the single object key are C<__class_whatever__>, or -C<$__dollars_are_rarely_used__$> or C<}ugly_brace_placement>, or even -things like C<__class_md5sum(classname)__>, to reduce the risk of clashing -with real hashes. - -Example, decode JSON objects of the form C<< { "__widget__" => } >> -into the corresponding C<< $WIDGET{} >> object: - - # return whatever is in $WIDGET{5}: - JSON::PP - ->new - ->filter_json_single_key_object (__widget__ => sub { - $WIDGET{ $_[0] } - }) - ->decode ('{"__widget__": 5') - - # this can be used with a TO_JSON method in some "widget" class - # for serialisation to json: - sub WidgetBase::TO_JSON { - my ($self) = @_; - - unless ($self->{id}) { - $self->{id} = ..get..some..id..; - $WIDGET{$self->{id}} = $self; - } - - { __widget__ => $self->{id} } - } - -=head2 shrink - - $json = $json->shrink([$enable]) - - $enabled = $json->get_shrink - -In JSON::XS, this flag resizes strings generated by either -C or C to their minimum size possible. -It will also try to downgrade any strings to octet-form if possible. - -In JSON::PP, it is noop about resizing strings but tries -C to the returned string by C. -See to L. - -See to L - -=head2 max_depth - - $json = $json->max_depth([$maximum_nesting_depth]) - - $max_depth = $json->get_max_depth - -Sets the maximum nesting level (default C<512>) accepted while encoding -or decoding. If a higher nesting level is detected in JSON text or a Perl -data structure, then the encoder and decoder will stop and croak at that -point. - -Nesting level is defined by number of hash- or arrayrefs that the encoder -needs to traverse to reach a given point or the number of C<{> or C<[> -characters without their matching closing parenthesis crossed to reach a -given character in a string. - -If no argument is given, the highest possible setting will be used, which -is rarely useful. - -See L for more info on why this is useful. - -When a large value (100 or more) was set and it de/encodes a deep nested object/text, -it may raise a warning 'Deep recursion on subroutine' at the perl runtime phase. - -=head2 max_size - - $json = $json->max_size([$maximum_string_size]) - - $max_size = $json->get_max_size - -Set the maximum length a JSON text may have (in bytes) where decoding is -being attempted. The default is C<0>, meaning no limit. When C -is called on a string that is longer then this many bytes, it will not -attempt to decode the string but throw an exception. This setting has no -effect on C (yet). - -If no argument is given, the limit check will be deactivated (same as when -C<0> is specified). - -See L for more info on why this is useful. - -=head2 encode - - $json_text = $json->encode($perl_scalar) - -Converts the given Perl data structure (a simple scalar or a reference -to a hash or array) to its JSON representation. Simple scalars will be -converted into JSON string or number sequences, while references to arrays -become JSON arrays and references to hashes become JSON objects. Undefined -Perl values (e.g. C) become JSON C values. -References to the integers C<0> and C<1> are converted into C and C. - -=head2 decode - - $perl_scalar = $json->decode($json_text) - -The opposite of C: expects a JSON text and tries to parse it, -returning the resulting simple scalar or reference. Croaks on error. - -JSON numbers and strings become simple Perl scalars. JSON arrays become -Perl arrayrefs and JSON objects become Perl hashrefs. C becomes -C<1> (C), C becomes C<0> (C) and -C becomes C. - -=head2 decode_prefix - - ($perl_scalar, $characters) = $json->decode_prefix($json_text) - -This works like the C method, but instead of raising an exception -when there is trailing garbage after the first JSON object, it will -silently stop parsing there and return the number of characters consumed -so far. - - JSON->new->decode_prefix ("[1] the tail") - => ([], 3) - -=head1 INCREMENTAL PARSING - -Most of this section are copied and modified from L. - -In some cases, there is the need for incremental parsing of JSON texts. -This module does allow you to parse a JSON stream incrementally. -It does so by accumulating text until it has a full JSON object, which -it then can decode. This process is similar to using C -to see if a full JSON object is available, but is much more efficient -(and can be implemented with a minimum of method calls). - -This module will only attempt to parse the JSON text once it is sure it -has enough text to get a decisive result, using a very simple but -truly incremental parser. This means that it sometimes won't stop as -early as the full parser, for example, it doesn't detect parenthesis -mismatches. The only thing it guarantees is that it starts decoding as -soon as a syntactically valid JSON text has been seen. This means you need -to set resource limits (e.g. C) to ensure the parser will stop -parsing in the presence if syntax errors. - -The following methods implement this incremental parser. - -=head2 incr_parse - - $json->incr_parse( [$string] ) # void context - - $obj_or_undef = $json->incr_parse( [$string] ) # scalar context - - @obj_or_empty = $json->incr_parse( [$string] ) # list context - -This is the central parsing function. It can both append new text and -extract objects from the stream accumulated so far (both of these -functions are optional). - -If C<$string> is given, then this string is appended to the already -existing JSON fragment stored in the C<$json> object. - -After that, if the function is called in void context, it will simply -return without doing anything further. This can be used to add more text -in as many chunks as you want. - -If the method is called in scalar context, then it will try to extract -exactly I JSON object. If that is successful, it will return this -object, otherwise it will return C. If there is a parse error, -this method will croak just as C would do (one can then use -C to skip the erroneous part). This is the most common way of -using the method. - -And finally, in list context, it will try to extract as many objects -from the stream as it can find and return them, or the empty list -otherwise. For this to work, there must be no separators between the JSON -objects or arrays, instead they must be concatenated back-to-back. If -an error occurs, an exception will be raised as in the scalar context -case. Note that in this case, any previously-parsed JSON texts will be -lost. - -Example: Parse some JSON arrays/objects in a given string and return them. - - my @objs = JSON->new->incr_parse ("[5][7][1,2]"); - -=head2 incr_text - - $lvalue_string = $json->incr_text - -This method returns the currently stored JSON fragment as an lvalue, that -is, you can manipulate it. This I works when a preceding call to -C in I successfully returned an object. Under -all other circumstances you must not call this function (I mean it. -although in simple tests it might actually work, it I fail under -real world conditions). As a special exception, you can also call this -method before having parsed anything. - -This function is useful in two cases: a) finding the trailing text after a -JSON object or b) parsing multiple JSON objects separated by non-JSON text -(such as commas). - - $json->incr_text =~ s/\s*,\s*//; - -In Perl 5.005, C attribute is not available. -You must write codes like the below: - - $string = $json->incr_text; - $string =~ s/\s*,\s*//; - $json->incr_text( $string ); - -=head2 incr_skip - - $json->incr_skip - -This will reset the state of the incremental parser and will remove the -parsed text from the input buffer. This is useful after C -died, in which case the input buffer and incremental parser state is left -unchanged, to skip the text parsed so far and to reset the parse state. - -=head2 incr_reset - - $json->incr_reset - -This completely resets the incremental parser, that is, after this call, -it will be as if the parser had never parsed anything. - -This is useful if you want to repeatedly parse JSON objects and want to -ignore any trailing data, which means you have to reset the parser after -each successful decode. - -See to L for examples. - - -=head1 JSON::PP OWN METHODS - -=head2 allow_singlequote - - $json = $json->allow_singlequote([$enable]) - -If C<$enable> is true (or missing), then C will accept -JSON strings quoted by single quotations that are invalid JSON -format. - - $json->allow_singlequote->decode({"foo":'bar'}); - $json->allow_singlequote->decode({'foo':"bar"}); - $json->allow_singlequote->decode({'foo':'bar'}); - -As same as the C option, this option may be used to parse -application-specific files written by humans. - - -=head2 allow_barekey - - $json = $json->allow_barekey([$enable]) - -If C<$enable> is true (or missing), then C will accept -bare keys of JSON object that are invalid JSON format. - -As same as the C option, this option may be used to parse -application-specific files written by humans. - - $json->allow_barekey->decode('{foo:"bar"}'); - -=head2 allow_bignum - - $json = $json->allow_bignum([$enable]) - -If C<$enable> is true (or missing), then C will convert -the big integer Perl cannot handle as integer into a L -object and convert a floating number (any) into a L. - -On the contrary, C converts C objects and C -objects into JSON numbers with C enable. - - $json->allow_nonref->allow_blessed->allow_bignum; - $bigfloat = $json->decode('2.000000000000000000000000001'); - print $json->encode($bigfloat); - # => 2.000000000000000000000000001 - -See to L about the normal conversion of JSON number. - -=head2 loose - - $json = $json->loose([$enable]) - -The unescaped [\x00-\x1f\x22\x2f\x5c] strings are invalid in JSON strings -and the module doesn't allow to C to these (except for \x2f). -If C<$enable> is true (or missing), then C will accept these -unescaped strings. - - $json->loose->decode(qq|["abc - def"]|); - -See L. - -=head2 escape_slash - - $json = $json->escape_slash([$enable]) - -According to JSON Grammar, I (U+002F) is escaped. But default -JSON::PP (as same as JSON::XS) encodes strings without escaping slash. - -If C<$enable> is true (or missing), then C will escape slashes. - -=head2 indent_length - - $json = $json->indent_length($length) - -JSON::XS indent space length is 3 and cannot be changed. -JSON::PP set the indent space length with the given $length. -The default is 3. The acceptable range is 0 to 15. - -=head2 sort_by - - $json = $json->sort_by($function_name) - $json = $json->sort_by($subroutine_ref) - -If $function_name or $subroutine_ref are set, its sort routine are used -in encoding JSON objects. - - $js = $pc->sort_by(sub { $JSON::PP::a cmp $JSON::PP::b })->encode($obj); - # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|); - - $js = $pc->sort_by('own_sort')->encode($obj); - # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|); - - sub JSON::PP::own_sort { $JSON::PP::a cmp $JSON::PP::b } - -As the sorting routine runs in the JSON::PP scope, the given -subroutine name and the special variables C<$a>, C<$b> will begin -'JSON::PP::'. - -If $integer is set, then the effect is same as C on. - -=head1 INTERNAL - -For developers. - -=over - -=item PP_encode_box - -Returns - - { - depth => $depth, - indent_count => $indent_count, - } - - -=item PP_decode_box - -Returns - - { - text => $text, - at => $at, - ch => $ch, - len => $len, - depth => $depth, - encoding => $encoding, - is_valid_utf8 => $is_valid_utf8, - }; - -=back - -=head1 MAPPING - -This section is copied from JSON::XS and modified to C. -JSON::XS and JSON::PP mapping mechanisms are almost equivalent. - -See to L. - -=head2 JSON -> PERL - -=over 4 - -=item object - -A JSON object becomes a reference to a hash in Perl. No ordering of object -keys is preserved (JSON does not preserver object key ordering itself). - -=item array - -A JSON array becomes a reference to an array in Perl. - -=item string - -A JSON string becomes a string scalar in Perl - Unicode codepoints in JSON -are represented by the same codepoints in the Perl string, so no manual -decoding is necessary. - -=item number - -A JSON number becomes either an integer, numeric (floating point) or -string scalar in perl, depending on its range and any fractional parts. On -the Perl level, there is no difference between those as Perl handles all -the conversion details, but an integer may take slightly less memory and -might represent more values exactly than floating point numbers. - -If the number consists of digits only, C will try to represent -it as an integer value. If that fails, it will try to represent it as -a numeric (floating point) value if that is possible without loss of -precision. Otherwise it will preserve the number as a string value (in -which case you lose roundtripping ability, as the JSON number will be -re-encoded to a JSON string). - -Numbers containing a fractional or exponential part will always be -represented as numeric (floating point) values, possibly at a loss of -precision (in which case you might lose perfect roundtripping ability, but -the JSON number will still be re-encoded as a JSON number). - -Note that precision is not accuracy - binary floating point values cannot -represent most decimal fractions exactly, and when converting from and to -floating point, C only guarantees precision up to but not including -the least significant bit. - -When C is enable, the big integers -and the numeric can be optionally converted into L and -L objects. - -=item true, false - -These JSON atoms become C and C, -respectively. They are overloaded to act almost exactly like the numbers -C<1> and C<0>. You can check whether a scalar is a JSON boolean by using -the C function. - - print JSON::PP::true . "\n"; - => true - print JSON::PP::true + 1; - => 1 - - ok(JSON::true eq '1'); - ok(JSON::true == 1); - -C will install these missing overloading features to the backend modules. - - -=item null - -A JSON null atom becomes C in Perl. - -C returns C. - -=back - - -=head2 PERL -> JSON - -The mapping from Perl to JSON is slightly more difficult, as Perl is a -truly typeless language, so we can only guess which JSON type is meant by -a Perl value. - -=over 4 - -=item hash references - -Perl hash references become JSON objects. As there is no inherent ordering -in hash keys (or JSON objects), they will usually be encoded in a -pseudo-random order that can change between runs of the same program but -stays generally the same within a single run of a program. C -optionally sort the hash keys (determined by the I flag), so -the same data structure will serialise to the same JSON text (given same -settings and version of JSON::XS), but this incurs a runtime overhead -and is only rarely useful, e.g. when you want to compare some JSON text -against another for equality. - - -=item array references - -Perl array references become JSON arrays. - -=item other references - -Other unblessed references are generally not allowed and will cause an -exception to be thrown, except for references to the integers C<0> and -C<1>, which get turned into C and C atoms in JSON. You can -also use C and C to improve readability. - - to_json [\0,JSON::PP::true] # yields [false,true] - -=item JSON::PP::true, JSON::PP::false, JSON::PP::null - -These special values become JSON true and JSON false values, -respectively. You can also use C<\1> and C<\0> directly if you want. - -JSON::PP::null returns C. - -=item blessed objects - -Blessed objects are not directly representable in JSON. See the -C and C methods on various options on -how to deal with this: basically, you can choose between throwing an -exception, encoding the reference as if it weren't blessed, or provide -your own serialiser method. - -See to L. - -=item simple scalars - -Simple Perl scalars (any scalar that is not a reference) are the most -difficult objects to encode: JSON::XS and JSON::PP will encode undefined scalars as -JSON C values, scalars that have last been used in a string context -before encoding as JSON strings, and anything else as number value: - - # dump as number - encode_json [2] # yields [2] - encode_json [-3.0e17] # yields [-3e+17] - my $value = 5; encode_json [$value] # yields [5] - - # used as string, so dump as string - print $value; - encode_json [$value] # yields ["5"] - - # undef becomes null - encode_json [undef] # yields [null] - -You can force the type to be a string by stringifying it: - - my $x = 3.1; # some variable containing a number - "$x"; # stringified - $x .= ""; # another, more awkward way to stringify - print $x; # perl does it for you, too, quite often - -You can force the type to be a number by numifying it: - - my $x = "3"; # some variable containing a string - $x += 0; # numify it, ensuring it will be dumped as a number - $x *= 1; # same thing, the choice is yours. - -You can not currently force the type in other, less obscure, ways. - -Note that numerical precision has the same meaning as under Perl (so -binary to decimal conversion follows the same rules as in Perl, which -can differ to other languages). Also, your perl interpreter might expose -extensions to the floating point numbers of your platform, such as -infinities or NaN's - these cannot be represented in JSON, and it is an -error to pass those in. - -=item Big Number - -When C is enable, -C converts C objects and C -objects into JSON numbers. - - -=back - -=head1 UNICODE HANDLING ON PERLS - -If you do not know about Unicode on Perl well, -please check L. - -=head2 Perl 5.8 and later - -Perl can handle Unicode and the JSON::PP de/encode methods also work properly. - - $json->allow_nonref->encode(chr hex 3042); - $json->allow_nonref->encode(chr hex 12345); - -Returns C<"\u3042"> and C<"\ud808\udf45"> respectively. - - $json->allow_nonref->decode('"\u3042"'); - $json->allow_nonref->decode('"\ud808\udf45"'); - -Returns UTF-8 encoded strings with UTF8 flag, regarded as C and C. - -Note that the versions from Perl 5.8.0 to 5.8.2, Perl built-in C was broken, -so JSON::PP wraps the C with a subroutine. Thus JSON::PP works slow in the versions. - - -=head2 Perl 5.6 - -Perl can handle Unicode and the JSON::PP de/encode methods also work. - -=head2 Perl 5.005 - -Perl 5.005 is a byte semantics world -- all strings are sequences of bytes. -That means the unicode handling is not available. - -In encoding, - - $json->allow_nonref->encode(chr hex 3042); # hex 3042 is 12354. - $json->allow_nonref->encode(chr hex 12345); # hex 12345 is 74565. - -Returns C and C, as C takes a value more than 255, it treats -as C<$value % 256>, so the above codes are equivalent to : - - $json->allow_nonref->encode(chr 66); - $json->allow_nonref->encode(chr 69); - -In decoding, - - $json->decode('"\u00e3\u0081\u0082"'); - -The returned is a byte sequence C<0xE3 0x81 0x82> for UTF-8 encoded -japanese character (C). -And if it is represented in Unicode code point, C. - -Next, - - $json->decode('"\u3042"'); - -We ordinary expect the returned value is a Unicode character C. -But here is 5.005 world. This is C<0xE3 0x81 0x82>. - - $json->decode('"\ud808\udf45"'); - -This is not a character C but bytes - C<0xf0 0x92 0x8d 0x85>. - - -=head1 TODO - -=over - -=item speed - -=item memory saving - -=back - - -=head1 SEE ALSO - -Most of the document are copied and modified from JSON::XS doc. - -L - -RFC4627 (L) - -=head1 AUTHOR - -Makamaka Hannyaharamitu, Emakamaka[at]cpan.orgE - - -=head1 COPYRIGHT AND LICENSE - -Copyright 2007-2012 by Makamaka Hannyaharamitu - -This library is free software; you can redistribute it and/or modify -it under the same terms as Perl itself. - -=cut diff --git a/spaces/monra/freegpt-webui-chimera/client/css/button.css b/spaces/monra/freegpt-webui-chimera/client/css/button.css deleted file mode 100644 index 5f604a8460d048458249f78be9dc544ade84801e..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui-chimera/client/css/button.css +++ /dev/null @@ -1,26 +0,0 @@ -.button { - display: flex; - padding: 8px 12px; - align-items: center; - justify-content: center; - border: 1px solid var(--conversations); - border-radius: var(--border-radius-1); - width: 100%; - background: transparent; - cursor: pointer; -} - -.button span { - color: var(--colour-3); - font-size: 0.875rem; -} - -.button i::before { - margin-right: 8px; -} - -@media screen and (max-width: 990px) { - .button span { - font-size: 0.75rem; - } -} diff --git a/spaces/mshukor/UnIVAL/fairseq/CONTRIBUTING.md b/spaces/mshukor/UnIVAL/fairseq/CONTRIBUTING.md deleted file mode 100644 index 3930c46196b7b6082cacc76fd5808b49677ae805..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/CONTRIBUTING.md +++ /dev/null @@ -1,28 +0,0 @@ -# Contributing to Facebook AI Research Sequence-to-Sequence Toolkit (fairseq) -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -## License -By contributing to Facebook AI Research Sequence-to-Sequence Toolkit (fairseq), -you agree that your contributions will be licensed under the LICENSE file in -the root directory of this source tree. diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/fully_sharded_data_parallel/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/fully_sharded_data_parallel/README.md deleted file mode 100644 index b9e44fef48bee5faeee27b3d1d1b1eb96b6a477f..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/fully_sharded_data_parallel/README.md +++ /dev/null @@ -1,177 +0,0 @@ -# Fully Sharded Data Parallel (FSDP) - -## Overview -Recent work by [Microsoft](https://arxiv.org/abs/1910.02054) and -[Google](https://arxiv.org/abs/2004.13336) has shown that data parallel -training can be made significantly more efficient by sharding the model -parameters and optimizer state across data parallel workers. These ideas are -encapsulated in the new **`FullyShardedDataParallel` (FSDP)** wrapper provided -by [fairscale](https://github.com/facebookresearch/fairscale/). - -Compared to PyTorch DDP: -* FSDP produces identical results as PyTorch DDP (it's still synchronous data parallel training) -* FSDP shards parameters (FP16 + FP32) and optimizer state across data parallel GPUs -* FSDP is faster than PyTorch DDP because the optimizer step is sharded, and the communication can be overlapped with the forward pass -* FSDP enables training 13B parameter models on 8 GPUs and 175B parameter models on 128 GPUs - -FSDP is fully supported in fairseq via the following new arguments: -* `--ddp-backend=fully_sharded`: enables full sharding via FSDP -* `--cpu-offload`: offloads the optimizer state and FP32 model copy to CPU (combine with `--optimizer=cpu_adam`) -* `--no-reshard-after-forward`: increases training speed for large models (1B+ params) and is similar to ZeRO stage 2 -* other popular options (`--fp16`, `--update-freq`, `--checkpoint-activations`, `--offload-activations`, etc.) continue to work as normal - -
      Limitations

      - -FSDP currently has several limitations compared to fairseq's default DDP backend (PyTorch DDP): -* while FSDP is full compatible with pointwise Optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.), it is not currently compatible with non-pointwise Optimizers (e.g., Adagrad, Adafactor, LAMB, etc.) -* FSDP depends on flattening the parameters, so models that currently require `--fp16-no-flatten-grads` may not be supported - -See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed -explanation of these and other limitations. - -

      - -
      How it works

      - -Fully Sharded Data Parallel - -See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed -explanation of how FSDP works. - -

      - -## Example usage - -The following examples illustrate how to train a very large language model with -13 billion parameters on 1 GPU by offloading parameters and optimizer states to -CPU, or on 8 GPUs by fully sharding the params and optimizer states across GPUs. - -These examples use the WikiText-103 dataset for demonstration purposes, but -in practice a much larger dataset will be needed to achieve good results. -Follow the [instructions here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.pretraining.md#1-preprocess-the-data) -to preprocess the WikiText-103 dataset using the GPT-2/RoBERTa vocabulary. - -### 13B params on 1 V100 GPU (with CPU offloading) - -The following command trains a 13B parameter GPT-3 model on a single V100 GPU -using the `--cpu-offload` feature to offload parameters and optimizer states to -CPU. In this setting, the optimizer step (Adam) happens on CPU. We also use the -`--checkpoint-activations` feature (sometimes called [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html)), -which further saves memory in exchange for a small increase in computation. - -**Requirements:** -- Install the latest master version of fairscale: `pip install git+https://github.com/facebookresearch/fairscale.git@master` -- You'll need 32GB of GPU memory and ~256GB of system memory to train the 13B param model. -- If you have less system memory, the 6.7B param model can be trained with ~128GB of system memory, just set `--arch transformer_lm_gpt3_6_7` -- We use the CPU Adam optimizer from [DeepSpeed](https://github.com/microsoft/DeepSpeed), so you'll need to `pip install deepspeed` before running the command. - -**Notes:** -- The command will take ~5 minutes to start training, during which time it will appear to be hung, since randomly initializing 13B weights can be slow. -- The `--cpu-offload` feature requires training in mixed precision (`--fp16`). -- Tune the `OMP_NUM_THREADS` env variable for best performance with CPU offloading. -- The example command below stops training after 10 steps (`--max-update 10`) and does not save checkpoints (`--no-save`). - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
      Example output

      - -``` -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | training on 1 devices (GPUs/TPUs) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 12:31:36 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.475", "ppl": "91120.8", "wps": "0", "ups": "0", "wpb": "16384", "bsz": "8", "num_updates": "1", "lr": "2e-05", "gnorm": "20.751", "loss_scale": "4", "train_wall": "99", "gb_free": "9.3", "wall": "105"} -2021-03-08 12:32:33 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.446", "ppl": "89281.6", "wps": "288.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "2", "lr": "4e-05", "gnorm": "19.777", "loss_scale": "4", "train_wall": "57", "gb_free": "9.3", "wall": "161"} -2021-03-08 12:33:12 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 12:33:51 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 12:34:45 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "25.22", "ppl": "3.90691e+07", "wps": "123.4", "ups": "0.01", "wpb": "16384", "bsz": "8", "num_updates": "3", "lr": "6e-05", "gnorm": "131.281", "loss_scale": "1", "train_wall": "133", "gb_free": "9.3", "wall": "294"} -2021-03-08 12:35:43 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.079", "ppl": "276809", "wps": "285.5", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "4", "lr": "8e-05", "gnorm": "13.776", "loss_scale": "1", "train_wall": "57", "gb_free": "9.3", "wall": "351"} -2021-03-08 12:36:35 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "23.729", "ppl": "1.39088e+07", "wps": "316.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "72.774", "loss_scale": "1", "train_wall": "52", "gb_free": "9.3", "wall": "403"} -2021-03-08 12:37:28 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "20.429", "ppl": "1.41203e+06", "wps": "307.6", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "6", "lr": "8e-05", "gnorm": "60.846", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "456"} -2021-03-08 12:38:27 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.965", "ppl": "511684", "wps": "279.4", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "7", "lr": "6e-05", "gnorm": "22.687", "loss_scale": "1", "train_wall": "59", "gb_free": "9.3", "wall": "515"} -2021-03-08 12:39:18 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.345", "ppl": "332887", "wps": "319.1", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "8", "lr": "4e-05", "gnorm": "8.451", "loss_scale": "1", "train_wall": "51", "gb_free": "9.3", "wall": "566"} -2021-03-08 12:40:11 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "18.262", "ppl": "314336", "wps": "305.9", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "9", "lr": "2e-05", "gnorm": "6.457", "loss_scale": "1", "train_wall": "54", "gb_free": "9.3", "wall": "620"} -2021-03-08 12:41:04 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "17.556", "ppl": "192686", "wps": "311.8", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "10", "lr": "0", "gnorm": "5.796", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "673"} -2021-03-08 12:41:04 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 12:41:04 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 12:43:15 | INFO | valid | {"epoch": 1, "valid_loss": "17.953", "valid_ppl": "253807", "valid_wps": "1868.4", "valid_wpb": "15400.2", "valid_bsz": "7.6", "valid_num_updates": "10"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 12:43:15 | INFO | train | {"epoch": 1, "train_loss": "19.351", "train_ppl": "668509", "train_wps": "210.9", "train_ups": "0.01", "train_wpb": "16384", "train_bsz": "8", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "36.26", "train_loss_scale": "1", "train_train_wall": "667", "train_gb_free": "9.3", "train_wall": "804"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | done training in 798.6 seconds -``` - -

      - -### 13B params on 8 V100 GPUs (with full parameter + optimizer state sharding) - -FSDP can also shard the parameters and optimizer states across multiple GPUs, -reducing memory requirements significantly. On 8 x 32GB GPUs, sharding enables -training the same 13B parameter model *without offloading the parameters to -CPU*. However, without CPU offloading we'd only be able to fit a batch size of -1 per GPU, which would cause training speed to suffer. - -We obtain the best performance on 8 GPUs by combining full sharding and CPU -offloading. The following command trains the same 13B parameter GPT-3 model as -before on 8 x 32GB V100 GPUs; training speed increases superlinearly from ~310 -words per second to ~3200 words per second. - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
      Example output

      - -``` -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | training on 8 devices (GPUs/TPUs) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 18:05:06 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "16.408", "ppl": "86945.6", "wps": "0", "ups": "0", "wpb": "131072", "bsz": "64", "num_updates": "1", "lr": "2e-05", "gnorm": "18.27", "loss_scale": "4", "train_wall": "47", "gb_free": "9.3", "wall": "56"} -2021-03-08 18:05:45 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "16.352", "ppl": "83644.3", "wps": "3283.4", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "2", "lr": "4e-05", "gnorm": "18.411", "loss_scale": "4", "train_wall": "40", "gb_free": "9.3", "wall": "96"} -2021-03-08 18:06:21 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 18:06:56 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 18:07:37 | INFO | train_inner | {"epoch": 1, "update": 0.006, "loss": "23.682", "ppl": "1.34537e+07", "wps": "1176.6", "ups": "0.01", "wpb": "131072", "bsz": "64", "num_updates": "3", "lr": "6e-05", "gnorm": "119.682", "loss_scale": "1", "train_wall": "111", "gb_free": "9.3", "wall": "208"} -2021-03-08 18:08:18 | INFO | train_inner | {"epoch": 1, "update": 0.007, "loss": "18.988", "ppl": "519921", "wps": "3189.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "4", "lr": "8e-05", "gnorm": "14.934", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "249"} -2021-03-08 18:08:59 | INFO | train_inner | {"epoch": 1, "update": 0.008, "loss": "20.08", "ppl": "1.10798e+06", "wps": "3223.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "5", "lr": "0.0001", "gnorm": "59.92", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "289"} -2021-03-08 18:09:39 | INFO | train_inner | {"epoch": 1, "update": 0.009, "loss": "18.323", "ppl": "327980", "wps": "3256.6", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "6", "lr": "8e-05", "gnorm": "37.425", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "330"} -2021-03-08 18:10:20 | INFO | train_inner | {"epoch": 1, "update": 0.01, "loss": "17.264", "ppl": "157354", "wps": "3188.7", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "7", "lr": "6e-05", "gnorm": "10.824", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "371"} -2021-03-08 18:11:01 | INFO | train_inner | {"epoch": 1, "update": 0.011, "loss": "16.794", "ppl": "113647", "wps": "3230", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "8", "lr": "4e-05", "gnorm": "5.616", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "411"} -2021-03-08 18:11:39 | INFO | train_inner | {"epoch": 1, "update": 0.012, "loss": "16.706", "ppl": "106938", "wps": "3384", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "9", "lr": "2e-05", "gnorm": "5.318", "loss_scale": "1", "train_wall": "39", "gb_free": "9.3", "wall": "450"} -2021-03-08 18:12:19 | INFO | train_inner | {"epoch": 1, "update": 0.013, "loss": "16.548", "ppl": "95796.2", "wps": "3274.4", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "10", "lr": "0", "gnorm": "5.22", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "490"} -2021-03-08 18:12:19 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 18:12:19 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 18:12:45 | INFO | valid | {"epoch": 1, "valid_loss": "16.624", "valid_ppl": "101000", "valid_wps": "10855.9", "valid_wpb": "123202", "valid_bsz": "60.5", "valid_num_updates": "10"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 18:12:45 | INFO | train | {"epoch": 1, "train_loss": "18.114", "train_ppl": "283776", "train_wps": "2567.8", "train_ups": "0.02", "train_wpb": "131072", "train_bsz": "64", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "29.562", "train_loss_scale": "1", "train_train_wall": "480", "train_gb_free": "9.3", "train_wall": "516"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | done training in 509.9 seconds -``` - -

      diff --git a/spaces/multimodalart/LoraTheExplorer/README.md b/spaces/multimodalart/LoraTheExplorer/README.md deleted file mode 100644 index 8e3026ccbb3545d2fbfa278a446495065d28776f..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/LoraTheExplorer/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: LoRA the Explorer -emoji: 🔎 🖼️ -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 4.1.2 -app_file: app.py -pinned: false -license: mit -suggested_hardware: a10g-large -models: ['nerijs/pixel-art-xl', 'Pclanglais/TintinIA', 'ProomptEngineer/pe-balloon-diffusion-style', 'joachimsallstrom/aether-cloud-lora-for-sdxl', 'ostris/crayon_style_lora_sdxl', 'jbilcke-hf/sdxl-zelda64', 'TheLastBen/Papercut_SDXL', 'fofr/sdxl-2004', 'joachimsallstrom/aether-ghost-lora-for-sdxl', 'artificialguybr/ColoringBookRedmond-V2', 'Norod78/SDXL-LofiGirl-Lora', 'ostris/embroidery_style_lora_sdxl', 'goofyai/3d_render_style_xl', 'ostris/watercolor_style_lora_sdxl', 'veryVANYA/ps1-graphics-sdxl-v2', 'TheLastBen/William_Eggleston_Style_SDXL', 'davizca87/c-a-g-coinmaker', 'goofyai/cyborg_style_xl', 'artificialguybr/ToyRedmond-ToyLoraForSDXL10', 'Fictiverse/Voxel_XL_Lora', 'minimaxir/sdxl-ugly-sonic-lora', 'nerijs/lego-brickheadz-xl', 'nerijs/lego-minifig-xl', 'Norod78/SDXL-jojoso_style-Lora', 'TheLastBen/Pikachu_SDXL', 'artificialguybr/LogoRedmond-LogoLoraForSDXL', 'Norod78/SDXL-StickerSheet-Lora', 'artificialguybr/LineAniRedmond-LinearMangaSDXL', 'TheLastBen/Josef_Koudelka_Style_SDXL', 'goofyai/Leonardo_Ai_Style_Illustration', 'Norod78/SDXL-simpstyle-Lora', 'artificialguybr/StoryBookRedmond', 'chillpixel/blacklight-makeup-sdxl-lora', 'ProomptEngineer/pe-neon-sign-style', 'ProomptEngineer/pe-lofi-hiphop-lofi-girl-concept', 'ProomptEngineer/pe-shitty-fanart', 'ProomptEngineer/pe-sandsculpter-style', 'ProomptEngineer/pe-shitty-medieval-paintings', 'ProomptEngineer/pe-courtroomsketch-style', 'ProomptEngineer/pe-funko-pop-diffusion-style', 'lordjia/lelo-lego-lora', 'KappaNeuro/dressed-animals', 'KappaNeuro/vintage-postage-stamps', 'KappaNeuro/video-installation', 'KappaNeuro/ukiyo-e-art', 'KappaNeuro/surreal-collage', 'KappaNeuro/stop-motion-animation', 'KappaNeuro/studio-ghibli-style', 'KappaNeuro/punk-collage', 'KappaNeuro/needlepoint', 'KappaNeuro/made-of-iridescent-foil', 'KappaNeuro/lascaux', 'KappaNeuro/color-palette', 'KappaNeuro/albumen-print', 'KappaNeuro/1987-action-figure-playset-packaging', 'Norod78/SDXL-VintageMagStyle-Lora', 'CiroN2022/road-sign', 'CiroN2022/mosaic-style', 'CiroN2022/cd-md-music', 'CiroN2022/hair-style', 'CiroN2022/overprint-effect', 'CiroN2022/toy-face', 'CiroN2022/ascii-art', 'artificialguybr/PixelArtRedmond', 'artificialguybr/StickersRedmond', 'artificialguybr/ClayAnimationRedmond', 'fofr/sdxl-vision-pro', 'joachimsallstrom/aether-glitch-lora-for-sdxl', 'artificialguybr/TshirtDesignRedmond-V2', 'ostris/ikea-instructions-lora-sdxl', 'ostris/super-cereal-sdxl-lora', 'jakedahn/sdxl-isometric-geology', 'artificialguybr/analogredmond-v2', 'stets/nintendo64_cartridge'] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nahue-passano/librispeech-corpus-generator/utils/audio.py b/spaces/nahue-passano/librispeech-corpus-generator/utils/audio.py deleted file mode 100644 index a78f7d454834fe37ea87ddf939ea3086d3847a84..0000000000000000000000000000000000000000 --- a/spaces/nahue-passano/librispeech-corpus-generator/utils/audio.py +++ /dev/null @@ -1,96 +0,0 @@ -from typing import Tuple, List -from pathlib import Path -import numpy as np -import soundfile as sf -import pandas as pd - -from utils.text import get_utterance_boundaries - - -def load_audio(audio_path: Path) -> Tuple[np.ndarray, float]: - """Loads an audio given its path - - Parameters - ---------- - audio_path : Path - Path of the audio file - - Returns - ------- - Tuple[np.ndarray, float] - Audio array and sample rate - """ - audio_array, sample_rate = sf.read(str(audio_path)) - return audio_array, sample_rate - - -def split_audio( - audio_array: np.ndarray, sample_rate: float, timestamp_list: list -) -> List[np.ndarray]: - """Slices audio_array with timestamps in timestamp_list - - Parameters - ---------- - audio_array : np.ndarray - Array of the audio to be splitted - sample_rate : float - Audio sample rate - timestamp_list : list - List of tuples containing the start and end of each stamp. - - Returns - ------- - List[np.ndarray] - List of numpy arrays with audio splits - """ - audio_segments = [] - for timestamp_i in timestamp_list: - start_sample = round(timestamp_i[0] * sample_rate) - end_sample = round(timestamp_i[1] * sample_rate) - audio_segments.append(audio_array[start_sample:end_sample]) - - return audio_segments - - -def save_audio_segments( - destination: Path, - audio_path: Path, - audio_segments: List[np.ndarray], - sample_rate: float, -) -> None: - """Saves audio segments from audio_segments in destination path. - - Parameters - ---------- - destination : Path - Path were segments will be saved - audio_name : Path - Name of the original audio file - audio_segments : List[np.ndarray] - List containing numpy arrays with the audio segments - sample_rate : float - Sample rate of the original audio file - """ - for i, segment in enumerate(audio_segments): - segment_path = destination / f"{audio_path.stem}-{i}.wav" - sf.write(str(segment_path), segment, sample_rate) - - -def generate_audio_splits( - audio_path: Path, timestamps_df: pd.DataFrame, destination: Path -) -> None: - """Splits an audio given its path and timestamps - - Parameters - ---------- - audio_path : Path - Path of the audio - timestamps_df : pd.DataFrame - DataFrame containing start and end of the utterances - destination : Path - Path were segments will be saved. - """ - audio_array, sample_rate = load_audio(audio_path) - timestamp_list = get_utterance_boundaries(timestamps_df) - audio_segments = split_audio(audio_array, sample_rate, timestamp_list) - save_audio_segments(destination, audio_path, audio_segments, sample_rate) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ibm Pc Clones Hardware Troubleshooting And Maintenance Govindarajulu Ebook.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ibm Pc Clones Hardware Troubleshooting And Maintenance Govindarajulu Ebook.md deleted file mode 100644 index e39d5ada827cb1f07845300da5ff3a561478cee2..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ibm Pc Clones Hardware Troubleshooting And Maintenance Govindarajulu Ebook.md +++ /dev/null @@ -1,44 +0,0 @@ - -

      Review of IBM PC and Clones: Hardware, Troubleshooting and Maintenance by B. Govindarajalu

      -

      If you are looking for a comprehensive and authoritative reference on the architecture, hardware organization, circuit design and maintenance of the IBM PC series and its clones, you might want to check out this book by B. Govindarajalu. The book covers the detailed aspects of hardware circuits, software concepts and interfaces, test equipments and diagnostic aids, as well as common problems and their troubleshooting procedures. The book also includes a CD-ROM that contains useful software tools and utilities for PC diagnosis and repair.

      -

      The book is divided into 24 chapters that cover topics such as:

      -

      ibm pc clones hardware, troubleshooting and maintenance govindarajulu ebook


      Download Ziphttps://urlcod.com/2uIc1Q



      -
        -
      • The evolution of the IBM PC and its clones
      • -
      • The system architecture and bus standards
      • -
      • The system board components and connectors
      • -
      • The power supply unit and its protection circuits
      • -
      • The memory subsystem and its expansion options
      • -
      • The input/output subsystem and its devices
      • -
      • The video subsystem and its display modes
      • -
      • The disk subsystem and its interfaces
      • -
      • The keyboard and mouse subsystems
      • -
      • The printer subsystem and its types
      • -
      • The communication subsystem and its protocols
      • -
      • The multimedia subsystem and its components
      • -
      • The BIOS functions and services
      • -
      • The DOS functions and commands
      • -
      • The Windows operating system and its features
      • -
      • The system configuration and optimization techniques
      • -
      • The system diagnostic tools and utilities
      • -
      • The system troubleshooting methods and procedures
      • -
      • The preventive maintenance practices and guidelines
      • -
      • The system upgrade and replacement strategies
      • -
      • The system assembly and disassembly techniques
      • -
      • The system board testing and repair techniques
      • -
      • The power supply testing and repair techniques
      • -
      • The disk drive testing and repair techniques
      • -
      - -

      The book is written in a clear and concise style, with numerous diagrams, tables, charts, examples, exercises, review questions, case studies, tips, tricks, hints, warnings, cautions, notes, references, appendices, glossary, index, etc. The book is suitable for students, teachers, professionals, technicians, hobbyists, enthusiasts, or anyone who wants to learn more about the IBM PC series and its clones.

      - -

      You can find this book online at Google Books[^1^] or Google Books[^2^].

      - -

      The IBM PC series and its clones are among the most popular and influential personal computers in the history of computing. They have been widely used for various purposes such as education, business, entertainment, gaming, etc. They have also spawned many innovations and developments in the fields of hardware, software, networking, multimedia, etc. The IBM PC series and its clones have set the standards and benchmarks for the personal computer industry.

      - -

      However, as with any complex system, the IBM PC series and its clones are prone to various problems and failures that can affect their performance, functionality, reliability, and usability. These problems can be caused by various factors such as faulty components, improper installation, incorrect configuration, incompatible devices, corrupted software, virus infection, power surge, physical damage, etc. These problems can result in symptoms such as no power, no display, no boot, no sound, no keyboard, no mouse, no printer, no communication, no disk access, etc.

      - -

      Therefore, it is essential to have a good knowledge and understanding of the IBM PC series and its clones and their hardware troubleshooting and maintenance techniques. This can help to prevent or minimize the occurrence of problems and failures, as well as to diagnose and resolve them quickly and effectively. This can also help to extend the lifespan and improve the performance of the IBM PC series and its clones.

      -

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/neural-ti/NeTI/scripts/__init__.py b/spaces/neural-ti/NeTI/scripts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/backbone/backbone.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/backbone/backbone.py deleted file mode 100644 index e1c765a6b38542f66cae55216bba697a6626d128..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/backbone/backbone.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from abc import ABCMeta, abstractmethod -from typing import Dict -import torch.nn as nn - -from detectron2.layers import ShapeSpec - -__all__ = ["Backbone"] - - -class Backbone(nn.Module, metaclass=ABCMeta): - """ - Abstract base class for network backbones. - """ - - def __init__(self): - """ - The `__init__` method of any subclass can specify its own set of arguments. - """ - super().__init__() - - @abstractmethod - def forward(self): - """ - Subclasses must override this method, but adhere to the same return type. - - Returns: - dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor - """ - pass - - @property - def size_divisibility(self) -> int: - """ - Some backbones require the input height and width to be divisible by a - specific integer. This is typically true for encoder / decoder type networks - with lateral connection (e.g., FPN) for which feature maps need to match - dimension in the "bottom up" and "top down" paths. Set to 0 if no specific - input size divisibility is required. - """ - return 0 - - @property - def padding_constraints(self) -> Dict[str, int]: - """ - This property is a generalization of size_divisibility. Some backbones and training - recipes require specific padding constraints, such as enforcing divisibility by a specific - integer (e.g., FPN) or padding to a square (e.g., ViTDet with large-scale jitter - in :paper:vitdet). `padding_constraints` contains these optional items like: - { - "size_divisibility": int, - "square_size": int, - # Future options are possible - } - `size_divisibility` will read from here if presented and `square_size` indicates the - square padding size if `square_size` > 0. - - TODO: use type of Dict[str, int] to avoid torchscipt issues. The type of padding_constraints - could be generalized as TypedDict (Python 3.8+) to support more types in the future. - """ - return {} - - def output_shape(self): - """ - Returns: - dict[str->ShapeSpec] - """ - # this is a backward-compatible default - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/evaluation/densepose_coco_evaluation.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/evaluation/densepose_coco_evaluation.py deleted file mode 100644 index 6324257396d4abbe270920107e0bb368a86f67fc..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/evaluation/densepose_coco_evaluation.py +++ /dev/null @@ -1,1303 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# This is a modified version of cocoeval.py where we also have the densepose evaluation. - -__author__ = "tsungyi" - -import copy -import datetime -import logging -import numpy as np -import pickle -import time -from collections import defaultdict -from enum import Enum -from typing import Any, Dict, Tuple -import scipy.spatial.distance as ssd -import torch -import torch.nn.functional as F -from pycocotools import mask as maskUtils -from scipy.io import loadmat -from scipy.ndimage import zoom as spzoom - -from detectron2.utils.file_io import PathManager - -from densepose.converters.chart_output_to_chart_result import resample_uv_tensors_to_bbox -from densepose.converters.segm_to_mask import ( - resample_coarse_segm_tensor_to_bbox, - resample_fine_and_coarse_segm_tensors_to_bbox, -) -from densepose.modeling.cse.utils import squared_euclidean_distance_matrix -from densepose.structures import DensePoseDataRelative -from densepose.structures.mesh import create_mesh - -logger = logging.getLogger(__name__) - - -class DensePoseEvalMode(str, Enum): - # use both masks and geodesic distances (GPS * IOU) to compute scores - GPSM = "gpsm" - # use only geodesic distances (GPS) to compute scores - GPS = "gps" - # use only masks (IOU) to compute scores - IOU = "iou" - - -class DensePoseDataMode(str, Enum): - # use estimated IUV data (default mode) - IUV_DT = "iuvdt" - # use ground truth IUV data - IUV_GT = "iuvgt" - # use ground truth labels I and set UV to 0 - I_GT_UV_0 = "igtuv0" - # use ground truth labels I and estimated UV coordinates - I_GT_UV_DT = "igtuvdt" - # use estimated labels I and set UV to 0 - I_DT_UV_0 = "idtuv0" - - -class DensePoseCocoEval(object): - # Interface for evaluating detection on the Microsoft COCO dataset. - # - # The usage for CocoEval is as follows: - # cocoGt=..., cocoDt=... # load dataset and results - # E = CocoEval(cocoGt,cocoDt); # initialize CocoEval object - # E.params.recThrs = ...; # set parameters as desired - # E.evaluate(); # run per image evaluation - # E.accumulate(); # accumulate per image results - # E.summarize(); # display summary metrics of results - # For example usage see evalDemo.m and http://mscoco.org/. - # - # The evaluation parameters are as follows (defaults in brackets): - # imgIds - [all] N img ids to use for evaluation - # catIds - [all] K cat ids to use for evaluation - # iouThrs - [.5:.05:.95] T=10 IoU thresholds for evaluation - # recThrs - [0:.01:1] R=101 recall thresholds for evaluation - # areaRng - [...] A=4 object area ranges for evaluation - # maxDets - [1 10 100] M=3 thresholds on max detections per image - # iouType - ['segm'] set iouType to 'segm', 'bbox', 'keypoints' or 'densepose' - # iouType replaced the now DEPRECATED useSegm parameter. - # useCats - [1] if true use category labels for evaluation - # Note: if useCats=0 category labels are ignored as in proposal scoring. - # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified. - # - # evaluate(): evaluates detections on every image and every category and - # concats the results into the "evalImgs" with fields: - # dtIds - [1xD] id for each of the D detections (dt) - # gtIds - [1xG] id for each of the G ground truths (gt) - # dtMatches - [TxD] matching gt id at each IoU or 0 - # gtMatches - [TxG] matching dt id at each IoU or 0 - # dtScores - [1xD] confidence of each dt - # gtIgnore - [1xG] ignore flag for each gt - # dtIgnore - [TxD] ignore flag for each dt at each IoU - # - # accumulate(): accumulates the per-image, per-category evaluation - # results in "evalImgs" into the dictionary "eval" with fields: - # params - parameters used for evaluation - # date - date evaluation was performed - # counts - [T,R,K,A,M] parameter dimensions (see above) - # precision - [TxRxKxAxM] precision for every evaluation setting - # recall - [TxKxAxM] max recall for every evaluation setting - # Note: precision and recall==-1 for settings with no gt objects. - # - # See also coco, mask, pycocoDemo, pycocoEvalDemo - # - # Microsoft COCO Toolbox. version 2.0 - # Data, paper, and tutorials available at: http://mscoco.org/ - # Code written by Piotr Dollar and Tsung-Yi Lin, 2015. - # Licensed under the Simplified BSD License [see coco/license.txt] - def __init__( - self, - cocoGt=None, - cocoDt=None, - iouType: str = "densepose", - multi_storage=None, - embedder=None, - dpEvalMode: DensePoseEvalMode = DensePoseEvalMode.GPS, - dpDataMode: DensePoseDataMode = DensePoseDataMode.IUV_DT, - ): - """ - Initialize CocoEval using coco APIs for gt and dt - :param cocoGt: coco object with ground truth annotations - :param cocoDt: coco object with detection results - :return: None - """ - self.cocoGt = cocoGt # ground truth COCO API - self.cocoDt = cocoDt # detections COCO API - self.multi_storage = multi_storage - self.embedder = embedder - self._dpEvalMode = dpEvalMode - self._dpDataMode = dpDataMode - self.evalImgs = defaultdict(list) # per-image per-category eval results [KxAxI] - self.eval = {} # accumulated evaluation results - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - self.params = Params(iouType=iouType) # parameters - self._paramsEval = {} # parameters for evaluation - self.stats = [] # result summarization - self.ious = {} # ious between all gts and dts - if cocoGt is not None: - self.params.imgIds = sorted(cocoGt.getImgIds()) - self.params.catIds = sorted(cocoGt.getCatIds()) - self.ignoreThrBB = 0.7 - self.ignoreThrUV = 0.9 - - def _loadGEval(self): - smpl_subdiv_fpath = PathManager.get_local_path( - "https://dl.fbaipublicfiles.com/densepose/data/SMPL_subdiv.mat" - ) - pdist_transform_fpath = PathManager.get_local_path( - "https://dl.fbaipublicfiles.com/densepose/data/SMPL_SUBDIV_TRANSFORM.mat" - ) - pdist_matrix_fpath = PathManager.get_local_path( - "https://dl.fbaipublicfiles.com/densepose/data/Pdist_matrix.pkl", timeout_sec=120 - ) - SMPL_subdiv = loadmat(smpl_subdiv_fpath) - self.PDIST_transform = loadmat(pdist_transform_fpath) - self.PDIST_transform = self.PDIST_transform["index"].squeeze() - UV = np.array([SMPL_subdiv["U_subdiv"], SMPL_subdiv["V_subdiv"]]).squeeze() - ClosestVertInds = np.arange(UV.shape[1]) + 1 - self.Part_UVs = [] - self.Part_ClosestVertInds = [] - for i in np.arange(24): - self.Part_UVs.append(UV[:, SMPL_subdiv["Part_ID_subdiv"].squeeze() == (i + 1)]) - self.Part_ClosestVertInds.append( - ClosestVertInds[SMPL_subdiv["Part_ID_subdiv"].squeeze() == (i + 1)] - ) - - with open(pdist_matrix_fpath, "rb") as hFile: - arrays = pickle.load(hFile, encoding="latin1") - self.Pdist_matrix = arrays["Pdist_matrix"] - self.Part_ids = np.array(SMPL_subdiv["Part_ID_subdiv"].squeeze()) - # Mean geodesic distances for parts. - self.Mean_Distances = np.array([0, 0.351, 0.107, 0.126, 0.237, 0.173, 0.142, 0.128, 0.150]) - # Coarse Part labels. - self.CoarseParts = np.array( - [0, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8] - ) - - def _prepare(self): - """ - Prepare ._gts and ._dts for evaluation based on params - :return: None - """ - - def _toMask(anns, coco): - # modify ann['segmentation'] by reference - for ann in anns: - # safeguard for invalid segmentation annotation; - # annotations containing empty lists exist in the posetrack - # dataset. This is not a correct segmentation annotation - # in terms of COCO format; we need to deal with it somehow - segm = ann["segmentation"] - if type(segm) == list and len(segm) == 0: - ann["segmentation"] = None - continue - rle = coco.annToRLE(ann) - ann["segmentation"] = rle - - def _getIgnoreRegion(iid, coco): - img = coco.imgs[iid] - - if "ignore_regions_x" not in img.keys(): - return None - - if len(img["ignore_regions_x"]) == 0: - return None - - rgns_merged = [ - [v for xy in zip(region_x, region_y) for v in xy] - for region_x, region_y in zip(img["ignore_regions_x"], img["ignore_regions_y"]) - ] - rles = maskUtils.frPyObjects(rgns_merged, img["height"], img["width"]) - rle = maskUtils.merge(rles) - return maskUtils.decode(rle) - - def _checkIgnore(dt, iregion): - if iregion is None: - return True - - bb = np.array(dt["bbox"]).astype(int) - x1, y1, x2, y2 = bb[0], bb[1], bb[0] + bb[2], bb[1] + bb[3] - x2 = min([x2, iregion.shape[1]]) - y2 = min([y2, iregion.shape[0]]) - - if bb[2] * bb[3] == 0: - return False - - crop_iregion = iregion[y1:y2, x1:x2] - - if crop_iregion.sum() == 0: - return True - - if "densepose" not in dt.keys(): # filtering boxes - return crop_iregion.sum() / bb[2] / bb[3] < self.ignoreThrBB - - # filtering UVs - ignoremask = np.require(crop_iregion, requirements=["F"]) - mask = self._extract_mask(dt) - uvmask = np.require(np.asarray(mask > 0), dtype=np.uint8, requirements=["F"]) - uvmask_ = maskUtils.encode(uvmask) - ignoremask_ = maskUtils.encode(ignoremask) - uviou = maskUtils.iou([uvmask_], [ignoremask_], [1])[0] - return uviou < self.ignoreThrUV - - p = self.params - - if p.useCats: - gts = self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds)) - dts = self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds)) - else: - gts = self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds)) - dts = self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds)) - - imns = self.cocoGt.loadImgs(p.imgIds) - self.size_mapping = {} - for im in imns: - self.size_mapping[im["id"]] = [im["height"], im["width"]] - - # if iouType == 'uv', add point gt annotations - if p.iouType == "densepose": - self._loadGEval() - - # convert ground truth to mask if iouType == 'segm' - if p.iouType == "segm": - _toMask(gts, self.cocoGt) - _toMask(dts, self.cocoDt) - - # set ignore flag - for gt in gts: - gt["ignore"] = gt["ignore"] if "ignore" in gt else 0 - gt["ignore"] = "iscrowd" in gt and gt["iscrowd"] - if p.iouType == "keypoints": - gt["ignore"] = (gt["num_keypoints"] == 0) or gt["ignore"] - if p.iouType == "densepose": - gt["ignore"] = ("dp_x" in gt) == 0 - if p.iouType == "segm": - gt["ignore"] = gt["segmentation"] is None - - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - self._igrgns = defaultdict(list) - - for gt in gts: - iid = gt["image_id"] - if iid not in self._igrgns.keys(): - self._igrgns[iid] = _getIgnoreRegion(iid, self.cocoGt) - if _checkIgnore(gt, self._igrgns[iid]): - self._gts[iid, gt["category_id"]].append(gt) - for dt in dts: - iid = dt["image_id"] - if (iid not in self._igrgns) or _checkIgnore(dt, self._igrgns[iid]): - self._dts[iid, dt["category_id"]].append(dt) - - self.evalImgs = defaultdict(list) # per-image per-category evaluation results - self.eval = {} # accumulated evaluation results - - def evaluate(self): - """ - Run per image evaluation on given images and store results (a list of dict) in self.evalImgs - :return: None - """ - tic = time.time() - logger.info("Running per image DensePose evaluation... {}".format(self.params.iouType)) - p = self.params - # add backward compatibility if useSegm is specified in params - if p.useSegm is not None: - p.iouType = "segm" if p.useSegm == 1 else "bbox" - logger.info("useSegm (deprecated) is not None. Running DensePose evaluation") - p.imgIds = list(np.unique(p.imgIds)) - if p.useCats: - p.catIds = list(np.unique(p.catIds)) - p.maxDets = sorted(p.maxDets) - self.params = p - - self._prepare() - # loop through images, area range, max detection number - catIds = p.catIds if p.useCats else [-1] - - if p.iouType in ["segm", "bbox"]: - computeIoU = self.computeIoU - elif p.iouType == "keypoints": - computeIoU = self.computeOks - elif p.iouType == "densepose": - computeIoU = self.computeOgps - if self._dpEvalMode in {DensePoseEvalMode.GPSM, DensePoseEvalMode.IOU}: - self.real_ious = { - (imgId, catId): self.computeDPIoU(imgId, catId) - for imgId in p.imgIds - for catId in catIds - } - - self.ious = { - (imgId, catId): computeIoU(imgId, catId) for imgId in p.imgIds for catId in catIds - } - - evaluateImg = self.evaluateImg - maxDet = p.maxDets[-1] - self.evalImgs = [ - evaluateImg(imgId, catId, areaRng, maxDet) - for catId in catIds - for areaRng in p.areaRng - for imgId in p.imgIds - ] - self._paramsEval = copy.deepcopy(self.params) - toc = time.time() - logger.info("DensePose evaluation DONE (t={:0.2f}s).".format(toc - tic)) - - def getDensePoseMask(self, polys): - maskGen = np.zeros([256, 256]) - stop = min(len(polys) + 1, 15) - for i in range(1, stop): - if polys[i - 1]: - currentMask = maskUtils.decode(polys[i - 1]) - maskGen[currentMask > 0] = i - return maskGen - - def _generate_rlemask_on_image(self, mask, imgId, data): - bbox_xywh = np.array(data["bbox"]) - x, y, w, h = bbox_xywh - im_h, im_w = self.size_mapping[imgId] - im_mask = np.zeros((im_h, im_w), dtype=np.uint8) - if mask is not None: - x0 = max(int(x), 0) - x1 = min(int(x + w), im_w, int(x) + mask.shape[1]) - y0 = max(int(y), 0) - y1 = min(int(y + h), im_h, int(y) + mask.shape[0]) - y = int(y) - x = int(x) - im_mask[y0:y1, x0:x1] = mask[y0 - y : y1 - y, x0 - x : x1 - x] - im_mask = np.require(np.asarray(im_mask > 0), dtype=np.uint8, requirements=["F"]) - rle_mask = maskUtils.encode(np.array(im_mask[:, :, np.newaxis], order="F"))[0] - return rle_mask - - def computeDPIoU(self, imgId, catId): - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return [] - inds = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in inds] - if len(dt) > p.maxDets[-1]: - dt = dt[0 : p.maxDets[-1]] - - gtmasks = [] - for g in gt: - if DensePoseDataRelative.S_KEY in g: - # convert DensePose mask to a binary mask - mask = np.minimum(self.getDensePoseMask(g[DensePoseDataRelative.S_KEY]), 1.0) - _, _, w, h = g["bbox"] - scale_x = float(max(w, 1)) / mask.shape[1] - scale_y = float(max(h, 1)) / mask.shape[0] - mask = spzoom(mask, (scale_y, scale_x), order=1, prefilter=False) - mask = np.array(mask > 0.5, dtype=np.uint8) - rle_mask = self._generate_rlemask_on_image(mask, imgId, g) - elif "segmentation" in g: - segmentation = g["segmentation"] - if isinstance(segmentation, list) and segmentation: - # polygons - im_h, im_w = self.size_mapping[imgId] - rles = maskUtils.frPyObjects(segmentation, im_h, im_w) - rle_mask = maskUtils.merge(rles) - elif isinstance(segmentation, dict): - if isinstance(segmentation["counts"], list): - # uncompressed RLE - im_h, im_w = self.size_mapping[imgId] - rle_mask = maskUtils.frPyObjects(segmentation, im_h, im_w) - else: - # compressed RLE - rle_mask = segmentation - else: - rle_mask = self._generate_rlemask_on_image(None, imgId, g) - else: - rle_mask = self._generate_rlemask_on_image(None, imgId, g) - gtmasks.append(rle_mask) - - dtmasks = [] - for d in dt: - mask = self._extract_mask(d) - mask = np.require(np.asarray(mask > 0), dtype=np.uint8, requirements=["F"]) - rle_mask = self._generate_rlemask_on_image(mask, imgId, d) - dtmasks.append(rle_mask) - - # compute iou between each dt and gt region - iscrowd = [int(o.get("iscrowd", 0)) for o in gt] - iousDP = maskUtils.iou(dtmasks, gtmasks, iscrowd) - return iousDP - - def computeIoU(self, imgId, catId): - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return [] - inds = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in inds] - if len(dt) > p.maxDets[-1]: - dt = dt[0 : p.maxDets[-1]] - - if p.iouType == "segm": - g = [g["segmentation"] for g in gt if g["segmentation"] is not None] - d = [d["segmentation"] for d in dt if d["segmentation"] is not None] - elif p.iouType == "bbox": - g = [g["bbox"] for g in gt] - d = [d["bbox"] for d in dt] - else: - raise Exception("unknown iouType for iou computation") - - # compute iou between each dt and gt region - iscrowd = [int(o.get("iscrowd", 0)) for o in gt] - ious = maskUtils.iou(d, g, iscrowd) - return ious - - def computeOks(self, imgId, catId): - p = self.params - # dimension here should be Nxm - gts = self._gts[imgId, catId] - dts = self._dts[imgId, catId] - inds = np.argsort([-d["score"] for d in dts], kind="mergesort") - dts = [dts[i] for i in inds] - if len(dts) > p.maxDets[-1]: - dts = dts[0 : p.maxDets[-1]] - # if len(gts) == 0 and len(dts) == 0: - if len(gts) == 0 or len(dts) == 0: - return [] - ious = np.zeros((len(dts), len(gts))) - sigmas = ( - np.array( - [ - 0.26, - 0.25, - 0.25, - 0.35, - 0.35, - 0.79, - 0.79, - 0.72, - 0.72, - 0.62, - 0.62, - 1.07, - 1.07, - 0.87, - 0.87, - 0.89, - 0.89, - ] - ) - / 10.0 - ) - vars = (sigmas * 2) ** 2 - k = len(sigmas) - # compute oks between each detection and ground truth object - for j, gt in enumerate(gts): - # create bounds for ignore regions(double the gt bbox) - g = np.array(gt["keypoints"]) - xg = g[0::3] - yg = g[1::3] - vg = g[2::3] - k1 = np.count_nonzero(vg > 0) - bb = gt["bbox"] - x0 = bb[0] - bb[2] - x1 = bb[0] + bb[2] * 2 - y0 = bb[1] - bb[3] - y1 = bb[1] + bb[3] * 2 - for i, dt in enumerate(dts): - d = np.array(dt["keypoints"]) - xd = d[0::3] - yd = d[1::3] - if k1 > 0: - # measure the per-keypoint distance if keypoints visible - dx = xd - xg - dy = yd - yg - else: - # measure minimum distance to keypoints in (x0,y0) & (x1,y1) - z = np.zeros(k) - dx = np.max((z, x0 - xd), axis=0) + np.max((z, xd - x1), axis=0) - dy = np.max((z, y0 - yd), axis=0) + np.max((z, yd - y1), axis=0) - e = (dx**2 + dy**2) / vars / (gt["area"] + np.spacing(1)) / 2 - if k1 > 0: - e = e[vg > 0] - ious[i, j] = np.sum(np.exp(-e)) / e.shape[0] - return ious - - def _extract_mask(self, dt: Dict[str, Any]) -> np.ndarray: - if "densepose" in dt: - densepose_results_quantized = dt["densepose"] - return densepose_results_quantized.labels_uv_uint8[0].numpy() - elif "cse_mask" in dt: - return dt["cse_mask"] - elif "coarse_segm" in dt: - dy = max(int(dt["bbox"][3]), 1) - dx = max(int(dt["bbox"][2]), 1) - return ( - F.interpolate( - dt["coarse_segm"].unsqueeze(0), - (dy, dx), - mode="bilinear", - align_corners=False, - ) - .squeeze(0) - .argmax(0) - .numpy() - .astype(np.uint8) - ) - elif "record_id" in dt: - assert ( - self.multi_storage is not None - ), f"Storage record id encountered in a detection {dt}, but no storage provided!" - record = self.multi_storage.get(dt["rank"], dt["record_id"]) - coarse_segm = record["coarse_segm"] - dy = max(int(dt["bbox"][3]), 1) - dx = max(int(dt["bbox"][2]), 1) - return ( - F.interpolate( - coarse_segm.unsqueeze(0), - (dy, dx), - mode="bilinear", - align_corners=False, - ) - .squeeze(0) - .argmax(0) - .numpy() - .astype(np.uint8) - ) - else: - raise Exception(f"No mask data in the detection: {dt}") - raise ValueError('The prediction dict needs to contain either "densepose" or "cse_mask"') - - def _extract_iuv( - self, densepose_data: np.ndarray, py: np.ndarray, px: np.ndarray, gt: Dict[str, Any] - ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: - """ - Extract arrays of I, U and V values at given points as numpy arrays - given the data mode stored in self._dpDataMode - """ - if self._dpDataMode == DensePoseDataMode.IUV_DT: - # estimated labels and UV (default) - ipoints = densepose_data[0, py, px] - upoints = densepose_data[1, py, px] / 255.0 # convert from uint8 by /255. - vpoints = densepose_data[2, py, px] / 255.0 - elif self._dpDataMode == DensePoseDataMode.IUV_GT: - # ground truth - ipoints = np.array(gt["dp_I"]) - upoints = np.array(gt["dp_U"]) - vpoints = np.array(gt["dp_V"]) - elif self._dpDataMode == DensePoseDataMode.I_GT_UV_0: - # ground truth labels, UV = 0 - ipoints = np.array(gt["dp_I"]) - upoints = upoints * 0.0 - vpoints = vpoints * 0.0 - elif self._dpDataMode == DensePoseDataMode.I_GT_UV_DT: - # ground truth labels, estimated UV - ipoints = np.array(gt["dp_I"]) - upoints = densepose_data[1, py, px] / 255.0 # convert from uint8 by /255. - vpoints = densepose_data[2, py, px] / 255.0 - elif self._dpDataMode == DensePoseDataMode.I_DT_UV_0: - # estimated labels, UV = 0 - ipoints = densepose_data[0, py, px] - upoints = upoints * 0.0 - vpoints = vpoints * 0.0 - else: - raise ValueError(f"Unknown data mode: {self._dpDataMode}") - return ipoints, upoints, vpoints - - def computeOgps_single_pair(self, dt, gt, py, px, pt_mask): - if "densepose" in dt: - ipoints, upoints, vpoints = self.extract_iuv_from_quantized(dt, gt, py, px, pt_mask) - return self.computeOgps_single_pair_iuv(dt, gt, ipoints, upoints, vpoints) - elif "u" in dt: - ipoints, upoints, vpoints = self.extract_iuv_from_raw(dt, gt, py, px, pt_mask) - return self.computeOgps_single_pair_iuv(dt, gt, ipoints, upoints, vpoints) - elif "record_id" in dt: - assert ( - self.multi_storage is not None - ), f"Storage record id encountered in detection {dt}, but no storage provided!" - record = self.multi_storage.get(dt["rank"], dt["record_id"]) - record["bbox"] = dt["bbox"] - if "u" in record: - ipoints, upoints, vpoints = self.extract_iuv_from_raw(record, gt, py, px, pt_mask) - return self.computeOgps_single_pair_iuv(dt, gt, ipoints, upoints, vpoints) - elif "embedding" in record: - return self.computeOgps_single_pair_cse( - dt, - gt, - py, - px, - pt_mask, - record["coarse_segm"], - record["embedding"], - record["bbox"], - ) - else: - raise Exception(f"Unknown record format: {record}") - elif "embedding" in dt: - return self.computeOgps_single_pair_cse( - dt, gt, py, px, pt_mask, dt["coarse_segm"], dt["embedding"], dt["bbox"] - ) - raise Exception(f"Unknown detection format: {dt}") - - def extract_iuv_from_quantized(self, dt, gt, py, px, pt_mask): - densepose_results_quantized = dt["densepose"] - ipoints, upoints, vpoints = self._extract_iuv( - densepose_results_quantized.labels_uv_uint8.numpy(), py, px, gt - ) - ipoints[pt_mask == -1] = 0 - return ipoints, upoints, vpoints - - def extract_iuv_from_raw(self, dt, gt, py, px, pt_mask): - labels_dt = resample_fine_and_coarse_segm_tensors_to_bbox( - dt["fine_segm"].unsqueeze(0), - dt["coarse_segm"].unsqueeze(0), - dt["bbox"], - ) - uv = resample_uv_tensors_to_bbox( - dt["u"].unsqueeze(0), dt["v"].unsqueeze(0), labels_dt.squeeze(0), dt["bbox"] - ) - labels_uv_uint8 = torch.cat((labels_dt.byte(), (uv * 255).clamp(0, 255).byte())) - ipoints, upoints, vpoints = self._extract_iuv(labels_uv_uint8.numpy(), py, px, gt) - ipoints[pt_mask == -1] = 0 - return ipoints, upoints, vpoints - - def computeOgps_single_pair_iuv(self, dt, gt, ipoints, upoints, vpoints): - cVertsGT, ClosestVertsGTTransformed = self.findAllClosestVertsGT(gt) - cVerts = self.findAllClosestVertsUV(upoints, vpoints, ipoints) - # Get pairwise geodesic distances between gt and estimated mesh points. - dist = self.getDistancesUV(ClosestVertsGTTransformed, cVerts) - # Compute the Ogps measure. - # Find the mean geodesic normalization distance for - # each GT point, based on which part it is on. - Current_Mean_Distances = self.Mean_Distances[ - self.CoarseParts[self.Part_ids[cVertsGT[cVertsGT > 0].astype(int) - 1]] - ] - return dist, Current_Mean_Distances - - def computeOgps_single_pair_cse( - self, dt, gt, py, px, pt_mask, coarse_segm, embedding, bbox_xywh_abs - ): - # 0-based mesh vertex indices - cVertsGT = torch.as_tensor(gt["dp_vertex"], dtype=torch.int64) - # label for each pixel of the bbox, [H, W] tensor of long - labels_dt = resample_coarse_segm_tensor_to_bbox( - coarse_segm.unsqueeze(0), bbox_xywh_abs - ).squeeze(0) - x, y, w, h = bbox_xywh_abs - # embedding for each pixel of the bbox, [D, H, W] tensor of float32 - embedding = F.interpolate( - embedding.unsqueeze(0), (int(h), int(w)), mode="bilinear", align_corners=False - ).squeeze(0) - # valid locations py, px - py_pt = torch.from_numpy(py[pt_mask > -1]) - px_pt = torch.from_numpy(px[pt_mask > -1]) - cVerts = torch.ones_like(cVertsGT) * -1 - cVerts[pt_mask > -1] = self.findClosestVertsCse( - embedding, py_pt, px_pt, labels_dt, gt["ref_model"] - ) - # Get pairwise geodesic distances between gt and estimated mesh points. - dist = self.getDistancesCse(cVertsGT, cVerts, gt["ref_model"]) - # normalize distances - if (gt["ref_model"] == "smpl_27554") and ("dp_I" in gt): - Current_Mean_Distances = self.Mean_Distances[ - self.CoarseParts[np.array(gt["dp_I"], dtype=int)] - ] - else: - Current_Mean_Distances = 0.255 - return dist, Current_Mean_Distances - - def computeOgps(self, imgId, catId): - p = self.params - # dimension here should be Nxm - g = self._gts[imgId, catId] - d = self._dts[imgId, catId] - inds = np.argsort([-d_["score"] for d_ in d], kind="mergesort") - d = [d[i] for i in inds] - if len(d) > p.maxDets[-1]: - d = d[0 : p.maxDets[-1]] - # if len(gts) == 0 and len(dts) == 0: - if len(g) == 0 or len(d) == 0: - return [] - ious = np.zeros((len(d), len(g))) - # compute opgs between each detection and ground truth object - # sigma = self.sigma #0.255 # dist = 0.3m corresponds to ogps = 0.5 - # 1 # dist = 0.3m corresponds to ogps = 0.96 - # 1.45 # dist = 1.7m (person height) corresponds to ogps = 0.5) - for j, gt in enumerate(g): - if not gt["ignore"]: - g_ = gt["bbox"] - for i, dt in enumerate(d): - # - dy = int(dt["bbox"][3]) - dx = int(dt["bbox"][2]) - dp_x = np.array(gt["dp_x"]) * g_[2] / 255.0 - dp_y = np.array(gt["dp_y"]) * g_[3] / 255.0 - py = (dp_y + g_[1] - dt["bbox"][1]).astype(int) - px = (dp_x + g_[0] - dt["bbox"][0]).astype(int) - # - pts = np.zeros(len(px)) - pts[px >= dx] = -1 - pts[py >= dy] = -1 - pts[px < 0] = -1 - pts[py < 0] = -1 - if len(pts) < 1: - ogps = 0.0 - elif np.max(pts) == -1: - ogps = 0.0 - else: - px[pts == -1] = 0 - py[pts == -1] = 0 - dists_between_matches, dist_norm_coeffs = self.computeOgps_single_pair( - dt, gt, py, px, pts - ) - # Compute gps - ogps_values = np.exp( - -(dists_between_matches**2) / (2 * (dist_norm_coeffs**2)) - ) - # - ogps = np.mean(ogps_values) if len(ogps_values) > 0 else 0.0 - ious[i, j] = ogps - - gbb = [gt["bbox"] for gt in g] - dbb = [dt["bbox"] for dt in d] - - # compute iou between each dt and gt region - iscrowd = [int(o.get("iscrowd", 0)) for o in g] - ious_bb = maskUtils.iou(dbb, gbb, iscrowd) - return ious, ious_bb - - def evaluateImg(self, imgId, catId, aRng, maxDet): - """ - perform evaluation for single category and image - :return: dict (single image results) - """ - - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return None - - for g in gt: - # g['_ignore'] = g['ignore'] - if g["ignore"] or (g["area"] < aRng[0] or g["area"] > aRng[1]): - g["_ignore"] = True - else: - g["_ignore"] = False - - # sort dt highest score first, sort gt ignore last - gtind = np.argsort([g["_ignore"] for g in gt], kind="mergesort") - gt = [gt[i] for i in gtind] - dtind = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in dtind[0:maxDet]] - iscrowd = [int(o.get("iscrowd", 0)) for o in gt] - # load computed ious - if p.iouType == "densepose": - # print('Checking the length', len(self.ious[imgId, catId])) - # if len(self.ious[imgId, catId]) == 0: - # print(self.ious[imgId, catId]) - ious = ( - self.ious[imgId, catId][0][:, gtind] - if len(self.ious[imgId, catId]) > 0 - else self.ious[imgId, catId] - ) - ioubs = ( - self.ious[imgId, catId][1][:, gtind] - if len(self.ious[imgId, catId]) > 0 - else self.ious[imgId, catId] - ) - if self._dpEvalMode in {DensePoseEvalMode.GPSM, DensePoseEvalMode.IOU}: - iousM = ( - self.real_ious[imgId, catId][:, gtind] - if len(self.real_ious[imgId, catId]) > 0 - else self.real_ious[imgId, catId] - ) - else: - ious = ( - self.ious[imgId, catId][:, gtind] - if len(self.ious[imgId, catId]) > 0 - else self.ious[imgId, catId] - ) - - T = len(p.iouThrs) - G = len(gt) - D = len(dt) - gtm = np.zeros((T, G)) - dtm = np.zeros((T, D)) - gtIg = np.array([g["_ignore"] for g in gt]) - dtIg = np.zeros((T, D)) - if np.all(gtIg) and p.iouType == "densepose": - dtIg = np.logical_or(dtIg, True) - - if len(ious) > 0: # and not p.iouType == 'densepose': - for tind, t in enumerate(p.iouThrs): - for dind, d in enumerate(dt): - # information about best match so far (m=-1 -> unmatched) - iou = min([t, 1 - 1e-10]) - m = -1 - for gind, _g in enumerate(gt): - # if this gt already matched, and not a crowd, continue - if gtm[tind, gind] > 0 and not iscrowd[gind]: - continue - # if dt matched to reg gt, and on ignore gt, stop - if m > -1 and gtIg[m] == 0 and gtIg[gind] == 1: - break - if p.iouType == "densepose": - if self._dpEvalMode == DensePoseEvalMode.GPSM: - new_iou = np.sqrt(iousM[dind, gind] * ious[dind, gind]) - elif self._dpEvalMode == DensePoseEvalMode.IOU: - new_iou = iousM[dind, gind] - elif self._dpEvalMode == DensePoseEvalMode.GPS: - new_iou = ious[dind, gind] - else: - new_iou = ious[dind, gind] - if new_iou < iou: - continue - if new_iou == 0.0: - continue - # if match successful and best so far, store appropriately - iou = new_iou - m = gind - # if match made store id of match for both dt and gt - if m == -1: - continue - dtIg[tind, dind] = gtIg[m] - dtm[tind, dind] = gt[m]["id"] - gtm[tind, m] = d["id"] - - if p.iouType == "densepose": - if not len(ioubs) == 0: - for dind, d in enumerate(dt): - # information about best match so far (m=-1 -> unmatched) - if dtm[tind, dind] == 0: - ioub = 0.8 - m = -1 - for gind, _g in enumerate(gt): - # if this gt already matched, and not a crowd, continue - if gtm[tind, gind] > 0 and not iscrowd[gind]: - continue - # continue to next gt unless better match made - if ioubs[dind, gind] < ioub: - continue - # if match successful and best so far, store appropriately - ioub = ioubs[dind, gind] - m = gind - # if match made store id of match for both dt and gt - if m > -1: - dtIg[:, dind] = gtIg[m] - if gtIg[m]: - dtm[tind, dind] = gt[m]["id"] - gtm[tind, m] = d["id"] - # set unmatched detections outside of area range to ignore - a = np.array([d["area"] < aRng[0] or d["area"] > aRng[1] for d in dt]).reshape((1, len(dt))) - dtIg = np.logical_or(dtIg, np.logical_and(dtm == 0, np.repeat(a, T, 0))) - # store results for given image and category - # print('Done with the function', len(self.ious[imgId, catId])) - return { - "image_id": imgId, - "category_id": catId, - "aRng": aRng, - "maxDet": maxDet, - "dtIds": [d["id"] for d in dt], - "gtIds": [g["id"] for g in gt], - "dtMatches": dtm, - "gtMatches": gtm, - "dtScores": [d["score"] for d in dt], - "gtIgnore": gtIg, - "dtIgnore": dtIg, - } - - def accumulate(self, p=None): - """ - Accumulate per image evaluation results and store the result in self.eval - :param p: input params for evaluation - :return: None - """ - logger.info("Accumulating evaluation results...") - tic = time.time() - if not self.evalImgs: - logger.info("Please run evaluate() first") - # allows input customized parameters - if p is None: - p = self.params - p.catIds = p.catIds if p.useCats == 1 else [-1] - T = len(p.iouThrs) - R = len(p.recThrs) - K = len(p.catIds) if p.useCats else 1 - A = len(p.areaRng) - M = len(p.maxDets) - precision = -(np.ones((T, R, K, A, M))) # -1 for the precision of absent categories - recall = -(np.ones((T, K, A, M))) - - # create dictionary for future indexing - logger.info("Categories: {}".format(p.catIds)) - _pe = self._paramsEval - catIds = _pe.catIds if _pe.useCats else [-1] - setK = set(catIds) - setA = set(map(tuple, _pe.areaRng)) - setM = set(_pe.maxDets) - setI = set(_pe.imgIds) - # get inds to evaluate - k_list = [n for n, k in enumerate(p.catIds) if k in setK] - m_list = [m for n, m in enumerate(p.maxDets) if m in setM] - a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA] - i_list = [n for n, i in enumerate(p.imgIds) if i in setI] - I0 = len(_pe.imgIds) - A0 = len(_pe.areaRng) - # retrieve E at each category, area range, and max number of detections - for k, k0 in enumerate(k_list): - Nk = k0 * A0 * I0 - for a, a0 in enumerate(a_list): - Na = a0 * I0 - for m, maxDet in enumerate(m_list): - E = [self.evalImgs[Nk + Na + i] for i in i_list] - E = [e for e in E if e is not None] - if len(E) == 0: - continue - dtScores = np.concatenate([e["dtScores"][0:maxDet] for e in E]) - - # different sorting method generates slightly different results. - # mergesort is used to be consistent as Matlab implementation. - inds = np.argsort(-dtScores, kind="mergesort") - - dtm = np.concatenate([e["dtMatches"][:, 0:maxDet] for e in E], axis=1)[:, inds] - dtIg = np.concatenate([e["dtIgnore"][:, 0:maxDet] for e in E], axis=1)[:, inds] - gtIg = np.concatenate([e["gtIgnore"] for e in E]) - npig = np.count_nonzero(gtIg == 0) - if npig == 0: - continue - tps = np.logical_and(dtm, np.logical_not(dtIg)) - fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg)) - tp_sum = np.cumsum(tps, axis=1).astype(dtype=float) - fp_sum = np.cumsum(fps, axis=1).astype(dtype=float) - for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)): - tp = np.array(tp) - fp = np.array(fp) - nd = len(tp) - rc = tp / npig - pr = tp / (fp + tp + np.spacing(1)) - q = np.zeros((R,)) - - if nd: - recall[t, k, a, m] = rc[-1] - else: - recall[t, k, a, m] = 0 - - # numpy is slow without cython optimization for accessing elements - # use python array gets significant speed improvement - pr = pr.tolist() - q = q.tolist() - - for i in range(nd - 1, 0, -1): - if pr[i] > pr[i - 1]: - pr[i - 1] = pr[i] - - inds = np.searchsorted(rc, p.recThrs, side="left") - try: - for ri, pi in enumerate(inds): - q[ri] = pr[pi] - except Exception: - pass - precision[t, :, k, a, m] = np.array(q) - logger.info( - "Final: max precision {}, min precision {}".format(np.max(precision), np.min(precision)) - ) - self.eval = { - "params": p, - "counts": [T, R, K, A, M], - "date": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), - "precision": precision, - "recall": recall, - } - toc = time.time() - logger.info("DONE (t={:0.2f}s).".format(toc - tic)) - - def summarize(self): - """ - Compute and display summary metrics for evaluation results. - Note this function can *only* be applied on the default parameter setting - """ - - def _summarize(ap=1, iouThr=None, areaRng="all", maxDets=100): - p = self.params - iStr = " {:<18} {} @[ {}={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}" - titleStr = "Average Precision" if ap == 1 else "Average Recall" - typeStr = "(AP)" if ap == 1 else "(AR)" - measure = "IoU" - if self.params.iouType == "keypoints": - measure = "OKS" - elif self.params.iouType == "densepose": - measure = "OGPS" - iouStr = ( - "{:0.2f}:{:0.2f}".format(p.iouThrs[0], p.iouThrs[-1]) - if iouThr is None - else "{:0.2f}".format(iouThr) - ) - - aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng] - mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets] - if ap == 1: - # dimension of precision: [TxRxKxAxM] - s = self.eval["precision"] - # IoU - if iouThr is not None: - t = np.where(np.abs(iouThr - p.iouThrs) < 0.001)[0] - s = s[t] - s = s[:, :, :, aind, mind] - else: - # dimension of recall: [TxKxAxM] - s = self.eval["recall"] - if iouThr is not None: - t = np.where(np.abs(iouThr - p.iouThrs) < 0.001)[0] - s = s[t] - s = s[:, :, aind, mind] - if len(s[s > -1]) == 0: - mean_s = -1 - else: - mean_s = np.mean(s[s > -1]) - logger.info(iStr.format(titleStr, typeStr, measure, iouStr, areaRng, maxDets, mean_s)) - return mean_s - - def _summarizeDets(): - stats = np.zeros((12,)) - stats[0] = _summarize(1) - stats[1] = _summarize(1, iouThr=0.5, maxDets=self.params.maxDets[2]) - stats[2] = _summarize(1, iouThr=0.75, maxDets=self.params.maxDets[2]) - stats[3] = _summarize(1, areaRng="small", maxDets=self.params.maxDets[2]) - stats[4] = _summarize(1, areaRng="medium", maxDets=self.params.maxDets[2]) - stats[5] = _summarize(1, areaRng="large", maxDets=self.params.maxDets[2]) - stats[6] = _summarize(0, maxDets=self.params.maxDets[0]) - stats[7] = _summarize(0, maxDets=self.params.maxDets[1]) - stats[8] = _summarize(0, maxDets=self.params.maxDets[2]) - stats[9] = _summarize(0, areaRng="small", maxDets=self.params.maxDets[2]) - stats[10] = _summarize(0, areaRng="medium", maxDets=self.params.maxDets[2]) - stats[11] = _summarize(0, areaRng="large", maxDets=self.params.maxDets[2]) - return stats - - def _summarizeKps(): - stats = np.zeros((10,)) - stats[0] = _summarize(1, maxDets=20) - stats[1] = _summarize(1, maxDets=20, iouThr=0.5) - stats[2] = _summarize(1, maxDets=20, iouThr=0.75) - stats[3] = _summarize(1, maxDets=20, areaRng="medium") - stats[4] = _summarize(1, maxDets=20, areaRng="large") - stats[5] = _summarize(0, maxDets=20) - stats[6] = _summarize(0, maxDets=20, iouThr=0.5) - stats[7] = _summarize(0, maxDets=20, iouThr=0.75) - stats[8] = _summarize(0, maxDets=20, areaRng="medium") - stats[9] = _summarize(0, maxDets=20, areaRng="large") - return stats - - def _summarizeUvs(): - stats = [_summarize(1, maxDets=self.params.maxDets[0])] - min_threshold = self.params.iouThrs.min() - if min_threshold <= 0.201: - stats += [_summarize(1, maxDets=self.params.maxDets[0], iouThr=0.2)] - if min_threshold <= 0.301: - stats += [_summarize(1, maxDets=self.params.maxDets[0], iouThr=0.3)] - if min_threshold <= 0.401: - stats += [_summarize(1, maxDets=self.params.maxDets[0], iouThr=0.4)] - stats += [ - _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.5), - _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.75), - _summarize(1, maxDets=self.params.maxDets[0], areaRng="medium"), - _summarize(1, maxDets=self.params.maxDets[0], areaRng="large"), - _summarize(0, maxDets=self.params.maxDets[0]), - _summarize(0, maxDets=self.params.maxDets[0], iouThr=0.5), - _summarize(0, maxDets=self.params.maxDets[0], iouThr=0.75), - _summarize(0, maxDets=self.params.maxDets[0], areaRng="medium"), - _summarize(0, maxDets=self.params.maxDets[0], areaRng="large"), - ] - return np.array(stats) - - def _summarizeUvsOld(): - stats = np.zeros((18,)) - stats[0] = _summarize(1, maxDets=self.params.maxDets[0]) - stats[1] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.5) - stats[2] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.55) - stats[3] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.60) - stats[4] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.65) - stats[5] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.70) - stats[6] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.75) - stats[7] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.80) - stats[8] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.85) - stats[9] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.90) - stats[10] = _summarize(1, maxDets=self.params.maxDets[0], iouThr=0.95) - stats[11] = _summarize(1, maxDets=self.params.maxDets[0], areaRng="medium") - stats[12] = _summarize(1, maxDets=self.params.maxDets[0], areaRng="large") - stats[13] = _summarize(0, maxDets=self.params.maxDets[0]) - stats[14] = _summarize(0, maxDets=self.params.maxDets[0], iouThr=0.5) - stats[15] = _summarize(0, maxDets=self.params.maxDets[0], iouThr=0.75) - stats[16] = _summarize(0, maxDets=self.params.maxDets[0], areaRng="medium") - stats[17] = _summarize(0, maxDets=self.params.maxDets[0], areaRng="large") - return stats - - if not self.eval: - raise Exception("Please run accumulate() first") - iouType = self.params.iouType - if iouType in ["segm", "bbox"]: - summarize = _summarizeDets - elif iouType in ["keypoints"]: - summarize = _summarizeKps - elif iouType in ["densepose"]: - summarize = _summarizeUvs - self.stats = summarize() - - def __str__(self): - self.summarize() - - # ================ functions for dense pose ============================== - def findAllClosestVertsUV(self, U_points, V_points, Index_points): - ClosestVerts = np.ones(Index_points.shape) * -1 - for i in np.arange(24): - # - if (i + 1) in Index_points: - UVs = np.array( - [U_points[Index_points == (i + 1)], V_points[Index_points == (i + 1)]] - ) - Current_Part_UVs = self.Part_UVs[i] - Current_Part_ClosestVertInds = self.Part_ClosestVertInds[i] - D = ssd.cdist(Current_Part_UVs.transpose(), UVs.transpose()).squeeze() - ClosestVerts[Index_points == (i + 1)] = Current_Part_ClosestVertInds[ - np.argmin(D, axis=0) - ] - ClosestVertsTransformed = self.PDIST_transform[ClosestVerts.astype(int) - 1] - ClosestVertsTransformed[ClosestVerts < 0] = 0 - return ClosestVertsTransformed - - def findClosestVertsCse(self, embedding, py, px, mask, mesh_name): - mesh_vertex_embeddings = self.embedder(mesh_name) - pixel_embeddings = embedding[:, py, px].t().to(device="cuda") - mask_vals = mask[py, px] - edm = squared_euclidean_distance_matrix(pixel_embeddings, mesh_vertex_embeddings) - vertex_indices = edm.argmin(dim=1).cpu() - vertex_indices[mask_vals <= 0] = -1 - return vertex_indices - - def findAllClosestVertsGT(self, gt): - # - I_gt = np.array(gt["dp_I"]) - U_gt = np.array(gt["dp_U"]) - V_gt = np.array(gt["dp_V"]) - # - # print(I_gt) - # - ClosestVertsGT = np.ones(I_gt.shape) * -1 - for i in np.arange(24): - if (i + 1) in I_gt: - UVs = np.array([U_gt[I_gt == (i + 1)], V_gt[I_gt == (i + 1)]]) - Current_Part_UVs = self.Part_UVs[i] - Current_Part_ClosestVertInds = self.Part_ClosestVertInds[i] - D = ssd.cdist(Current_Part_UVs.transpose(), UVs.transpose()).squeeze() - ClosestVertsGT[I_gt == (i + 1)] = Current_Part_ClosestVertInds[np.argmin(D, axis=0)] - # - ClosestVertsGTTransformed = self.PDIST_transform[ClosestVertsGT.astype(int) - 1] - ClosestVertsGTTransformed[ClosestVertsGT < 0] = 0 - return ClosestVertsGT, ClosestVertsGTTransformed - - def getDistancesCse(self, cVertsGT, cVerts, mesh_name): - geodists_vertices = torch.ones_like(cVertsGT) * float("inf") - selected = (cVertsGT >= 0) * (cVerts >= 0) - mesh = create_mesh(mesh_name, "cpu") - geodists_vertices[selected] = mesh.geodists[cVertsGT[selected], cVerts[selected]] - return geodists_vertices.numpy() - - def getDistancesUV(self, cVertsGT, cVerts): - # - n = 27554 - dists = [] - for d in range(len(cVertsGT)): - if cVertsGT[d] > 0: - if cVerts[d] > 0: - i = cVertsGT[d] - 1 - j = cVerts[d] - 1 - if j == i: - dists.append(0) - elif j > i: - ccc = i - i = j - j = ccc - i = n - i - 1 - j = n - j - 1 - k = (n * (n - 1) / 2) - (n - i) * ((n - i) - 1) / 2 + j - i - 1 - k = (n * n - n) / 2 - k - 1 - dists.append(self.Pdist_matrix[int(k)][0]) - else: - i = n - i - 1 - j = n - j - 1 - k = (n * (n - 1) / 2) - (n - i) * ((n - i) - 1) / 2 + j - i - 1 - k = (n * n - n) / 2 - k - 1 - dists.append(self.Pdist_matrix[int(k)][0]) - else: - dists.append(np.inf) - return np.atleast_1d(np.array(dists).squeeze()) - - -class Params: - """ - Params for coco evaluation api - """ - - def setDetParams(self): - self.imgIds = [] - self.catIds = [] - # np.arange causes trouble. the data point on arange is slightly larger than the true value - self.iouThrs = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True) - self.recThrs = np.linspace(0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01)) + 1, endpoint=True) - self.maxDets = [1, 10, 100] - self.areaRng = [ - [0**2, 1e5**2], - [0**2, 32**2], - [32**2, 96**2], - [96**2, 1e5**2], - ] - self.areaRngLbl = ["all", "small", "medium", "large"] - self.useCats = 1 - - def setKpParams(self): - self.imgIds = [] - self.catIds = [] - # np.arange causes trouble. the data point on arange is slightly larger than the true value - self.iouThrs = np.linspace(0.5, 0.95, np.round((0.95 - 0.5) / 0.05) + 1, endpoint=True) - self.recThrs = np.linspace(0.0, 1.00, np.round((1.00 - 0.0) / 0.01) + 1, endpoint=True) - self.maxDets = [20] - self.areaRng = [[0**2, 1e5**2], [32**2, 96**2], [96**2, 1e5**2]] - self.areaRngLbl = ["all", "medium", "large"] - self.useCats = 1 - - def setUvParams(self): - self.imgIds = [] - self.catIds = [] - self.iouThrs = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True) - self.recThrs = np.linspace(0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01)) + 1, endpoint=True) - self.maxDets = [20] - self.areaRng = [[0**2, 1e5**2], [32**2, 96**2], [96**2, 1e5**2]] - self.areaRngLbl = ["all", "medium", "large"] - self.useCats = 1 - - def __init__(self, iouType="segm"): - if iouType == "segm" or iouType == "bbox": - self.setDetParams() - elif iouType == "keypoints": - self.setKpParams() - elif iouType == "densepose": - self.setUvParams() - else: - raise Exception("iouType not supported") - self.iouType = iouType - # useSegm is deprecated - self.useSegm = None diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/os/coop_threads.h b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/os/coop_threads.h deleted file mode 100644 index 9aefa614ea945d20f1699866e7931994b27d5842..0000000000000000000000000000000000000000 --- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/os/coop_threads.h +++ /dev/null @@ -1,179 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#ifndef LYRA_CODEC_SPARSE_MATMUL_OS_COOP_THREADS_H_ -#define LYRA_CODEC_SPARSE_MATMUL_OS_COOP_THREADS_H_ - -#include -#include // NOLINT -#include - -#define _COOP_THREADS_USE_STD_THREAD 1 - -#include "absl/memory/memory.h" -#include "glog/logging.h" - -namespace csrblocksparse { - -// A re-usable barrier. Keeps threads in extremely tight sync without -// relinquishing control. All memory writes _before_ this barrier are visible -// to all threads _after_ this barrier. Similar in spirit to -// pthreads_barrier. If you expect arrival times at this barrier to be varied -// by more than microseconds, this is probably not the right synchronization -// primitive for you. If |num_threads| exceeds the number of physical threads -// that can run simultaneously, then using this is certainly a bad idea -// (although it should still be correct). -// -// Callers MUST NOT call barrier from more threads than |num_threads|. The -// result is undefined behavior. -class SpinBarrier { - public: - explicit SpinBarrier(int num_threads) - : num_threads_(num_threads), threads_at_barrier_(0), barrier_step_(0) {} - - void barrier(); - - private: - const int num_threads_; - std::atomic threads_at_barrier_; - std::atomic barrier_step_; // unsigned to make overflow defined. -}; - -// Producer-consumer API using the same underlying mechanism as SpinBarrier. -// This class is intended to allow >=1 producers to produce data for >=1 -// consumers, without blocking the producers. -// The consumer will block if it is ready before all the producer(s) have -// produced. -// WARNING: By design this lock does not work without some other barrier that -// prevents any producer from producing again, or consumer from consuming again -// until all consumers have consumed. Basically any loop that uses -// ProducerConsumer must have at least two consume() calls in each thread (on -// different instances) in order for the lock to work correctly. -class ProducerConsumer { - public: - ProducerConsumer(int num_producers, int num_consumers) - : num_producers_(num_producers), - num_consumers_(num_consumers), - producers_ready_(0), - consumers_passed_(0) {} - - // Indicates that the data produced by this thread is ready. Does NOT block. - // NOTE that some other lock must exist between the call to this produce and - // looping back to call produce again on the same ProducerConsumer, that - // depends on all consumers having called consume. One such candidate would - // be a call to SpinBarrier above by all producers and consumers. - // Another candidate would be a separate ProducerConsumer object in which - // these producers consume some data produced by the threads that consume - // the data produced here. Eg. - // tid 0 1 2 3 - // action 1 produce produce consume consume (on ProducerConsumer 1) - // action 2 consume consume produce produce (on ProducerConsumer 2) - // action 3 produce produce consume consume (on ProducerConsumer 3) - // action 4 consume consume produce produce (on ProducerConsumer 4) - // loop back to action 1. - // NOTE: It is inadequate to loop back after action2, as thread 0 could loop - // back and consume again on PC2 while thread 1 is still completing its call - // to consume. It is still inadequate to loop back after action 3 for the same - // reason (but tsan doesn't seem to pick this up.) - inline void produce() { - producers_ready_.fetch_add(1, std::memory_order_acq_rel); - } - - // Waits if necessary for all producers to have produced before proceeding. - // The ProducerConsumer cannot be reused until all consumers have consumed. - // See detailed comment and example on produce(). - inline void consume() { - // We can't do anything until all the producers have produced. - while (producers_ready_.load(std::memory_order_acquire) < num_producers_) { -#if defined __aarch64__ || defined __arm__ - asm volatile("yield\n" ::: "memory"); -#else - // No pause for x86! The pause instruction on Skylake takes 141 clock - // cycles, which in an AVX2-down-clocked CPU is getting on for 70ns. -#endif - } - // NOTE: It is tempting to move this fetch_add to before the wait loop to - // reduce contention for the memory location, but that would break the lock, - // as then the last to arrive could zero out the producers_ready before the - // other consumers have noticed that all producers have produced. - // With the fetch_add after the wait loop, we are guaranteed that all - // producers have produced AND all consumers have noticed that they have - // produced before we zero out the counters. - int consumers = consumers_passed_.fetch_add(1, std::memory_order_acq_rel); - if (consumers == num_consumers_ - 1) { - // The last consumer to pass has to reset everything for the next time. - producers_ready_.store(0, std::memory_order_relaxed); - consumers_passed_.store(0, std::memory_order_relaxed); - } - } - int num_producers() const { return num_producers_; } - int num_consumers() const { return num_consumers_; } - - private: - const int num_producers_; - const int num_consumers_; - std::atomic producers_ready_; - std::atomic consumers_passed_; -}; - -// We define Thread here, so we can easily change its type later. - -using Thread = std::thread; -using ThreadId = std::thread::id; - -// Creates (|num_threads|-1) threads and executes a total of |num_threads| -// copies of |func| (executes one on the calling thread). -// -// Useful for long running func bodies that are intended to run in lock step. -// A possible use case for this style parallelism over a thread pool is when -// we want tight control over which memory is resident in the L2 cache of a -// processor. With a pool we have no control over which thread gets assigned -// which portion of the computation resulting in L2 thrashing. With this -// breakdown we can make sure each thread only acceses a specific L2-sized -// portion of memory. -// -// func's signature must be (SpinBarrier*, int thread_id, ...); -template -void LaunchOnThreadsWithBarrier(int num_threads, Function&& func, - Args&&... args) { - SpinBarrier spin_barrier(num_threads); - - std::vector> threads; - threads.reserve(num_threads); - for (int tid = 1; tid < num_threads; ++tid) { - auto f = [&, tid]() { func(&spin_barrier, tid, args...); }; - - threads.emplace_back(absl::make_unique(f)); -#ifndef _COOP_THREADS_USE_STD_THREAD - CHECK_OK(threads.back()->Start()); -#endif - } - - const int kLocalTid = 0; - func(&spin_barrier, kLocalTid, args...); - - for (auto& thread : threads) { -#ifdef _COOP_THREADS_USE_STD_THREAD - thread->join(); -#else - CHECK_OK(thread->Join()); -#endif - } -} - -} // namespace csrblocksparse - -#endif // LYRA_CODEC_SPARSE_MATMUL_OS_COOP_THREADS_H_ diff --git a/spaces/nyanko7/sd-diffusers-webui/modules/lora.py b/spaces/nyanko7/sd-diffusers-webui/modules/lora.py deleted file mode 100644 index 9a60204f4e54ab12986af7e7272e20983b5adf7c..0000000000000000000000000000000000000000 --- a/spaces/nyanko7/sd-diffusers-webui/modules/lora.py +++ /dev/null @@ -1,187 +0,0 @@ -# LoRA network module -# reference: -# https://github.com/microsoft/LoRA/blob/main/loralib/layers.py -# https://github.com/cloneofsimo/lora/blob/master/lora_diffusion/lora.py -# https://github.com/bmaltais/kohya_ss/blob/master/networks/lora.py#L48 - -import math -import os -import torch -import diffusers -import modules.safe as _ -from safetensors.torch import load_file - - -class LoRAModule(torch.nn.Module): - """ - replaces forward method of the original Linear, instead of replacing the original Linear module. - """ - - def __init__( - self, - lora_name, - org_module: torch.nn.Module, - multiplier=1.0, - lora_dim=4, - alpha=1, - ): - """if alpha == 0 or None, alpha is rank (no scaling).""" - super().__init__() - self.lora_name = lora_name - self.lora_dim = lora_dim - - if org_module.__class__.__name__ == "Conv2d": - in_dim = org_module.in_channels - out_dim = org_module.out_channels - self.lora_down = torch.nn.Conv2d(in_dim, lora_dim, (1, 1), bias=False) - self.lora_up = torch.nn.Conv2d(lora_dim, out_dim, (1, 1), bias=False) - else: - in_dim = org_module.in_features - out_dim = org_module.out_features - self.lora_down = torch.nn.Linear(in_dim, lora_dim, bias=False) - self.lora_up = torch.nn.Linear(lora_dim, out_dim, bias=False) - - if type(alpha) == torch.Tensor: - alpha = alpha.detach().float().numpy() # without casting, bf16 causes error - - alpha = lora_dim if alpha is None or alpha == 0 else alpha - self.scale = alpha / self.lora_dim - self.register_buffer("alpha", torch.tensor(alpha)) # 定数として扱える - - # same as microsoft's - torch.nn.init.kaiming_uniform_(self.lora_down.weight, a=math.sqrt(5)) - torch.nn.init.zeros_(self.lora_up.weight) - - self.multiplier = multiplier - self.org_module = org_module # remove in applying - self.enable = False - - def resize(self, rank, alpha, multiplier): - self.alpha = torch.tensor(alpha) - self.multiplier = multiplier - self.scale = alpha / rank - if self.lora_down.__class__.__name__ == "Conv2d": - in_dim = self.lora_down.in_channels - out_dim = self.lora_up.out_channels - self.lora_down = torch.nn.Conv2d(in_dim, rank, (1, 1), bias=False) - self.lora_up = torch.nn.Conv2d(rank, out_dim, (1, 1), bias=False) - else: - in_dim = self.lora_down.in_features - out_dim = self.lora_up.out_features - self.lora_down = torch.nn.Linear(in_dim, rank, bias=False) - self.lora_up = torch.nn.Linear(rank, out_dim, bias=False) - - def apply(self): - if hasattr(self, "org_module"): - self.org_forward = self.org_module.forward - self.org_module.forward = self.forward - del self.org_module - - def forward(self, x): - if self.enable: - return ( - self.org_forward(x) - + self.lora_up(self.lora_down(x)) * self.multiplier * self.scale - ) - return self.org_forward(x) - - -class LoRANetwork(torch.nn.Module): - UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel", "Attention"] - TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPMLP"] - LORA_PREFIX_UNET = "lora_unet" - LORA_PREFIX_TEXT_ENCODER = "lora_te" - - def __init__(self, text_encoder, unet, multiplier=1.0, lora_dim=4, alpha=1) -> None: - super().__init__() - self.multiplier = multiplier - self.lora_dim = lora_dim - self.alpha = alpha - - # create module instances - def create_modules(prefix, root_module: torch.nn.Module, target_replace_modules): - loras = [] - for name, module in root_module.named_modules(): - if module.__class__.__name__ in target_replace_modules: - for child_name, child_module in module.named_modules(): - if child_module.__class__.__name__ == "Linear" or (child_module.__class__.__name__ == "Conv2d" and child_module.kernel_size == (1, 1)): - lora_name = prefix + "." + name + "." + child_name - lora_name = lora_name.replace(".", "_") - lora = LoRAModule(lora_name, child_module, self.multiplier, self.lora_dim, self.alpha,) - loras.append(lora) - return loras - - if isinstance(text_encoder, list): - self.text_encoder_loras = text_encoder - else: - self.text_encoder_loras = create_modules(LoRANetwork.LORA_PREFIX_TEXT_ENCODER, text_encoder, LoRANetwork.TEXT_ENCODER_TARGET_REPLACE_MODULE) - print(f"Create LoRA for Text Encoder: {len(self.text_encoder_loras)} modules.") - - if diffusers.__version__ >= "0.15.0": - LoRANetwork.UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel"] - - self.unet_loras = create_modules(LoRANetwork.LORA_PREFIX_UNET, unet, LoRANetwork.UNET_TARGET_REPLACE_MODULE) - print(f"Create LoRA for U-Net: {len(self.unet_loras)} modules.") - - self.weights_sd = None - - # assertion - names = set() - for lora in self.text_encoder_loras + self.unet_loras: - assert (lora.lora_name not in names), f"duplicated lora name: {lora.lora_name}" - names.add(lora.lora_name) - - lora.apply() - self.add_module(lora.lora_name, lora) - - def reset(self): - for lora in self.text_encoder_loras + self.unet_loras: - lora.enable = False - - def load(self, file, scale): - - weights = None - if os.path.splitext(file)[1] == ".safetensors": - weights = load_file(file) - else: - weights = torch.load(file, map_location="cpu") - - if not weights: - return - - network_alpha = None - network_dim = None - for key, value in weights.items(): - if network_alpha is None and "alpha" in key: - network_alpha = value - if network_dim is None and "lora_down" in key and len(value.size()) == 2: - network_dim = value.size()[0] - - if network_alpha is None: - network_alpha = network_dim - - weights_has_text_encoder = weights_has_unet = False - weights_to_modify = [] - - for key in weights.keys(): - if key.startswith(LoRANetwork.LORA_PREFIX_TEXT_ENCODER): - weights_has_text_encoder = True - - if key.startswith(LoRANetwork.LORA_PREFIX_UNET): - weights_has_unet = True - - if weights_has_text_encoder: - weights_to_modify += self.text_encoder_loras - - if weights_has_unet: - weights_to_modify += self.unet_loras - - for lora in self.text_encoder_loras + self.unet_loras: - lora.resize(network_dim, network_alpha, scale) - if lora in weights_to_modify: - lora.enable = True - - info = self.load_state_dict(weights, False) - if len(info.unexpected_keys) > 0: - print(f"Weights are loaded. Unexpected keys={info.unexpected_keys}") - \ No newline at end of file diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/util/utils.py b/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/util/utils.py deleted file mode 100644 index 4d38b2df4e0c79dfcdbc9d1e57add64cfdbf9dcc..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/util/utils.py +++ /dev/null @@ -1,158 +0,0 @@ -import random -import numpy as np -import cv2 - -def random_bbox(img_height, img_width, vertical_margin, horizontal_margin, mask_height, mask_width): - maxt = img_height - vertical_margin - mask_height - maxl = img_width - horizontal_margin - mask_width - - t = random.randint(vertical_margin, maxt) - l = random.randint(horizontal_margin, maxl) - h = random.randint(mask_height // 2, mask_height) - w = random.randint(mask_width // 2, mask_width) - return (t, l, h, w) # 产生随机块状box,这个box后面会发展成为mask - - -def mid_bbox_mask(img_height, img_width, mask_height, mask_width): - def npmask(bbox, height, width): - mask = np.zeros((height, width, 1), np.float32) - mask[bbox[0]: bbox[0] + bbox[2], bbox[1]: bbox[1] + bbox[3], :] = 255. - return mask - - bbox = (img_height * 3 // 8, img_width * 3 // 8, mask_height, mask_width) - mask = npmask(bbox, img_height, img_width) - - return mask - - -def bbox2mask(img_height, img_width, max_delta_height, max_delta_width, bbox): - """Generate mask tensor from bbox. - - Args: - bbox: configuration tuple, (top, left, height, width) - config: Config should have configuration including IMG_SHAPES, - MAX_DELTA_HEIGHT, MAX_DELTA_WIDTH. - - Returns: - tf.Tensor: output with shape [B, 1, H, W] - - """ - - def npmask(bbox, height, width, delta_h, delta_w): - mask = np.zeros((height, width, 1), np.float32) - h = np.random.randint(delta_h // 2 + 1) # 防止有0产生 - w = np.random.randint(delta_w // 2 + 1) - mask[bbox[0] + h: bbox[0] + bbox[2] - h, bbox[1] + w: bbox[1] + bbox[3] - w, :] = 255. # height_true = height - 2 * h, width_true = width - 2 * w - return mask - - mask = npmask(bbox, img_height, img_width, - max_delta_height, - max_delta_width) - - return mask - - -def matrix2bbox(img_height, img_width, mask_height, mask_width, row, column): - """Generate masks with a matrix form - @param img_height - @param img_width - @param mask_height - @param mask_width - @param row: number of blocks in row - @param column: number of blocks in column - @return mbbox: multiple bboxes in (y, h, h, w) manner - """ - assert img_height - column * mask_height > img_height // 2, "Too many masks across a column" - assert img_width - row * mask_width > img_width // 2, "Too many masks across a row" - - interval_height = (img_height - column * mask_height) // (column + 1) - interval_width = (img_width - row * mask_width) // (row + 1) - - mbbox = [] - for i in range(row): - for j in range(column): - y = interval_height * (j+1) + j * mask_height - x = interval_width * (i+1) + i * mask_width - mbbox.append((y, x, mask_height, mask_width)) - return mbbox - - -def mbbox2masks(img_height, img_width, mbbox): - - def npmask(mbbox, height, width): - mask = np.zeros((height, width, 1), np.float32) - for bbox in mbbox: - mask[bbox[0]: bbox[0] + bbox[2], bbox[1]: bbox[1] + bbox[3], :] = 255. # height_true = height - 2 * h, width_true = width - 2 * w - return mask - - mask = npmask(mbbox, img_height, img_width) - - return mask - - -def draw_line(mask, startX, startY, angle, length, brushWidth): - """assume the size of mask is (H,W,1) - """ - assert len(mask.shape) == 2 or mask.shape[2] == 1, "The channel of mask doesn't fit the opencv format" - offsetX = int(np.round(length * np.cos(angle))) - offsetY = int(np.round(length * np.sin(angle))) - endX = startX + offsetX - endY = startY + offsetY - if endX > mask.shape[1]: - endX = mask.shape[1] - if endY > mask.shape[0]: - endY = mask.shape[0] - mask_processed = cv2.line(mask, (startX, startY), (endX, endY), 255, brushWidth) - return mask_processed, endX, endY - - -def draw_circle(mask, circle_x, circle_y, brushWidth): - radius = brushWidth // 2 - assert len(mask.shape) == 2 or mask.shape[2] == 1, "The channel of mask doesn't fit the opencv format" - mask_processed = cv2.circle(mask, (circle_x, circle_y), radius, 255) - return mask_processed - - -def freeFormMask(img_height, img_width, maxVertex, maxLength, maxBrushWidth, maxAngle): - mask = np.zeros((img_height, img_width)) - numVertex = random.randint(1, maxVertex) - startX = random.randint(10, img_width) - startY = random.randint(10, img_height) - brushWidth = random.randint(10, maxBrushWidth) - for i in range(numVertex): - angle = random.uniform(0, maxAngle) - if i % 2 == 0: - angle = 2 * np.pi - angle - length = random.randint(10, maxLength) - mask, endX, endY = draw_line(mask, startX, startY, angle, length, brushWidth) - startX = startX + int(length * np.sin(angle)) - startY = startY + int(length * np.cos(angle)) - mask = draw_circle(mask, endX, endY, brushWidth) - - if random.random() < 0.5: - mask = np.fliplr(mask) - if random.random() < 0.5: - mask = np.flipud(mask) - - if len(mask.shape) == 2: - mask = mask[:, :, np.newaxis] - - return mask - - -if __name__ == "__main__": - # for stationary mask generation - # stationary_mask_generator(240, 480, 50, 120) - - # for free-form mask generation - # mask = freeFormMask(240, 480, 30, 50, 20, np.pi) - # cv2.imwrite('mask.png', mask) - - # for matrix mask generation - # img_height, img_width = 240, 480 - # masks = matrix2bbox(240, 480, 20, 20, 5, 4) - # matrixMask = mbbox2masks(img_height, img_width, masks) - # cv2.imwrite('matrixMask.png', matrixMask) - pass - - diff --git a/spaces/oliver2023/chatgpt-on-wechat/plugins/tool/__init__.py b/spaces/oliver2023/chatgpt-on-wechat/plugins/tool/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/omdenalagos/job_skill_cat/README.md b/spaces/omdenalagos/job_skill_cat/README.md deleted file mode 100644 index f06757939c1f8e90e3ef1b01f5ffeafa828492e3..0000000000000000000000000000000000000000 --- a/spaces/omdenalagos/job_skill_cat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Job Skill Cat -emoji: 🐨 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/onursavas/langchain-chat-with-pdf/app.py b/spaces/onursavas/langchain-chat-with-pdf/app.py deleted file mode 100644 index d9e0caecdf6b304681aed10ab8ecea3552d43606..0000000000000000000000000000000000000000 --- a/spaces/onursavas/langchain-chat-with-pdf/app.py +++ /dev/null @@ -1,88 +0,0 @@ -import gradio as gr - -from langchain.document_loaders import OnlinePDFLoader - -from langchain.text_splitter import CharacterTextSplitter - -from langchain.llms import HuggingFaceHub - -from langchain.embeddings import HuggingFaceHubEmbeddings - -from langchain.vectorstores import Chroma - -from langchain.chains import RetrievalQA - - - -def loading_pdf(): - return "Loading..." - -def pdf_changes(pdf_doc, repo_id): - - loader = OnlinePDFLoader(pdf_doc.name) - documents = loader.load() - text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0) - texts = text_splitter.split_documents(documents) - embeddings = HuggingFaceHubEmbeddings() - db = Chroma.from_documents(texts, embeddings) - retriever = db.as_retriever() - llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0.1, "max_new_tokens":250}) - global qa - qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True) - return "Ready" - -def add_text(history, text): - history = history + [(text, None)] - return history, "" - -def bot(history): - response = infer(history[-1][0]) - history[-1][1] = response['result'] - return history - -def infer(question): - - query = question - result = qa({"query": query}) - - return result - -css=""" -#col-container {max-width: 700px; margin-left: auto; margin-right: auto;} -""" - -title = """ -
      -

      Chat with PDF

      -

      Upload a .PDF from your computer, click the "Load PDF to LangChain" button,
      - when everything is ready, you can start asking questions about the pdf ;)

      - Duplicate Space -
      -""" - - -with gr.Blocks(css=css) as demo: - with gr.Column(elem_id="col-container"): - gr.HTML(title) - - with gr.Column(): - pdf_doc = gr.File(label="Load a pdf", file_types=['.pdf'], type="file") - repo_id = gr.Dropdown(label="LLM", choices=["google/flan-ul2", "OpenAssistant/oasst-sft-1-pythia-12b", "bigscience/bloomz"], value="google/flan-ul2") - with gr.Row(): - langchain_status = gr.Textbox(label="Status", placeholder="", interactive=False) - load_pdf = gr.Button("Load pdf to langchain") - - chatbot = gr.Chatbot([], elem_id="chatbot").style(height=350) - question = gr.Textbox(label="Question", placeholder="Type your question and hit Enter ") - submit_btn = gr.Button("Send message") - #load_pdf.click(loading_pdf, None, langchain_status, queue=False) - repo_id.change(pdf_changes, inputs=[pdf_doc, repo_id], outputs=[langchain_status], queue=False) - load_pdf.click(pdf_changes, inputs=[pdf_doc, repo_id], outputs=[langchain_status], queue=False) - question.submit(add_text, [chatbot, question], [chatbot, question]).then( - bot, chatbot, chatbot - ) - submit_btn.click(add_text, [chatbot, question], [chatbot, question]).then( - bot, chatbot, chatbot - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/openaccess-ai-collective/jackalope-7b/README.md b/spaces/openaccess-ai-collective/jackalope-7b/README.md deleted file mode 100644 index ff07cc25c590166711b8a233016cc65b10a76506..0000000000000000000000000000000000000000 --- a/spaces/openaccess-ai-collective/jackalope-7b/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Jackalope 7b -emoji: 🐰🦌 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/parsaesmaeilie/RecommenderSysteam/README.md b/spaces/parsaesmaeilie/RecommenderSysteam/README.md deleted file mode 100644 index 602d7a13a6a11e1fb63296177ca18434981003b3..0000000000000000000000000000000000000000 --- a/spaces/parsaesmaeilie/RecommenderSysteam/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: RecommenderSysteam -emoji: 🔥 -colorFrom: red -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/penscola/sale_predictions/app.py b/spaces/penscola/sale_predictions/app.py deleted file mode 100644 index ed4fefec5c6f21bcd7072218909a1c481199b0bd..0000000000000000000000000000000000000000 --- a/spaces/penscola/sale_predictions/app.py +++ /dev/null @@ -1,126 +0,0 @@ -import pandas as pd -import numpy as np -import streamlit as st -import os -import time -import pickle -import seaborn as sns -import matplotlib.pyplot as plt -import pip - - -try: - #insert headers - st.header(" Welcome to Sales Prediction Using Prophet ") - st.subheader("To help you know your future sales📈...") - st.image("future.png", width=500, caption="Sales Prediction") - - Disp_results = pd.DataFrame() # Initialize for download - - # Take input - with st.form("This form", clear_on_submit=True): - st.subheader("Enter the number of day(s)/Week(s) you want to predict, And the frequency as D for Daily or W for weekly ") - - frequency = str(st.text_input("Frequency 'D' for Daily 'W' for weekly ")).upper() # convert to string and change to upper - - Number_of_days = int(st.number_input("Number of day(s)/Week(s)")) # convert to int - - submit = st.form_submit_button("Predict your sales") - - # process the input - if submit: - # check if we have the right data type - if frequency == "D" or frequency == 'W': - st.success("Inputs received successfully ✅") - - # import model - with open('prophet_model.pkl', 'rb') as f: - model = pickle.load(f) - - # pass inputs to the model(To make predictions, prophet requires number of days and frequency) - future = model.make_future_dataframe(periods=Number_of_days, freq=str(frequency), include_history=False) - - # Make prediction - forecast = model.predict(future) - - # show results - print(f'[INFO]: The whole results {forecast}') - - # pick the relevant columns from the forecast - sales_forecast = forecast[['ds', 'yhat_lower', 'yhat_upper', 'yhat']] - - # rename the columns - Disp_results = sales_forecast.rename(columns={'ds': 'Date', 'yhat_lower': 'lowest Expected sales', 'yhat_upper': 'Highest Expected Sales', 'yhat': 'Expected Sales'}) - - # print result dataframe to terminal - print(f'[INFO]: results dataframe {Disp_results}') - - # show progress - with st.spinner("Prediction in progress..."): - time.sleep(2) - st.balloons() - st.success("Great✅") - - # Display results - if frequency == "W": - output_frequency = 'Week(s)' - else: - output_frequency = 'Day(s)' - - # Check frequency - st.write(f"These are your predicted sales in the next {Number_of_days} {output_frequency}") - st.dataframe(Disp_results) - - # Display the graph of sales - st.title(f"Line Graph Of Predicted Sales Over {Number_of_days} {output_frequency} ") - # Line Graph - st.line_chart(data=Disp_results, x='Date', y='Expected Sales') - print('[INFO]: Line Chart displayed') - - else: - st.error("Input the right frequency or Days ⚠") - - # Print input to the terminal - print(f'[INFO]: These are the inputs to the model {Number_of_days},{frequency}') - print(f"[INFO]: Inputs received") - - - # Create a function to convert df to csv - def convert_to_csv(df): - return df.to_csv() - - - # Create an expander - expand = st.expander('Download Results as CSV') - with expand: - st.download_button( - 'Download results', - convert_to_csv(Disp_results), - 'prediction_results.csv', - 'text/csv', - 'download' - ) - - - # Create Sidebar for Description - sidebar = st.sidebar.title('Sales Prediction') - - # first option - option1 = st.sidebar.button('About', key="About") - - # second option - option2 = st.sidebar.button('About the sales prediction', key="sales prediction") - - # Display text for a selected option - if option1: - st.sidebar.write('This is a Sales prediction app Using Prophet(Developed by meta), this project was done under the Azubi Africa Data Analysis Training program ') - - elif option2: - st.sidebar.write('This is a time series analysis & forecasting problem. In this project, we shalll predict store sales on data from Corporation Favorita, a large Ecuadorian-based grocery retailer. Specifically, this app predicts the sales for up to weeks in advance for Corporation Favorita ') - -except: - st.error('''something went wrong: Make sure you entered the correct number of days - otherwise contact admin! - ''' - ) - \ No newline at end of file diff --git a/spaces/pix2pix-zero-library/pix2pix-zero-demo/submodules/pix2pix-zero/src/utils/scheduler.py b/spaces/pix2pix-zero-library/pix2pix-zero-demo/submodules/pix2pix-zero/src/utils/scheduler.py deleted file mode 100644 index 282e036db77c48fc12e2756cdfc72b4dcc1cdc30..0000000000000000000000000000000000000000 --- a/spaces/pix2pix-zero-library/pix2pix-zero-demo/submodules/pix2pix-zero/src/utils/scheduler.py +++ /dev/null @@ -1,289 +0,0 @@ -# Copyright 2022 Stanford University Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion -# and https://github.com/hojonathanho/diffusion -import os, sys, pdb -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.utils import BaseOutput, randn_tensor -from diffusers.schedulers.scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->DDIM -class DDIMSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torch.Tensor: - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return torch.tensor(betas) - - -class DDIMInverseScheduler(SchedulerMixin, ConfigMixin): - """ - Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising - diffusion probabilistic models (DDPMs) with non-Markovian guidance. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2010.02502 - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - clip_sample (`bool`, default `True`): - option to clip predicted sample between -1 and 1 for numerical stability. - set_alpha_to_one (`bool`, default `True`): - each diffusion step uses the value of alphas product at that step and at the previous one. For the final - step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, - otherwise it uses the value of alpha at step 0. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - clip_sample: bool = True, - set_alpha_to_one: bool = True, - steps_offset: int = 0, - prediction_type: str = "epsilon", - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # At every step in ddim, we are looking into the previous alphas_cumprod - # For the final step, there is no previous alphas_cumprod because we are already at 0 - # `set_alpha_to_one` decides whether we set this parameter simply to one or - # whether we use the final alpha of the "non-previous" one. - self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0] - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy().astype(np.int64)) - - def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def _get_variance(self, timestep, prev_timestep): - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev) - - return variance - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - """ - - if num_inference_steps > self.config.num_train_timesteps: - raise ValueError( - f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:" - f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" - f" maximal {self.config.num_train_timesteps} timesteps." - ) - - self.num_inference_steps = num_inference_steps - step_ratio = self.config.num_train_timesteps // self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64) - self.timesteps = torch.from_numpy(timesteps).to(device) - self.timesteps += self.config.steps_offset - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - eta: float = 0.0, - use_clipped_model_output: bool = False, - generator=None, - variance_noise: Optional[torch.FloatTensor] = None, - return_dict: bool = True, - reverse=False - ) -> Union[DDIMSchedulerOutput, Tuple]: - - - e_t = model_output - - x = sample - prev_timestep = timestep + self.config.num_train_timesteps // self.num_inference_steps - # print(timestep, prev_timestep) - a_t = alpha_prod_t = self.alphas_cumprod[timestep-1] - a_prev = alpha_t_prev = self.alphas_cumprod[prev_timestep-1] if prev_timestep >= 0 else self.final_alpha_cumprod - beta_prod_t = 1 - alpha_prod_t - - pred_x0 = (x - (1-a_t)**0.5 * e_t) / a_t.sqrt() - # direction pointing to x_t - dir_xt = (1. - a_prev).sqrt() * e_t - x = a_prev.sqrt()*pred_x0 + dir_xt - if not return_dict: - return (x,) - return DDIMSchedulerOutput(prev_sample=x, pred_original_sample=pred_x0) - - - - - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - self.alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - def get_velocity( - self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as sample - self.alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype) - timesteps = timesteps.to(sample.device) - - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(sample.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample - return velocity - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/pixiou/bingo/src/components/voice.tsx b/spaces/pixiou/bingo/src/components/voice.tsx deleted file mode 100644 index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000 --- a/spaces/pixiou/bingo/src/components/voice.tsx +++ /dev/null @@ -1,52 +0,0 @@ -import React, { useEffect } from 'react' -import { useSetAtom } from 'jotai' -import { useBing } from '@/lib/hooks/use-bing' -import Image from 'next/image' -import VoiceIcon from '@/assets/images/voice.svg' -import VoiceButton from './ui/voice' -import { SR } from '@/lib/bots/bing/sr' -import { voiceListenAtom } from '@/state' - -const sr = new SR(['发送', '清空', '退出']) - -const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => { - const setListen = useSetAtom(voiceListenAtom) - useEffect(() => { - if (sr.listening) return - sr.transcript = !isSpeaking - }, [isSpeaking]) - - useEffect(() => { - sr.onchange = (msg: string, command?: string) => { - switch (command) { - case '退出': - sr.stop() - break; - case '发送': - sendMessage(input) - case '清空': - setInput('') - break; - default: - setInput(input + msg) - } - } - }, [input]) - - const switchSR = (enable: boolean = false) => { - setListen(enable) - if (enable) { - sr.start() - } else { - sr.stop() - } - } - - return sr.listening ? ( - switchSR(false)} /> - ) : ( - start voice switchSR(true)} /> - ) -}; - -export default Voice; diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/metadata.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/metadata.py deleted file mode 100644 index c329e1977fd1ed403bb65529296d5c803a6b289f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/metadata.py +++ /dev/null @@ -1,1076 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -"""Implementation of the Metadata for Python packages PEPs. - -Supports all metadata formats (1.0, 1.1, 1.2, 1.3/2.1 and 2.2). -""" -from __future__ import unicode_literals - -import codecs -from email import message_from_file -import json -import logging -import re - - -from . import DistlibException, __version__ -from .compat import StringIO, string_types, text_type -from .markers import interpret -from .util import extract_by_key, get_extras -from .version import get_scheme, PEP440_VERSION_RE - -logger = logging.getLogger(__name__) - - -class MetadataMissingError(DistlibException): - """A required metadata is missing""" - - -class MetadataConflictError(DistlibException): - """Attempt to read or write metadata fields that are conflictual.""" - - -class MetadataUnrecognizedVersionError(DistlibException): - """Unknown metadata version number.""" - - -class MetadataInvalidError(DistlibException): - """A metadata value is invalid""" - -# public API of this module -__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION'] - -# Encoding used for the PKG-INFO files -PKG_INFO_ENCODING = 'utf-8' - -# preferred version. Hopefully will be changed -# to 1.2 once PEP 345 is supported everywhere -PKG_INFO_PREFERRED_VERSION = '1.1' - -_LINE_PREFIX_1_2 = re.compile('\n \\|') -_LINE_PREFIX_PRE_1_2 = re.compile('\n ') -_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', - 'Summary', 'Description', - 'Keywords', 'Home-page', 'Author', 'Author-email', - 'License') - -_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', - 'Supported-Platform', 'Summary', 'Description', - 'Keywords', 'Home-page', 'Author', 'Author-email', - 'License', 'Classifier', 'Download-URL', 'Obsoletes', - 'Provides', 'Requires') - -_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier', - 'Download-URL') - -_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', - 'Supported-Platform', 'Summary', 'Description', - 'Keywords', 'Home-page', 'Author', 'Author-email', - 'Maintainer', 'Maintainer-email', 'License', - 'Classifier', 'Download-URL', 'Obsoletes-Dist', - 'Project-URL', 'Provides-Dist', 'Requires-Dist', - 'Requires-Python', 'Requires-External') - -_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python', - 'Obsoletes-Dist', 'Requires-External', 'Maintainer', - 'Maintainer-email', 'Project-URL') - -_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', - 'Supported-Platform', 'Summary', 'Description', - 'Keywords', 'Home-page', 'Author', 'Author-email', - 'Maintainer', 'Maintainer-email', 'License', - 'Classifier', 'Download-URL', 'Obsoletes-Dist', - 'Project-URL', 'Provides-Dist', 'Requires-Dist', - 'Requires-Python', 'Requires-External', 'Private-Version', - 'Obsoleted-By', 'Setup-Requires-Dist', 'Extension', - 'Provides-Extra') - -_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By', - 'Setup-Requires-Dist', 'Extension') - -# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in -# the metadata. Include them in the tuple literal below to allow them -# (for now). -# Ditto for Obsoletes - see issue #140. -_566_FIELDS = _426_FIELDS + ('Description-Content-Type', - 'Requires', 'Provides', 'Obsoletes') - -_566_MARKERS = ('Description-Content-Type',) - -_643_MARKERS = ('Dynamic', 'License-File') - -_643_FIELDS = _566_FIELDS + _643_MARKERS - -_ALL_FIELDS = set() -_ALL_FIELDS.update(_241_FIELDS) -_ALL_FIELDS.update(_314_FIELDS) -_ALL_FIELDS.update(_345_FIELDS) -_ALL_FIELDS.update(_426_FIELDS) -_ALL_FIELDS.update(_566_FIELDS) -_ALL_FIELDS.update(_643_FIELDS) - -EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''') - - -def _version2fieldlist(version): - if version == '1.0': - return _241_FIELDS - elif version == '1.1': - return _314_FIELDS - elif version == '1.2': - return _345_FIELDS - elif version in ('1.3', '2.1'): - # avoid adding field names if already there - return _345_FIELDS + tuple(f for f in _566_FIELDS if f not in _345_FIELDS) - elif version == '2.0': - raise ValueError('Metadata 2.0 is withdrawn and not supported') - # return _426_FIELDS - elif version == '2.2': - return _643_FIELDS - raise MetadataUnrecognizedVersionError(version) - - -def _best_version(fields): - """Detect the best version depending on the fields used.""" - def _has_marker(keys, markers): - for marker in markers: - if marker in keys: - return True - return False - - keys = [] - for key, value in fields.items(): - if value in ([], 'UNKNOWN', None): - continue - keys.append(key) - - possible_versions = ['1.0', '1.1', '1.2', '1.3', '2.1', '2.2'] # 2.0 removed - - # first let's try to see if a field is not part of one of the version - for key in keys: - if key not in _241_FIELDS and '1.0' in possible_versions: - possible_versions.remove('1.0') - logger.debug('Removed 1.0 due to %s', key) - if key not in _314_FIELDS and '1.1' in possible_versions: - possible_versions.remove('1.1') - logger.debug('Removed 1.1 due to %s', key) - if key not in _345_FIELDS and '1.2' in possible_versions: - possible_versions.remove('1.2') - logger.debug('Removed 1.2 due to %s', key) - if key not in _566_FIELDS and '1.3' in possible_versions: - possible_versions.remove('1.3') - logger.debug('Removed 1.3 due to %s', key) - if key not in _566_FIELDS and '2.1' in possible_versions: - if key != 'Description': # In 2.1, description allowed after headers - possible_versions.remove('2.1') - logger.debug('Removed 2.1 due to %s', key) - if key not in _643_FIELDS and '2.2' in possible_versions: - possible_versions.remove('2.2') - logger.debug('Removed 2.2 due to %s', key) - # if key not in _426_FIELDS and '2.0' in possible_versions: - # possible_versions.remove('2.0') - # logger.debug('Removed 2.0 due to %s', key) - - # possible_version contains qualified versions - if len(possible_versions) == 1: - return possible_versions[0] # found ! - elif len(possible_versions) == 0: - logger.debug('Out of options - unknown metadata set: %s', fields) - raise MetadataConflictError('Unknown metadata set') - - # let's see if one unique marker is found - is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS) - is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS) - is_2_1 = '2.1' in possible_versions and _has_marker(keys, _566_MARKERS) - # is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS) - is_2_2 = '2.2' in possible_versions and _has_marker(keys, _643_MARKERS) - if int(is_1_1) + int(is_1_2) + int(is_2_1) + int(is_2_2) > 1: - raise MetadataConflictError('You used incompatible 1.1/1.2/2.1/2.2 fields') - - # we have the choice, 1.0, or 1.2, 2.1 or 2.2 - # - 1.0 has a broken Summary field but works with all tools - # - 1.1 is to avoid - # - 1.2 fixes Summary but has little adoption - # - 2.1 adds more features - # - 2.2 is the latest - if not is_1_1 and not is_1_2 and not is_2_1 and not is_2_2: - # we couldn't find any specific marker - if PKG_INFO_PREFERRED_VERSION in possible_versions: - return PKG_INFO_PREFERRED_VERSION - if is_1_1: - return '1.1' - if is_1_2: - return '1.2' - if is_2_1: - return '2.1' - # if is_2_2: - # return '2.2' - - return '2.2' - -# This follows the rules about transforming keys as described in -# https://www.python.org/dev/peps/pep-0566/#id17 -_ATTR2FIELD = { - name.lower().replace("-", "_"): name for name in _ALL_FIELDS -} -_FIELD2ATTR = {field: attr for attr, field in _ATTR2FIELD.items()} - -_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist') -_VERSIONS_FIELDS = ('Requires-Python',) -_VERSION_FIELDS = ('Version',) -_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes', - 'Requires', 'Provides', 'Obsoletes-Dist', - 'Provides-Dist', 'Requires-Dist', 'Requires-External', - 'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist', - 'Provides-Extra', 'Extension', 'License-File') -_LISTTUPLEFIELDS = ('Project-URL',) - -_ELEMENTSFIELD = ('Keywords',) - -_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description') - -_MISSING = object() - -_FILESAFE = re.compile('[^A-Za-z0-9.]+') - - -def _get_name_and_version(name, version, for_filename=False): - """Return the distribution name with version. - - If for_filename is true, return a filename-escaped form.""" - if for_filename: - # For both name and version any runs of non-alphanumeric or '.' - # characters are replaced with a single '-'. Additionally any - # spaces in the version string become '.' - name = _FILESAFE.sub('-', name) - version = _FILESAFE.sub('-', version.replace(' ', '.')) - return '%s-%s' % (name, version) - - -class LegacyMetadata(object): - """The legacy metadata of a release. - - Supports versions 1.0, 1.1, 1.2, 2.0 and 1.3/2.1 (auto-detected). You can - instantiate the class with one of these arguments (or none): - - *path*, the path to a metadata file - - *fileobj* give a file-like object with metadata as content - - *mapping* is a dict-like object - - *scheme* is a version scheme name - """ - # TODO document the mapping API and UNKNOWN default key - - def __init__(self, path=None, fileobj=None, mapping=None, - scheme='default'): - if [path, fileobj, mapping].count(None) < 2: - raise TypeError('path, fileobj and mapping are exclusive') - self._fields = {} - self.requires_files = [] - self._dependencies = None - self.scheme = scheme - if path is not None: - self.read(path) - elif fileobj is not None: - self.read_file(fileobj) - elif mapping is not None: - self.update(mapping) - self.set_metadata_version() - - def set_metadata_version(self): - self._fields['Metadata-Version'] = _best_version(self._fields) - - def _write_field(self, fileobj, name, value): - fileobj.write('%s: %s\n' % (name, value)) - - def __getitem__(self, name): - return self.get(name) - - def __setitem__(self, name, value): - return self.set(name, value) - - def __delitem__(self, name): - field_name = self._convert_name(name) - try: - del self._fields[field_name] - except KeyError: - raise KeyError(name) - - def __contains__(self, name): - return (name in self._fields or - self._convert_name(name) in self._fields) - - def _convert_name(self, name): - if name in _ALL_FIELDS: - return name - name = name.replace('-', '_').lower() - return _ATTR2FIELD.get(name, name) - - def _default_value(self, name): - if name in _LISTFIELDS or name in _ELEMENTSFIELD: - return [] - return 'UNKNOWN' - - def _remove_line_prefix(self, value): - if self.metadata_version in ('1.0', '1.1'): - return _LINE_PREFIX_PRE_1_2.sub('\n', value) - else: - return _LINE_PREFIX_1_2.sub('\n', value) - - def __getattr__(self, name): - if name in _ATTR2FIELD: - return self[name] - raise AttributeError(name) - - # - # Public API - # - -# dependencies = property(_get_dependencies, _set_dependencies) - - def get_fullname(self, filesafe=False): - """Return the distribution name with version. - - If filesafe is true, return a filename-escaped form.""" - return _get_name_and_version(self['Name'], self['Version'], filesafe) - - def is_field(self, name): - """return True if name is a valid metadata key""" - name = self._convert_name(name) - return name in _ALL_FIELDS - - def is_multi_field(self, name): - name = self._convert_name(name) - return name in _LISTFIELDS - - def read(self, filepath): - """Read the metadata values from a file path.""" - fp = codecs.open(filepath, 'r', encoding='utf-8') - try: - self.read_file(fp) - finally: - fp.close() - - def read_file(self, fileob): - """Read the metadata values from a file object.""" - msg = message_from_file(fileob) - self._fields['Metadata-Version'] = msg['metadata-version'] - - # When reading, get all the fields we can - for field in _ALL_FIELDS: - if field not in msg: - continue - if field in _LISTFIELDS: - # we can have multiple lines - values = msg.get_all(field) - if field in _LISTTUPLEFIELDS and values is not None: - values = [tuple(value.split(',')) for value in values] - self.set(field, values) - else: - # single line - value = msg[field] - if value is not None and value != 'UNKNOWN': - self.set(field, value) - - # PEP 566 specifies that the body be used for the description, if - # available - body = msg.get_payload() - self["Description"] = body if body else self["Description"] - # logger.debug('Attempting to set metadata for %s', self) - # self.set_metadata_version() - - def write(self, filepath, skip_unknown=False): - """Write the metadata fields to filepath.""" - fp = codecs.open(filepath, 'w', encoding='utf-8') - try: - self.write_file(fp, skip_unknown) - finally: - fp.close() - - def write_file(self, fileobject, skip_unknown=False): - """Write the PKG-INFO format data to a file object.""" - self.set_metadata_version() - - for field in _version2fieldlist(self['Metadata-Version']): - values = self.get(field) - if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']): - continue - if field in _ELEMENTSFIELD: - self._write_field(fileobject, field, ','.join(values)) - continue - if field not in _LISTFIELDS: - if field == 'Description': - if self.metadata_version in ('1.0', '1.1'): - values = values.replace('\n', '\n ') - else: - values = values.replace('\n', '\n |') - values = [values] - - if field in _LISTTUPLEFIELDS: - values = [','.join(value) for value in values] - - for value in values: - self._write_field(fileobject, field, value) - - def update(self, other=None, **kwargs): - """Set metadata values from the given iterable `other` and kwargs. - - Behavior is like `dict.update`: If `other` has a ``keys`` method, - they are looped over and ``self[key]`` is assigned ``other[key]``. - Else, ``other`` is an iterable of ``(key, value)`` iterables. - - Keys that don't match a metadata field or that have an empty value are - dropped. - """ - def _set(key, value): - if key in _ATTR2FIELD and value: - self.set(self._convert_name(key), value) - - if not other: - # other is None or empty container - pass - elif hasattr(other, 'keys'): - for k in other.keys(): - _set(k, other[k]) - else: - for k, v in other: - _set(k, v) - - if kwargs: - for k, v in kwargs.items(): - _set(k, v) - - def set(self, name, value): - """Control then set a metadata field.""" - name = self._convert_name(name) - - if ((name in _ELEMENTSFIELD or name == 'Platform') and - not isinstance(value, (list, tuple))): - if isinstance(value, string_types): - value = [v.strip() for v in value.split(',')] - else: - value = [] - elif (name in _LISTFIELDS and - not isinstance(value, (list, tuple))): - if isinstance(value, string_types): - value = [value] - else: - value = [] - - if logger.isEnabledFor(logging.WARNING): - project_name = self['Name'] - - scheme = get_scheme(self.scheme) - if name in _PREDICATE_FIELDS and value is not None: - for v in value: - # check that the values are valid - if not scheme.is_valid_matcher(v.split(';')[0]): - logger.warning( - "'%s': '%s' is not valid (field '%s')", - project_name, v, name) - # FIXME this rejects UNKNOWN, is that right? - elif name in _VERSIONS_FIELDS and value is not None: - if not scheme.is_valid_constraint_list(value): - logger.warning("'%s': '%s' is not a valid version (field '%s')", - project_name, value, name) - elif name in _VERSION_FIELDS and value is not None: - if not scheme.is_valid_version(value): - logger.warning("'%s': '%s' is not a valid version (field '%s')", - project_name, value, name) - - if name in _UNICODEFIELDS: - if name == 'Description': - value = self._remove_line_prefix(value) - - self._fields[name] = value - - def get(self, name, default=_MISSING): - """Get a metadata field.""" - name = self._convert_name(name) - if name not in self._fields: - if default is _MISSING: - default = self._default_value(name) - return default - if name in _UNICODEFIELDS: - value = self._fields[name] - return value - elif name in _LISTFIELDS: - value = self._fields[name] - if value is None: - return [] - res = [] - for val in value: - if name not in _LISTTUPLEFIELDS: - res.append(val) - else: - # That's for Project-URL - res.append((val[0], val[1])) - return res - - elif name in _ELEMENTSFIELD: - value = self._fields[name] - if isinstance(value, string_types): - return value.split(',') - return self._fields[name] - - def check(self, strict=False): - """Check if the metadata is compliant. If strict is True then raise if - no Name or Version are provided""" - self.set_metadata_version() - - # XXX should check the versions (if the file was loaded) - missing, warnings = [], [] - - for attr in ('Name', 'Version'): # required by PEP 345 - if attr not in self: - missing.append(attr) - - if strict and missing != []: - msg = 'missing required metadata: %s' % ', '.join(missing) - raise MetadataMissingError(msg) - - for attr in ('Home-page', 'Author'): - if attr not in self: - missing.append(attr) - - # checking metadata 1.2 (XXX needs to check 1.1, 1.0) - if self['Metadata-Version'] != '1.2': - return missing, warnings - - scheme = get_scheme(self.scheme) - - def are_valid_constraints(value): - for v in value: - if not scheme.is_valid_matcher(v.split(';')[0]): - return False - return True - - for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints), - (_VERSIONS_FIELDS, - scheme.is_valid_constraint_list), - (_VERSION_FIELDS, - scheme.is_valid_version)): - for field in fields: - value = self.get(field, None) - if value is not None and not controller(value): - warnings.append("Wrong value for '%s': %s" % (field, value)) - - return missing, warnings - - def todict(self, skip_missing=False): - """Return fields as a dict. - - Field names will be converted to use the underscore-lowercase style - instead of hyphen-mixed case (i.e. home_page instead of Home-page). - This is as per https://www.python.org/dev/peps/pep-0566/#id17. - """ - self.set_metadata_version() - - fields = _version2fieldlist(self['Metadata-Version']) - - data = {} - - for field_name in fields: - if not skip_missing or field_name in self._fields: - key = _FIELD2ATTR[field_name] - if key != 'project_url': - data[key] = self[field_name] - else: - data[key] = [','.join(u) for u in self[field_name]] - - return data - - def add_requirements(self, requirements): - if self['Metadata-Version'] == '1.1': - # we can't have 1.1 metadata *and* Setuptools requires - for field in ('Obsoletes', 'Requires', 'Provides'): - if field in self: - del self[field] - self['Requires-Dist'] += requirements - - # Mapping API - # TODO could add iter* variants - - def keys(self): - return list(_version2fieldlist(self['Metadata-Version'])) - - def __iter__(self): - for key in self.keys(): - yield key - - def values(self): - return [self[key] for key in self.keys()] - - def items(self): - return [(key, self[key]) for key in self.keys()] - - def __repr__(self): - return '<%s %s %s>' % (self.__class__.__name__, self.name, - self.version) - - -METADATA_FILENAME = 'pydist.json' -WHEEL_METADATA_FILENAME = 'metadata.json' -LEGACY_METADATA_FILENAME = 'METADATA' - - -class Metadata(object): - """ - The metadata of a release. This implementation uses 2.1 - metadata where possible. If not possible, it wraps a LegacyMetadata - instance which handles the key-value metadata format. - """ - - METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$') - - NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I) - - FIELDNAME_MATCHER = re.compile('^[A-Z]([0-9A-Z-]*[0-9A-Z])?$', re.I) - - VERSION_MATCHER = PEP440_VERSION_RE - - SUMMARY_MATCHER = re.compile('.{1,2047}') - - METADATA_VERSION = '2.0' - - GENERATOR = 'distlib (%s)' % __version__ - - MANDATORY_KEYS = { - 'name': (), - 'version': (), - 'summary': ('legacy',), - } - - INDEX_KEYS = ('name version license summary description author ' - 'author_email keywords platform home_page classifiers ' - 'download_url') - - DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires ' - 'dev_requires provides meta_requires obsoleted_by ' - 'supports_environments') - - SYNTAX_VALIDATORS = { - 'metadata_version': (METADATA_VERSION_MATCHER, ()), - 'name': (NAME_MATCHER, ('legacy',)), - 'version': (VERSION_MATCHER, ('legacy',)), - 'summary': (SUMMARY_MATCHER, ('legacy',)), - 'dynamic': (FIELDNAME_MATCHER, ('legacy',)), - } - - __slots__ = ('_legacy', '_data', 'scheme') - - def __init__(self, path=None, fileobj=None, mapping=None, - scheme='default'): - if [path, fileobj, mapping].count(None) < 2: - raise TypeError('path, fileobj and mapping are exclusive') - self._legacy = None - self._data = None - self.scheme = scheme - #import pdb; pdb.set_trace() - if mapping is not None: - try: - self._validate_mapping(mapping, scheme) - self._data = mapping - except MetadataUnrecognizedVersionError: - self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme) - self.validate() - else: - data = None - if path: - with open(path, 'rb') as f: - data = f.read() - elif fileobj: - data = fileobj.read() - if data is None: - # Initialised with no args - to be added - self._data = { - 'metadata_version': self.METADATA_VERSION, - 'generator': self.GENERATOR, - } - else: - if not isinstance(data, text_type): - data = data.decode('utf-8') - try: - self._data = json.loads(data) - self._validate_mapping(self._data, scheme) - except ValueError: - # Note: MetadataUnrecognizedVersionError does not - # inherit from ValueError (it's a DistlibException, - # which should not inherit from ValueError). - # The ValueError comes from the json.load - if that - # succeeds and we get a validation error, we want - # that to propagate - self._legacy = LegacyMetadata(fileobj=StringIO(data), - scheme=scheme) - self.validate() - - common_keys = set(('name', 'version', 'license', 'keywords', 'summary')) - - none_list = (None, list) - none_dict = (None, dict) - - mapped_keys = { - 'run_requires': ('Requires-Dist', list), - 'build_requires': ('Setup-Requires-Dist', list), - 'dev_requires': none_list, - 'test_requires': none_list, - 'meta_requires': none_list, - 'extras': ('Provides-Extra', list), - 'modules': none_list, - 'namespaces': none_list, - 'exports': none_dict, - 'commands': none_dict, - 'classifiers': ('Classifier', list), - 'source_url': ('Download-URL', None), - 'metadata_version': ('Metadata-Version', None), - } - - del none_list, none_dict - - def __getattribute__(self, key): - common = object.__getattribute__(self, 'common_keys') - mapped = object.__getattribute__(self, 'mapped_keys') - if key in mapped: - lk, maker = mapped[key] - if self._legacy: - if lk is None: - result = None if maker is None else maker() - else: - result = self._legacy.get(lk) - else: - value = None if maker is None else maker() - if key not in ('commands', 'exports', 'modules', 'namespaces', - 'classifiers'): - result = self._data.get(key, value) - else: - # special cases for PEP 459 - sentinel = object() - result = sentinel - d = self._data.get('extensions') - if d: - if key == 'commands': - result = d.get('python.commands', value) - elif key == 'classifiers': - d = d.get('python.details') - if d: - result = d.get(key, value) - else: - d = d.get('python.exports') - if not d: - d = self._data.get('python.exports') - if d: - result = d.get(key, value) - if result is sentinel: - result = value - elif key not in common: - result = object.__getattribute__(self, key) - elif self._legacy: - result = self._legacy.get(key) - else: - result = self._data.get(key) - return result - - def _validate_value(self, key, value, scheme=None): - if key in self.SYNTAX_VALIDATORS: - pattern, exclusions = self.SYNTAX_VALIDATORS[key] - if (scheme or self.scheme) not in exclusions: - m = pattern.match(value) - if not m: - raise MetadataInvalidError("'%s' is an invalid value for " - "the '%s' property" % (value, - key)) - - def __setattr__(self, key, value): - self._validate_value(key, value) - common = object.__getattribute__(self, 'common_keys') - mapped = object.__getattribute__(self, 'mapped_keys') - if key in mapped: - lk, _ = mapped[key] - if self._legacy: - if lk is None: - raise NotImplementedError - self._legacy[lk] = value - elif key not in ('commands', 'exports', 'modules', 'namespaces', - 'classifiers'): - self._data[key] = value - else: - # special cases for PEP 459 - d = self._data.setdefault('extensions', {}) - if key == 'commands': - d['python.commands'] = value - elif key == 'classifiers': - d = d.setdefault('python.details', {}) - d[key] = value - else: - d = d.setdefault('python.exports', {}) - d[key] = value - elif key not in common: - object.__setattr__(self, key, value) - else: - if key == 'keywords': - if isinstance(value, string_types): - value = value.strip() - if value: - value = value.split() - else: - value = [] - if self._legacy: - self._legacy[key] = value - else: - self._data[key] = value - - @property - def name_and_version(self): - return _get_name_and_version(self.name, self.version, True) - - @property - def provides(self): - if self._legacy: - result = self._legacy['Provides-Dist'] - else: - result = self._data.setdefault('provides', []) - s = '%s (%s)' % (self.name, self.version) - if s not in result: - result.append(s) - return result - - @provides.setter - def provides(self, value): - if self._legacy: - self._legacy['Provides-Dist'] = value - else: - self._data['provides'] = value - - def get_requirements(self, reqts, extras=None, env=None): - """ - Base method to get dependencies, given a set of extras - to satisfy and an optional environment context. - :param reqts: A list of sometimes-wanted dependencies, - perhaps dependent on extras and environment. - :param extras: A list of optional components being requested. - :param env: An optional environment for marker evaluation. - """ - if self._legacy: - result = reqts - else: - result = [] - extras = get_extras(extras or [], self.extras) - for d in reqts: - if 'extra' not in d and 'environment' not in d: - # unconditional - include = True - else: - if 'extra' not in d: - # Not extra-dependent - only environment-dependent - include = True - else: - include = d.get('extra') in extras - if include: - # Not excluded because of extras, check environment - marker = d.get('environment') - if marker: - include = interpret(marker, env) - if include: - result.extend(d['requires']) - for key in ('build', 'dev', 'test'): - e = ':%s:' % key - if e in extras: - extras.remove(e) - # A recursive call, but it should terminate since 'test' - # has been removed from the extras - reqts = self._data.get('%s_requires' % key, []) - result.extend(self.get_requirements(reqts, extras=extras, - env=env)) - return result - - @property - def dictionary(self): - if self._legacy: - return self._from_legacy() - return self._data - - @property - def dependencies(self): - if self._legacy: - raise NotImplementedError - else: - return extract_by_key(self._data, self.DEPENDENCY_KEYS) - - @dependencies.setter - def dependencies(self, value): - if self._legacy: - raise NotImplementedError - else: - self._data.update(value) - - def _validate_mapping(self, mapping, scheme): - if mapping.get('metadata_version') != self.METADATA_VERSION: - raise MetadataUnrecognizedVersionError() - missing = [] - for key, exclusions in self.MANDATORY_KEYS.items(): - if key not in mapping: - if scheme not in exclusions: - missing.append(key) - if missing: - msg = 'Missing metadata items: %s' % ', '.join(missing) - raise MetadataMissingError(msg) - for k, v in mapping.items(): - self._validate_value(k, v, scheme) - - def validate(self): - if self._legacy: - missing, warnings = self._legacy.check(True) - if missing or warnings: - logger.warning('Metadata: missing: %s, warnings: %s', - missing, warnings) - else: - self._validate_mapping(self._data, self.scheme) - - def todict(self): - if self._legacy: - return self._legacy.todict(True) - else: - result = extract_by_key(self._data, self.INDEX_KEYS) - return result - - def _from_legacy(self): - assert self._legacy and not self._data - result = { - 'metadata_version': self.METADATA_VERSION, - 'generator': self.GENERATOR, - } - lmd = self._legacy.todict(True) # skip missing ones - for k in ('name', 'version', 'license', 'summary', 'description', - 'classifier'): - if k in lmd: - if k == 'classifier': - nk = 'classifiers' - else: - nk = k - result[nk] = lmd[k] - kw = lmd.get('Keywords', []) - if kw == ['']: - kw = [] - result['keywords'] = kw - keys = (('requires_dist', 'run_requires'), - ('setup_requires_dist', 'build_requires')) - for ok, nk in keys: - if ok in lmd and lmd[ok]: - result[nk] = [{'requires': lmd[ok]}] - result['provides'] = self.provides - author = {} - maintainer = {} - return result - - LEGACY_MAPPING = { - 'name': 'Name', - 'version': 'Version', - ('extensions', 'python.details', 'license'): 'License', - 'summary': 'Summary', - 'description': 'Description', - ('extensions', 'python.project', 'project_urls', 'Home'): 'Home-page', - ('extensions', 'python.project', 'contacts', 0, 'name'): 'Author', - ('extensions', 'python.project', 'contacts', 0, 'email'): 'Author-email', - 'source_url': 'Download-URL', - ('extensions', 'python.details', 'classifiers'): 'Classifier', - } - - def _to_legacy(self): - def process_entries(entries): - reqts = set() - for e in entries: - extra = e.get('extra') - env = e.get('environment') - rlist = e['requires'] - for r in rlist: - if not env and not extra: - reqts.add(r) - else: - marker = '' - if extra: - marker = 'extra == "%s"' % extra - if env: - if marker: - marker = '(%s) and %s' % (env, marker) - else: - marker = env - reqts.add(';'.join((r, marker))) - return reqts - - assert self._data and not self._legacy - result = LegacyMetadata() - nmd = self._data - # import pdb; pdb.set_trace() - for nk, ok in self.LEGACY_MAPPING.items(): - if not isinstance(nk, tuple): - if nk in nmd: - result[ok] = nmd[nk] - else: - d = nmd - found = True - for k in nk: - try: - d = d[k] - except (KeyError, IndexError): - found = False - break - if found: - result[ok] = d - r1 = process_entries(self.run_requires + self.meta_requires) - r2 = process_entries(self.build_requires + self.dev_requires) - if self.extras: - result['Provides-Extra'] = sorted(self.extras) - result['Requires-Dist'] = sorted(r1) - result['Setup-Requires-Dist'] = sorted(r2) - # TODO: any other fields wanted - return result - - def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True): - if [path, fileobj].count(None) != 1: - raise ValueError('Exactly one of path and fileobj is needed') - self.validate() - if legacy: - if self._legacy: - legacy_md = self._legacy - else: - legacy_md = self._to_legacy() - if path: - legacy_md.write(path, skip_unknown=skip_unknown) - else: - legacy_md.write_file(fileobj, skip_unknown=skip_unknown) - else: - if self._legacy: - d = self._from_legacy() - else: - d = self._data - if fileobj: - json.dump(d, fileobj, ensure_ascii=True, indent=2, - sort_keys=True) - else: - with codecs.open(path, 'w', 'utf-8') as f: - json.dump(d, f, ensure_ascii=True, indent=2, - sort_keys=True) - - def add_requirements(self, requirements): - if self._legacy: - self._legacy.add_requirements(requirements) - else: - run_requires = self._data.setdefault('run_requires', []) - always = None - for entry in run_requires: - if 'environment' not in entry and 'extra' not in entry: - always = entry - break - if always is None: - always = { 'requires': requirements } - run_requires.insert(0, always) - else: - rset = set(always['requires']) | set(requirements) - always['requires'] = sorted(rset) - - def __repr__(self): - name = self.name or '(no name)' - version = self.version or 'no version' - return '<%s %s %s (%s)>' % (self.__class__.__name__, - self.metadata_version, name, version) diff --git a/spaces/ppsantiago/chatGPT/assets/custom.js b/spaces/ppsantiago/chatGPT/assets/custom.js deleted file mode 100644 index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000 --- a/spaces/ppsantiago/chatGPT/assets/custom.js +++ /dev/null @@ -1 +0,0 @@ -// custom javascript here \ No newline at end of file diff --git a/spaces/prerna9811/Chord/portaudio/examples/paex_record.c b/spaces/prerna9811/Chord/portaudio/examples/paex_record.c deleted file mode 100644 index 53bf571d4fd0d74df5619d4cc4805522ea3c4972..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/examples/paex_record.c +++ /dev/null @@ -1,353 +0,0 @@ -/** @file paex_record.c - @ingroup examples_src - @brief Record input into an array; Save array to a file; Playback recorded data. - @author Phil Burk http://www.softsynth.com -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include "portaudio.h" - -/* #define SAMPLE_RATE (17932) // Test failure to open with this value. */ -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (512) -#define NUM_SECONDS (5) -#define NUM_CHANNELS (2) -/* #define DITHER_FLAG (paDitherOff) */ -#define DITHER_FLAG (0) /**/ -/** Set to 1 if you want to capture the recording to a file. */ -#define WRITE_TO_FILE (0) - -/* Select sample format. */ -#if 1 -#define PA_SAMPLE_TYPE paFloat32 -typedef float SAMPLE; -#define SAMPLE_SILENCE (0.0f) -#define PRINTF_S_FORMAT "%.8f" -#elif 1 -#define PA_SAMPLE_TYPE paInt16 -typedef short SAMPLE; -#define SAMPLE_SILENCE (0) -#define PRINTF_S_FORMAT "%d" -#elif 0 -#define PA_SAMPLE_TYPE paInt8 -typedef char SAMPLE; -#define SAMPLE_SILENCE (0) -#define PRINTF_S_FORMAT "%d" -#else -#define PA_SAMPLE_TYPE paUInt8 -typedef unsigned char SAMPLE; -#define SAMPLE_SILENCE (128) -#define PRINTF_S_FORMAT "%d" -#endif - -typedef struct -{ - int frameIndex; /* Index into sample array. */ - int maxFrameIndex; - SAMPLE *recordedSamples; -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may be called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int recordCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - const SAMPLE *rptr = (const SAMPLE*)inputBuffer; - SAMPLE *wptr = &data->recordedSamples[data->frameIndex * NUM_CHANNELS]; - long framesToCalc; - long i; - int finished; - unsigned long framesLeft = data->maxFrameIndex - data->frameIndex; - - (void) outputBuffer; /* Prevent unused variable warnings. */ - (void) timeInfo; - (void) statusFlags; - (void) userData; - - if( framesLeft < framesPerBuffer ) - { - framesToCalc = framesLeft; - finished = paComplete; - } - else - { - framesToCalc = framesPerBuffer; - finished = paContinue; - } - - if( inputBuffer == NULL ) - { - for( i=0; iframeIndex += framesToCalc; - return finished; -} - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may be called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int playCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - SAMPLE *rptr = &data->recordedSamples[data->frameIndex * NUM_CHANNELS]; - SAMPLE *wptr = (SAMPLE*)outputBuffer; - unsigned int i; - int finished; - unsigned int framesLeft = data->maxFrameIndex - data->frameIndex; - - (void) inputBuffer; /* Prevent unused variable warnings. */ - (void) timeInfo; - (void) statusFlags; - (void) userData; - - if( framesLeft < framesPerBuffer ) - { - /* final buffer... */ - for( i=0; iframeIndex += framesLeft; - finished = paComplete; - } - else - { - for( i=0; iframeIndex += framesPerBuffer; - finished = paContinue; - } - return finished; -} - -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStreamParameters inputParameters, - outputParameters; - PaStream* stream; - PaError err = paNoError; - paTestData data; - int i; - int totalFrames; - int numSamples; - int numBytes; - SAMPLE max, val; - double average; - - printf("patest_record.c\n"); fflush(stdout); - - data.maxFrameIndex = totalFrames = NUM_SECONDS * SAMPLE_RATE; /* Record for a few seconds. */ - data.frameIndex = 0; - numSamples = totalFrames * NUM_CHANNELS; - numBytes = numSamples * sizeof(SAMPLE); - data.recordedSamples = (SAMPLE *) malloc( numBytes ); /* From now on, recordedSamples is initialised. */ - if( data.recordedSamples == NULL ) - { - printf("Could not allocate record array.\n"); - goto done; - } - for( i=0; idefaultLowInputLatency; - inputParameters.hostApiSpecificStreamInfo = NULL; - - /* Record some audio. -------------------------------------------- */ - err = Pa_OpenStream( - &stream, - &inputParameters, - NULL, /* &outputParameters, */ - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - recordCallback, - &data ); - if( err != paNoError ) goto done; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto done; - printf("\n=== Now recording!! Please speak into the microphone. ===\n"); fflush(stdout); - - while( ( err = Pa_IsStreamActive( stream ) ) == 1 ) - { - Pa_Sleep(1000); - printf("index = %d\n", data.frameIndex ); fflush(stdout); - } - if( err < 0 ) goto done; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto done; - - /* Measure maximum peak amplitude. */ - max = 0; - average = 0.0; - for( i=0; i max ) - { - max = val; - } - average += val; - } - - average = average / (double)numSamples; - - printf("sample max amplitude = "PRINTF_S_FORMAT"\n", max ); - printf("sample average = %lf\n", average ); - - /* Write recorded data to a file. */ -#if WRITE_TO_FILE - { - FILE *fid; - fid = fopen("recorded.raw", "wb"); - if( fid == NULL ) - { - printf("Could not open file."); - } - else - { - fwrite( data.recordedSamples, NUM_CHANNELS * sizeof(SAMPLE), totalFrames, fid ); - fclose( fid ); - printf("Wrote data to 'recorded.raw'\n"); - } - } -#endif - - /* Playback recorded data. -------------------------------------------- */ - data.frameIndex = 0; - - outputParameters.device = Pa_GetDefaultOutputDevice(); /* default output device */ - if (outputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default output device.\n"); - goto done; - } - outputParameters.channelCount = 2; /* stereo output */ - outputParameters.sampleFormat = PA_SAMPLE_TYPE; - outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - printf("\n=== Now playing back. ===\n"); fflush(stdout); - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - playCallback, - &data ); - if( err != paNoError ) goto done; - - if( stream ) - { - err = Pa_StartStream( stream ); - if( err != paNoError ) goto done; - - printf("Waiting for playback to finish.\n"); fflush(stdout); - - while( ( err = Pa_IsStreamActive( stream ) ) == 1 ) Pa_Sleep(100); - if( err < 0 ) goto done; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto done; - - printf("Done.\n"); fflush(stdout); - } - -done: - Pa_Terminate(); - if( data.recordedSamples ) /* Sure it is NULL or valid. */ - free( data.recordedSamples ); - if( err != paNoError ) - { - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - err = 1; /* Always return 0 or 1, but no other return codes. */ - } - return err; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageFile.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageFile.py deleted file mode 100644 index 8e4f7dfb2c8854ee3a1f65efd6535732df1764aa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageFile.py +++ /dev/null @@ -1,773 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# base class for image file handlers -# -# history: -# 1995-09-09 fl Created -# 1996-03-11 fl Fixed load mechanism. -# 1996-04-15 fl Added pcx/xbm decoders. -# 1996-04-30 fl Added encoders. -# 1996-12-14 fl Added load helpers -# 1997-01-11 fl Use encode_to_file where possible -# 1997-08-27 fl Flush output in _save -# 1998-03-05 fl Use memory mapping for some modes -# 1999-02-04 fl Use memory mapping also for "I;16" and "I;16B" -# 1999-05-31 fl Added image parser -# 2000-10-12 fl Set readonly flag on memory-mapped images -# 2002-03-20 fl Use better messages for common decoder errors -# 2003-04-21 fl Fall back on mmap/map_buffer if map is not available -# 2003-10-30 fl Added StubImageFile class -# 2004-02-25 fl Made incremental parser more robust -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1995-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import io -import itertools -import struct -import sys - -from . import Image -from ._util import is_path - -MAXBLOCK = 65536 - -SAFEBLOCK = 1024 * 1024 - -LOAD_TRUNCATED_IMAGES = False -"""Whether or not to load truncated image files. User code may change this.""" - -ERRORS = { - -1: "image buffer overrun error", - -2: "decoding error", - -3: "unknown error", - -8: "bad configuration", - -9: "out of memory error", -} -""" -Dict of known error codes returned from :meth:`.PyDecoder.decode`, -:meth:`.PyEncoder.encode` :meth:`.PyEncoder.encode_to_pyfd` and -:meth:`.PyEncoder.encode_to_file`. -""" - - -# -# -------------------------------------------------------------------- -# Helpers - - -def raise_oserror(error): - try: - msg = Image.core.getcodecstatus(error) - except AttributeError: - msg = ERRORS.get(error) - if not msg: - msg = f"decoder error {error}" - msg += " when reading image file" - raise OSError(msg) - - -def _tilesort(t): - # sort on offset - return t[2] - - -# -# -------------------------------------------------------------------- -# ImageFile base class - - -class ImageFile(Image.Image): - """Base class for image file format handlers.""" - - def __init__(self, fp=None, filename=None): - super().__init__() - - self._min_frame = 0 - - self.custom_mimetype = None - - self.tile = None - """ A list of tile descriptors, or ``None`` """ - - self.readonly = 1 # until we know better - - self.decoderconfig = () - self.decodermaxblock = MAXBLOCK - - if is_path(fp): - # filename - self.fp = open(fp, "rb") - self.filename = fp - self._exclusive_fp = True - else: - # stream - self.fp = fp - self.filename = filename - # can be overridden - self._exclusive_fp = None - - try: - try: - self._open() - except ( - IndexError, # end of data - TypeError, # end of data (ord) - KeyError, # unsupported mode - EOFError, # got header but not the first frame - struct.error, - ) as v: - raise SyntaxError(v) from v - - if not self.mode or self.size[0] <= 0 or self.size[1] <= 0: - msg = "not identified by this driver" - raise SyntaxError(msg) - except BaseException: - # close the file only if we have opened it this constructor - if self._exclusive_fp: - self.fp.close() - raise - - def get_format_mimetype(self): - if self.custom_mimetype: - return self.custom_mimetype - if self.format is not None: - return Image.MIME.get(self.format.upper()) - - def __setstate__(self, state): - self.tile = [] - super().__setstate__(state) - - def verify(self): - """Check file integrity""" - - # raise exception if something's wrong. must be called - # directly after open, and closes file when finished. - if self._exclusive_fp: - self.fp.close() - self.fp = None - - def load(self): - """Load image data based on tile list""" - - if self.tile is None: - msg = "cannot load this image" - raise OSError(msg) - - pixel = Image.Image.load(self) - if not self.tile: - return pixel - - self.map = None - use_mmap = self.filename and len(self.tile) == 1 - # As of pypy 2.1.0, memory mapping was failing here. - use_mmap = use_mmap and not hasattr(sys, "pypy_version_info") - - readonly = 0 - - # look for read/seek overrides - try: - read = self.load_read - # don't use mmap if there are custom read/seek functions - use_mmap = False - except AttributeError: - read = self.fp.read - - try: - seek = self.load_seek - use_mmap = False - except AttributeError: - seek = self.fp.seek - - if use_mmap: - # try memory mapping - decoder_name, extents, offset, args = self.tile[0] - if ( - decoder_name == "raw" - and len(args) >= 3 - and args[0] == self.mode - and args[0] in Image._MAPMODES - ): - try: - # use mmap, if possible - import mmap - - with open(self.filename) as fp: - self.map = mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ) - if offset + self.size[1] * args[1] > self.map.size(): - # buffer is not large enough - raise OSError - self.im = Image.core.map_buffer( - self.map, self.size, decoder_name, offset, args - ) - readonly = 1 - # After trashing self.im, - # we might need to reload the palette data. - if self.palette: - self.palette.dirty = 1 - except (AttributeError, OSError, ImportError): - self.map = None - - self.load_prepare() - err_code = -3 # initialize to unknown error - if not self.map: - # sort tiles in file order - self.tile.sort(key=_tilesort) - - try: - # FIXME: This is a hack to handle TIFF's JpegTables tag. - prefix = self.tile_prefix - except AttributeError: - prefix = b"" - - # Remove consecutive duplicates that only differ by their offset - self.tile = [ - list(tiles)[-1] - for _, tiles in itertools.groupby( - self.tile, lambda tile: (tile[0], tile[1], tile[3]) - ) - ] - for decoder_name, extents, offset, args in self.tile: - seek(offset) - decoder = Image._getdecoder( - self.mode, decoder_name, args, self.decoderconfig - ) - try: - decoder.setimage(self.im, extents) - if decoder.pulls_fd: - decoder.setfd(self.fp) - err_code = decoder.decode(b"")[1] - else: - b = prefix - while True: - try: - s = read(self.decodermaxblock) - except (IndexError, struct.error) as e: - # truncated png/gif - if LOAD_TRUNCATED_IMAGES: - break - else: - msg = "image file is truncated" - raise OSError(msg) from e - - if not s: # truncated jpeg - if LOAD_TRUNCATED_IMAGES: - break - else: - msg = ( - "image file is truncated " - f"({len(b)} bytes not processed)" - ) - raise OSError(msg) - - b = b + s - n, err_code = decoder.decode(b) - if n < 0: - break - b = b[n:] - finally: - # Need to cleanup here to prevent leaks - decoder.cleanup() - - self.tile = [] - self.readonly = readonly - - self.load_end() - - if self._exclusive_fp and self._close_exclusive_fp_after_loading: - self.fp.close() - self.fp = None - - if not self.map and not LOAD_TRUNCATED_IMAGES and err_code < 0: - # still raised if decoder fails to return anything - raise_oserror(err_code) - - return Image.Image.load(self) - - def load_prepare(self): - # create image memory if necessary - if not self.im or self.im.mode != self.mode or self.im.size != self.size: - self.im = Image.core.new(self.mode, self.size) - # create palette (optional) - if self.mode == "P": - Image.Image.load(self) - - def load_end(self): - # may be overridden - pass - - # may be defined for contained formats - # def load_seek(self, pos): - # pass - - # may be defined for blocked formats (e.g. PNG) - # def load_read(self, bytes): - # pass - - def _seek_check(self, frame): - if ( - frame < self._min_frame - # Only check upper limit on frames if additional seek operations - # are not required to do so - or ( - not (hasattr(self, "_n_frames") and self._n_frames is None) - and frame >= self.n_frames + self._min_frame - ) - ): - msg = "attempt to seek outside sequence" - raise EOFError(msg) - - return self.tell() != frame - - -class StubImageFile(ImageFile): - """ - Base class for stub image loaders. - - A stub loader is an image loader that can identify files of a - certain format, but relies on external code to load the file. - """ - - def _open(self): - msg = "StubImageFile subclass must implement _open" - raise NotImplementedError(msg) - - def load(self): - loader = self._load() - if loader is None: - msg = f"cannot find loader for this {self.format} file" - raise OSError(msg) - image = loader.load(self) - assert image is not None - # become the other object (!) - self.__class__ = image.__class__ - self.__dict__ = image.__dict__ - return image.load() - - def _load(self): - """(Hook) Find actual image loader.""" - msg = "StubImageFile subclass must implement _load" - raise NotImplementedError(msg) - - -class Parser: - """ - Incremental image parser. This class implements the standard - feed/close consumer interface. - """ - - incremental = None - image = None - data = None - decoder = None - offset = 0 - finished = 0 - - def reset(self): - """ - (Consumer) Reset the parser. Note that you can only call this - method immediately after you've created a parser; parser - instances cannot be reused. - """ - assert self.data is None, "cannot reuse parsers" - - def feed(self, data): - """ - (Consumer) Feed data to the parser. - - :param data: A string buffer. - :exception OSError: If the parser failed to parse the image file. - """ - # collect data - - if self.finished: - return - - if self.data is None: - self.data = data - else: - self.data = self.data + data - - # parse what we have - if self.decoder: - if self.offset > 0: - # skip header - skip = min(len(self.data), self.offset) - self.data = self.data[skip:] - self.offset = self.offset - skip - if self.offset > 0 or not self.data: - return - - n, e = self.decoder.decode(self.data) - - if n < 0: - # end of stream - self.data = None - self.finished = 1 - if e < 0: - # decoding error - self.image = None - raise_oserror(e) - else: - # end of image - return - self.data = self.data[n:] - - elif self.image: - # if we end up here with no decoder, this file cannot - # be incrementally parsed. wait until we've gotten all - # available data - pass - - else: - # attempt to open this file - try: - with io.BytesIO(self.data) as fp: - im = Image.open(fp) - except OSError: - # traceback.print_exc() - pass # not enough data - else: - flag = hasattr(im, "load_seek") or hasattr(im, "load_read") - if flag or len(im.tile) != 1: - # custom load code, or multiple tiles - self.decode = None - else: - # initialize decoder - im.load_prepare() - d, e, o, a = im.tile[0] - im.tile = [] - self.decoder = Image._getdecoder(im.mode, d, a, im.decoderconfig) - self.decoder.setimage(im.im, e) - - # calculate decoder offset - self.offset = o - if self.offset <= len(self.data): - self.data = self.data[self.offset :] - self.offset = 0 - - self.image = im - - def __enter__(self): - return self - - def __exit__(self, *args): - self.close() - - def close(self): - """ - (Consumer) Close the stream. - - :returns: An image object. - :exception OSError: If the parser failed to parse the image file either - because it cannot be identified or cannot be - decoded. - """ - # finish decoding - if self.decoder: - # get rid of what's left in the buffers - self.feed(b"") - self.data = self.decoder = None - if not self.finished: - msg = "image was incomplete" - raise OSError(msg) - if not self.image: - msg = "cannot parse this image" - raise OSError(msg) - if self.data: - # incremental parsing not possible; reopen the file - # not that we have all data - with io.BytesIO(self.data) as fp: - try: - self.image = Image.open(fp) - finally: - self.image.load() - return self.image - - -# -------------------------------------------------------------------- - - -def _save(im, fp, tile, bufsize=0): - """Helper to save image based on tile list - - :param im: Image object. - :param fp: File object. - :param tile: Tile list. - :param bufsize: Optional buffer size - """ - - im.load() - if not hasattr(im, "encoderconfig"): - im.encoderconfig = () - tile.sort(key=_tilesort) - # FIXME: make MAXBLOCK a configuration parameter - # It would be great if we could have the encoder specify what it needs - # But, it would need at least the image size in most cases. RawEncode is - # a tricky case. - bufsize = max(MAXBLOCK, bufsize, im.size[0] * 4) # see RawEncode.c - try: - fh = fp.fileno() - fp.flush() - _encode_tile(im, fp, tile, bufsize, fh) - except (AttributeError, io.UnsupportedOperation) as exc: - _encode_tile(im, fp, tile, bufsize, None, exc) - if hasattr(fp, "flush"): - fp.flush() - - -def _encode_tile(im, fp, tile, bufsize, fh, exc=None): - for e, b, o, a in tile: - if o > 0: - fp.seek(o) - encoder = Image._getencoder(im.mode, e, a, im.encoderconfig) - try: - encoder.setimage(im.im, b) - if encoder.pushes_fd: - encoder.setfd(fp) - errcode = encoder.encode_to_pyfd()[1] - else: - if exc: - # compress to Python file-compatible object - while True: - errcode, data = encoder.encode(bufsize)[1:] - fp.write(data) - if errcode: - break - else: - # slight speedup: compress to real file object - errcode = encoder.encode_to_file(fh, bufsize) - if errcode < 0: - msg = f"encoder error {errcode} when writing image file" - raise OSError(msg) from exc - finally: - encoder.cleanup() - - -def _safe_read(fp, size): - """ - Reads large blocks in a safe way. Unlike fp.read(n), this function - doesn't trust the user. If the requested size is larger than - SAFEBLOCK, the file is read block by block. - - :param fp: File handle. Must implement a read method. - :param size: Number of bytes to read. - :returns: A string containing size bytes of data. - - Raises an OSError if the file is truncated and the read cannot be completed - - """ - if size <= 0: - return b"" - if size <= SAFEBLOCK: - data = fp.read(size) - if len(data) < size: - msg = "Truncated File Read" - raise OSError(msg) - return data - data = [] - remaining_size = size - while remaining_size > 0: - block = fp.read(min(remaining_size, SAFEBLOCK)) - if not block: - break - data.append(block) - remaining_size -= len(block) - if sum(len(d) for d in data) < size: - msg = "Truncated File Read" - raise OSError(msg) - return b"".join(data) - - -class PyCodecState: - def __init__(self): - self.xsize = 0 - self.ysize = 0 - self.xoff = 0 - self.yoff = 0 - - def extents(self): - return self.xoff, self.yoff, self.xoff + self.xsize, self.yoff + self.ysize - - -class PyCodec: - def __init__(self, mode, *args): - self.im = None - self.state = PyCodecState() - self.fd = None - self.mode = mode - self.init(args) - - def init(self, args): - """ - Override to perform codec specific initialization - - :param args: Array of args items from the tile entry - :returns: None - """ - self.args = args - - def cleanup(self): - """ - Override to perform codec specific cleanup - - :returns: None - """ - pass - - def setfd(self, fd): - """ - Called from ImageFile to set the Python file-like object - - :param fd: A Python file-like object - :returns: None - """ - self.fd = fd - - def setimage(self, im, extents=None): - """ - Called from ImageFile to set the core output image for the codec - - :param im: A core image object - :param extents: a 4 tuple of (x0, y0, x1, y1) defining the rectangle - for this tile - :returns: None - """ - - # following c code - self.im = im - - if extents: - (x0, y0, x1, y1) = extents - else: - (x0, y0, x1, y1) = (0, 0, 0, 0) - - if x0 == 0 and x1 == 0: - self.state.xsize, self.state.ysize = self.im.size - else: - self.state.xoff = x0 - self.state.yoff = y0 - self.state.xsize = x1 - x0 - self.state.ysize = y1 - y0 - - if self.state.xsize <= 0 or self.state.ysize <= 0: - msg = "Size cannot be negative" - raise ValueError(msg) - - if ( - self.state.xsize + self.state.xoff > self.im.size[0] - or self.state.ysize + self.state.yoff > self.im.size[1] - ): - msg = "Tile cannot extend outside image" - raise ValueError(msg) - - -class PyDecoder(PyCodec): - """ - Python implementation of a format decoder. Override this class and - add the decoding logic in the :meth:`decode` method. - - See :ref:`Writing Your Own File Codec in Python` - """ - - _pulls_fd = False - - @property - def pulls_fd(self): - return self._pulls_fd - - def decode(self, buffer): - """ - Override to perform the decoding process. - - :param buffer: A bytes object with the data to be decoded. - :returns: A tuple of ``(bytes consumed, errcode)``. - If finished with decoding return -1 for the bytes consumed. - Err codes are from :data:`.ImageFile.ERRORS`. - """ - raise NotImplementedError() - - def set_as_raw(self, data, rawmode=None): - """ - Convenience method to set the internal image from a stream of raw data - - :param data: Bytes to be set - :param rawmode: The rawmode to be used for the decoder. - If not specified, it will default to the mode of the image - :returns: None - """ - - if not rawmode: - rawmode = self.mode - d = Image._getdecoder(self.mode, "raw", rawmode) - d.setimage(self.im, self.state.extents()) - s = d.decode(data) - - if s[0] >= 0: - msg = "not enough image data" - raise ValueError(msg) - if s[1] != 0: - msg = "cannot decode image data" - raise ValueError(msg) - - -class PyEncoder(PyCodec): - """ - Python implementation of a format encoder. Override this class and - add the decoding logic in the :meth:`encode` method. - - See :ref:`Writing Your Own File Codec in Python` - """ - - _pushes_fd = False - - @property - def pushes_fd(self): - return self._pushes_fd - - def encode(self, bufsize): - """ - Override to perform the encoding process. - - :param bufsize: Buffer size. - :returns: A tuple of ``(bytes encoded, errcode, bytes)``. - If finished with encoding return 1 for the error code. - Err codes are from :data:`.ImageFile.ERRORS`. - """ - raise NotImplementedError() - - def encode_to_pyfd(self): - """ - If ``pushes_fd`` is ``True``, then this method will be used, - and ``encode()`` will only be called once. - - :returns: A tuple of ``(bytes consumed, errcode)``. - Err codes are from :data:`.ImageFile.ERRORS`. - """ - if not self.pushes_fd: - return 0, -8 # bad configuration - bytes_consumed, errcode, data = self.encode(0) - if data: - self.fd.write(data) - return bytes_consumed, errcode - - def encode_to_file(self, fh, bufsize): - """ - :param fh: File handle. - :param bufsize: Buffer size. - - :returns: If finished successfully, return 0. - Otherwise, return an error code. Err codes are from - :data:`.ImageFile.ERRORS`. - """ - errcode = 0 - while errcode == 0: - status, errcode, buf = self.encode(bufsize) - if status > 0: - fh.write(buf[status:]) - return errcode diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/api.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/api.py deleted file mode 100644 index a0b4b91bb60080ea0c66ec375c812f1246080a13..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/api.py +++ /dev/null @@ -1,3811 +0,0 @@ -import warnings - -import hashlib -import io -import json -import jsonschema -import pandas as pd -from toolz.curried import pipe as _pipe -import itertools -import sys -from typing import cast, List, Optional, Any, Iterable, Union, Literal - -# Have to rename it here as else it overlaps with schema.core.Type -from typing import Type as TypingType -from typing import Dict as TypingDict - -from .schema import core, channels, mixins, Undefined, UndefinedType, SCHEMA_URL - -from .data import data_transformers -from ... import utils, expr -from .display import renderers, VEGALITE_VERSION, VEGAEMBED_VERSION, VEGA_VERSION -from .theme import themes -from .compiler import vegalite_compilers -from ...utils._vegafusion_data import ( - using_vegafusion as _using_vegafusion, - compile_with_vegafusion as _compile_with_vegafusion, -) -from ...utils.core import _DataFrameLike - -if sys.version_info >= (3, 11): - from typing import Self -else: - from typing_extensions import Self - - -# ------------------------------------------------------------------------ -# Data Utilities -def _dataset_name(values): - """Generate a unique hash of the data - - Parameters - ---------- - values : list or dict - A list/dict representation of data values. - - Returns - ------- - name : string - A unique name generated from the hash of the values. - """ - if isinstance(values, core.InlineDataset): - values = values.to_dict() - if values == [{}]: - return "empty" - values_json = json.dumps(values, sort_keys=True) - hsh = hashlib.md5(values_json.encode()).hexdigest() - return "data-" + hsh - - -def _consolidate_data(data, context): - """If data is specified inline, then move it to context['datasets'] - - This function will modify context in-place, and return a new version of data - """ - values = Undefined - kwds = {} - - if isinstance(data, core.InlineData): - if data.name is Undefined and data.values is not Undefined: - if isinstance(data.values, core.InlineDataset): - values = data.to_dict()["values"] - else: - values = data.values - kwds = {"format": data.format} - - elif isinstance(data, dict): - if "name" not in data and "values" in data: - values = data["values"] - kwds = {k: v for k, v in data.items() if k != "values"} - - if values is not Undefined: - name = _dataset_name(values) - data = core.NamedData(name=name, **kwds) - context.setdefault("datasets", {})[name] = values - - return data - - -def _prepare_data(data, context=None): - """Convert input data to data for use within schema - - Parameters - ---------- - data : - The input dataset in the form of a DataFrame, dictionary, altair data - object, or other type that is recognized by the data transformers. - context : dict (optional) - The to_dict context in which the data is being prepared. This is used - to keep track of information that needs to be passed up and down the - recursive serialization routine, such as global named datasets. - """ - if data is Undefined: - return data - - # convert dataframes or objects with __geo_interface__ to dict - elif isinstance(data, pd.DataFrame) or hasattr(data, "__geo_interface__"): - data = _pipe(data, data_transformers.get()) - - # convert string input to a URLData - elif isinstance(data, str): - data = core.UrlData(data) - - elif hasattr(data, "__dataframe__"): - data = _pipe(data, data_transformers.get()) - - # consolidate inline data to top-level datasets - if context is not None and data_transformers.consolidate_datasets: - data = _consolidate_data(data, context) - - # if data is still not a recognized type, then return - if not isinstance(data, (dict, core.Data)): - warnings.warn("data of type {} not recognized".format(type(data)), stacklevel=1) - - return data - - -# ------------------------------------------------------------------------ -# Aliases & specializations -Bin = core.BinParams -Impute = core.ImputeParams -Title = core.TitleParams - - -class LookupData(core.LookupData): - @utils.use_signature(core.LookupData) - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def to_dict(self, *args, **kwargs): - """Convert the chart to a dictionary suitable for JSON export.""" - copy = self.copy(deep=False) - copy.data = _prepare_data(copy.data, kwargs.get("context")) - return super(LookupData, copy).to_dict(*args, **kwargs) - - -class FacetMapping(core.FacetMapping): - _class_is_valid_at_instantiation = False - - @utils.use_signature(core.FacetMapping) - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def to_dict(self, *args, **kwargs): - copy = self.copy(deep=False) - context = kwargs.get("context", {}) - data = context.get("data", None) - if isinstance(self.row, str): - copy.row = core.FacetFieldDef(**utils.parse_shorthand(self.row, data)) - if isinstance(self.column, str): - copy.column = core.FacetFieldDef(**utils.parse_shorthand(self.column, data)) - return super(FacetMapping, copy).to_dict(*args, **kwargs) - - -# ------------------------------------------------------------------------ -# Encoding will contain channel objects that aren't valid at instantiation -core.FacetedEncoding._class_is_valid_at_instantiation = False - -# ------------------------------------------------------------------------ -# These are parameters that are valid at the top level, but are not valid -# for specs that are within a composite chart -# (layer, hconcat, vconcat, facet, repeat) -TOPLEVEL_ONLY_KEYS = {"background", "config", "autosize", "padding", "$schema"} - - -def _get_channels_mapping(): - mapping = {} - for attr in dir(channels): - cls = getattr(channels, attr) - if isinstance(cls, type) and issubclass(cls, core.SchemaBase): - mapping[cls] = attr.replace("Value", "").lower() - return mapping - - -# ------------------------------------------------------------------------- -# Tools for working with parameters -class Parameter(expr.core.OperatorMixin, object): - """A Parameter object""" - - _counter: int = 0 - - @classmethod - def _get_name(cls) -> str: - cls._counter += 1 - return f"param_{cls._counter}" - - def __init__( - self, - name: Optional[str] = None, - empty: Union[bool, UndefinedType] = Undefined, - param: Union[ - core.VariableParameter, - core.TopLevelSelectionParameter, - core.SelectionParameter, - UndefinedType, - ] = Undefined, - param_type: Union[Literal["variable", "selection"], UndefinedType] = Undefined, - ) -> None: - if name is None: - name = self._get_name() - self.name = name - self.empty = empty - self.param = param - self.param_type = param_type - - @utils.deprecation.deprecated( - message="'ref' is deprecated. No need to call '.ref()' anymore." - ) - def ref(self) -> dict: - "'ref' is deprecated. No need to call '.ref()' anymore." - return self.to_dict() - - def to_dict(self) -> TypingDict[str, Union[str, dict]]: - if self.param_type == "variable": - return {"expr": self.name} - elif self.param_type == "selection": - return { - "param": self.name.to_dict() - if hasattr(self.name, "to_dict") - else self.name - } - else: - raise ValueError(f"Unrecognized parameter type: {self.param_type}") - - def __invert__(self): - if self.param_type == "selection": - return SelectionPredicateComposition({"not": {"param": self.name}}) - else: - return expr.core.OperatorMixin.__invert__(self) - - def __and__(self, other): - if self.param_type == "selection": - if isinstance(other, Parameter): - other = {"param": other.name} - return SelectionPredicateComposition({"and": [{"param": self.name}, other]}) - else: - return expr.core.OperatorMixin.__and__(self, other) - - def __or__(self, other): - if self.param_type == "selection": - if isinstance(other, Parameter): - other = {"param": other.name} - return SelectionPredicateComposition({"or": [{"param": self.name}, other]}) - else: - return expr.core.OperatorMixin.__or__(self, other) - - def __repr__(self) -> str: - return "Parameter({0!r}, {1})".format(self.name, self.param) - - def _to_expr(self) -> str: - return self.name - - def _from_expr(self, expr) -> "ParameterExpression": - return ParameterExpression(expr=expr) - - def __getattr__( - self, field_name: str - ) -> Union[expr.core.GetAttrExpression, "SelectionExpression"]: - if field_name.startswith("__") and field_name.endswith("__"): - raise AttributeError(field_name) - _attrexpr = expr.core.GetAttrExpression(self.name, field_name) - # If self is a SelectionParameter and field_name is in its - # fields or encodings list, then we want to return an expression. - if check_fields_and_encodings(self, field_name): - return SelectionExpression(_attrexpr) - return expr.core.GetAttrExpression(self.name, field_name) - - # TODO: Are there any special cases to consider for __getitem__? - # This was copied from v4. - def __getitem__(self, field_name: str) -> expr.core.GetItemExpression: - return expr.core.GetItemExpression(self.name, field_name) - - -# Enables use of ~, &, | with compositions of selection objects. -class SelectionPredicateComposition(core.PredicateComposition): - def __invert__(self): - return SelectionPredicateComposition({"not": self.to_dict()}) - - def __and__(self, other): - return SelectionPredicateComposition({"and": [self.to_dict(), other.to_dict()]}) - - def __or__(self, other): - return SelectionPredicateComposition({"or": [self.to_dict(), other.to_dict()]}) - - -class ParameterExpression(expr.core.OperatorMixin, object): - def __init__(self, expr): - self.expr = expr - - def to_dict(self): - return {"expr": repr(self.expr)} - - def _to_expr(self): - return repr(self.expr) - - def _from_expr(self, expr): - return ParameterExpression(expr=expr) - - -class SelectionExpression(expr.core.OperatorMixin, object): - def __init__(self, expr): - self.expr = expr - - def to_dict(self): - return {"expr": repr(self.expr)} - - def _to_expr(self): - return repr(self.expr) - - def _from_expr(self, expr): - return SelectionExpression(expr=expr) - - -def check_fields_and_encodings(parameter, field_name): - for prop in ["fields", "encodings"]: - try: - if field_name in getattr(parameter.param.select, prop): - return True - except (AttributeError, TypeError): - pass - - return False - - -# ------------------------------------------------------------------------ -# Top-Level Functions - - -def value(value, **kwargs): - """Specify a value for use in an encoding""" - return dict(value=value, **kwargs) - - -def param( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - **kwds, -): - """Create a named parameter. See https://altair-viz.github.io/user_guide/interactions.html for examples. Although both variable parameters and selection parameters can be created using this 'param' function, to create a selection parameter, it is recommended to use either 'selection_point' or 'selection_interval' instead. - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - **kwds : - additional keywords will be used to construct a parameter. If 'select' - is among the keywords, then a selection parameter will be created. - Otherwise, a variable parameter will be created. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. - """ - parameter = Parameter(name) - - if empty is not Undefined: - parameter.empty = empty - if parameter.empty == "none": - warnings.warn( - """The value of 'empty' should be True or False.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - parameter.empty = False - elif parameter.empty == "all": - warnings.warn( - """The value of 'empty' should be True or False.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - parameter.empty = True - elif (parameter.empty is False) or (parameter.empty is True): - pass - else: - raise ValueError("The value of 'empty' should be True or False.") - - if "init" in kwds: - warnings.warn( - """Use 'value' instead of 'init'.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - if value is Undefined: - kwds["value"] = kwds.pop("init") - else: - # If both 'value' and 'init' are set, we ignore 'init'. - kwds.pop("init") - - if "select" not in kwds: - parameter.param = core.VariableParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "variable" - elif "views" in kwds: - parameter.param = core.TopLevelSelectionParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "selection" - else: - parameter.param = core.SelectionParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "selection" - - return parameter - - -def _selection(type=Undefined, **kwds): - # We separate out the parameter keywords from the selection keywords - param_kwds = {} - - for kwd in {"name", "bind", "value", "empty", "init", "views"}: - if kwd in kwds: - param_kwds[kwd] = kwds.pop(kwd) - - if type == "interval": - select = core.IntervalSelectionConfig(type=type, **kwds) - elif type == "point": - select = core.PointSelectionConfig(type=type, **kwds) - elif type in ["single", "multi"]: - select = core.PointSelectionConfig(type="point", **kwds) - warnings.warn( - """The types 'single' and 'multi' are now - combined and should be specified using "selection_point()".""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - else: - raise ValueError("""'type' must be 'point' or 'interval'""") - - return param(select=select, **param_kwds) - - -@utils.deprecation.deprecated( - message="""'selection' is deprecated. - Use 'selection_point()' or 'selection_interval()' instead; these functions also include more helpful docstrings.""" -) -def selection(type=Undefined, **kwds): - """ - Users are recommended to use either 'selection_point' or 'selection_interval' instead, depending on the type of parameter they want to create. - - Create a selection parameter. - - Parameters - ---------- - type : enum('point', 'interval') (required) - Determines the default event processing and data query for the - selection. Vega-Lite currently supports two selection types: - * "point" - to select multiple discrete data values; the first - value is selected on click and additional values toggled on - shift-click. - * "interval" - to select a continuous range of data values on - drag. - **kwds : - additional keywords to control the selection. - """ - - return _selection(type=type, **kwds) - - -def selection_interval( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - encodings=Undefined, - on=Undefined, - clear=Undefined, - resolve=Undefined, - mark=Undefined, - translate=Undefined, - zoom=Undefined, - **kwds, -): - """Create an interval selection parameter. Selection parameters define data queries that are driven by direct manipulation from user input (e.g., mouse clicks or drags). Interval selection parameters are used to select a continuous range of data values on drag, whereas point selection parameters (`selection_point`) are used to select multiple discrete data values.) - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - encodings : List[str] (optional) - A list of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - on : string (optional) - A Vega event stream (object or selector) that triggers the selection. - For interval selections, the event stream must specify a start and end. - clear : string or boolean (optional) - Clears the selection, emptying it of all values. This property can - be an Event Stream or False to disable clear. Default is 'dblclick'. - resolve : enum('global', 'union', 'intersect') (optional) - With layered and multi-view displays, a strategy that determines - how selections' data queries are resolved when applied in a filter - transform, conditional encoding rule, or scale domain. - One of: - - * 'global': only one brush exists for the entire SPLOM. When the - user begins to drag, any previous brushes are cleared, and a - new one is constructed. - * 'union': each cell contains its own brush, and points are - highlighted if they lie within any of these individual brushes. - * 'intersect': each cell contains its own brush, and points are - highlighted only if they fall within all of these individual - brushes. - - The default is 'global'. - mark : :class:`Mark` (optional) - An interval selection also adds a rectangle mark to depict the - extents of the interval. The mark property can be used to - customize the appearance of the mark. - translate : string or boolean (optional) - When truthy, allows a user to interactively move an interval - selection back-and-forth. Can be True, False (to disable panning), - or a Vega event stream definition which must include a start and - end event to trigger continuous panning. Discrete panning (e.g., - pressing the left/right arrow keys) will be supported in future - versions. - The default value is True, which corresponds to - [mousedown, window:mouseup] > window:mousemove! - This default allows users to click and drag within an interval - selection to reposition it. - zoom : string or boolean (optional) - When truthy, allows a user to interactively resize an interval - selection. Can be True, False (to disable zooming), or a Vega - event stream definition. Currently, only wheel events are supported, - but custom event streams can still be used to specify filters, - debouncing, and throttling. Future versions will expand the set of - events that can trigger this transformation. - The default value is True, which corresponds to wheel!. This - default allows users to use the mouse wheel to resize an interval - selection. - **kwds : - Additional keywords to control the selection. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. - """ - return _selection( - type="interval", - name=name, - value=value, - bind=bind, - empty=empty, - expr=expr, - encodings=encodings, - on=on, - clear=clear, - resolve=resolve, - mark=mark, - translate=translate, - zoom=zoom, - **kwds, - ) - - -def selection_point( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - encodings=Undefined, - fields=Undefined, - on=Undefined, - clear=Undefined, - resolve=Undefined, - toggle=Undefined, - nearest=Undefined, - **kwds, -): - """Create a point selection parameter. Selection parameters define data queries that are driven by direct manipulation from user input (e.g., mouse clicks or drags). Point selection parameters are used to select multiple discrete data values; the first value is selected on click and additional values toggled on shift-click. To select a continuous range of data values on drag interval selection parameters (`selection_interval`) can be used instead. - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - encodings : List[str] (optional) - A list of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - fields : List[str] (optional) - A list of field names whose values must match for a data tuple to - fall within the selection. - on : string (optional) - A Vega event stream (object or selector) that triggers the selection. - For interval selections, the event stream must specify a start and end. - clear : string or boolean (optional) - Clears the selection, emptying it of all values. This property can - be an Event Stream or False to disable clear. Default is 'dblclick'. - resolve : enum('global', 'union', 'intersect') (optional) - With layered and multi-view displays, a strategy that determines - how selections' data queries are resolved when applied in a filter - transform, conditional encoding rule, or scale domain. - One of: - - * 'global': only one brush exists for the entire SPLOM. When the - user begins to drag, any previous brushes are cleared, and a - new one is constructed. - * 'union': each cell contains its own brush, and points are - highlighted if they lie within any of these individual brushes. - * 'intersect': each cell contains its own brush, and points are - highlighted only if they fall within all of these individual - brushes. - - The default is 'global'. - toggle : string or boolean (optional) - Controls whether data values should be toggled (inserted or - removed from a point selection) or only ever inserted into - point selections. - One of: - - * True (default): the toggle behavior, which corresponds to - "event.shiftKey". As a result, data values are toggled - when the user interacts with the shift-key pressed. - * False: disables toggling behaviour; the selection will - only ever contain a single data value corresponding - to the most recent interaction. - * A Vega expression which is re-evaluated as the user interacts. - If the expression evaluates to True, the data value is - toggled into or out of the point selection. If the expression - evaluates to False, the point selection is first cleared, and - the data value is then inserted. For example, setting the - value to the Vega expression True will toggle data values - without the user pressing the shift-key. - - nearest : boolean (optional) - When true, an invisible voronoi diagram is computed to accelerate - discrete selection. The data value nearest the mouse cursor is - added to the selection. The default is False, which means that - data values must be interacted with directly (e.g., clicked on) - to be added to the selection. - **kwds : - Additional keywords to control the selection. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. - """ - return _selection( - type="point", - name=name, - value=value, - bind=bind, - empty=empty, - expr=expr, - encodings=encodings, - fields=fields, - on=on, - clear=clear, - resolve=resolve, - toggle=toggle, - nearest=nearest, - **kwds, - ) - - -@utils.deprecation.deprecated( - message="'selection_multi' is deprecated. Use 'selection_point'" -) -@utils.use_signature(core.PointSelectionConfig) -def selection_multi(**kwargs): - """'selection_multi' is deprecated. Use 'selection_point'""" - return _selection(type="point", **kwargs) - - -@utils.deprecation.deprecated( - message="'selection_single' is deprecated. Use 'selection_point'" -) -@utils.use_signature(core.PointSelectionConfig) -def selection_single(**kwargs): - """'selection_single' is deprecated. Use 'selection_point'""" - return _selection(type="point", **kwargs) - - -@utils.use_signature(core.Binding) -def binding(input, **kwargs): - """A generic binding""" - return core.Binding(input=input, **kwargs) - - -@utils.use_signature(core.BindCheckbox) -def binding_checkbox(**kwargs): - """A checkbox binding""" - return core.BindCheckbox(input="checkbox", **kwargs) - - -@utils.use_signature(core.BindRadioSelect) -def binding_radio(**kwargs): - """A radio button binding""" - return core.BindRadioSelect(input="radio", **kwargs) - - -@utils.use_signature(core.BindRadioSelect) -def binding_select(**kwargs): - """A select binding""" - return core.BindRadioSelect(input="select", **kwargs) - - -@utils.use_signature(core.BindRange) -def binding_range(**kwargs): - """A range binding""" - return core.BindRange(input="range", **kwargs) - - -# TODO: update the docstring -def condition(predicate, if_true, if_false, **kwargs): - """A conditional attribute or encoding - - Parameters - ---------- - predicate: Selection, PredicateComposition, expr.Expression, dict, or string - the selection predicate or test predicate for the condition. - if a string is passed, it will be treated as a test operand. - if_true: - the spec or object to use if the selection predicate is true - if_false: - the spec or object to use if the selection predicate is false - **kwargs: - additional keyword args are added to the resulting dict - - Returns - ------- - spec: dict or VegaLiteSchema - the spec that describes the condition - """ - test_predicates = (str, expr.Expression, core.PredicateComposition) - - if isinstance(predicate, Parameter): - if predicate.param_type == "selection" or predicate.param.expr is Undefined: - condition = {"param": predicate.name} - if "empty" in kwargs: - condition["empty"] = kwargs.pop("empty") - elif isinstance(predicate.empty, bool): - condition["empty"] = predicate.empty - else: - condition = {"test": predicate.param.expr} - elif isinstance(predicate, test_predicates): - condition = {"test": predicate} - elif isinstance(predicate, dict): - condition = predicate - else: - raise NotImplementedError( - "condition predicate of type {}" "".format(type(predicate)) - ) - - if isinstance(if_true, core.SchemaBase): - # convert to dict for now; the from_dict call below will wrap this - # dict in the appropriate schema - if_true = if_true.to_dict() - elif isinstance(if_true, str): - if isinstance(if_false, str): - raise ValueError( - "A field cannot be used for both the `if_true` and `if_false` values of a condition. One of them has to specify a `value` or `datum` definition." - ) - else: - if_true = utils.parse_shorthand(if_true) - if_true.update(kwargs) - condition.update(if_true) - - if isinstance(if_false, core.SchemaBase): - # For the selection, the channel definitions all allow selections - # already. So use this SchemaBase wrapper if possible. - selection = if_false.copy() - selection.condition = condition - elif isinstance(if_false, str): - selection = {"condition": condition, "shorthand": if_false} - selection.update(kwargs) - else: - selection = dict(condition=condition, **if_false) - - return selection - - -# -------------------------------------------------------------------- -# Top-level objects - - -class TopLevelMixin(mixins.ConfigMethodMixin): - """Mixin for top-level chart objects such as Chart, LayeredChart, etc.""" - - _class_is_valid_at_instantiation = False - - def to_dict( - self, - validate: bool = True, - *, - format: str = "vega-lite", - ignore: Optional[List[str]] = None, - context: Optional[TypingDict[str, Any]] = None, - ) -> dict: - """Convert the chart to a dictionary suitable for JSON export - - Parameters - ---------- - validate : bool, optional - If True (default), then validate the output dictionary - against the schema. - format : str, optional - Chart specification format, one of "vega-lite" (default) or "vega" - ignore : list[str], optional - A list of keys to ignore. It is usually not needed - to specify this argument as a user. - context : dict[str, Any], optional - A context dictionary. It is usually not needed - to specify this argument as a user. - - Notes - ----- - Technical: The ignore parameter will *not* be passed to child to_dict - function calls. - - Returns - ------- - dict - The dictionary representation of this chart - - Raises - ------ - SchemaValidationError - if validate=True and the dict does not conform to the schema - """ - - # Validate format - if format not in ("vega-lite", "vega"): - raise ValueError( - f'The format argument must be either "vega-lite" or "vega". Received {repr(format)}' - ) - - # We make use of three context markers: - # - 'data' points to the data that should be referenced for column type - # inference. - # - 'top_level' is a boolean flag that is assumed to be true; if it's - # true then a "$schema" arg is added to the dict. - # - 'datasets' is a dict of named datasets that should be inserted - # in the top-level object - # - 'pre_transform' whether data transformations should be pre-evaluated - # if the current data transformer supports it (currently only used when - # the "vegafusion" transformer is enabled) - - # note: not a deep copy because we want datasets and data arguments to - # be passed by reference - context = context.copy() if context else {} - context.setdefault("datasets", {}) - is_top_level = context.get("top_level", True) - - # TopLevelMixin instance does not necessarily have copy defined but due to how - # Altair is set up this should hold. Too complex to type hint right now - copy = self.copy(deep=False) # type: ignore[attr-defined] - original_data = getattr(copy, "data", Undefined) - copy.data = _prepare_data(original_data, context) - - if original_data is not Undefined: - context["data"] = original_data - - # remaining to_dict calls are not at top level - context["top_level"] = False - - # TopLevelMixin instance does not necessarily have to_dict defined - # but due to how Altair is set up this should hold. - # Too complex to type hint right now - vegalite_spec = super(TopLevelMixin, copy).to_dict( # type: ignore[misc] - validate=validate, ignore=ignore, context=dict(context, pre_transform=False) - ) - - # TODO: following entries are added after validation. Should they be validated? - if is_top_level: - # since this is top-level we add $schema if it's missing - if "$schema" not in vegalite_spec: - vegalite_spec["$schema"] = SCHEMA_URL - - # apply theme from theme registry - the_theme = themes.get() - # Use assert to tell type checkers that it is not None. Holds true - # as there is always a default theme set when importing Altair - assert the_theme is not None - vegalite_spec = utils.update_nested(the_theme(), vegalite_spec, copy=True) - - # update datasets - if context["datasets"]: - vegalite_spec.setdefault("datasets", {}).update(context["datasets"]) - - if context.get("pre_transform", True) and _using_vegafusion(): - if format == "vega-lite": - raise ValueError( - 'When the "vegafusion" data transformer is enabled, the \n' - "to_dict() and to_json() chart methods must be called with " - 'format="vega". \n' - "For example: \n" - ' >>> chart.to_dict(format="vega")\n' - ' >>> chart.to_json(format="vega")' - ) - else: - return _compile_with_vegafusion(vegalite_spec) - else: - if format == "vega": - plugin = vegalite_compilers.get() - if plugin is None: - raise ValueError("No active vega-lite compiler plugin found") - return plugin(vegalite_spec) - else: - return vegalite_spec - - def to_json( - self, - validate: bool = True, - indent: int = 2, - sort_keys: bool = True, - *, - format: str = "vega-lite", - ignore: Optional[List[str]] = None, - context: Optional[TypingDict[str, Any]] = None, - **kwargs, - ) -> str: - """Convert a chart to a JSON string - - Parameters - ---------- - validate : bool, optional - If True (default), then validate the output dictionary - against the schema. - indent : int, optional - The number of spaces of indentation to use. The default is 2. - sort_keys : bool, optional - If True (default), sort keys in the output. - format : str, optional - The chart specification format. One of "vega-lite" (default) or "vega". - The "vega" format relies on the active Vega-Lite compiler plugin, which - by default requires the vl-convert-python package. - ignore : list[str], optional - A list of keys to ignore. It is usually not needed - to specify this argument as a user. - context : dict[str, Any], optional - A context dictionary. It is usually not needed - to specify this argument as a user. - **kwargs - Additional keyword arguments are passed to ``json.dumps()`` - """ - if ignore is None: - ignore = [] - if context is None: - context = {} - spec = self.to_dict( - validate=validate, format=format, ignore=ignore, context=context - ) - return json.dumps(spec, indent=indent, sort_keys=sort_keys, **kwargs) - - def to_html( - self, - base_url="https://cdn.jsdelivr.net/npm", - output_div="vis", - embed_options=None, - json_kwds=None, - fullhtml=True, - requirejs=False, - ) -> str: - return utils.spec_to_html( - self.to_dict(), - mode="vega-lite", - vegalite_version=VEGALITE_VERSION, - vegaembed_version=VEGAEMBED_VERSION, - vega_version=VEGA_VERSION, - base_url=base_url, - output_div=output_div, - embed_options=embed_options, - json_kwds=json_kwds, - fullhtml=fullhtml, - requirejs=requirejs, - ) - - def save( - self, - fp, - format=None, - override_data_transformer=True, - scale_factor=1.0, - vegalite_version=VEGALITE_VERSION, - vega_version=VEGA_VERSION, - vegaembed_version=VEGAEMBED_VERSION, - **kwargs, - ): - """Save a chart to file in a variety of formats - - Supported formats are json, html, png, svg, pdf; the last three require - the altair_saver package to be installed. - - Parameters - ---------- - fp : string filename or file-like object - file in which to write the chart. - format : string (optional) - the format to write: one of ['json', 'html', 'png', 'svg', 'pdf']. - If not specified, the format will be determined from the filename. - override_data_transformer : `boolean` (optional) - If True (default), then the save action will be done with - the MaxRowsError disabled. If False, then do not change the data - transformer. - scale_factor : float - For svg or png formats, scale the image by this factor when saving. - This can be used to control the size or resolution of the output. - Default is 1.0 - **kwargs : - Additional keyword arguments are passed to the output method - associated with the specified format. - - """ - from ...utils.save import save - - kwds = dict( - chart=self, - fp=fp, - format=format, - scale_factor=scale_factor, - vegalite_version=vegalite_version, - vega_version=vega_version, - vegaembed_version=vegaembed_version, - **kwargs, - ) - - # By default we override the data transformer. This makes it so - # that save() will succeed even for large datasets that would - # normally trigger a MaxRowsError - if override_data_transformer: - with data_transformers.disable_max_rows(): - result = save(**kwds) - else: - result = save(**kwds) - return result - - # Fallback for when rendering fails; the full repr is too long to be - # useful in nearly all cases. - def __repr__(self): - return "alt.{}(...)".format(self.__class__.__name__) - - # Layering and stacking - def __add__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be layered.") - return layer(self, other) - - def __and__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be concatenated.") - return vconcat(self, other) - - def __or__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be concatenated.") - return hconcat(self, other) - - def repeat( - self, - repeat=Undefined, - row=Undefined, - column=Undefined, - layer=Undefined, - columns=Undefined, - **kwargs, - ) -> "RepeatChart": - """Return a RepeatChart built from the chart - - Fields within the chart can be set to correspond to the row or - column using `alt.repeat('row')` and `alt.repeat('column')`. - - Parameters - ---------- - repeat : list - a list of data column names to be repeated. This cannot be - used along with the ``row``, ``column`` or ``layer`` argument. - row : list - a list of data column names to be mapped to the row facet - column : list - a list of data column names to be mapped to the column facet - layer : list - a list of data column names to be layered. This cannot be - used along with the ``row``, ``column`` or ``repeat`` argument. - columns : int - the maximum number of columns before wrapping. Only referenced - if ``repeat`` is specified. - **kwargs : - additional keywords passed to RepeatChart. - - Returns - ------- - chart : RepeatChart - a repeated chart. - """ - repeat_specified = repeat is not Undefined - rowcol_specified = row is not Undefined or column is not Undefined - layer_specified = layer is not Undefined - - if repeat_specified and rowcol_specified: - raise ValueError( - "repeat argument cannot be combined with row/column argument." - ) - elif repeat_specified and layer_specified: - raise ValueError("repeat argument cannot be combined with layer argument.") - - if repeat_specified: - repeat = repeat - elif layer_specified: - repeat = core.LayerRepeatMapping(layer=layer, row=row, column=column) - else: - repeat = core.RepeatMapping(row=row, column=column) - - return RepeatChart(spec=self, repeat=repeat, columns=columns, **kwargs) - - def properties(self, **kwargs) -> Self: - """Set top-level properties of the Chart. - - Argument names and types are the same as class initialization. - """ - # ignore type as copy comes from another class for subclasses of TopLevelMixin - copy = self.copy(deep=False) # type: ignore[attr-defined] - for key, val in kwargs.items(): - if key == "selection" and isinstance(val, Parameter): - # TODO: Can this be removed - # For backward compatibility with old selection interface. - setattr(copy, key, {val.name: val.selection}) - else: - # Don't validate data, because it hasn't been processed. - if key != "data": - # ignore type as validate_property comes from SchemaBase, - # not from TopLevelMixin - self.validate_property(key, val) # type: ignore[attr-defined] - setattr(copy, key, val) - return copy - - def project( - self, - type=Undefined, - center=Undefined, - clipAngle=Undefined, - clipExtent=Undefined, - coefficient=Undefined, - distance=Undefined, - fraction=Undefined, - lobes=Undefined, - parallel=Undefined, - precision=Undefined, - radius=Undefined, - ratio=Undefined, - reflectX=Undefined, - reflectY=Undefined, - rotate=Undefined, - scale=Undefined, - spacing=Undefined, - tilt=Undefined, - translate=Undefined, - **kwds, - ) -> Self: - """Add a geographic projection to the chart. - - This is generally used either with ``mark_geoshape`` or with the - ``latitude``/``longitude`` encodings. - - Available projection types are - ['albers', 'albersUsa', 'azimuthalEqualArea', 'azimuthalEquidistant', - 'conicConformal', 'conicEqualArea', 'conicEquidistant', 'equalEarth', 'equirectangular', - 'gnomonic', 'identity', 'mercator', 'orthographic', 'stereographic', 'transverseMercator'] - - Parameters - ---------- - type : ProjectionType - The cartographic projection to use. This value is case-insensitive, for example - `"albers"` and `"Albers"` indicate the same projection type. You can find all valid - projection types [in the - documentation](https://vega.github.io/vega-lite/docs/projection.html#projection-types). - - **Default value:** `equalEarth` - center : List(float) - Sets the projection’s center to the specified center, a two-element array of - longitude and latitude in degrees. - - **Default value:** `[0, 0]` - clipAngle : float - Sets the projection’s clipping circle radius to the specified angle in degrees. If - `null`, switches to [antimeridian](http://bl.ocks.org/mbostock/3788999) cutting - rather than small-circle clipping. - clipExtent : List(List(float)) - Sets the projection’s viewport clip extent to the specified bounds in pixels. The - extent bounds are specified as an array `[[x0, y0], [x1, y1]]`, where `x0` is the - left-side of the viewport, `y0` is the top, `x1` is the right and `y1` is the - bottom. If `null`, no viewport clipping is performed. - coefficient : float - - distance : float - - fraction : float - - lobes : float - - parallel : float - - precision : Mapping(required=[length]) - Sets the threshold for the projection’s [adaptive - resampling](http://bl.ocks.org/mbostock/3795544) to the specified value in pixels. - This value corresponds to the [Douglas–Peucker - distance](http://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm). - If precision is not specified, returns the projection’s current resampling - precision which defaults to `√0.5 ≅ 0.70710…`. - radius : float - - ratio : float - - reflectX : boolean - - reflectY : boolean - - rotate : List(float) - Sets the projection’s three-axis rotation to the specified angles, which must be a - two- or three-element array of numbers [`lambda`, `phi`, `gamma`] specifying the - rotation angles in degrees about each spherical axis. (These correspond to yaw, - pitch and roll.) - - **Default value:** `[0, 0, 0]` - scale : float - Sets the projection's scale (zoom) value, overriding automatic fitting. - - spacing : float - - tilt : float - - translate : List(float) - Sets the projection's translation (pan) value, overriding automatic fitting. - - """ - projection = core.Projection( - center=center, - clipAngle=clipAngle, - clipExtent=clipExtent, - coefficient=coefficient, - distance=distance, - fraction=fraction, - lobes=lobes, - parallel=parallel, - precision=precision, - radius=radius, - ratio=ratio, - reflectX=reflectX, - reflectY=reflectY, - rotate=rotate, - scale=scale, - spacing=spacing, - tilt=tilt, - translate=translate, - type=type, - **kwds, - ) - return self.properties(projection=projection) - - def _add_transform(self, *transforms): - """Copy the chart and add specified transforms to chart.transform""" - copy = self.copy(deep=["transform"]) - if copy.transform is Undefined: - copy.transform = [] - copy.transform.extend(transforms) - return copy - - def transform_aggregate( - self, aggregate=Undefined, groupby=Undefined, **kwds - ) -> Self: - """ - Add an :class:`AggregateTransform` to the schema. - - Parameters - ---------- - aggregate : List(:class:`AggregatedFieldDef`) - Array of objects that define fields to aggregate. - groupby : List(string) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - **kwds : - additional keywords are converted to aggregates using standard - shorthand parsing. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - The aggregate transform allows you to specify transforms directly using - the same shorthand syntax as used in encodings: - - >>> import altair as alt - >>> chart1 = alt.Chart().transform_aggregate( - ... mean_acc='mean(Acceleration)', - ... groupby=['Origin'] - ... ) - >>> print(chart1.transform[0].to_json()) # doctest: +NORMALIZE_WHITESPACE - { - "aggregate": [ - { - "as": "mean_acc", - "field": "Acceleration", - "op": "mean" - } - ], - "groupby": [ - "Origin" - ] - } - - It also supports including AggregatedFieldDef instances or dicts directly, - so you can create the above transform like this: - - >>> chart2 = alt.Chart().transform_aggregate( - ... [alt.AggregatedFieldDef(field='Acceleration', op='mean', - ... **{'as': 'mean_acc'})], - ... groupby=['Origin'] - ... ) - >>> chart2.transform == chart1.transform - True - - See Also - -------- - alt.AggregateTransform : underlying transform object - - """ - if aggregate is Undefined: - aggregate = [] - for key, val in kwds.items(): - parsed = utils.parse_shorthand(val) - dct = { - "as": key, - "field": parsed.get("field", Undefined), - "op": parsed.get("aggregate", Undefined), - } - aggregate.append(core.AggregatedFieldDef(**dct)) - return self._add_transform( - core.AggregateTransform(aggregate=aggregate, groupby=groupby) - ) - - def transform_bin(self, as_=Undefined, field=Undefined, bin=True, **kwargs) -> Self: - """ - Add a :class:`BinTransform` to the schema. - - Parameters - ---------- - as_ : anyOf(string, List(string)) - The output fields at which to write the start and end bin values. - bin : anyOf(boolean, :class:`BinParams`) - An object indicating bin properties, or simply ``true`` for using default bin - parameters. - field : string - The data field to bin. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> chart = alt.Chart().transform_bin("x_binned", "x") - >>> chart.transform[0] - BinTransform({ - as: 'x_binned', - bin: True, - field: 'x' - }) - - >>> chart = alt.Chart().transform_bin("x_binned", "x", - ... bin=alt.Bin(maxbins=10)) - >>> chart.transform[0] - BinTransform({ - as: 'x_binned', - bin: BinParams({ - maxbins: 10 - }), - field: 'x' - }) - - See Also - -------- - alt.BinTransform : underlying transform object - - """ - if as_ is not Undefined: - if "as" in kwargs: - raise ValueError( - "transform_bin: both 'as_' and 'as' passed as arguments." - ) - kwargs["as"] = as_ - kwargs["bin"] = bin - kwargs["field"] = field - return self._add_transform(core.BinTransform(**kwargs)) - - def transform_calculate(self, as_=Undefined, calculate=Undefined, **kwargs) -> Self: - """ - Add a :class:`CalculateTransform` to the schema. - - Parameters - ---------- - as_ : string - The field for storing the computed formula value. - calculate : string or alt.expr expression - A `expression `__ - string. Use the variable ``datum`` to refer to the current data object. - **kwargs - transforms can also be passed by keyword argument; see Examples - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> from altair import datum, expr - - >>> chart = alt.Chart().transform_calculate(y = 2 * expr.sin(datum.x)) - >>> chart.transform[0] - CalculateTransform({ - as: 'y', - calculate: (2 * sin(datum.x)) - }) - - It's also possible to pass the ``CalculateTransform`` arguments directly: - - >>> kwds = {'as': 'y', 'calculate': '2 * sin(datum.x)'} - >>> chart = alt.Chart().transform_calculate(**kwds) - >>> chart.transform[0] - CalculateTransform({ - as: 'y', - calculate: '2 * sin(datum.x)' - }) - - As the first form is easier to write and understand, that is the - recommended method. - - See Also - -------- - alt.CalculateTransform : underlying transform object - """ - if as_ is Undefined: - as_ = kwargs.pop("as", Undefined) - elif "as" in kwargs: - raise ValueError( - "transform_calculate: both 'as_' and 'as' passed as arguments." - ) - if as_ is not Undefined or calculate is not Undefined: - dct = {"as": as_, "calculate": calculate} - self = self._add_transform(core.CalculateTransform(**dct)) - for as_, calculate in kwargs.items(): - dct = {"as": as_, "calculate": calculate} - self = self._add_transform(core.CalculateTransform(**dct)) - return self - - def transform_density( - self, - density, - as_=Undefined, - bandwidth=Undefined, - counts=Undefined, - cumulative=Undefined, - extent=Undefined, - groupby=Undefined, - maxsteps=Undefined, - minsteps=Undefined, - steps=Undefined, - ) -> Self: - """Add a :class:`DensityTransform` to the spec. - - Parameters - ---------- - density : str - The data field for which to perform density estimation. - as_ : [str, str] - The output fields for the sample value and corresponding density estimate. - **Default value:** ``["value", "density"]`` - bandwidth : float - The bandwidth (standard deviation) of the Gaussian kernel. If unspecified or set to - zero, the bandwidth value is automatically estimated from the input data using - Scott’s rule. - counts : boolean - A boolean flag indicating if the output values should be probability estimates - (false) or smoothed counts (true). - **Default value:** ``false`` - cumulative : boolean - A boolean flag indicating whether to produce density estimates (false) or cumulative - density estimates (true). - **Default value:** ``false`` - extent : List([float, float]) - A [min, max] domain from which to sample the distribution. If unspecified, the - extent will be determined by the observed minimum and maximum values of the density - value field. - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - maxsteps : float - The maximum number of samples to take along the extent domain for plotting the - density. **Default value:** ``200`` - minsteps : float - The minimum number of samples to take along the extent domain for plotting the - density. **Default value:** ``25`` - steps : float - The exact number of samples to take along the extent domain for plotting the - density. If specified, overrides both minsteps and maxsteps to set an exact number - of uniform samples. Potentially useful in conjunction with a fixed extent to ensure - consistent sample points for stacked densities. - """ - return self._add_transform( - core.DensityTransform( - density=density, - bandwidth=bandwidth, - counts=counts, - cumulative=cumulative, - extent=extent, - groupby=groupby, - maxsteps=maxsteps, - minsteps=minsteps, - steps=steps, - **{"as": as_}, - ) - ) - - def transform_impute( - self, - impute, - key, - frame=Undefined, - groupby=Undefined, - keyvals=Undefined, - method=Undefined, - value=Undefined, - ) -> Self: - """ - Add an :class:`ImputeTransform` to the schema. - - Parameters - ---------- - impute : string - The data field for which the missing values should be imputed. - key : string - A key field that uniquely identifies data objects within a group. - Missing key values (those occurring in the data but not in the current group) will - be imputed. - frame : List(anyOf(None, float)) - A frame specification as a two-element array used to control the window over which - the specified method is applied. The array entries should either be a number - indicating the offset from the current data object, or null to indicate unbounded - rows preceding or following the current data object. For example, the value ``[-5, - 5]`` indicates that the window should include five objects preceding and five - objects following the current object. - **Default value:** : ``[null, null]`` indicating that the window includes all - objects. - groupby : List(string) - An optional array of fields by which to group the values. - Imputation will then be performed on a per-group basis. - keyvals : anyOf(List(Mapping(required=[])), :class:`ImputeSequence`) - Defines the key values that should be considered for imputation. - An array of key values or an object defining a `number sequence - `__. - If provided, this will be used in addition to the key values observed within the - input data. If not provided, the values will be derived from all unique values of - the ``key`` field. For ``impute`` in ``encoding``, the key field is the x-field if - the y-field is imputed, or vice versa. - If there is no impute grouping, this property *must* be specified. - method : :class:`ImputeMethod` - The imputation method to use for the field value of imputed data objects. - One of ``value``, ``mean``, ``median``, ``max`` or ``min``. - **Default value:** ``"value"`` - value : Mapping(required=[]) - The field value to use when the imputation ``method`` is ``"value"``. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.ImputeTransform : underlying transform object - """ - return self._add_transform( - core.ImputeTransform( - impute=impute, - key=key, - frame=frame, - groupby=groupby, - keyvals=keyvals, - method=method, - value=value, - ) - ) - - def transform_joinaggregate( - self, joinaggregate=Undefined, groupby=Undefined, **kwargs - ) -> Self: - """ - Add a :class:`JoinAggregateTransform` to the schema. - - Parameters - ---------- - joinaggregate : List(:class:`JoinAggregateFieldDef`) - The definition of the fields in the join aggregate, and what calculations to use. - groupby : List(string) - The data fields for partitioning the data objects into separate groups. If - unspecified, all data points will be in a single group. - **kwargs - joinaggregates can also be passed by keyword argument; see Examples. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> chart = alt.Chart().transform_joinaggregate(x='sum(y)') - >>> chart.transform[0] - JoinAggregateTransform({ - joinaggregate: [JoinAggregateFieldDef({ - as: 'x', - field: 'y', - op: 'sum' - })] - }) - - See Also - -------- - alt.JoinAggregateTransform : underlying transform object - """ - if joinaggregate is Undefined: - joinaggregate = [] - for key, val in kwargs.items(): - parsed = utils.parse_shorthand(val) - dct = { - "as": key, - "field": parsed.get("field", Undefined), - "op": parsed.get("aggregate", Undefined), - } - joinaggregate.append(core.JoinAggregateFieldDef(**dct)) - return self._add_transform( - core.JoinAggregateTransform(joinaggregate=joinaggregate, groupby=groupby) - ) - - def transform_extent(self, extent: str, param: str) -> Self: - """Add a :class:`ExtentTransform` to the spec. - - Parameters - ---------- - extent : str - The field of which to get the extent. - param : str - The name of the output parameter which will be created by - the extent transform. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - """ - return self._add_transform(core.ExtentTransform(extent=extent, param=param)) - - # TODO: Update docstring - def transform_filter(self, filter, **kwargs) -> Self: - """ - Add a :class:`FilterTransform` to the schema. - - Parameters - ---------- - filter : a filter expression or :class:`PredicateComposition` - The `filter` property must be one of the predicate definitions: - (1) a string or alt.expr expression - (2) a range predicate - (3) a selection predicate - (4) a logical operand combining (1)-(3) - (5) a Selection object - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.FilterTransform : underlying transform object - - """ - if isinstance(filter, Parameter): - new_filter: TypingDict[str, Union[bool, str]] = {"param": filter.name} - if "empty" in kwargs: - new_filter["empty"] = kwargs.pop("empty") - elif isinstance(filter.empty, bool): - new_filter["empty"] = filter.empty - filter = new_filter - return self._add_transform(core.FilterTransform(filter=filter, **kwargs)) - - def transform_flatten(self, flatten, as_=Undefined) -> Self: - """Add a :class:`FlattenTransform` to the schema. - - Parameters - ---------- - flatten : List(string) - An array of one or more data fields containing arrays to flatten. - If multiple fields are specified, their array values should have a parallel - structure, ideally with the same length. - If the lengths of parallel arrays do not match, - the longest array will be used with ``null`` values added for missing entries. - as : List(string) - The output field names for extracted array values. - **Default value:** The field name of the corresponding array field - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.FlattenTransform : underlying transform object - """ - return self._add_transform( - core.FlattenTransform(flatten=flatten, **{"as": as_}) - ) - - def transform_fold(self, fold, as_=Undefined) -> Self: - """Add a :class:`FoldTransform` to the spec. - - Parameters - ---------- - fold : List(string) - An array of data fields indicating the properties to fold. - as : [string, string] - The output field names for the key and value properties produced by the fold - transform. Default: ``["key", "value"]`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_pivot : pivot transform - opposite of fold. - alt.FoldTransform : underlying transform object - """ - return self._add_transform(core.FoldTransform(fold=fold, **{"as": as_})) - - def transform_loess( - self, - on, - loess, - as_=Undefined, - bandwidth=Undefined, - groupby=Undefined, - ) -> Self: - """Add a :class:`LoessTransform` to the spec. - - Parameters - ---------- - on : str - The data field of the independent variable to use a predictor. - loess : str - The data field of the dependent variable to smooth. - as_ : [str, str] - The output field names for the smoothed points generated by the loess transform. - **Default value:** The field names of the input x and y values. - bandwidth : float - A bandwidth parameter in the range ``[0, 1]`` that determines the amount of - smoothing. **Default value:** ``0.3`` - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_regression: regression transform - alt.LoessTransform : underlying transform object - """ - return self._add_transform( - core.LoessTransform( - loess=loess, on=on, bandwidth=bandwidth, groupby=groupby, **{"as": as_} - ) - ) - - def transform_lookup( - self, - lookup=Undefined, - from_=Undefined, - as_=Undefined, - default=Undefined, - **kwargs, - ) -> Self: - """Add a :class:`DataLookupTransform` or :class:`SelectionLookupTransform` to the chart - - Parameters - ---------- - lookup : string - Key in primary data source. - from_ : anyOf(:class:`LookupData`, :class:`LookupSelection`) - Secondary data reference. - as_ : anyOf(string, List(string)) - The output fields on which to store the looked up data values. - - For data lookups, this property may be left blank if ``from_.fields`` - has been specified (those field names will be used); if ``from_.fields`` - has not been specified, ``as_`` must be a string. - - For selection lookups, this property is optional: if unspecified, - looked up values will be stored under a property named for the selection; - and if specified, it must correspond to ``from_.fields``. - default : string - The default value to use if lookup fails. **Default value:** ``null`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.DataLookupTransform : underlying transform object - alt.SelectionLookupTransform : underlying transform object - """ - if as_ is not Undefined: - if "as" in kwargs: - raise ValueError( - "transform_lookup: both 'as_' and 'as' passed as arguments." - ) - kwargs["as"] = as_ - if from_ is not Undefined: - if "from" in kwargs: - raise ValueError( - "transform_lookup: both 'from_' and 'from' passed as arguments." - ) - kwargs["from"] = from_ - kwargs["lookup"] = lookup - kwargs["default"] = default - return self._add_transform(core.LookupTransform(**kwargs)) - - def transform_pivot( - self, - pivot, - value, - groupby=Undefined, - limit=Undefined, - op=Undefined, - ) -> Self: - """Add a :class:`PivotTransform` to the chart. - - Parameters - ---------- - pivot : str - The data field to pivot on. The unique values of this field become new field names - in the output stream. - value : str - The data field to populate pivoted fields. The aggregate values of this field become - the values of the new pivoted fields. - groupby : List(str) - The optional data fields to group by. If not specified, a single group containing - all data objects will be used. - limit : float - An optional parameter indicating the maximum number of pivoted fields to generate. - The default ( ``0`` ) applies no limit. The pivoted ``pivot`` names are sorted in - ascending order prior to enforcing the limit. - **Default value:** ``0`` - op : string - The aggregation operation to apply to grouped ``value`` field values. - **Default value:** ``sum`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_fold : fold transform - opposite of pivot. - alt.PivotTransform : underlying transform object - """ - return self._add_transform( - core.PivotTransform( - pivot=pivot, value=value, groupby=groupby, limit=limit, op=op - ) - ) - - def transform_quantile( - self, - quantile, - as_=Undefined, - groupby=Undefined, - probs=Undefined, - step=Undefined, - ) -> Self: - """Add a :class:`QuantileTransform` to the chart - - Parameters - ---------- - quantile : str - The data field for which to perform quantile estimation. - as : [str, str] - The output field names for the probability and quantile values. - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - probs : List(float) - An array of probabilities in the range (0, 1) for which to compute quantile values. - If not specified, the *step* parameter will be used. - step : float - A probability step size (default 0.01) for sampling quantile values. All values from - one-half the step size up to 1 (exclusive) will be sampled. This parameter is only - used if the *probs* parameter is not provided. **Default value:** ``["prob", "value"]`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.QuantileTransform : underlying transform object - """ - return self._add_transform( - core.QuantileTransform( - quantile=quantile, - groupby=groupby, - probs=probs, - step=step, - **{"as": as_}, - ) - ) - - def transform_regression( - self, - on, - regression, - as_=Undefined, - extent=Undefined, - groupby=Undefined, - method=Undefined, - order=Undefined, - params=Undefined, - ) -> Self: - """Add a :class:`RegressionTransform` to the chart. - - Parameters - ---------- - on : str - The data field of the independent variable to use a predictor. - regression : str - The data field of the dependent variable to predict. - as_ : [str, str] - The output field names for the smoothed points generated by the regression - transform. **Default value:** The field names of the input x and y values. - extent : [float, float] - A [min, max] domain over the independent (x) field for the starting and ending - points of the generated trend line. - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - method : enum('linear', 'log', 'exp', 'pow', 'quad', 'poly') - The functional form of the regression model. One of ``"linear"``, ``"log"``, - ``"exp"``, ``"pow"``, ``"quad"``, or ``"poly"``. **Default value:** ``"linear"`` - order : float - The polynomial order (number of coefficients) for the 'poly' method. - **Default value:** ``3`` - params : boolean - A boolean flag indicating if the transform should return the regression model - parameters (one object per group), rather than trend line points. - The resulting objects include a ``coef`` array of fitted coefficient values - (starting with the intercept term and then including terms of increasing order) - and an ``rSquared`` value (indicating the total variance explained by the model). - **Default value:** ``false`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_loess : LOESS transform - alt.RegressionTransform : underlying transform object - """ - return self._add_transform( - core.RegressionTransform( - regression=regression, - on=on, - extent=extent, - groupby=groupby, - method=method, - order=order, - params=params, - **{"as": as_}, - ) - ) - - def transform_sample(self, sample=1000) -> Self: - """ - Add a :class:`SampleTransform` to the schema. - - Parameters - ---------- - sample : float - The maximum number of data objects to include in the sample. Default: 1000. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.SampleTransform : underlying transform object - """ - return self._add_transform(core.SampleTransform(sample)) - - def transform_stack( - self, as_, stack, groupby, offset=Undefined, sort=Undefined - ) -> Self: - """ - Add a :class:`StackTransform` to the schema. - - Parameters - ---------- - as_ : anyOf(string, List(string)) - Output field names. This can be either a string or an array of strings with - two elements denoting the name for the fields for stack start and stack end - respectively. - If a single string(eg."val") is provided, the end field will be "val_end". - stack : string - The field which is stacked. - groupby : List(string) - The data fields to group by. - offset : enum('zero', 'center', 'normalize') - Mode for stacking marks. Default: 'zero'. - sort : List(:class:`SortField`) - Field that determines the order of leaves in the stacked charts. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.StackTransform : underlying transform object - """ - return self._add_transform( - core.StackTransform( - stack=stack, groupby=groupby, offset=offset, sort=sort, **{"as": as_} - ) - ) - - def transform_timeunit( - self, - as_=Undefined, - field=Undefined, - timeUnit=Undefined, - **kwargs, - ) -> Self: - """ - Add a :class:`TimeUnitTransform` to the schema. - - Parameters - ---------- - as_ : string - The output field to write the timeUnit value. - field : string - The data field to apply time unit. - timeUnit : :class:`TimeUnit` - The timeUnit. - **kwargs - transforms can also be passed by keyword argument; see Examples - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> from altair import datum, expr - - >>> chart = alt.Chart().transform_timeunit(month='month(date)') - >>> chart.transform[0] - TimeUnitTransform({ - as: 'month', - field: 'date', - timeUnit: 'month' - }) - - It's also possible to pass the ``TimeUnitTransform`` arguments directly; - this is most useful in cases where the desired field name is not a - valid python identifier: - - >>> kwds = {'as': 'month', 'timeUnit': 'month', 'field': 'The Month'} - >>> chart = alt.Chart().transform_timeunit(**kwds) - >>> chart.transform[0] - TimeUnitTransform({ - as: 'month', - field: 'The Month', - timeUnit: 'month' - }) - - As the first form is easier to write and understand, that is the - recommended method. - - See Also - -------- - alt.TimeUnitTransform : underlying transform object - - """ - if as_ is Undefined: - as_ = kwargs.pop("as", Undefined) - else: - if "as" in kwargs: - raise ValueError( - "transform_timeunit: both 'as_' and 'as' passed as arguments." - ) - if as_ is not Undefined: - dct = {"as": as_, "timeUnit": timeUnit, "field": field} - self = self._add_transform(core.TimeUnitTransform(**dct)) - for as_, shorthand in kwargs.items(): - dct = utils.parse_shorthand( - shorthand, - parse_timeunits=True, - parse_aggregates=False, - parse_types=False, - ) - dct.pop("type", None) - dct["as"] = as_ - if "timeUnit" not in dct: - raise ValueError("'{}' must include a valid timeUnit".format(shorthand)) - self = self._add_transform(core.TimeUnitTransform(**dct)) - return self - - def transform_window( - self, - window=Undefined, - frame=Undefined, - groupby=Undefined, - ignorePeers=Undefined, - sort=Undefined, - **kwargs, - ) -> Self: - """Add a :class:`WindowTransform` to the schema - - Parameters - ---------- - window : List(:class:`WindowFieldDef`) - The definition of the fields in the window, and what calculations to use. - frame : List(anyOf(None, float)) - A frame specification as a two-element array indicating how the sliding window - should proceed. The array entries should either be a number indicating the offset - from the current data object, or null to indicate unbounded rows preceding or - following the current data object. The default value is ``[null, 0]``, indicating - that the sliding window includes the current object and all preceding objects. The - value ``[-5, 5]`` indicates that the window should include five objects preceding - and five objects following the current object. Finally, ``[null, null]`` indicates - that the window frame should always include all data objects. The only operators - affected are the aggregation operations and the ``first_value``, ``last_value``, and - ``nth_value`` window operations. The other window operations are not affected by - this. - - **Default value:** : ``[null, 0]`` (includes the current object and all preceding - objects) - groupby : List(string) - The data fields for partitioning the data objects into separate windows. If - unspecified, all data points will be in a single group. - ignorePeers : boolean - Indicates if the sliding window frame should ignore peer values. (Peer values are - those considered identical by the sort criteria). The default is false, causing the - window frame to expand to include all peer values. If set to true, the window frame - will be defined by offset values only. This setting only affects those operations - that depend on the window frame, namely aggregation operations and the first_value, - last_value, and nth_value window operations. - - **Default value:** ``false`` - sort : List(:class:`SortField`) - A sort field definition for sorting data objects within a window. If two data - objects are considered equal by the comparator, they are considered “peer” values of - equal rank. If sort is not specified, the order is undefined: data objects are - processed in the order they are observed and none are considered peers (the - ignorePeers parameter is ignored and treated as if set to ``true`` ). - **kwargs - transforms can also be passed by keyword argument; see Examples - - Examples - -------- - A cumulative line chart - - >>> import altair as alt - >>> import numpy as np - >>> import pandas as pd - >>> data = pd.DataFrame({'x': np.arange(100), - ... 'y': np.random.randn(100)}) - >>> chart = alt.Chart(data).mark_line().encode( - ... x='x:Q', - ... y='ycuml:Q' - ... ).transform_window( - ... ycuml='sum(y)' - ... ) - >>> chart.transform[0] - WindowTransform({ - window: [WindowFieldDef({ - as: 'ycuml', - field: 'y', - op: 'sum' - })] - }) - - """ - if kwargs: - if window is Undefined: - window = [] - for as_, shorthand in kwargs.items(): - kwds = {"as": as_} - kwds.update( - utils.parse_shorthand( - shorthand, - parse_aggregates=False, - parse_window_ops=True, - parse_timeunits=False, - parse_types=False, - ) - ) - window.append(core.WindowFieldDef(**kwds)) - - return self._add_transform( - core.WindowTransform( - window=window, - frame=frame, - groupby=groupby, - ignorePeers=ignorePeers, - sort=sort, - ) - ) - - # Display-related methods - - def _repr_mimebundle_(self, include=None, exclude=None): - """Return a MIME bundle for display in Jupyter frontends.""" - # Catch errors explicitly to get around issues in Jupyter frontend - # see https://github.com/ipython/ipython/issues/11038 - try: - dct = self.to_dict(context={"pre_transform": False}) - except Exception: - utils.display_traceback(in_ipython=True) - return {} - else: - return renderers.get()(dct) - - def display(self, renderer=Undefined, theme=Undefined, actions=Undefined, **kwargs): - """Display chart in Jupyter notebook or JupyterLab - - Parameters are passed as options to vega-embed within supported frontends. - See https://github.com/vega/vega-embed#options for details. - - Parameters - ---------- - renderer : string ('canvas' or 'svg') - The renderer to use - theme : string - The Vega theme name to use; see https://github.com/vega/vega-themes - actions : bool or dict - Specify whether action links ("Open In Vega Editor", etc.) are - included in the view. - **kwargs : - Additional parameters are also passed to vega-embed as options. - - """ - from IPython.display import display - - if renderer is not Undefined: - kwargs["renderer"] = renderer - if theme is not Undefined: - kwargs["theme"] = theme - if actions is not Undefined: - kwargs["actions"] = actions - - if kwargs: - options = renderers.options.copy() - options["embed_options"] = options.get("embed_options", {}).copy() - options["embed_options"].update(kwargs) - with renderers.enable(**options): - display(self) - else: - display(self) - - @utils.deprecation.deprecated(message="'serve' is deprecated. Use 'show' instead.") - def serve( - self, - ip="127.0.0.1", - port=8888, - n_retries=50, - files=None, - jupyter_warning=True, - open_browser=True, - http_server=None, - **kwargs, - ): - """ - 'serve' is deprecated. Use 'show' instead. - - Open a browser window and display a rendering of the chart - - Parameters - ---------- - html : string - HTML to serve - ip : string (default = '127.0.0.1') - ip address at which the HTML will be served. - port : int (default = 8888) - the port at which to serve the HTML - n_retries : int (default = 50) - the number of nearby ports to search if the specified port - is already in use. - files : dictionary (optional) - dictionary of extra content to serve - jupyter_warning : bool (optional) - if True (default), then print a warning if this is used - within the Jupyter notebook - open_browser : bool (optional) - if True (default), then open a web browser to the given HTML - http_server : class (optional) - optionally specify an HTTPServer class to use for showing the - figure. The default is Python's basic HTTPServer. - **kwargs : - additional keyword arguments passed to the save() method - - """ - from ...utils.server import serve - - html = io.StringIO() - self.save(html, format="html", **kwargs) - html.seek(0) - - serve( - html.read(), - ip=ip, - port=port, - n_retries=n_retries, - files=files, - jupyter_warning=jupyter_warning, - open_browser=open_browser, - http_server=http_server, - ) - - def show(self, embed_opt=None, open_browser=None): - """Show the chart in an external browser window. - - This requires a recent version of the altair_viewer package. - - Parameters - ---------- - embed_opt : dict (optional) - The Vega embed options that control the dispay of the chart. - open_browser : bool (optional) - Specify whether a browser window should be opened. If not specified, - a browser window will be opened only if the server is not already - connected to a browser. - """ - try: - import altair_viewer - except ImportError as err: - raise ValueError( - "'show' method requires the altair_viewer package. " - "See http://github.com/altair-viz/altair_viewer" - ) from err - altair_viewer.show(self, embed_opt=embed_opt, open_browser=open_browser) - - @utils.use_signature(core.Resolve) - def _set_resolve(self, **kwargs): - """Copy the chart and update the resolve property with kwargs""" - if not hasattr(self, "resolve"): - raise ValueError( - "{} object has no attribute " "'resolve'".format(self.__class__) - ) - copy = self.copy(deep=["resolve"]) - if copy.resolve is Undefined: - copy.resolve = core.Resolve() - for key, val in kwargs.items(): - copy.resolve[key] = val - return copy - - @utils.use_signature(core.AxisResolveMap) - def resolve_axis(self, *args, **kwargs) -> Self: - return self._set_resolve(axis=core.AxisResolveMap(*args, **kwargs)) - - @utils.use_signature(core.LegendResolveMap) - def resolve_legend(self, *args, **kwargs) -> Self: - return self._set_resolve(legend=core.LegendResolveMap(*args, **kwargs)) - - @utils.use_signature(core.ScaleResolveMap) - def resolve_scale(self, *args, **kwargs) -> Self: - return self._set_resolve(scale=core.ScaleResolveMap(*args, **kwargs)) - - -class _EncodingMixin: - @utils.use_signature(core.FacetedEncoding) - def encode(self, *args, **kwargs) -> Self: - # Convert args to kwargs based on their types. - kwargs = utils.infer_encoding_types(args, kwargs, channels) - - # get a copy of the dict representation of the previous encoding - # ignore type as copy method comes from SchemaBase - copy = self.copy(deep=["encoding"]) # type: ignore[attr-defined] - encoding = copy._get("encoding", {}) - if isinstance(encoding, core.VegaLiteSchema): - encoding = {k: v for k, v in encoding._kwds.items() if v is not Undefined} - - # update with the new encodings, and apply them to the copy - encoding.update(kwargs) - copy.encoding = core.FacetedEncoding(**encoding) - return copy - - def facet( - self, - facet=Undefined, - row=Undefined, - column=Undefined, - data=Undefined, - columns=Undefined, - **kwargs, - ) -> "FacetChart": - """Create a facet chart from the current chart. - - Faceted charts require data to be specified at the top level; if data - is not specified, the data from the current chart will be used at the - top level. - - Parameters - ---------- - facet : string or alt.Facet (optional) - The data column to use as an encoding for a wrapped facet. - If specified, then neither row nor column may be specified. - column : string or alt.Column (optional) - The data column to use as an encoding for a column facet. - May be combined with row argument, but not with facet argument. - row : string or alt.Column (optional) - The data column to use as an encoding for a row facet. - May be combined with column argument, but not with facet argument. - data : string or dataframe (optional) - The dataset to use for faceting. If not supplied, then data must - be specified in the top-level chart that calls this method. - columns : integer - the maximum number of columns for a wrapped facet. - - Returns - ------- - self : - for chaining - """ - facet_specified = facet is not Undefined - rowcol_specified = row is not Undefined or column is not Undefined - - if facet_specified and rowcol_specified: - raise ValueError( - "facet argument cannot be combined with row/column argument." - ) - - # Remove "ignore" statement once Undefined is no longer typed as Any - if data is Undefined: - # Remove "ignore" statement once Undefined is no longer typed as Any - if self.data is Undefined: # type: ignore - raise ValueError( - "Facet charts require data to be specified at the top level." - ) - # ignore type as copy comes from another class - self = self.copy(deep=False) # type: ignore[attr-defined] - # Remove "ignore" statement once Undefined is no longer typed as Any - data, self.data = self.data, Undefined # type: ignore - - if facet_specified: - if isinstance(facet, str): - facet = channels.Facet(facet) - else: - facet = FacetMapping(row=row, column=column) - - return FacetChart(spec=self, facet=facet, data=data, columns=columns, **kwargs) - - -class Chart( - TopLevelMixin, _EncodingMixin, mixins.MarkMethodMixin, core.TopLevelUnitSpec -): - """Create a basic Altair/Vega-Lite chart. - - Although it is possible to set all Chart properties as constructor attributes, - it is more idiomatic to use methods such as ``mark_point()``, ``encode()``, - ``transform_filter()``, ``properties()``, etc. See Altair's documentation - for details and examples: http://altair-viz.github.io/. - - Parameters - ---------- - data : Data - An object describing the data source - mark : AnyMark - A string describing the mark type (one of `"bar"`, `"circle"`, `"square"`, `"tick"`, - `"line"`, * `"area"`, `"point"`, `"rule"`, `"geoshape"`, and `"text"`) or a - MarkDef object. - encoding : FacetedEncoding - A key-value mapping between encoding channels and definition of fields. - autosize : anyOf(AutosizeType, AutoSizeParams) - Sets how the visualization size should be determined. If a string, should be one of - `"pad"`, `"fit"` or `"none"`. Object values can additionally specify parameters for - content sizing and automatic resizing. `"fit"` is only supported for single and - layered views that don't use `rangeStep`. Default value: `pad` - background : string - CSS color property to use as the background of visualization. - - **Default value:** none (transparent) - config : Config - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - description : string - Description of this mark for commenting purpose. - height : float - The height of a visualization. - name : string - Name of the visualization for later reference. - padding : Padding - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. If an - object, the value should have the format `{"left": 5, "top": 5, "right": 5, - "bottom": 5}` to specify padding for each side of the visualization. Default - value: `5` - projection : Projection - An object defining properties of geographic projection. Works with `"geoshape"` - marks and `"point"` or `"line"` marks that have a channel (one or more of `"X"`, - `"X2"`, `"Y"`, `"Y2"`) with type `"latitude"`, or `"longitude"`. - selection : Mapping(required=[]) - A key-value mapping between selection names and definitions. - title : anyOf(string, TitleParams) - Title for the plot. - transform : List(Transform) - An array of data transformations such as filter and new field calculation. - width : float - The width of a visualization. - """ - - def __init__( - self, - data=Undefined, - encoding=Undefined, - mark=Undefined, - width=Undefined, - height=Undefined, - **kwargs, - ): - super(Chart, self).__init__( - data=data, - encoding=encoding, - mark=mark, - width=width, - height=height, - **kwargs, - ) - - _counter = 0 - - @classmethod - def _get_name(cls): - cls._counter += 1 - return f"view_{cls._counter}" - - @classmethod - def from_dict(cls, dct, validate=True) -> core.SchemaBase: # type: ignore[override] # Not the same signature as SchemaBase.from_dict. Would ideally be aligned in the future - """Construct class from a dictionary representation - - Parameters - ---------- - dct : dictionary - The dict from which to construct the class - validate : boolean - If True (default), then validate the input against the schema. - - Returns - ------- - obj : Chart object - The wrapped schema - - Raises - ------ - jsonschema.ValidationError : - if validate=True and dct does not conform to the schema - """ - for class_ in TopLevelMixin.__subclasses__(): - if class_ is Chart: - class_ = cast(TypingType[TopLevelMixin], super(Chart, cls)) - try: - # TopLevelMixin classes don't necessarily have from_dict defined - # but all classes which are used here have due to how Altair is - # designed. Too complex to type check right now. - return class_.from_dict(dct, validate=validate) # type: ignore[attr-defined] - except jsonschema.ValidationError: - pass - - # As a last resort, try using the Root vegalite object - return core.Root.from_dict(dct, validate) - - def to_dict( - self, - validate: bool = True, - *, - format: str = "vega-lite", - ignore: Optional[List[str]] = None, - context: Optional[TypingDict[str, Any]] = None, - ) -> dict: - """Convert the chart to a dictionary suitable for JSON export - - Parameters - ---------- - validate : bool, optional - If True (default), then validate the output dictionary - against the schema. - format : str, optional - Chart specification format, one of "vega-lite" (default) or "vega" - ignore : list[str], optional - A list of keys to ignore. It is usually not needed - to specify this argument as a user. - context : dict[str, Any], optional - A context dictionary. It is usually not needed - to specify this argument as a user. - - Notes - ----- - Technical: The ignore parameter will *not* be passed to child to_dict - function calls. - - Returns - ------- - dict - The dictionary representation of this chart - - Raises - ------ - SchemaValidationError - if validate=True and the dict does not conform to the schema - """ - context = context or {} - if self.data is Undefined and "data" not in context: - # No data specified here or in parent: inject empty data - # for easier specification of datum encodings. - copy = self.copy(deep=False) - copy.data = core.InlineData(values=[{}]) - return super(Chart, copy).to_dict( - validate=validate, format=format, ignore=ignore, context=context - ) - return super().to_dict( - validate=validate, format=format, ignore=ignore, context=context - ) - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> Optional[_DataFrameLike]: - """Evaluate a Chart's transforms - - Evaluate the data transforms associated with a Chart and return the - transformed data a DataFrame - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - DataFrame - Transformed data as a DataFrame - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params: - return self - copy = self.copy(deep=["params"]) - if copy.params is Undefined: - copy.params = [] - - for s in params: - copy.params.append(s.param) - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *params) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*params) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - -def _check_if_valid_subspec(spec, classname): - """Check if the spec is a valid sub-spec. - - If it is not, then raise a ValueError - """ - err = ( - 'Objects with "{0}" attribute cannot be used within {1}. ' - "Consider defining the {0} attribute in the {1} object instead." - ) - - if not isinstance(spec, (core.SchemaBase, dict)): - raise ValueError("Only chart objects can be used in {0}.".format(classname)) - for attr in TOPLEVEL_ONLY_KEYS: - if isinstance(spec, core.SchemaBase): - val = getattr(spec, attr, Undefined) - else: - val = spec.get(attr, Undefined) - if val is not Undefined: - raise ValueError(err.format(attr, classname)) - - -def _check_if_can_be_layered(spec): - """Check if the spec can be layered.""" - - def _get(spec, attr): - if isinstance(spec, core.SchemaBase): - return spec._get(attr) - else: - return spec.get(attr, Undefined) - - encoding = _get(spec, "encoding") - if encoding is not Undefined: - for channel in ["row", "column", "facet"]: - if _get(encoding, channel) is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, (Chart, LayerChart)): - return - - if not isinstance(spec, (core.SchemaBase, dict)): - raise ValueError("Only chart objects can be layered.") - if _get(spec, "facet") is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, FacetChart) or _get(spec, "facet") is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, RepeatChart) or _get(spec, "repeat") is not Undefined: - raise ValueError( - "Repeat charts cannot be layered. Instead, layer the charts before repeating." - ) - if isinstance(spec, ConcatChart) or _get(spec, "concat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - if isinstance(spec, HConcatChart) or _get(spec, "hconcat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - if isinstance(spec, VConcatChart) or _get(spec, "vconcat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - - -class RepeatChart(TopLevelMixin, core.TopLevelRepeatSpec): - """A chart repeated across rows and columns with small changes""" - - # Because TopLevelRepeatSpec is defined as a union as of Vega-Lite schema 4.9, - # we set the arguments explicitly here. - # TODO: Should we instead use tools/schemapi/codegen.get_args? - @utils.use_signature(core.TopLevelRepeatSpec) - def __init__( - self, - repeat=Undefined, - spec=Undefined, - align=Undefined, - autosize=Undefined, - background=Undefined, - bounds=Undefined, - center=Undefined, - columns=Undefined, - config=Undefined, - data=Undefined, - datasets=Undefined, - description=Undefined, - name=Undefined, - padding=Undefined, - params=Undefined, - resolve=Undefined, - spacing=Undefined, - title=Undefined, - transform=Undefined, - usermeta=Undefined, - **kwds, - ): - _check_if_valid_subspec(spec, "RepeatChart") - _spec_as_list = [spec] - params, _spec_as_list = _combine_subchart_params(params, _spec_as_list) - spec = _spec_as_list[0] - if isinstance(spec, (Chart, LayerChart)): - params = _repeat_names(params, repeat, spec) - super(RepeatChart, self).__init__( - repeat=repeat, - spec=spec, - align=align, - autosize=autosize, - background=background, - bounds=bounds, - center=center, - columns=columns, - config=config, - data=data, - datasets=datasets, - description=description, - name=name, - padding=padding, - params=params, - resolve=resolve, - spacing=spacing, - title=title, - transform=transform, - usermeta=usermeta, - **kwds, - ) - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> Optional[_DataFrameLike]: - """Evaluate a RepeatChart's transforms - - Evaluate the data transforms associated with a RepeatChart and return the - transformed data a DataFrame - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Raises - ------ - NotImplementedError - RepeatChart does not yet support transformed_data - """ - raise NotImplementedError( - "transformed_data is not yet implemented for RepeatChart" - ) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - copy = self.copy(deep=False) - copy.spec = copy.spec.interactive(name=name, bind_x=bind_x, bind_y=bind_y) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or self.spec is Undefined: - return self - copy = self.copy() - copy.spec = copy.spec.add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def repeat(repeater="repeat"): - """Tie a channel to the row or column within a repeated chart - - The output of this should be passed to the ``field`` attribute of - a channel. - - Parameters - ---------- - repeater : {'row'|'column'|'repeat'|'layer'} - The repeater to tie the field to. Default is 'repeat'. - - Returns - ------- - repeat : RepeatRef object - """ - if repeater not in ["row", "column", "repeat", "layer"]: - raise ValueError("repeater must be one of ['row', 'column', 'repeat', 'layer']") - return core.RepeatRef(repeat=repeater) - - -class ConcatChart(TopLevelMixin, core.TopLevelConcatSpec): - """A chart with horizontally-concatenated facets""" - - @utils.use_signature(core.TopLevelConcatSpec) - def __init__(self, data=Undefined, concat=(), columns=Undefined, **kwargs): - # TODO: move common data to top level? - for spec in concat: - _check_if_valid_subspec(spec, "ConcatChart") - super(ConcatChart, self).__init__( - data=data, concat=list(concat), columns=columns, **kwargs - ) - self.data, self.concat = _combine_subchart_data(self.data, self.concat) - self.params, self.concat = _combine_subchart_params(self.params, self.concat) - - def __ior__(self, other): - _check_if_valid_subspec(other, "ConcatChart") - self.concat.append(other) - self.data, self.concat = _combine_subchart_data(self.data, self.concat) - self.params, self.concat = _combine_subchart_params(self.params, self.concat) - return self - - def __or__(self, other): - copy = self.copy(deep=["concat"]) - copy |= other - return copy - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a ConcatChart's transforms - - Evaluate the data transforms associated with a ConcatChart and return the - transformed data for each subplot as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each subplot as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.concat: - return self - copy = self.copy() - copy.concat = [chart.add_params(*params) for chart in copy.concat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def concat(*charts, **kwargs): - """Concatenate charts horizontally""" - return ConcatChart(concat=charts, **kwargs) - - -class HConcatChart(TopLevelMixin, core.TopLevelHConcatSpec): - """A chart with horizontally-concatenated facets""" - - @utils.use_signature(core.TopLevelHConcatSpec) - def __init__(self, data=Undefined, hconcat=(), **kwargs): - # TODO: move common data to top level? - for spec in hconcat: - _check_if_valid_subspec(spec, "HConcatChart") - super(HConcatChart, self).__init__(data=data, hconcat=list(hconcat), **kwargs) - self.data, self.hconcat = _combine_subchart_data(self.data, self.hconcat) - self.params, self.hconcat = _combine_subchart_params(self.params, self.hconcat) - - def __ior__(self, other): - _check_if_valid_subspec(other, "HConcatChart") - self.hconcat.append(other) - self.data, self.hconcat = _combine_subchart_data(self.data, self.hconcat) - self.params, self.hconcat = _combine_subchart_params(self.params, self.hconcat) - return self - - def __or__(self, other): - copy = self.copy(deep=["hconcat"]) - copy |= other - return copy - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a HConcatChart's transforms - - Evaluate the data transforms associated with a HConcatChart and return the - transformed data for each subplot as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each subplot as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.hconcat: - return self - copy = self.copy() - copy.hconcat = [chart.add_params(*params) for chart in copy.hconcat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def hconcat(*charts, **kwargs): - """Concatenate charts horizontally""" - return HConcatChart(hconcat=charts, **kwargs) - - -class VConcatChart(TopLevelMixin, core.TopLevelVConcatSpec): - """A chart with vertically-concatenated facets""" - - @utils.use_signature(core.TopLevelVConcatSpec) - def __init__(self, data=Undefined, vconcat=(), **kwargs): - # TODO: move common data to top level? - for spec in vconcat: - _check_if_valid_subspec(spec, "VConcatChart") - super(VConcatChart, self).__init__(data=data, vconcat=list(vconcat), **kwargs) - self.data, self.vconcat = _combine_subchart_data(self.data, self.vconcat) - self.params, self.vconcat = _combine_subchart_params(self.params, self.vconcat) - - def __iand__(self, other): - _check_if_valid_subspec(other, "VConcatChart") - self.vconcat.append(other) - self.data, self.vconcat = _combine_subchart_data(self.data, self.vconcat) - self.params, self.vconcat = _combine_subchart_params(self.params, self.vconcat) - return self - - def __and__(self, other): - copy = self.copy(deep=["vconcat"]) - copy &= other - return copy - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a VConcatChart's transforms - - Evaluate the data transforms associated with a VConcatChart and return the - transformed data for each subplot as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each subplot as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.vconcat: - return self - copy = self.copy() - copy.vconcat = [chart.add_params(*params) for chart in copy.vconcat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def vconcat(*charts, **kwargs): - """Concatenate charts vertically""" - return VConcatChart(vconcat=charts, **kwargs) - - -class LayerChart(TopLevelMixin, _EncodingMixin, core.TopLevelLayerSpec): - """A Chart with layers within a single panel""" - - @utils.use_signature(core.TopLevelLayerSpec) - def __init__(self, data=Undefined, layer=(), **kwargs): - # TODO: move common data to top level? - # TODO: check for conflicting interaction - for spec in layer: - _check_if_valid_subspec(spec, "LayerChart") - _check_if_can_be_layered(spec) - super(LayerChart, self).__init__(data=data, layer=list(layer), **kwargs) - self.data, self.layer = _combine_subchart_data(self.data, self.layer) - # Currently (Vega-Lite 5.5) the same param can't occur on two layers - self.layer = _remove_duplicate_params(self.layer) - self.params, self.layer = _combine_subchart_params(self.params, self.layer) - - # Some properties are not allowed within layer; we'll move to parent. - layer_props = ("height", "width", "view") - combined_dict, self.layer = _remove_layer_props(self, self.layer, layer_props) - - for prop in combined_dict: - self[prop] = combined_dict[prop] - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a LayerChart's transforms - - Evaluate the data transforms associated with a LayerChart and return the - transformed data for each layer as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each layer as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def __iadd__(self, other): - _check_if_valid_subspec(other, "LayerChart") - _check_if_can_be_layered(other) - self.layer.append(other) - self.data, self.layer = _combine_subchart_data(self.data, self.layer) - self.params, self.layer = _combine_subchart_params(self.params, self.layer) - return self - - def __add__(self, other): - copy = self.copy(deep=["layer"]) - copy += other - return copy - - def add_layers(self, *layers) -> Self: - copy = self.copy(deep=["layer"]) - for layer in layers: - copy += layer - return copy - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - if not self.layer: - raise ValueError( - "LayerChart: cannot call interactive() until a " "layer is defined" - ) - copy = self.copy(deep=["layer"]) - copy.layer[0] = copy.layer[0].interactive( - name=name, bind_x=bind_x, bind_y=bind_y - ) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.layer: - return self - copy = self.copy() - copy.layer[0] = copy.layer[0].add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def layer(*charts, **kwargs): - """layer multiple charts""" - return LayerChart(layer=charts, **kwargs) - - -class FacetChart(TopLevelMixin, core.TopLevelFacetSpec): - """A Chart with layers within a single panel""" - - @utils.use_signature(core.TopLevelFacetSpec) - def __init__( - self, - data=Undefined, - spec=Undefined, - facet=Undefined, - params=Undefined, - **kwargs, - ): - _check_if_valid_subspec(spec, "FacetChart") - _spec_as_list = [spec] - params, _spec_as_list = _combine_subchart_params(params, _spec_as_list) - spec = _spec_as_list[0] - super(FacetChart, self).__init__( - data=data, spec=spec, facet=facet, params=params, **kwargs - ) - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> Optional[_DataFrameLike]: - """Evaluate a FacetChart's transforms - - Evaluate the data transforms associated with a FacetChart and return the - transformed data a DataFrame - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - DataFrame - Transformed data as a DataFrame - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - copy = self.copy(deep=False) - copy.spec = copy.spec.interactive(name=name, bind_x=bind_x, bind_y=bind_y) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or self.spec is Undefined: - return self - copy = self.copy() - copy.spec = copy.spec.add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def topo_feature(url, feature, **kwargs): - """A convenience function for extracting features from a topojson url - - Parameters - ---------- - url : string - An URL from which to load the data set. - - feature : string - The name of the TopoJSON object set to convert to a GeoJSON feature collection. For - example, in a map of the world, there may be an object set named `"countries"`. - Using the feature property, we can extract this set and generate a GeoJSON feature - object for each country. - - **kwargs : - additional keywords passed to TopoDataFormat - """ - return core.UrlData( - url=url, format=core.TopoDataFormat(type="topojson", feature=feature, **kwargs) - ) - - -def _combine_subchart_data(data, subcharts): - def remove_data(subchart): - if subchart.data is not Undefined: - subchart = subchart.copy() - subchart.data = Undefined - return subchart - - if not subcharts: - # No subcharts = nothing to do. - pass - elif data is Undefined: - # Top level has no data; all subchart data must - # be identical to proceed. - subdata = subcharts[0].data - if subdata is not Undefined and all(c.data is subdata for c in subcharts): - data = subdata - subcharts = [remove_data(c) for c in subcharts] - else: - # Top level has data; subchart data must be either - # undefined or identical to proceed. - if all(c.data is Undefined or c.data is data for c in subcharts): - subcharts = [remove_data(c) for c in subcharts] - - return data, subcharts - - -def _viewless_dict(param): - d = param.to_dict() - d.pop("views", None) - return d - - -def _needs_name(subchart): - # Only `Chart` objects need a name - if (subchart.name is not Undefined) or (not isinstance(subchart, Chart)): - return False - - # Variable parameters won't receive a views property. - if all(isinstance(p, core.VariableParameter) for p in subchart.params): - return False - - return True - - -# Convert SelectionParameters to TopLevelSelectionParameters with a views property. -def _prepare_to_lift(param): - param = param.copy() - - if isinstance(param, core.VariableParameter): - return param - - if isinstance(param, core.SelectionParameter): - return core.TopLevelSelectionParameter(**param.to_dict(), views=[]) - - if param.views is Undefined: - param.views = [] - - return param - - -def _remove_duplicate_params(layer): - subcharts = [subchart.copy() for subchart in layer] - found_params = [] - - for subchart in subcharts: - if (not hasattr(subchart, "params")) or (subchart.params is Undefined): - continue - - params = [] - - # Ensure the same selection parameter doesn't appear twice - for param in subchart.params: - if isinstance(param, core.VariableParameter): - params.append(param) - continue - - p = param.copy() - pd = _viewless_dict(p) - - if pd not in found_params: - params.append(p) - found_params.append(pd) - - if len(params) == 0: - subchart.params = Undefined - else: - subchart.params = params - - return subcharts - - -def _combine_subchart_params(params, subcharts): - if params is Undefined: - params = [] - - # List of triples related to params, (param, dictionary minus views, views) - param_info = [] - - # Put parameters already found into `param_info` list. - for param in params: - p = _prepare_to_lift(param) - param_info.append( - ( - p, - _viewless_dict(p), - [] if isinstance(p, core.VariableParameter) else p.views, - ) - ) - - subcharts = [subchart.copy() for subchart in subcharts] - - for subchart in subcharts: - if (not hasattr(subchart, "params")) or (subchart.params is Undefined): - continue - - if _needs_name(subchart): - subchart.name = subchart._get_name() - - for param in subchart.params: - p = _prepare_to_lift(param) - pd = _viewless_dict(p) - - dlist = [d for _, d, _ in param_info] - found = pd in dlist - - if isinstance(p, core.VariableParameter) and found: - continue - - if isinstance(p, core.VariableParameter) and not found: - param_info.append((p, pd, [])) - continue - - # At this stage in the loop, p must be a TopLevelSelectionParameter. - - if isinstance(subchart, Chart) and (subchart.name not in p.views): - p.views.append(subchart.name) - - if found: - i = dlist.index(pd) - _, _, old_views = param_info[i] - new_views = [v for v in p.views if v not in old_views] - old_views += new_views - else: - param_info.append((p, pd, p.views)) - - subchart.params = Undefined - - for p, _, v in param_info: - if len(v) > 0: - p.views = v - - subparams = [p for p, _, _ in param_info] - - if len(subparams) == 0: - subparams = Undefined - - return subparams, subcharts - - -def _get_repeat_strings(repeat): - if isinstance(repeat, list): - return repeat - elif isinstance(repeat, core.LayerRepeatMapping): - klist = ["row", "column", "layer"] - elif isinstance(repeat, core.RepeatMapping): - klist = ["row", "column"] - rclist = [k for k in klist if repeat[k] is not Undefined] - rcstrings = [[f"{k}_{v}" for v in repeat[k]] for k in rclist] - return ["".join(s) for s in itertools.product(*rcstrings)] - - -def _extend_view_name(v, r, spec): - # prevent the same extension from happening more than once - if isinstance(spec, Chart): - if v.endswith("child__" + r): - return v - else: - return f"{v}_child__{r}" - elif isinstance(spec, LayerChart): - if v.startswith("child__" + r): - return v - else: - return f"child__{r}_{v}" - - -def _repeat_names(params, repeat, spec): - if params is Undefined: - return params - - repeat = _get_repeat_strings(repeat) - params_named = [] - - for param in params: - if not isinstance(param, core.TopLevelSelectionParameter): - params_named.append(param) - continue - p = param.copy() - views = [] - repeat_strings = _get_repeat_strings(repeat) - for v in param.views: - if isinstance(spec, Chart): - if any(v.endswith(f"child__{r}") for r in repeat_strings): - views.append(v) - else: - views += [_extend_view_name(v, r, spec) for r in repeat_strings] - elif isinstance(spec, LayerChart): - if any(v.startswith(f"child__{r}") for r in repeat_strings): - views.append(v) - else: - views += [_extend_view_name(v, r, spec) for r in repeat_strings] - - p.views = views - params_named.append(p) - - return params_named - - -def _remove_layer_props(chart, subcharts, layer_props): - def remove_prop(subchart, prop): - # If subchart is a UnitSpec, then subchart["height"] raises a KeyError - try: - if subchart[prop] is not Undefined: - subchart = subchart.copy() - subchart[prop] = Undefined - except KeyError: - pass - return subchart - - output_dict = {} - - if not subcharts: - # No subcharts = nothing to do. - return output_dict, subcharts - - for prop in layer_props: - if chart[prop] is Undefined: - # Top level does not have this prop. - # Check for consistent props within the subcharts. - values = [] - for c in subcharts: - # If c is a UnitSpec, then c["height"] raises a KeyError. - try: - val = c[prop] - if val is not Undefined: - values.append(val) - except KeyError: - pass - if len(values) == 0: - pass - elif all(v == values[0] for v in values[1:]): - output_dict[prop] = values[0] - else: - raise ValueError(f"There are inconsistent values {values} for {prop}") - else: - # Top level has this prop; subchart must either not have the prop - # or it must be Undefined or identical to proceed. - if all( - getattr(c, prop, Undefined) is Undefined or c[prop] == chart[prop] - for c in subcharts - ): - output_dict[prop] = chart[prop] - else: - raise ValueError(f"There are inconsistent values {values} for {prop}") - subcharts = [remove_prop(c, prop) for c in subcharts] - - return output_dict, subcharts - - -@utils.use_signature(core.SequenceParams) -def sequence(start, stop=None, step=Undefined, as_=Undefined, **kwds): - """Sequence generator.""" - if stop is None: - start, stop = 0, start - params = core.SequenceParams(start=start, stop=stop, step=step, **{"as": as_}) - return core.SequenceGenerator(sequence=params, **kwds) - - -@utils.use_signature(core.GraticuleParams) -def graticule(**kwds): - """Graticule generator.""" - if not kwds: - # graticule: True indicates default parameters - graticule = True - else: - graticule = core.GraticuleParams(**kwds) - return core.GraticuleGenerator(graticule=graticule) - - -def sphere(): - """Sphere generator.""" - return core.SphereGenerator(sphere=True) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/qu2cu/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/qu2cu/__init__.py deleted file mode 100644 index ce357417c7139664a194a6826220889f5ed59894..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/qu2cu/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from .qu2cu import * diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_V_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_V_.py deleted file mode 100644 index d7aec4589c5d83b35b02b8f07c68b6462438e066..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_V_.py +++ /dev/null @@ -1,20 +0,0 @@ -from fontTools.misc.textTools import strjoin, tobytes, tostr -from . import asciiTable - - -class table_T_S_I_V_(asciiTable.asciiTable): - def toXML(self, writer, ttFont): - data = tostr(self.data) - # removing null bytes. XXX needed?? - data = data.split("\0") - data = strjoin(data) - writer.begintag("source") - writer.newline() - writer.write_noindent(data.replace("\r", "\n")) - writer.newline() - writer.endtag("source") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - lines = strjoin(content).split("\n") - self.data = tobytes("\r".join(lines[1:-1])) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/src/svelte-internal.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/src/svelte-internal.ts deleted file mode 100644 index a824d9e73b37da2857fc190065b845f18652009b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/src/svelte-internal.ts +++ /dev/null @@ -1,2 +0,0 @@ -//@ts-ignore -export * from "svelte/internal"; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/number.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/number.py deleted file mode 100644 index aa8088653e10d6685b97d9ac7b64115aa3f502d7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/number.py +++ /dev/null @@ -1,131 +0,0 @@ -"""gr.Number() component.""" - -from __future__ import annotations - -from typing import Any, Callable - -from gradio_client.documentation import document, set_documentation_group - -from gradio.components.base import FormComponent -from gradio.events import Events -from gradio.exceptions import Error - -set_documentation_group("component") - - -@document() -class Number(FormComponent): - """ - Creates a numeric field for user to enter numbers as input or display numeric output. - Preprocessing: passes field value as a {float} or {int} into the function, depending on `precision`. - Postprocessing: expects an {int} or {float} returned from the function and sets field value to it. - Examples-format: a {float} or {int} representing the number's value. - - Demos: tax_calculator, titanic_survival, blocks_simple_squares - """ - - EVENTS = [Events.change, Events.input, Events.submit, Events.focus] - - def __init__( - self, - value: float | Callable | None = None, - *, - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - precision: int | None = None, - minimum: float | None = None, - maximum: float | None = None, - step: float = 1, - ): - """ - Parameters: - value: default value. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, will be editable; if False, editing will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - render: If False, component will not render be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later. - precision: Precision to round input/output to. If set to 0, will round to nearest integer and convert type to int. If None, no rounding happens. - minimum: Minimum value. Only applied when component is used as an input. If a user provides a smaller value, a gr.Error exception is raised by the backend. - maximum: Maximum value. Only applied when component is used as an input. If a user provides a larger value, a gr.Error exception is raised by the backend. - step: The interval between allowed numbers in the component. Can be used along with optional parameters `minimum` and `maximum` to create a range of legal values starting from `minimum` and incrementing according to this parameter. - """ - self.precision = precision - self.minimum = minimum - self.maximum = maximum - self.step = step - - super().__init__( - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - value=value, - ) - - @staticmethod - def _round_to_precision(num: float | int, precision: int | None) -> float | int: - """ - Round to a given precision. - - If precision is None, no rounding happens. If 0, num is converted to int. - - Parameters: - num: Number to round. - precision: Precision to round to. - Returns: - rounded number - """ - if precision is None: - return float(num) - elif precision == 0: - return int(round(num, precision)) - else: - return round(num, precision) - - def preprocess(self, payload: float | None) -> float | None: - if payload is None: - return None - elif self.minimum is not None and payload < self.minimum: - raise Error(f"Value {payload} is less than minimum value {self.minimum}.") - elif self.maximum is not None and payload > self.maximum: - raise Error( - f"Value {payload} is greater than maximum value {self.maximum}." - ) - return self._round_to_precision(payload, self.precision) - - def postprocess(self, value: float | None) -> float | None: - if value is None: - return None - return self._round_to_precision(value, self.precision) - - def api_info(self) -> dict[str, str]: - return {"type": "number"} - - def example_inputs(self) -> Any: - return 3 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_snapshot_download.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_snapshot_download.py deleted file mode 100644 index 72f40ad2ef7e8692a6b4239a481d59f707a0ac12..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_snapshot_download.py +++ /dev/null @@ -1,250 +0,0 @@ -import os -from pathlib import Path -from typing import Dict, List, Literal, Optional, Union - -from tqdm.auto import tqdm as base_tqdm -from tqdm.contrib.concurrent import thread_map - -from .constants import ( - DEFAULT_REVISION, - HF_HUB_ENABLE_HF_TRANSFER, - HUGGINGFACE_HUB_CACHE, - REPO_TYPES, -) -from .file_download import REGEX_COMMIT_HASH, hf_hub_download, repo_folder_name -from .hf_api import HfApi -from .utils import filter_repo_objects, logging, validate_hf_hub_args -from .utils import tqdm as hf_tqdm - - -logger = logging.get_logger(__name__) - - -@validate_hf_hub_args -def snapshot_download( - repo_id: str, - *, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - endpoint: Optional[str] = None, - cache_dir: Union[str, Path, None] = None, - local_dir: Union[str, Path, None] = None, - local_dir_use_symlinks: Union[bool, Literal["auto"]] = "auto", - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Optional[Union[Dict, str]] = None, - proxies: Optional[Dict] = None, - etag_timeout: float = 10, - resume_download: bool = False, - force_download: bool = False, - token: Optional[Union[bool, str]] = None, - local_files_only: bool = False, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - max_workers: int = 8, - tqdm_class: Optional[base_tqdm] = None, -) -> str: - """Download repo files. - - Download a whole snapshot of a repo's files at the specified revision. This is useful when you want all files from - a repo, because you don't know which ones you will need a priori. All files are nested inside a folder in order - to keep their actual filename relative to that folder. You can also filter which files to download using - `allow_patterns` and `ignore_patterns`. - - If `local_dir` is provided, the file structure from the repo will be replicated in this location. You can configure - how you want to move those files: - - If `local_dir_use_symlinks="auto"` (default), files are downloaded and stored in the cache directory as blob - files. Small files (<5MB) are duplicated in `local_dir` while a symlink is created for bigger files. The goal - is to be able to manually edit and save small files without corrupting the cache while saving disk space for - binary files. The 5MB threshold can be configured with the `HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD` - environment variable. - - If `local_dir_use_symlinks=True`, files are downloaded, stored in the cache directory and symlinked in `local_dir`. - This is optimal in term of disk usage but files must not be manually edited. - - If `local_dir_use_symlinks=False` and the blob files exist in the cache directory, they are duplicated in the - local dir. This means disk usage is not optimized. - - Finally, if `local_dir_use_symlinks=False` and the blob files do not exist in the cache directory, then the - files are downloaded and directly placed under `local_dir`. This means if you need to download them again later, - they will be re-downloaded entirely. - - An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly - configured. It is also not possible to filter which files to download when cloning a repository using git. - - Args: - repo_id (`str`): - A user or an organization name and a repo name separated by a `/`. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if downloading from a dataset or space, - `None` or `"model"` if downloading from a model. Default is `None`. - revision (`str`, *optional*): - An optional Git revision id which can be a branch name, a tag, or a - commit hash. - endpoint (`str`, *optional*): - Hugging Face Hub base url. Will default to https://huggingface.co/. Otherwise, one can set the `HF_ENDPOINT` - environment variable. - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - local_dir (`str` or `Path`, *optional*): - If provided, the downloaded files will be placed under this directory, either as symlinks (default) or - regular files (see description for more details). - local_dir_use_symlinks (`"auto"` or `bool`, defaults to `"auto"`): - To be used with `local_dir`. If set to "auto", the cache directory will be used and the file will be either - duplicated or symlinked to the local directory depending on its size. It set to `True`, a symlink will be - created, no matter the file size. If set to `False`, the file will either be duplicated from cache (if - already exists) or downloaded from the Hub and not cached. See description for more details. - library_name (`str`, *optional*): - The name of the library to which the object corresponds. - library_version (`str`, *optional*): - The version of the library. - user_agent (`str`, `dict`, *optional*): - The user-agent info in the form of a dictionary or a string. - proxies (`dict`, *optional*): - Dictionary mapping protocol to the URL of the proxy passed to - `requests.request`. - etag_timeout (`float`, *optional*, defaults to `10`): - When fetching ETag, how many seconds to wait for the server to send - data before giving up which is passed to `requests.request`. - resume_download (`bool`, *optional*, defaults to `False): - If `True`, resume a previously interrupted download. - force_download (`bool`, *optional*, defaults to `False`): - Whether the file should be downloaded even if it already exists in the local cache. - token (`str`, `bool`, *optional*): - A token to be used for the download. - - If `True`, the token is read from the HuggingFace config - folder. - - If a string, it's used as the authentication token. - local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, avoid downloading the file and return the path to the - local cached file if it exists. - allow_patterns (`List[str]` or `str`, *optional*): - If provided, only files matching at least one pattern are downloaded. - ignore_patterns (`List[str]` or `str`, *optional*): - If provided, files matching any of the patterns are not downloaded. - max_workers (`int`, *optional*): - Number of concurrent threads to download files (1 thread = 1 file download). - Defaults to 8. - tqdm_class (`tqdm`, *optional*): - If provided, overwrites the default behavior for the progress bar. Passed - argument must inherit from `tqdm.auto.tqdm` or at least mimic its behavior. - Note that the `tqdm_class` is not passed to each individual download. - Defaults to the custom HF progress bar that can be disabled by setting - `HF_HUB_DISABLE_PROGRESS_BARS` environment variable. - - Returns: - Local folder path (string) of repo snapshot - - - - Raises the following errors: - - - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - if `token=True` and the token cannot be found. - - [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) if - ETag cannot be determined. - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - - """ - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - if revision is None: - revision = DEFAULT_REVISION - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - if repo_type is None: - repo_type = "model" - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type: {repo_type}. Accepted repo types are: {str(REPO_TYPES)}") - - storage_folder = os.path.join(cache_dir, repo_folder_name(repo_id=repo_id, repo_type=repo_type)) - - # if we have no internet connection we will look for an - # appropriate folder in the cache - # If the specified revision is a commit hash, look inside "snapshots". - # If the specified revision is a branch or tag, look inside "refs". - if local_files_only: - if REGEX_COMMIT_HASH.match(revision): - commit_hash = revision - else: - # retrieve commit_hash from file - ref_path = os.path.join(storage_folder, "refs", revision) - with open(ref_path) as f: - commit_hash = f.read() - - snapshot_folder = os.path.join(storage_folder, "snapshots", commit_hash) - - if os.path.exists(snapshot_folder): - return snapshot_folder - - raise ValueError( - "Cannot find an appropriate cached snapshot folder for the specified" - " revision on the local disk and outgoing traffic has been disabled. To" - " enable repo look-ups and downloads online, set 'local_files_only' to" - " False." - ) - - # if we have internet connection we retrieve the correct folder name from the huggingface api - api = HfApi(library_name=library_name, library_version=library_version, user_agent=user_agent, endpoint=endpoint) - repo_info = api.repo_info(repo_id=repo_id, repo_type=repo_type, revision=revision, token=token) - assert repo_info.sha is not None, "Repo info returned from server must have a revision sha." - - filtered_repo_files = list( - filter_repo_objects( - items=[f.rfilename for f in repo_info.siblings], - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - ) - ) - commit_hash = repo_info.sha - snapshot_folder = os.path.join(storage_folder, "snapshots", commit_hash) - # if passed revision is not identical to commit_hash - # then revision has to be a branch name or tag name. - # In that case store a ref. - if revision != commit_hash: - ref_path = os.path.join(storage_folder, "refs", revision) - os.makedirs(os.path.dirname(ref_path), exist_ok=True) - with open(ref_path, "w") as f: - f.write(commit_hash) - - # we pass the commit_hash to hf_hub_download - # so no network call happens if we already - # have the file locally. - def _inner_hf_hub_download(repo_file: str): - return hf_hub_download( - repo_id, - filename=repo_file, - repo_type=repo_type, - revision=commit_hash, - endpoint=endpoint, - cache_dir=cache_dir, - local_dir=local_dir, - local_dir_use_symlinks=local_dir_use_symlinks, - library_name=library_name, - library_version=library_version, - user_agent=user_agent, - proxies=proxies, - etag_timeout=etag_timeout, - resume_download=resume_download, - force_download=force_download, - token=token, - ) - - if HF_HUB_ENABLE_HF_TRANSFER: - # when using hf_transfer we don't want extra parallelism - # from the one hf_transfer provides - for file in filtered_repo_files: - _inner_hf_hub_download(file) - else: - thread_map( - _inner_hf_hub_download, - filtered_repo_files, - desc=f"Fetching {len(filtered_repo_files)} files", - max_workers=max_workers, - # User can use its own tqdm class or the default one from `huggingface_hub.utils` - tqdm_class=tqdm_class or hf_tqdm, - ) - - if local_dir is not None: - return str(os.path.realpath(local_dir)) - return snapshot_folder diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_animation_data.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_animation_data.py deleted file mode 100644 index 4bf2ae3148d23ae154eba3192da28e6c94c077e2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_animation_data.py +++ /dev/null @@ -1,262 +0,0 @@ -# JavaScript template for HTMLWriter -JS_INCLUDE = """ - - -""" - - -# Style definitions for the HTML template -STYLE_INCLUDE = """ - -""" - - -# HTML template for HTMLWriter -DISPLAY_TEMPLATE = """ -
      - -
      - -
      - - - - - - - - - -
      - - - - - - - - -
      -
      - - - -""" - - -INCLUDED_FRAMES = """ - for (var i=0; i<{Nframes}; i++){{ - frames[i] = "{frame_dir}/frame" + ("0000000" + i).slice(-7) + - ".{frame_format}"; - }} -""" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/pyplot.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/pyplot.py deleted file mode 100644 index 3f41376e0a63159ebea588fbad693a7ce6fc5237..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/pyplot.py +++ /dev/null @@ -1,4376 +0,0 @@ -# Note: The first part of this file can be modified in place, but the latter -# part is autogenerated by the boilerplate.py script. - -""" -`matplotlib.pyplot` is a state-based interface to matplotlib. It provides -an implicit, MATLAB-like, way of plotting. It also opens figures on your -screen, and acts as the figure GUI manager. - -pyplot is mainly intended for interactive plots and simple cases of -programmatic plot generation:: - - import numpy as np - import matplotlib.pyplot as plt - - x = np.arange(0, 5, 0.1) - y = np.sin(x) - plt.plot(x, y) - -The explicit object-oriented API is recommended for complex plots, though -pyplot is still usually used to create the figure and often the axes in the -figure. See `.pyplot.figure`, `.pyplot.subplots`, and -`.pyplot.subplot_mosaic` to create figures, and -:doc:`Axes API ` for the plotting methods on an Axes:: - - import numpy as np - import matplotlib.pyplot as plt - - x = np.arange(0, 5, 0.1) - y = np.sin(x) - fig, ax = plt.subplots() - ax.plot(x, y) - - -See :ref:`api_interfaces` for an explanation of the tradeoffs between the -implicit and explicit interfaces. -""" - -# fmt: off - -from __future__ import annotations - -from contextlib import AbstractContextManager, ExitStack -from enum import Enum -import functools -import importlib -import inspect -import logging -import re -import sys -import threading -import time -from typing import cast, overload - -from cycler import cycler -import matplotlib -import matplotlib.colorbar -import matplotlib.image -from matplotlib import _api -from matplotlib import ( # Re-exported for typing. - cm as cm, get_backend as get_backend, rcParams as rcParams, style as style) -from matplotlib import _pylab_helpers, interactive -from matplotlib import cbook -from matplotlib import _docstring -from matplotlib.backend_bases import ( - FigureCanvasBase, FigureManagerBase, MouseButton) -from matplotlib.figure import Figure, FigureBase, figaspect -from matplotlib.gridspec import GridSpec, SubplotSpec -from matplotlib import rcsetup, rcParamsDefault, rcParamsOrig -from matplotlib.artist import Artist -from matplotlib.axes import Axes, Subplot # type: ignore -from matplotlib.projections import PolarAxes # type: ignore -from matplotlib import mlab # for detrend_none, window_hanning -from matplotlib.scale import get_scale_names - -from matplotlib.cm import _colormaps -from matplotlib.cm import register_cmap # type: ignore -from matplotlib.colors import _color_sequences - -import numpy as np - -from typing import TYPE_CHECKING, cast - -if TYPE_CHECKING: - from collections.abc import Callable, Hashable, Iterable, Sequence - import datetime - import pathlib - import os - from typing import Any, BinaryIO, Literal, TypeVar - from typing_extensions import ParamSpec - - import PIL.Image - from numpy.typing import ArrayLike - - from matplotlib.axis import Tick - from matplotlib.axes._base import _AxesBase - from matplotlib.backend_bases import RendererBase, Event - from matplotlib.cm import ScalarMappable - from matplotlib.contour import ContourSet, QuadContourSet - from matplotlib.collections import ( - Collection, - LineCollection, - BrokenBarHCollection, - PolyCollection, - PathCollection, - EventCollection, - QuadMesh, - ) - from matplotlib.colorbar import Colorbar - from matplotlib.colors import Colormap - from matplotlib.container import ( - BarContainer, - ErrorbarContainer, - StemContainer, - ) - from matplotlib.figure import SubFigure - from matplotlib.legend import Legend - from matplotlib.mlab import GaussianKDE - from matplotlib.image import AxesImage, FigureImage - from matplotlib.patches import FancyArrow, StepPatch, Wedge - from matplotlib.quiver import Barbs, Quiver, QuiverKey - from matplotlib.scale import ScaleBase - from matplotlib.transforms import Transform, Bbox - from matplotlib.typing import ColorType, LineStyleType, MarkerType, HashableList - from matplotlib.widgets import SubplotTool - - _P = ParamSpec('_P') - _R = TypeVar('_R') - _T = TypeVar('_T') - - -# We may not need the following imports here: -from matplotlib.colors import Normalize -from matplotlib.lines import Line2D -from matplotlib.text import Text, Annotation -from matplotlib.patches import Polygon, Rectangle, Circle, Arrow -from matplotlib.widgets import Button, Slider, Widget - -from .ticker import ( - TickHelper, Formatter, FixedFormatter, NullFormatter, FuncFormatter, - FormatStrFormatter, ScalarFormatter, LogFormatter, LogFormatterExponent, - LogFormatterMathtext, Locator, IndexLocator, FixedLocator, NullLocator, - LinearLocator, LogLocator, AutoLocator, MultipleLocator, MaxNLocator) - -_log = logging.getLogger(__name__) - - -# Explicit rename instead of import-as for typing's sake. -colormaps = _colormaps -color_sequences = _color_sequences - - -@overload -def _copy_docstring_and_deprecators( - method: Any, - func: Literal[None] = None -) -> Callable[[Callable[_P, _R]], Callable[_P, _R]]: ... - - -@overload -def _copy_docstring_and_deprecators( - method: Any, func: Callable[_P, _R]) -> Callable[_P, _R]: ... - - -def _copy_docstring_and_deprecators( - method: Any, - func: Callable[_P, _R] | None = None -) -> Callable[[Callable[_P, _R]], Callable[_P, _R]] | Callable[_P, _R]: - if func is None: - return cast('Callable[[Callable[_P, _R]], Callable[_P, _R]]', - functools.partial(_copy_docstring_and_deprecators, method)) - decorators: list[Callable[[Callable[_P, _R]], Callable[_P, _R]]] = [ - _docstring.copy(method) - ] - # Check whether the definition of *method* includes @_api.rename_parameter - # or @_api.make_keyword_only decorators; if so, propagate them to the - # pyplot wrapper as well. - while hasattr(method, "__wrapped__"): - potential_decorator = _api.deprecation.DECORATORS.get(method) - if potential_decorator: - decorators.append(potential_decorator) - method = method.__wrapped__ - for decorator in decorators[::-1]: - func = decorator(func) - return func - - -## Global ## - - -# The state controlled by {,un}install_repl_displayhook(). -_ReplDisplayHook = Enum("_ReplDisplayHook", ["NONE", "PLAIN", "IPYTHON"]) -_REPL_DISPLAYHOOK = _ReplDisplayHook.NONE - - -def _draw_all_if_interactive() -> None: - if matplotlib.is_interactive(): - draw_all() - - -def install_repl_displayhook() -> None: - """ - Connect to the display hook of the current shell. - - The display hook gets called when the read-evaluate-print-loop (REPL) of - the shell has finished the execution of a command. We use this callback - to be able to automatically update a figure in interactive mode. - - This works both with IPython and with vanilla python shells. - """ - global _REPL_DISPLAYHOOK - - if _REPL_DISPLAYHOOK is _ReplDisplayHook.IPYTHON: - return - - # See if we have IPython hooks around, if so use them. - # Use ``sys.modules.get(name)`` rather than ``name in sys.modules`` as - # entries can also have been explicitly set to None. - mod_ipython = sys.modules.get("IPython") - if not mod_ipython: - _REPL_DISPLAYHOOK = _ReplDisplayHook.PLAIN - return - ip = mod_ipython.get_ipython() - if not ip: - _REPL_DISPLAYHOOK = _ReplDisplayHook.PLAIN - return - - ip.events.register("post_execute", _draw_all_if_interactive) - _REPL_DISPLAYHOOK = _ReplDisplayHook.IPYTHON - - from IPython.core.pylabtools import backend2gui # type: ignore - # trigger IPython's eventloop integration, if available - ipython_gui_name = backend2gui.get(get_backend()) - if ipython_gui_name: - ip.enable_gui(ipython_gui_name) - - -def uninstall_repl_displayhook() -> None: - """Disconnect from the display hook of the current shell.""" - global _REPL_DISPLAYHOOK - if _REPL_DISPLAYHOOK is _ReplDisplayHook.IPYTHON: - from IPython import get_ipython # type: ignore - ip = get_ipython() - ip.events.unregister("post_execute", _draw_all_if_interactive) - _REPL_DISPLAYHOOK = _ReplDisplayHook.NONE - - -draw_all = _pylab_helpers.Gcf.draw_all - - -# Ensure this appears in the pyplot docs. -@_copy_docstring_and_deprecators(matplotlib.set_loglevel) -def set_loglevel(*args, **kwargs) -> None: - return matplotlib.set_loglevel(*args, **kwargs) - - -@_copy_docstring_and_deprecators(Artist.findobj) -def findobj( - o: Artist | None = None, - match: Callable[[Artist], bool] | type[Artist] | None = None, - include_self: bool = True -) -> list[Artist]: - if o is None: - o = gcf() - return o.findobj(match, include_self=include_self) - - -_backend_mod: type[matplotlib.backend_bases._Backend] | None = None - - -def _get_backend_mod() -> type[matplotlib.backend_bases._Backend]: - """ - Ensure that a backend is selected and return it. - - This is currently private, but may be made public in the future. - """ - if _backend_mod is None: - # Use rcParams._get("backend") to avoid going through the fallback - # logic (which will (re)import pyplot and then call switch_backend if - # we need to resolve the auto sentinel) - switch_backend(rcParams._get("backend")) # type: ignore[attr-defined] - return cast(type[matplotlib.backend_bases._Backend], _backend_mod) - - -def switch_backend(newbackend: str) -> None: - """ - Set the pyplot backend. - - Switching to an interactive backend is possible only if no event loop for - another interactive backend has started. Switching to and from - non-interactive backends is always possible. - - If the new backend is different than the current backend then all open - Figures will be closed via ``plt.close('all')``. - - Parameters - ---------- - newbackend : str - The case-insensitive name of the backend to use. - - """ - global _backend_mod - # make sure the init is pulled up so we can assign to it later - import matplotlib.backends - - if newbackend is rcsetup._auto_backend_sentinel: - current_framework = cbook._get_running_interactive_framework() - mapping = {'qt': 'qtagg', - 'gtk3': 'gtk3agg', - 'gtk4': 'gtk4agg', - 'wx': 'wxagg', - 'tk': 'tkagg', - 'macosx': 'macosx', - 'headless': 'agg'} - - if current_framework in mapping: - candidates = [mapping[current_framework]] - else: - candidates = [] - candidates += [ - "macosx", "qtagg", "gtk4agg", "gtk3agg", "tkagg", "wxagg"] - - # Don't try to fallback on the cairo-based backends as they each have - # an additional dependency (pycairo) over the agg-based backend, and - # are of worse quality. - for candidate in candidates: - try: - switch_backend(candidate) - except ImportError: - continue - else: - rcParamsOrig['backend'] = candidate - return - else: - # Switching to Agg should always succeed; if it doesn't, let the - # exception propagate out. - switch_backend("agg") - rcParamsOrig["backend"] = "agg" - return - # have to escape the switch on access logic - old_backend = dict.__getitem__(rcParams, 'backend') - - module = importlib.import_module(cbook._backend_module_name(newbackend)) - canvas_class = module.FigureCanvas - - required_framework = canvas_class.required_interactive_framework - if required_framework is not None: - current_framework = cbook._get_running_interactive_framework() - if (current_framework and required_framework - and current_framework != required_framework): - raise ImportError( - "Cannot load backend {!r} which requires the {!r} interactive " - "framework, as {!r} is currently running".format( - newbackend, required_framework, current_framework)) - - # Load the new_figure_manager() and show() functions from the backend. - - # Classically, backends can directly export these functions. This should - # keep working for backcompat. - new_figure_manager = getattr(module, "new_figure_manager", None) - show = getattr(module, "show", None) - - # In that classical approach, backends are implemented as modules, but - # "inherit" default method implementations from backend_bases._Backend. - # This is achieved by creating a "class" that inherits from - # backend_bases._Backend and whose body is filled with the module globals. - class backend_mod(matplotlib.backend_bases._Backend): - locals().update(vars(module)) - - # However, the newer approach for defining new_figure_manager and - # show is to derive them from canvas methods. In that case, also - # update backend_mod accordingly; also, per-backend customization of - # draw_if_interactive is disabled. - if new_figure_manager is None: - - def new_figure_manager_given_figure(num, figure): - return canvas_class.new_manager(figure, num) - - def new_figure_manager(num, *args, FigureClass=Figure, **kwargs): - fig = FigureClass(*args, **kwargs) - return new_figure_manager_given_figure(num, fig) - - def draw_if_interactive() -> None: - if matplotlib.is_interactive(): - manager = _pylab_helpers.Gcf.get_active() - if manager: - manager.canvas.draw_idle() - - backend_mod.new_figure_manager_given_figure = ( # type: ignore[method-assign] - new_figure_manager_given_figure) - backend_mod.new_figure_manager = ( # type: ignore[method-assign] - new_figure_manager) - backend_mod.draw_if_interactive = ( # type: ignore[method-assign] - draw_if_interactive) - - # If the manager explicitly overrides pyplot_show, use it even if a global - # show is already present, as the latter may be here for backcompat. - manager_class = getattr(canvas_class, "manager_class", None) - # We can't compare directly manager_class.pyplot_show and FMB.pyplot_show because - # pyplot_show is a classmethod so the above constructs are bound classmethods, and - # thus always different (being bound to different classes). We also have to use - # getattr_static instead of vars as manager_class could have no __dict__. - manager_pyplot_show = inspect.getattr_static(manager_class, "pyplot_show", None) - base_pyplot_show = inspect.getattr_static(FigureManagerBase, "pyplot_show", None) - if (show is None - or (manager_pyplot_show is not None - and manager_pyplot_show != base_pyplot_show)): - if not manager_pyplot_show: - raise ValueError( - f"Backend {newbackend} defines neither FigureCanvas.manager_class nor " - f"a toplevel show function") - _pyplot_show = cast('Any', manager_class).pyplot_show - backend_mod.show = _pyplot_show # type: ignore[method-assign] - - _log.debug("Loaded backend %s version %s.", - newbackend, backend_mod.backend_version) - - rcParams['backend'] = rcParamsDefault['backend'] = newbackend - _backend_mod = backend_mod - for func_name in ["new_figure_manager", "draw_if_interactive", "show"]: - globals()[func_name].__signature__ = inspect.signature( - getattr(backend_mod, func_name)) - - # Need to keep a global reference to the backend for compatibility reasons. - # See https://github.com/matplotlib/matplotlib/issues/6092 - matplotlib.backends.backend = newbackend # type: ignore[attr-defined] - - if not cbook._str_equal(old_backend, newbackend): - if get_fignums(): - _api.warn_deprecated("3.8", message=( - "Auto-close()ing of figures upon backend switching is deprecated since " - "%(since)s and will be removed %(removal)s. To suppress this warning, " - "explicitly call plt.close('all') first.")) - close("all") - - # Make sure the repl display hook is installed in case we become interactive. - install_repl_displayhook() - - -def _warn_if_gui_out_of_main_thread() -> None: - warn = False - canvas_class = cast(type[FigureCanvasBase], _get_backend_mod().FigureCanvas) - if canvas_class.required_interactive_framework: - if hasattr(threading, 'get_native_id'): - # This compares native thread ids because even if Python-level - # Thread objects match, the underlying OS thread (which is what - # really matters) may be different on Python implementations with - # green threads. - if threading.get_native_id() != threading.main_thread().native_id: - warn = True - else: - # Fall back to Python-level Thread if native IDs are unavailable, - # mainly for PyPy. - if threading.current_thread() is not threading.main_thread(): - warn = True - if warn: - _api.warn_external( - "Starting a Matplotlib GUI outside of the main thread will likely " - "fail.") - - -# This function's signature is rewritten upon backend-load by switch_backend. -def new_figure_manager(*args, **kwargs): - """Create a new figure manager instance.""" - _warn_if_gui_out_of_main_thread() - return _get_backend_mod().new_figure_manager(*args, **kwargs) - - -# This function's signature is rewritten upon backend-load by switch_backend. -def draw_if_interactive(*args, **kwargs): - """ - Redraw the current figure if in interactive mode. - - .. warning:: - - End users will typically not have to call this function because the - the interactive mode takes care of this. - """ - return _get_backend_mod().draw_if_interactive(*args, **kwargs) - - -# This function's signature is rewritten upon backend-load by switch_backend. -def show(*args, **kwargs) -> None: - """ - Display all open figures. - - Parameters - ---------- - block : bool, optional - Whether to wait for all figures to be closed before returning. - - If `True` block and run the GUI main loop until all figure windows - are closed. - - If `False` ensure that all figure windows are displayed and return - immediately. In this case, you are responsible for ensuring - that the event loop is running to have responsive figures. - - Defaults to True in non-interactive mode and to False in interactive - mode (see `.pyplot.isinteractive`). - - See Also - -------- - ion : Enable interactive mode, which shows / updates the figure after - every plotting command, so that calling ``show()`` is not necessary. - ioff : Disable interactive mode. - savefig : Save the figure to an image file instead of showing it on screen. - - Notes - ----- - **Saving figures to file and showing a window at the same time** - - If you want an image file as well as a user interface window, use - `.pyplot.savefig` before `.pyplot.show`. At the end of (a blocking) - ``show()`` the figure is closed and thus unregistered from pyplot. Calling - `.pyplot.savefig` afterwards would save a new and thus empty figure. This - limitation of command order does not apply if the show is non-blocking or - if you keep a reference to the figure and use `.Figure.savefig`. - - **Auto-show in jupyter notebooks** - - The jupyter backends (activated via ``%matplotlib inline``, - ``%matplotlib notebook``, or ``%matplotlib widget``), call ``show()`` at - the end of every cell by default. Thus, you usually don't have to call it - explicitly there. - """ - _warn_if_gui_out_of_main_thread() - return _get_backend_mod().show(*args, **kwargs) - - -def isinteractive() -> bool: - """ - Return whether plots are updated after every plotting command. - - The interactive mode is mainly useful if you build plots from the command - line and want to see the effect of each command while you are building the - figure. - - In interactive mode: - - - newly created figures will be shown immediately; - - figures will automatically redraw on change; - - `.pyplot.show` will not block by default. - - In non-interactive mode: - - - newly created figures and changes to figures will not be reflected until - explicitly asked to be; - - `.pyplot.show` will block by default. - - See Also - -------- - ion : Enable interactive mode. - ioff : Disable interactive mode. - show : Show all figures (and maybe block). - pause : Show all figures, and block for a time. - """ - return matplotlib.is_interactive() - - -def ioff() -> ExitStack: - """ - Disable interactive mode. - - See `.pyplot.isinteractive` for more details. - - See Also - -------- - ion : Enable interactive mode. - isinteractive : Whether interactive mode is enabled. - show : Show all figures (and maybe block). - pause : Show all figures, and block for a time. - - Notes - ----- - For a temporary change, this can be used as a context manager:: - - # if interactive mode is on - # then figures will be shown on creation - plt.ion() - # This figure will be shown immediately - fig = plt.figure() - - with plt.ioff(): - # interactive mode will be off - # figures will not automatically be shown - fig2 = plt.figure() - # ... - - To enable optional usage as a context manager, this function returns a - `~contextlib.ExitStack` object, which is not intended to be stored or - accessed by the user. - """ - stack = ExitStack() - stack.callback(ion if isinteractive() else ioff) - matplotlib.interactive(False) - uninstall_repl_displayhook() - return stack - - -def ion() -> ExitStack: - """ - Enable interactive mode. - - See `.pyplot.isinteractive` for more details. - - See Also - -------- - ioff : Disable interactive mode. - isinteractive : Whether interactive mode is enabled. - show : Show all figures (and maybe block). - pause : Show all figures, and block for a time. - - Notes - ----- - For a temporary change, this can be used as a context manager:: - - # if interactive mode is off - # then figures will not be shown on creation - plt.ioff() - # This figure will not be shown immediately - fig = plt.figure() - - with plt.ion(): - # interactive mode will be on - # figures will automatically be shown - fig2 = plt.figure() - # ... - - To enable optional usage as a context manager, this function returns a - `~contextlib.ExitStack` object, which is not intended to be stored or - accessed by the user. - """ - stack = ExitStack() - stack.callback(ion if isinteractive() else ioff) - matplotlib.interactive(True) - install_repl_displayhook() - return stack - - -def pause(interval: float) -> None: - """ - Run the GUI event loop for *interval* seconds. - - If there is an active figure, it will be updated and displayed before the - pause, and the GUI event loop (if any) will run during the pause. - - This can be used for crude animation. For more complex animation use - :mod:`matplotlib.animation`. - - If there is no active figure, sleep for *interval* seconds instead. - - See Also - -------- - matplotlib.animation : Proper animations - show : Show all figures and optional block until all figures are closed. - """ - manager = _pylab_helpers.Gcf.get_active() - if manager is not None: - canvas = manager.canvas - if canvas.figure.stale: - canvas.draw_idle() - show(block=False) - canvas.start_event_loop(interval) - else: - time.sleep(interval) - - -@_copy_docstring_and_deprecators(matplotlib.rc) -def rc(group: str, **kwargs) -> None: - matplotlib.rc(group, **kwargs) - - -@_copy_docstring_and_deprecators(matplotlib.rc_context) -def rc_context( - rc: dict[str, Any] | None = None, - fname: str | pathlib.Path | os.PathLike | None = None, -) -> AbstractContextManager[None]: - return matplotlib.rc_context(rc, fname) - - -@_copy_docstring_and_deprecators(matplotlib.rcdefaults) -def rcdefaults() -> None: - matplotlib.rcdefaults() - if matplotlib.is_interactive(): - draw_all() - - -# getp/get/setp are explicitly reexported so that they show up in pyplot docs. - - -@_copy_docstring_and_deprecators(matplotlib.artist.getp) -def getp(obj, *args, **kwargs): - return matplotlib.artist.getp(obj, *args, **kwargs) - - -@_copy_docstring_and_deprecators(matplotlib.artist.get) -def get(obj, *args, **kwargs): - return matplotlib.artist.get(obj, *args, **kwargs) - - -@_copy_docstring_and_deprecators(matplotlib.artist.setp) -def setp(obj, *args, **kwargs): - return matplotlib.artist.setp(obj, *args, **kwargs) - - -def xkcd( - scale: float = 1, length: float = 100, randomness: float = 2 -) -> ExitStack: - """ - Turn on `xkcd `_ sketch-style drawing mode. This will - only have effect on things drawn after this function is called. - - For best results, the "Humor Sans" font should be installed: it is - not included with Matplotlib. - - Parameters - ---------- - scale : float, optional - The amplitude of the wiggle perpendicular to the source line. - length : float, optional - The length of the wiggle along the line. - randomness : float, optional - The scale factor by which the length is shrunken or expanded. - - Notes - ----- - This function works by a number of rcParams, so it will probably - override others you have set before. - - If you want the effects of this function to be temporary, it can - be used as a context manager, for example:: - - with plt.xkcd(): - # This figure will be in XKCD-style - fig1 = plt.figure() - # ... - - # This figure will be in regular style - fig2 = plt.figure() - """ - # This cannot be implemented in terms of contextmanager() or rc_context() - # because this needs to work as a non-contextmanager too. - - if rcParams['text.usetex']: - raise RuntimeError( - "xkcd mode is not compatible with text.usetex = True") - - stack = ExitStack() - stack.callback(dict.update, rcParams, rcParams.copy()) # type: ignore - - from matplotlib import patheffects - rcParams.update({ - 'font.family': ['xkcd', 'xkcd Script', 'Humor Sans', 'Comic Neue', - 'Comic Sans MS'], - 'font.size': 14.0, - 'path.sketch': (scale, length, randomness), - 'path.effects': [ - patheffects.withStroke(linewidth=4, foreground="w")], - 'axes.linewidth': 1.5, - 'lines.linewidth': 2.0, - 'figure.facecolor': 'white', - 'grid.linewidth': 0.0, - 'axes.grid': False, - 'axes.unicode_minus': False, - 'axes.edgecolor': 'black', - 'xtick.major.size': 8, - 'xtick.major.width': 3, - 'ytick.major.size': 8, - 'ytick.major.width': 3, - }) - - return stack - - -## Figures ## - -def figure( - # autoincrement if None, else integer from 1-N - num: int | str | Figure | SubFigure | None = None, - # defaults to rc figure.figsize - figsize: tuple[float, float] | None = None, - # defaults to rc figure.dpi - dpi: float | None = None, - *, - # defaults to rc figure.facecolor - facecolor: ColorType | None = None, - # defaults to rc figure.edgecolor - edgecolor: ColorType | None = None, - frameon: bool = True, - FigureClass: type[Figure] = Figure, - clear: bool = False, - **kwargs -) -> Figure: - """ - Create a new figure, or activate an existing figure. - - Parameters - ---------- - num : int or str or `.Figure` or `.SubFigure`, optional - A unique identifier for the figure. - - If a figure with that identifier already exists, this figure is made - active and returned. An integer refers to the ``Figure.number`` - attribute, a string refers to the figure label. - - If there is no figure with the identifier or *num* is not given, a new - figure is created, made active and returned. If *num* is an int, it - will be used for the ``Figure.number`` attribute, otherwise, an - auto-generated integer value is used (starting at 1 and incremented - for each new figure). If *num* is a string, the figure label and the - window title is set to this value. If num is a ``SubFigure``, its - parent ``Figure`` is activated. - - figsize : (float, float), default: :rc:`figure.figsize` - Width, height in inches. - - dpi : float, default: :rc:`figure.dpi` - The resolution of the figure in dots-per-inch. - - facecolor : color, default: :rc:`figure.facecolor` - The background color. - - edgecolor : color, default: :rc:`figure.edgecolor` - The border color. - - frameon : bool, default: True - If False, suppress drawing the figure frame. - - FigureClass : subclass of `~matplotlib.figure.Figure` - If set, an instance of this subclass will be created, rather than a - plain `.Figure`. - - clear : bool, default: False - If True and the figure already exists, then it is cleared. - - layout : {'constrained', 'compressed', 'tight', 'none', `.LayoutEngine`, None}, \ -default: None - The layout mechanism for positioning of plot elements to avoid - overlapping Axes decorations (labels, ticks, etc). Note that layout - managers can measurably slow down figure display. - - - 'constrained': The constrained layout solver adjusts axes sizes - to avoid overlapping axes decorations. Can handle complex plot - layouts and colorbars, and is thus recommended. - - See :ref:`constrainedlayout_guide` - for examples. - - - 'compressed': uses the same algorithm as 'constrained', but - removes extra space between fixed-aspect-ratio Axes. Best for - simple grids of axes. - - - 'tight': Use the tight layout mechanism. This is a relatively - simple algorithm that adjusts the subplot parameters so that - decorations do not overlap. See `.Figure.set_tight_layout` for - further details. - - - 'none': Do not use a layout engine. - - - A `.LayoutEngine` instance. Builtin layout classes are - `.ConstrainedLayoutEngine` and `.TightLayoutEngine`, more easily - accessible by 'constrained' and 'tight'. Passing an instance - allows third parties to provide their own layout engine. - - If not given, fall back to using the parameters *tight_layout* and - *constrained_layout*, including their config defaults - :rc:`figure.autolayout` and :rc:`figure.constrained_layout.use`. - - **kwargs - Additional keyword arguments are passed to the `.Figure` constructor. - - Returns - ------- - `~matplotlib.figure.Figure` - - Notes - ----- - A newly created figure is passed to the `~.FigureCanvasBase.new_manager` - method or the `new_figure_manager` function provided by the current - backend, which install a canvas and a manager on the figure. - - Once this is done, :rc:`figure.hooks` are called, one at a time, on the - figure; these hooks allow arbitrary customization of the figure (e.g., - attaching callbacks) or of associated elements (e.g., modifying the - toolbar). See :doc:`/gallery/user_interfaces/mplcvd` for an example of - toolbar customization. - - If you are creating many figures, make sure you explicitly call - `.pyplot.close` on the figures you are not using, because this will - enable pyplot to properly clean up the memory. - - `~matplotlib.rcParams` defines the default values, which can be modified - in the matplotlibrc file. - """ - if isinstance(num, FigureBase): - # type narrowed to `Figure | SubFigure` by combination of input and isinstance - if num.canvas.manager is None: - raise ValueError("The passed figure is not managed by pyplot") - _pylab_helpers.Gcf.set_active(num.canvas.manager) - return num.figure - - allnums = get_fignums() - next_num = max(allnums) + 1 if allnums else 1 - fig_label = '' - if num is None: - num = next_num - elif isinstance(num, str): - fig_label = num - all_labels = get_figlabels() - if fig_label not in all_labels: - if fig_label == 'all': - _api.warn_external("close('all') closes all existing figures.") - num = next_num - else: - inum = all_labels.index(fig_label) - num = allnums[inum] - else: - num = int(num) # crude validation of num argument - - # Type of "num" has narrowed to int, but mypy can't quite see it - manager = _pylab_helpers.Gcf.get_fig_manager(num) # type: ignore[arg-type] - if manager is None: - max_open_warning = rcParams['figure.max_open_warning'] - if len(allnums) == max_open_warning >= 1: - _api.warn_external( - f"More than {max_open_warning} figures have been opened. " - f"Figures created through the pyplot interface " - f"(`matplotlib.pyplot.figure`) are retained until explicitly " - f"closed and may consume too much memory. (To control this " - f"warning, see the rcParam `figure.max_open_warning`). " - f"Consider using `matplotlib.pyplot.close()`.", - RuntimeWarning) - - manager = new_figure_manager( - num, figsize=figsize, dpi=dpi, - facecolor=facecolor, edgecolor=edgecolor, frameon=frameon, - FigureClass=FigureClass, **kwargs) - fig = manager.canvas.figure - if fig_label: - fig.set_label(fig_label) - - for hookspecs in rcParams["figure.hooks"]: - module_name, dotted_name = hookspecs.split(":") - obj: Any = importlib.import_module(module_name) - for part in dotted_name.split("."): - obj = getattr(obj, part) - obj(fig) - - _pylab_helpers.Gcf._set_new_active_manager(manager) - - # make sure backends (inline) that we don't ship that expect this - # to be called in plotting commands to make the figure call show - # still work. There is probably a better way to do this in the - # FigureManager base class. - draw_if_interactive() - - if _REPL_DISPLAYHOOK is _ReplDisplayHook.PLAIN: - fig.stale_callback = _auto_draw_if_interactive - - if clear: - manager.canvas.figure.clear() - - return manager.canvas.figure - - -def _auto_draw_if_interactive(fig, val): - """ - An internal helper function for making sure that auto-redrawing - works as intended in the plain python repl. - - Parameters - ---------- - fig : Figure - A figure object which is assumed to be associated with a canvas - """ - if (val and matplotlib.is_interactive() - and not fig.canvas.is_saving() - and not fig.canvas._is_idle_drawing): - # Some artists can mark themselves as stale in the middle of drawing - # (e.g. axes position & tick labels being computed at draw time), but - # this shouldn't trigger a redraw because the current redraw will - # already take them into account. - with fig.canvas._idle_draw_cntx(): - fig.canvas.draw_idle() - - -def gcf() -> Figure: - """ - Get the current figure. - - If there is currently no figure on the pyplot figure stack, a new one is - created using `~.pyplot.figure()`. (To test whether there is currently a - figure on the pyplot figure stack, check whether `~.pyplot.get_fignums()` - is empty.) - """ - manager = _pylab_helpers.Gcf.get_active() - if manager is not None: - return manager.canvas.figure - else: - return figure() - - -def fignum_exists(num: int) -> bool: - """Return whether the figure with the given id exists.""" - return _pylab_helpers.Gcf.has_fignum(num) or num in get_figlabels() - - -def get_fignums() -> list[int]: - """Return a list of existing figure numbers.""" - return sorted(_pylab_helpers.Gcf.figs) - - -def get_figlabels() -> list[Any]: - """Return a list of existing figure labels.""" - managers = _pylab_helpers.Gcf.get_all_fig_managers() - managers.sort(key=lambda m: m.num) - return [m.canvas.figure.get_label() for m in managers] - - -def get_current_fig_manager() -> FigureManagerBase | None: - """ - Return the figure manager of the current figure. - - The figure manager is a container for the actual backend-depended window - that displays the figure on screen. - - If no current figure exists, a new one is created, and its figure - manager is returned. - - Returns - ------- - `.FigureManagerBase` or backend-dependent subclass thereof - """ - return gcf().canvas.manager - - -@_copy_docstring_and_deprecators(FigureCanvasBase.mpl_connect) -def connect(s: str, func: Callable[[Event], Any]) -> int: - return gcf().canvas.mpl_connect(s, func) - - -@_copy_docstring_and_deprecators(FigureCanvasBase.mpl_disconnect) -def disconnect(cid: int) -> None: - gcf().canvas.mpl_disconnect(cid) - - -def close(fig: None | int | str | Figure | Literal["all"] = None) -> None: - """ - Close a figure window. - - Parameters - ---------- - fig : None or int or str or `.Figure` - The figure to close. There are a number of ways to specify this: - - - *None*: the current figure - - `.Figure`: the given `.Figure` instance - - ``int``: a figure number - - ``str``: a figure name - - 'all': all figures - - """ - if fig is None: - manager = _pylab_helpers.Gcf.get_active() - if manager is None: - return - else: - _pylab_helpers.Gcf.destroy(manager) - elif fig == 'all': - _pylab_helpers.Gcf.destroy_all() - elif isinstance(fig, int): - _pylab_helpers.Gcf.destroy(fig) - elif hasattr(fig, 'int'): - # if we are dealing with a type UUID, we - # can use its integer representation - _pylab_helpers.Gcf.destroy(fig.int) - elif isinstance(fig, str): - all_labels = get_figlabels() - if fig in all_labels: - num = get_fignums()[all_labels.index(fig)] - _pylab_helpers.Gcf.destroy(num) - elif isinstance(fig, Figure): - _pylab_helpers.Gcf.destroy_fig(fig) - else: - raise TypeError("close() argument must be a Figure, an int, a string, " - "or None, not %s" % type(fig)) - - -def clf() -> None: - """Clear the current figure.""" - gcf().clear() - - -def draw() -> None: - """ - Redraw the current figure. - - This is used to update a figure that has been altered, but not - automatically re-drawn. If interactive mode is on (via `.ion()`), this - should be only rarely needed, but there may be ways to modify the state of - a figure without marking it as "stale". Please report these cases as bugs. - - This is equivalent to calling ``fig.canvas.draw_idle()``, where ``fig`` is - the current figure. - - See Also - -------- - .FigureCanvasBase.draw_idle - .FigureCanvasBase.draw - """ - gcf().canvas.draw_idle() - - -@_copy_docstring_and_deprecators(Figure.savefig) -def savefig(*args, **kwargs) -> None: - fig = gcf() - # savefig default implementation has no return, so mypy is unhappy - # presumably this is here because subclasses can return? - res = fig.savefig(*args, **kwargs) # type: ignore[func-returns-value] - fig.canvas.draw_idle() # Need this if 'transparent=True', to reset colors. - return res - - -## Putting things in figures ## - - -def figlegend(*args, **kwargs) -> Legend: - return gcf().legend(*args, **kwargs) -if Figure.legend.__doc__: - figlegend.__doc__ = Figure.legend.__doc__ \ - .replace(" legend(", " figlegend(") \ - .replace("fig.legend(", "plt.figlegend(") \ - .replace("ax.plot(", "plt.plot(") - - -## Axes ## - -@_docstring.dedent_interpd -def axes( - arg: None | tuple[float, float, float, float] = None, - **kwargs -) -> matplotlib.axes.Axes: - """ - Add an Axes to the current figure and make it the current Axes. - - Call signatures:: - - plt.axes() - plt.axes(rect, projection=None, polar=False, **kwargs) - plt.axes(ax) - - Parameters - ---------- - arg : None or 4-tuple - The exact behavior of this function depends on the type: - - - *None*: A new full window Axes is added using - ``subplot(**kwargs)``. - - 4-tuple of floats *rect* = ``(left, bottom, width, height)``. - A new Axes is added with dimensions *rect* in normalized - (0, 1) units using `~.Figure.add_axes` on the current figure. - - projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', \ -'polar', 'rectilinear', str}, optional - The projection type of the `~.axes.Axes`. *str* is the name of - a custom projection, see `~matplotlib.projections`. The default - None results in a 'rectilinear' projection. - - polar : bool, default: False - If True, equivalent to projection='polar'. - - sharex, sharey : `~matplotlib.axes.Axes`, optional - Share the x or y `~matplotlib.axis` with sharex and/or sharey. - The axis will have the same limits, ticks, and scale as the axis - of the shared Axes. - - label : str - A label for the returned Axes. - - Returns - ------- - `~.axes.Axes`, or a subclass of `~.axes.Axes` - The returned axes class depends on the projection used. It is - `~.axes.Axes` if rectilinear projection is used and - `.projections.polar.PolarAxes` if polar projection is used. - - Other Parameters - ---------------- - **kwargs - This method also takes the keyword arguments for - the returned Axes class. The keyword arguments for the - rectilinear Axes class `~.axes.Axes` can be found in - the following table but there might also be other keyword - arguments if another projection is used, see the actual Axes - class. - - %(Axes:kwdoc)s - - See Also - -------- - .Figure.add_axes - .pyplot.subplot - .Figure.add_subplot - .Figure.subplots - .pyplot.subplots - - Examples - -------- - :: - - # Creating a new full window Axes - plt.axes() - - # Creating a new Axes with specified dimensions and a grey background - plt.axes((left, bottom, width, height), facecolor='grey') - """ - fig = gcf() - pos = kwargs.pop('position', None) - if arg is None: - if pos is None: - return fig.add_subplot(**kwargs) - else: - return fig.add_axes(pos, **kwargs) - else: - return fig.add_axes(arg, **kwargs) - - -def delaxes(ax: matplotlib.axes.Axes | None = None) -> None: - """ - Remove an `~.axes.Axes` (defaulting to the current axes) from its figure. - """ - if ax is None: - ax = gca() - ax.remove() - - -def sca(ax: Axes) -> None: - """ - Set the current Axes to *ax* and the current Figure to the parent of *ax*. - """ - # Mypy sees ax.figure as potentially None, - # but if you are calling this, it won't be None - # Additionally the slight difference between `Figure` and `FigureBase` mypy catches - figure(ax.figure) # type: ignore[arg-type] - ax.figure.sca(ax) # type: ignore[union-attr] - - -def cla() -> None: - """Clear the current axes.""" - # Not generated via boilerplate.py to allow a different docstring. - return gca().cla() - - -## More ways of creating axes ## - -@_docstring.dedent_interpd -def subplot(*args, **kwargs) -> Axes: - """ - Add an Axes to the current figure or retrieve an existing Axes. - - This is a wrapper of `.Figure.add_subplot` which provides additional - behavior when working with the implicit API (see the notes section). - - Call signatures:: - - subplot(nrows, ncols, index, **kwargs) - subplot(pos, **kwargs) - subplot(**kwargs) - subplot(ax) - - Parameters - ---------- - *args : int, (int, int, *index*), or `.SubplotSpec`, default: (1, 1, 1) - The position of the subplot described by one of - - - Three integers (*nrows*, *ncols*, *index*). The subplot will take the - *index* position on a grid with *nrows* rows and *ncols* columns. - *index* starts at 1 in the upper left corner and increases to the - right. *index* can also be a two-tuple specifying the (*first*, - *last*) indices (1-based, and including *last*) of the subplot, e.g., - ``fig.add_subplot(3, 1, (1, 2))`` makes a subplot that spans the - upper 2/3 of the figure. - - A 3-digit integer. The digits are interpreted as if given separately - as three single-digit integers, i.e. ``fig.add_subplot(235)`` is the - same as ``fig.add_subplot(2, 3, 5)``. Note that this can only be used - if there are no more than 9 subplots. - - A `.SubplotSpec`. - - projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', \ -'polar', 'rectilinear', str}, optional - The projection type of the subplot (`~.axes.Axes`). *str* is the name - of a custom projection, see `~matplotlib.projections`. The default - None results in a 'rectilinear' projection. - - polar : bool, default: False - If True, equivalent to projection='polar'. - - sharex, sharey : `~matplotlib.axes.Axes`, optional - Share the x or y `~matplotlib.axis` with sharex and/or sharey. The - axis will have the same limits, ticks, and scale as the axis of the - shared axes. - - label : str - A label for the returned axes. - - Returns - ------- - `~.axes.Axes` - - The Axes of the subplot. The returned Axes can actually be an instance - of a subclass, such as `.projections.polar.PolarAxes` for polar - projections. - - Other Parameters - ---------------- - **kwargs - This method also takes the keyword arguments for the returned axes - base class; except for the *figure* argument. The keyword arguments - for the rectilinear base class `~.axes.Axes` can be found in - the following table but there might also be other keyword - arguments if another projection is used. - - %(Axes:kwdoc)s - - Notes - ----- - Creating a new Axes will delete any preexisting Axes that - overlaps with it beyond sharing a boundary:: - - import matplotlib.pyplot as plt - # plot a line, implicitly creating a subplot(111) - plt.plot([1, 2, 3]) - # now create a subplot which represents the top plot of a grid - # with 2 rows and 1 column. Since this subplot will overlap the - # first, the plot (and its axes) previously created, will be removed - plt.subplot(211) - - If you do not want this behavior, use the `.Figure.add_subplot` method - or the `.pyplot.axes` function instead. - - If no *kwargs* are passed and there exists an Axes in the location - specified by *args* then that Axes will be returned rather than a new - Axes being created. - - If *kwargs* are passed and there exists an Axes in the location - specified by *args*, the projection type is the same, and the - *kwargs* match with the existing Axes, then the existing Axes is - returned. Otherwise a new Axes is created with the specified - parameters. We save a reference to the *kwargs* which we use - for this comparison. If any of the values in *kwargs* are - mutable we will not detect the case where they are mutated. - In these cases we suggest using `.Figure.add_subplot` and the - explicit Axes API rather than the implicit pyplot API. - - See Also - -------- - .Figure.add_subplot - .pyplot.subplots - .pyplot.axes - .Figure.subplots - - Examples - -------- - :: - - plt.subplot(221) - - # equivalent but more general - ax1 = plt.subplot(2, 2, 1) - - # add a subplot with no frame - ax2 = plt.subplot(222, frameon=False) - - # add a polar subplot - plt.subplot(223, projection='polar') - - # add a red subplot that shares the x-axis with ax1 - plt.subplot(224, sharex=ax1, facecolor='red') - - # delete ax2 from the figure - plt.delaxes(ax2) - - # add ax2 to the figure again - plt.subplot(ax2) - - # make the first axes "current" again - plt.subplot(221) - - """ - # Here we will only normalize `polar=True` vs `projection='polar'` and let - # downstream code deal with the rest. - unset = object() - projection = kwargs.get('projection', unset) - polar = kwargs.pop('polar', unset) - if polar is not unset and polar: - # if we got mixed messages from the user, raise - if projection is not unset and projection != 'polar': - raise ValueError( - f"polar={polar}, yet projection={projection!r}. " - "Only one of these arguments should be supplied." - ) - kwargs['projection'] = projection = 'polar' - - # if subplot called without arguments, create subplot(1, 1, 1) - if len(args) == 0: - args = (1, 1, 1) - - # This check was added because it is very easy to type subplot(1, 2, False) - # when subplots(1, 2, False) was intended (sharex=False, that is). In most - # cases, no error will ever occur, but mysterious behavior can result - # because what was intended to be the sharex argument is instead treated as - # a subplot index for subplot() - if len(args) >= 3 and isinstance(args[2], bool): - _api.warn_external("The subplot index argument to subplot() appears " - "to be a boolean. Did you intend to use " - "subplots()?") - # Check for nrows and ncols, which are not valid subplot args: - if 'nrows' in kwargs or 'ncols' in kwargs: - raise TypeError("subplot() got an unexpected keyword argument 'ncols' " - "and/or 'nrows'. Did you intend to call subplots()?") - - fig = gcf() - - # First, search for an existing subplot with a matching spec. - key = SubplotSpec._from_subplot_args(fig, args) - - for ax in fig.axes: - # If we found an Axes at the position, we can re-use it if the user passed no - # kwargs or if the axes class and kwargs are identical. - if (ax.get_subplotspec() == key - and (kwargs == {} - or (ax._projection_init - == fig._process_projection_requirements(**kwargs)))): - break - else: - # we have exhausted the known Axes and none match, make a new one! - ax = fig.add_subplot(*args, **kwargs) - - fig.sca(ax) - - return ax - - -def subplots( - nrows: int = 1, ncols: int = 1, *, - sharex: bool | Literal["none", "all", "row", "col"] = False, - sharey: bool | Literal["none", "all", "row", "col"] = False, - squeeze: bool = True, - width_ratios: Sequence[float] | None = None, - height_ratios: Sequence[float] | None = None, - subplot_kw: dict[str, Any] | None = None, - gridspec_kw: dict[str, Any] | None = None, - **fig_kw -) -> tuple[Figure, Any]: - """ - Create a figure and a set of subplots. - - This utility wrapper makes it convenient to create common layouts of - subplots, including the enclosing figure object, in a single call. - - Parameters - ---------- - nrows, ncols : int, default: 1 - Number of rows/columns of the subplot grid. - - sharex, sharey : bool or {'none', 'all', 'row', 'col'}, default: False - Controls sharing of properties among x (*sharex*) or y (*sharey*) - axes: - - - True or 'all': x- or y-axis will be shared among all subplots. - - False or 'none': each subplot x- or y-axis will be independent. - - 'row': each subplot row will share an x- or y-axis. - - 'col': each subplot column will share an x- or y-axis. - - When subplots have a shared x-axis along a column, only the x tick - labels of the bottom subplot are created. Similarly, when subplots - have a shared y-axis along a row, only the y tick labels of the first - column subplot are created. To later turn other subplots' ticklabels - on, use `~matplotlib.axes.Axes.tick_params`. - - When subplots have a shared axis that has units, calling - `~matplotlib.axis.Axis.set_units` will update each axis with the - new units. - - squeeze : bool, default: True - - If True, extra dimensions are squeezed out from the returned - array of `~matplotlib.axes.Axes`: - - - if only one subplot is constructed (nrows=ncols=1), the - resulting single Axes object is returned as a scalar. - - for Nx1 or 1xM subplots, the returned object is a 1D numpy - object array of Axes objects. - - for NxM, subplots with N>1 and M>1 are returned as a 2D array. - - - If False, no squeezing at all is done: the returned Axes object is - always a 2D array containing Axes instances, even if it ends up - being 1x1. - - width_ratios : array-like of length *ncols*, optional - Defines the relative widths of the columns. Each column gets a - relative width of ``width_ratios[i] / sum(width_ratios)``. - If not given, all columns will have the same width. Equivalent - to ``gridspec_kw={'width_ratios': [...]}``. - - height_ratios : array-like of length *nrows*, optional - Defines the relative heights of the rows. Each row gets a - relative height of ``height_ratios[i] / sum(height_ratios)``. - If not given, all rows will have the same height. Convenience - for ``gridspec_kw={'height_ratios': [...]}``. - - subplot_kw : dict, optional - Dict with keywords passed to the - `~matplotlib.figure.Figure.add_subplot` call used to create each - subplot. - - gridspec_kw : dict, optional - Dict with keywords passed to the `~matplotlib.gridspec.GridSpec` - constructor used to create the grid the subplots are placed on. - - **fig_kw - All additional keyword arguments are passed to the - `.pyplot.figure` call. - - Returns - ------- - fig : `.Figure` - - ax : `~matplotlib.axes.Axes` or array of Axes - *ax* can be either a single `~.axes.Axes` object, or an array of Axes - objects if more than one subplot was created. The dimensions of the - resulting array can be controlled with the squeeze keyword, see above. - - Typical idioms for handling the return value are:: - - # using the variable ax for single a Axes - fig, ax = plt.subplots() - - # using the variable axs for multiple Axes - fig, axs = plt.subplots(2, 2) - - # using tuple unpacking for multiple Axes - fig, (ax1, ax2) = plt.subplots(1, 2) - fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2) - - The names ``ax`` and pluralized ``axs`` are preferred over ``axes`` - because for the latter it's not clear if it refers to a single - `~.axes.Axes` instance or a collection of these. - - See Also - -------- - .pyplot.figure - .pyplot.subplot - .pyplot.axes - .Figure.subplots - .Figure.add_subplot - - Examples - -------- - :: - - # First create some toy data: - x = np.linspace(0, 2*np.pi, 400) - y = np.sin(x**2) - - # Create just a figure and only one subplot - fig, ax = plt.subplots() - ax.plot(x, y) - ax.set_title('Simple plot') - - # Create two subplots and unpack the output array immediately - f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) - ax1.plot(x, y) - ax1.set_title('Sharing Y axis') - ax2.scatter(x, y) - - # Create four polar axes and access them through the returned array - fig, axs = plt.subplots(2, 2, subplot_kw=dict(projection="polar")) - axs[0, 0].plot(x, y) - axs[1, 1].scatter(x, y) - - # Share a X axis with each column of subplots - plt.subplots(2, 2, sharex='col') - - # Share a Y axis with each row of subplots - plt.subplots(2, 2, sharey='row') - - # Share both X and Y axes with all subplots - plt.subplots(2, 2, sharex='all', sharey='all') - - # Note that this is the same as - plt.subplots(2, 2, sharex=True, sharey=True) - - # Create figure number 10 with a single subplot - # and clears it if it already exists. - fig, ax = plt.subplots(num=10, clear=True) - - """ - fig = figure(**fig_kw) - axs = fig.subplots(nrows=nrows, ncols=ncols, sharex=sharex, sharey=sharey, - squeeze=squeeze, subplot_kw=subplot_kw, - gridspec_kw=gridspec_kw, height_ratios=height_ratios, - width_ratios=width_ratios) - return fig, axs - - -@overload -def subplot_mosaic( - mosaic: str, - *, - sharex: bool = ..., - sharey: bool = ..., - width_ratios: ArrayLike | None = ..., - height_ratios: ArrayLike | None = ..., - empty_sentinel: str = ..., - subplot_kw: dict[str, Any] | None = ..., - gridspec_kw: dict[str, Any] | None = ..., - per_subplot_kw: dict[str | tuple[str, ...], dict[str, Any]] | None = ..., - **fig_kw: Any -) -> tuple[Figure, dict[str, matplotlib.axes.Axes]]: ... - - -@overload -def subplot_mosaic( - mosaic: list[HashableList[_T]], - *, - sharex: bool = ..., - sharey: bool = ..., - width_ratios: ArrayLike | None = ..., - height_ratios: ArrayLike | None = ..., - empty_sentinel: _T = ..., - subplot_kw: dict[str, Any] | None = ..., - gridspec_kw: dict[str, Any] | None = ..., - per_subplot_kw: dict[_T | tuple[_T, ...], dict[str, Any]] | None = ..., - **fig_kw: Any -) -> tuple[Figure, dict[_T, matplotlib.axes.Axes]]: ... - - -@overload -def subplot_mosaic( - mosaic: list[HashableList[Hashable]], - *, - sharex: bool = ..., - sharey: bool = ..., - width_ratios: ArrayLike | None = ..., - height_ratios: ArrayLike | None = ..., - empty_sentinel: Any = ..., - subplot_kw: dict[str, Any] | None = ..., - gridspec_kw: dict[str, Any] | None = ..., - per_subplot_kw: dict[Hashable | tuple[Hashable, ...], dict[str, Any]] | None = ..., - **fig_kw: Any -) -> tuple[Figure, dict[Hashable, matplotlib.axes.Axes]]: ... - - -def subplot_mosaic( - mosaic: str | list[HashableList[_T]] | list[HashableList[Hashable]], - *, - sharex: bool = False, - sharey: bool = False, - width_ratios: ArrayLike | None = None, - height_ratios: ArrayLike | None = None, - empty_sentinel: Any = '.', - subplot_kw: dict[str, Any] | None = None, - gridspec_kw: dict[str, Any] | None = None, - per_subplot_kw: dict[str | tuple[str, ...], dict[str, Any]] | - dict[_T | tuple[_T, ...], dict[str, Any]] | - dict[Hashable | tuple[Hashable, ...], dict[str, Any]] | None = None, - **fig_kw: Any -) -> tuple[Figure, dict[str, matplotlib.axes.Axes]] | \ - tuple[Figure, dict[_T, matplotlib.axes.Axes]] | \ - tuple[Figure, dict[Hashable, matplotlib.axes.Axes]]: - """ - Build a layout of Axes based on ASCII art or nested lists. - - This is a helper function to build complex GridSpec layouts visually. - - See :ref:`mosaic` - for an example and full API documentation - - Parameters - ---------- - mosaic : list of list of {hashable or nested} or str - - A visual layout of how you want your Axes to be arranged - labeled as strings. For example :: - - x = [['A panel', 'A panel', 'edge'], - ['C panel', '.', 'edge']] - - produces 4 axes: - - - 'A panel' which is 1 row high and spans the first two columns - - 'edge' which is 2 rows high and is on the right edge - - 'C panel' which in 1 row and 1 column wide in the bottom left - - a blank space 1 row and 1 column wide in the bottom center - - Any of the entries in the layout can be a list of lists - of the same form to create nested layouts. - - If input is a str, then it must be of the form :: - - ''' - AAE - C.E - ''' - - where each character is a column and each line is a row. - This only allows only single character Axes labels and does - not allow nesting but is very terse. - - sharex, sharey : bool, default: False - If True, the x-axis (*sharex*) or y-axis (*sharey*) will be shared - among all subplots. In that case, tick label visibility and axis units - behave as for `subplots`. If False, each subplot's x- or y-axis will - be independent. - - width_ratios : array-like of length *ncols*, optional - Defines the relative widths of the columns. Each column gets a - relative width of ``width_ratios[i] / sum(width_ratios)``. - If not given, all columns will have the same width. Convenience - for ``gridspec_kw={'width_ratios': [...]}``. - - height_ratios : array-like of length *nrows*, optional - Defines the relative heights of the rows. Each row gets a - relative height of ``height_ratios[i] / sum(height_ratios)``. - If not given, all rows will have the same height. Convenience - for ``gridspec_kw={'height_ratios': [...]}``. - - empty_sentinel : object, optional - Entry in the layout to mean "leave this space empty". Defaults - to ``'.'``. Note, if *layout* is a string, it is processed via - `inspect.cleandoc` to remove leading white space, which may - interfere with using white-space as the empty sentinel. - - subplot_kw : dict, optional - Dictionary with keywords passed to the `.Figure.add_subplot` call - used to create each subplot. These values may be overridden by - values in *per_subplot_kw*. - - per_subplot_kw : dict, optional - A dictionary mapping the Axes identifiers or tuples of identifiers - to a dictionary of keyword arguments to be passed to the - `.Figure.add_subplot` call used to create each subplot. The values - in these dictionaries have precedence over the values in - *subplot_kw*. - - If *mosaic* is a string, and thus all keys are single characters, - it is possible to use a single string instead of a tuple as keys; - i.e. ``"AB"`` is equivalent to ``("A", "B")``. - - .. versionadded:: 3.7 - - gridspec_kw : dict, optional - Dictionary with keywords passed to the `.GridSpec` constructor used - to create the grid the subplots are placed on. - - **fig_kw - All additional keyword arguments are passed to the - `.pyplot.figure` call. - - Returns - ------- - fig : `.Figure` - The new figure - - dict[label, Axes] - A dictionary mapping the labels to the Axes objects. The order of - the axes is left-to-right and top-to-bottom of their position in the - total layout. - - """ - fig = figure(**fig_kw) - ax_dict = fig.subplot_mosaic( # type: ignore[misc] - mosaic, # type: ignore[arg-type] - sharex=sharex, sharey=sharey, - height_ratios=height_ratios, width_ratios=width_ratios, - subplot_kw=subplot_kw, gridspec_kw=gridspec_kw, - empty_sentinel=empty_sentinel, - per_subplot_kw=per_subplot_kw, # type: ignore[arg-type] - ) - return fig, ax_dict - - -def subplot2grid( - shape: tuple[int, int], loc: tuple[int, int], - rowspan: int = 1, colspan: int = 1, - fig: Figure | None = None, - **kwargs -) -> matplotlib.axes.Axes: - """ - Create a subplot at a specific location inside a regular grid. - - Parameters - ---------- - shape : (int, int) - Number of rows and of columns of the grid in which to place axis. - loc : (int, int) - Row number and column number of the axis location within the grid. - rowspan : int, default: 1 - Number of rows for the axis to span downwards. - colspan : int, default: 1 - Number of columns for the axis to span to the right. - fig : `.Figure`, optional - Figure to place the subplot in. Defaults to the current figure. - **kwargs - Additional keyword arguments are handed to `~.Figure.add_subplot`. - - Returns - ------- - `~.axes.Axes` - - The Axes of the subplot. The returned Axes can actually be an instance - of a subclass, such as `.projections.polar.PolarAxes` for polar - projections. - - Notes - ----- - The following call :: - - ax = subplot2grid((nrows, ncols), (row, col), rowspan, colspan) - - is identical to :: - - fig = gcf() - gs = fig.add_gridspec(nrows, ncols) - ax = fig.add_subplot(gs[row:row+rowspan, col:col+colspan]) - """ - if fig is None: - fig = gcf() - rows, cols = shape - gs = GridSpec._check_gridspec_exists(fig, rows, cols) - subplotspec = gs.new_subplotspec(loc, rowspan=rowspan, colspan=colspan) - return fig.add_subplot(subplotspec, **kwargs) - - -def twinx(ax: matplotlib.axes.Axes | None = None) -> _AxesBase: - """ - Make and return a second axes that shares the *x*-axis. The new axes will - overlay *ax* (or the current axes if *ax* is *None*), and its ticks will be - on the right. - - Examples - -------- - :doc:`/gallery/subplots_axes_and_figures/two_scales` - """ - if ax is None: - ax = gca() - ax1 = ax.twinx() - return ax1 - - -def twiny(ax: matplotlib.axes.Axes | None = None) -> _AxesBase: - """ - Make and return a second axes that shares the *y*-axis. The new axes will - overlay *ax* (or the current axes if *ax* is *None*), and its ticks will be - on the top. - - Examples - -------- - :doc:`/gallery/subplots_axes_and_figures/two_scales` - """ - if ax is None: - ax = gca() - ax1 = ax.twiny() - return ax1 - - -def subplot_tool(targetfig: Figure | None = None) -> SubplotTool | None: - """ - Launch a subplot tool window for a figure. - - Returns - ------- - `matplotlib.widgets.SubplotTool` - """ - if targetfig is None: - targetfig = gcf() - tb = targetfig.canvas.manager.toolbar # type: ignore[union-attr] - if hasattr(tb, "configure_subplots"): # toolbar2 - from matplotlib.backend_bases import NavigationToolbar2 - return cast(NavigationToolbar2, tb).configure_subplots() - elif hasattr(tb, "trigger_tool"): # toolmanager - from matplotlib.backend_bases import ToolContainerBase - cast(ToolContainerBase, tb).trigger_tool("subplots") - return None - else: - raise ValueError("subplot_tool can only be launched for figures with " - "an associated toolbar") - - -def box(on: bool | None = None) -> None: - """ - Turn the axes box on or off on the current axes. - - Parameters - ---------- - on : bool or None - The new `~matplotlib.axes.Axes` box state. If ``None``, toggle - the state. - - See Also - -------- - :meth:`matplotlib.axes.Axes.set_frame_on` - :meth:`matplotlib.axes.Axes.get_frame_on` - """ - ax = gca() - if on is None: - on = not ax.get_frame_on() - ax.set_frame_on(on) - -## Axis ## - - -def xlim(*args, **kwargs) -> tuple[float, float]: - """ - Get or set the x limits of the current axes. - - Call signatures:: - - left, right = xlim() # return the current xlim - xlim((left, right)) # set the xlim to left, right - xlim(left, right) # set the xlim to left, right - - If you do not specify args, you can pass *left* or *right* as kwargs, - i.e.:: - - xlim(right=3) # adjust the right leaving left unchanged - xlim(left=1) # adjust the left leaving right unchanged - - Setting limits turns autoscaling off for the x-axis. - - Returns - ------- - left, right - A tuple of the new x-axis limits. - - Notes - ----- - Calling this function with no arguments (e.g. ``xlim()``) is the pyplot - equivalent of calling `~.Axes.get_xlim` on the current axes. - Calling this function with arguments is the pyplot equivalent of calling - `~.Axes.set_xlim` on the current axes. All arguments are passed though. - """ - ax = gca() - if not args and not kwargs: - return ax.get_xlim() - ret = ax.set_xlim(*args, **kwargs) - return ret - - -def ylim(*args, **kwargs) -> tuple[float, float]: - """ - Get or set the y-limits of the current axes. - - Call signatures:: - - bottom, top = ylim() # return the current ylim - ylim((bottom, top)) # set the ylim to bottom, top - ylim(bottom, top) # set the ylim to bottom, top - - If you do not specify args, you can alternatively pass *bottom* or - *top* as kwargs, i.e.:: - - ylim(top=3) # adjust the top leaving bottom unchanged - ylim(bottom=1) # adjust the bottom leaving top unchanged - - Setting limits turns autoscaling off for the y-axis. - - Returns - ------- - bottom, top - A tuple of the new y-axis limits. - - Notes - ----- - Calling this function with no arguments (e.g. ``ylim()``) is the pyplot - equivalent of calling `~.Axes.get_ylim` on the current axes. - Calling this function with arguments is the pyplot equivalent of calling - `~.Axes.set_ylim` on the current axes. All arguments are passed though. - """ - ax = gca() - if not args and not kwargs: - return ax.get_ylim() - ret = ax.set_ylim(*args, **kwargs) - return ret - - -def xticks( - ticks: ArrayLike | None = None, - labels: Sequence[str] | None = None, - *, - minor: bool = False, - **kwargs -) -> tuple[list[Tick] | np.ndarray, list[Text]]: - """ - Get or set the current tick locations and labels of the x-axis. - - Pass no arguments to return the current values without modifying them. - - Parameters - ---------- - ticks : array-like, optional - The list of xtick locations. Passing an empty list removes all xticks. - labels : array-like, optional - The labels to place at the given *ticks* locations. This argument can - only be passed if *ticks* is passed as well. - minor : bool, default: False - If ``False``, get/set the major ticks/labels; if ``True``, the minor - ticks/labels. - **kwargs - `.Text` properties can be used to control the appearance of the labels. - - Returns - ------- - locs - The list of xtick locations. - labels - The list of xlabel `.Text` objects. - - Notes - ----- - Calling this function with no arguments (e.g. ``xticks()``) is the pyplot - equivalent of calling `~.Axes.get_xticks` and `~.Axes.get_xticklabels` on - the current axes. - Calling this function with arguments is the pyplot equivalent of calling - `~.Axes.set_xticks` and `~.Axes.set_xticklabels` on the current axes. - - Examples - -------- - >>> locs, labels = xticks() # Get the current locations and labels. - >>> xticks(np.arange(0, 1, step=0.2)) # Set label locations. - >>> xticks(np.arange(3), ['Tom', 'Dick', 'Sue']) # Set text labels. - >>> xticks([0, 1, 2], ['January', 'February', 'March'], - ... rotation=20) # Set text labels and properties. - >>> xticks([]) # Disable xticks. - """ - ax = gca() - - locs: list[Tick] | np.ndarray - if ticks is None: - locs = ax.get_xticks(minor=minor) - if labels is not None: - raise TypeError("xticks(): Parameter 'labels' can't be set " - "without setting 'ticks'") - else: - locs = ax.set_xticks(ticks, minor=minor) - - labels_out: list[Text] = [] - if labels is None: - labels_out = ax.get_xticklabels(minor=minor) - for l in labels_out: - l._internal_update(kwargs) - else: - labels_out = ax.set_xticklabels(labels, minor=minor, **kwargs) - - return locs, labels_out - - -def yticks( - ticks: ArrayLike | None = None, - labels: Sequence[str] | None = None, - *, - minor: bool = False, - **kwargs -) -> tuple[list[Tick] | np.ndarray, list[Text]]: - """ - Get or set the current tick locations and labels of the y-axis. - - Pass no arguments to return the current values without modifying them. - - Parameters - ---------- - ticks : array-like, optional - The list of ytick locations. Passing an empty list removes all yticks. - labels : array-like, optional - The labels to place at the given *ticks* locations. This argument can - only be passed if *ticks* is passed as well. - minor : bool, default: False - If ``False``, get/set the major ticks/labels; if ``True``, the minor - ticks/labels. - **kwargs - `.Text` properties can be used to control the appearance of the labels. - - Returns - ------- - locs - The list of ytick locations. - labels - The list of ylabel `.Text` objects. - - Notes - ----- - Calling this function with no arguments (e.g. ``yticks()``) is the pyplot - equivalent of calling `~.Axes.get_yticks` and `~.Axes.get_yticklabels` on - the current axes. - Calling this function with arguments is the pyplot equivalent of calling - `~.Axes.set_yticks` and `~.Axes.set_yticklabels` on the current axes. - - Examples - -------- - >>> locs, labels = yticks() # Get the current locations and labels. - >>> yticks(np.arange(0, 1, step=0.2)) # Set label locations. - >>> yticks(np.arange(3), ['Tom', 'Dick', 'Sue']) # Set text labels. - >>> yticks([0, 1, 2], ['January', 'February', 'March'], - ... rotation=45) # Set text labels and properties. - >>> yticks([]) # Disable yticks. - """ - ax = gca() - - locs: list[Tick] | np.ndarray - if ticks is None: - locs = ax.get_yticks(minor=minor) - if labels is not None: - raise TypeError("yticks(): Parameter 'labels' can't be set " - "without setting 'ticks'") - else: - locs = ax.set_yticks(ticks, minor=minor) - - labels_out: list[Text] = [] - if labels is None: - labels_out = ax.get_yticklabels(minor=minor) - for l in labels_out: - l._internal_update(kwargs) - else: - labels_out = ax.set_yticklabels(labels, minor=minor, **kwargs) - - return locs, labels_out - - -def rgrids( - radii: ArrayLike | None = None, - labels: Sequence[str | Text] | None = None, - angle: float | None = None, - fmt: str | None = None, - **kwargs -) -> tuple[list[Line2D], list[Text]]: - """ - Get or set the radial gridlines on the current polar plot. - - Call signatures:: - - lines, labels = rgrids() - lines, labels = rgrids(radii, labels=None, angle=22.5, fmt=None, **kwargs) - - When called with no arguments, `.rgrids` simply returns the tuple - (*lines*, *labels*). When called with arguments, the labels will - appear at the specified radial distances and angle. - - Parameters - ---------- - radii : tuple with floats - The radii for the radial gridlines - - labels : tuple with strings or None - The labels to use at each radial gridline. The - `matplotlib.ticker.ScalarFormatter` will be used if None. - - angle : float - The angular position of the radius labels in degrees. - - fmt : str or None - Format string used in `matplotlib.ticker.FormatStrFormatter`. - For example '%f'. - - Returns - ------- - lines : list of `.lines.Line2D` - The radial gridlines. - - labels : list of `.text.Text` - The tick labels. - - Other Parameters - ---------------- - **kwargs - *kwargs* are optional `.Text` properties for the labels. - - See Also - -------- - .pyplot.thetagrids - .projections.polar.PolarAxes.set_rgrids - .Axis.get_gridlines - .Axis.get_ticklabels - - Examples - -------- - :: - - # set the locations of the radial gridlines - lines, labels = rgrids( (0.25, 0.5, 1.0) ) - - # set the locations and labels of the radial gridlines - lines, labels = rgrids( (0.25, 0.5, 1.0), ('Tom', 'Dick', 'Harry' )) - """ - ax = gca() - if not isinstance(ax, PolarAxes): - raise RuntimeError('rgrids only defined for polar axes') - if all(p is None for p in [radii, labels, angle, fmt]) and not kwargs: - lines_out: list[Line2D] = ax.yaxis.get_gridlines() - labels_out: list[Text] = ax.yaxis.get_ticklabels() - elif radii is None: - raise TypeError("'radii' cannot be None when other parameters are passed") - else: - lines_out, labels_out = ax.set_rgrids( - radii, labels=labels, angle=angle, fmt=fmt, **kwargs) - return lines_out, labels_out - - -def thetagrids( - angles: ArrayLike | None = None, - labels: Sequence[str | Text] | None = None, - fmt: str | None = None, - **kwargs -) -> tuple[list[Line2D], list[Text]]: - """ - Get or set the theta gridlines on the current polar plot. - - Call signatures:: - - lines, labels = thetagrids() - lines, labels = thetagrids(angles, labels=None, fmt=None, **kwargs) - - When called with no arguments, `.thetagrids` simply returns the tuple - (*lines*, *labels*). When called with arguments, the labels will - appear at the specified angles. - - Parameters - ---------- - angles : tuple with floats, degrees - The angles of the theta gridlines. - - labels : tuple with strings or None - The labels to use at each radial gridline. The - `.projections.polar.ThetaFormatter` will be used if None. - - fmt : str or None - Format string used in `matplotlib.ticker.FormatStrFormatter`. - For example '%f'. Note that the angle in radians will be used. - - Returns - ------- - lines : list of `.lines.Line2D` - The theta gridlines. - - labels : list of `.text.Text` - The tick labels. - - Other Parameters - ---------------- - **kwargs - *kwargs* are optional `.Text` properties for the labels. - - See Also - -------- - .pyplot.rgrids - .projections.polar.PolarAxes.set_thetagrids - .Axis.get_gridlines - .Axis.get_ticklabels - - Examples - -------- - :: - - # set the locations of the angular gridlines - lines, labels = thetagrids(range(45, 360, 90)) - - # set the locations and labels of the angular gridlines - lines, labels = thetagrids(range(45, 360, 90), ('NE', 'NW', 'SW', 'SE')) - """ - ax = gca() - if not isinstance(ax, PolarAxes): - raise RuntimeError('thetagrids only defined for polar axes') - if all(param is None for param in [angles, labels, fmt]) and not kwargs: - lines_out: list[Line2D] = ax.xaxis.get_ticklines() - labels_out: list[Text] = ax.xaxis.get_ticklabels() - elif angles is None: - raise TypeError("'angles' cannot be None when other parameters are passed") - else: - lines_out, labels_out = ax.set_thetagrids(angles, - labels=labels, fmt=fmt, - **kwargs) - return lines_out, labels_out - - -@_api.deprecated("3.7", pending=True) -def get_plot_commands() -> list[str]: - """ - Get a sorted list of all of the plotting commands. - """ - NON_PLOT_COMMANDS = { - 'connect', 'disconnect', 'get_current_fig_manager', 'ginput', - 'new_figure_manager', 'waitforbuttonpress'} - return [name for name in _get_pyplot_commands() - if name not in NON_PLOT_COMMANDS] - - -def _get_pyplot_commands() -> list[str]: - # This works by searching for all functions in this module and removing - # a few hard-coded exclusions, as well as all of the colormap-setting - # functions, and anything marked as private with a preceding underscore. - exclude = {'colormaps', 'colors', 'get_plot_commands', *colormaps} - this_module = inspect.getmodule(get_plot_commands) - return sorted( - name for name, obj in globals().items() - if not name.startswith('_') and name not in exclude - and inspect.isfunction(obj) - and inspect.getmodule(obj) is this_module) - - -## Plotting part 1: manually generated functions and wrappers ## - - -@_copy_docstring_and_deprecators(Figure.colorbar) -def colorbar( - mappable: ScalarMappable | None = None, - cax: matplotlib.axes.Axes | None = None, - ax: matplotlib.axes.Axes | Iterable[matplotlib.axes.Axes] | None = None, - **kwargs -) -> Colorbar: - if mappable is None: - mappable = gci() - if mappable is None: - raise RuntimeError('No mappable was found to use for colorbar ' - 'creation. First define a mappable such as ' - 'an image (with imshow) or a contour set (' - 'with contourf).') - ret = gcf().colorbar(mappable, cax=cax, ax=ax, **kwargs) - return ret - - -def clim(vmin: float | None = None, vmax: float | None = None) -> None: - """ - Set the color limits of the current image. - - If either *vmin* or *vmax* is None, the image min/max respectively - will be used for color scaling. - - If you want to set the clim of multiple images, use - `~.ScalarMappable.set_clim` on every image, for example:: - - for im in gca().get_images(): - im.set_clim(0, 0.5) - - """ - im = gci() - if im is None: - raise RuntimeError('You must first define an image, e.g., with imshow') - - im.set_clim(vmin, vmax) - - -# eventually this implementation should move here, use indirection for now to -# avoid having two copies of the code floating around. -def get_cmap( - name: Colormap | str | None = None, - lut: int | None = None -) -> Colormap: - return cm._get_cmap(name=name, lut=lut) # type: ignore -get_cmap.__doc__ = cm._get_cmap.__doc__ # type: ignore - - -def set_cmap(cmap: Colormap | str) -> None: - """ - Set the default colormap, and applies it to the current image if any. - - Parameters - ---------- - cmap : `~matplotlib.colors.Colormap` or str - A colormap instance or the name of a registered colormap. - - See Also - -------- - colormaps - matplotlib.cm.register_cmap - matplotlib.cm.get_cmap - """ - cmap = get_cmap(cmap) - - rc('image', cmap=cmap.name) - im = gci() - - if im is not None: - im.set_cmap(cmap) - - -@_copy_docstring_and_deprecators(matplotlib.image.imread) -def imread( - fname: str | pathlib.Path | BinaryIO, format: str | None = None -) -> np.ndarray: - return matplotlib.image.imread(fname, format) - - -@_copy_docstring_and_deprecators(matplotlib.image.imsave) -def imsave( - fname: str | os.PathLike | BinaryIO, arr: ArrayLike, **kwargs -) -> None: - matplotlib.image.imsave(fname, arr, **kwargs) - - -def matshow(A: ArrayLike, fignum: None | int = None, **kwargs) -> AxesImage: - """ - Display an array as a matrix in a new figure window. - - The origin is set at the upper left hand corner and rows (first - dimension of the array) are displayed horizontally. The aspect - ratio of the figure window is that of the array, unless this would - make an excessively short or narrow figure. - - Tick labels for the xaxis are placed on top. - - Parameters - ---------- - A : 2D array-like - The matrix to be displayed. - - fignum : None or int - If *None*, create a new figure window with automatic numbering. - - If a nonzero integer, draw into the figure with the given number - (create it if it does not exist). - - If 0, use the current axes (or create one if it does not exist). - - .. note:: - - Because of how `.Axes.matshow` tries to set the figure aspect - ratio to be the one of the array, strange things may happen if you - reuse an existing figure. - - Returns - ------- - `~matplotlib.image.AxesImage` - - Other Parameters - ---------------- - **kwargs : `~matplotlib.axes.Axes.imshow` arguments - - """ - A = np.asanyarray(A) - if fignum == 0: - ax = gca() - else: - # Extract actual aspect ratio of array and make appropriately sized - # figure. - fig = figure(fignum, figsize=figaspect(A)) - ax = fig.add_axes((0.15, 0.09, 0.775, 0.775)) - im = ax.matshow(A, **kwargs) - sci(im) - return im - - -def polar(*args, **kwargs) -> list[Line2D]: - """ - Make a polar plot. - - call signature:: - - polar(theta, r, **kwargs) - - Multiple *theta*, *r* arguments are supported, with format strings, as in - `plot`. - """ - # If an axis already exists, check if it has a polar projection - if gcf().get_axes(): - ax = gca() - if not isinstance(ax, PolarAxes): - _api.warn_external('Trying to create polar plot on an Axes ' - 'that does not have a polar projection.') - else: - ax = axes(projection="polar") - return ax.plot(*args, **kwargs) - - -# If rcParams['backend_fallback'] is true, and an interactive backend is -# requested, ignore rcParams['backend'] and force selection of a backend that -# is compatible with the current running interactive framework. -if (rcParams["backend_fallback"] - and rcParams._get_backend_or_none() in ( # type: ignore - set(rcsetup.interactive_bk) - {'WebAgg', 'nbAgg'}) - and cbook._get_running_interactive_framework()): # type: ignore - rcParams._set("backend", rcsetup._auto_backend_sentinel) # type: ignore - -# fmt: on - -################# REMAINING CONTENT GENERATED BY boilerplate.py ############## - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.figimage) -def figimage( - X: ArrayLike, - xo: int = 0, - yo: int = 0, - alpha: float | None = None, - norm: str | Normalize | None = None, - cmap: str | Colormap | None = None, - vmin: float | None = None, - vmax: float | None = None, - origin: Literal["upper", "lower"] | None = None, - resize: bool = False, - **kwargs, -) -> FigureImage: - return gcf().figimage( - X, - xo=xo, - yo=yo, - alpha=alpha, - norm=norm, - cmap=cmap, - vmin=vmin, - vmax=vmax, - origin=origin, - resize=resize, - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.text) -def figtext( - x: float, y: float, s: str, fontdict: dict[str, Any] | None = None, **kwargs -) -> Text: - return gcf().text(x, y, s, fontdict=fontdict, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.gca) -def gca() -> Axes: - return gcf().gca() - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure._gci) -def gci() -> ScalarMappable | None: - return gcf()._gci() - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.ginput) -def ginput( - n: int = 1, - timeout: float = 30, - show_clicks: bool = True, - mouse_add: MouseButton = MouseButton.LEFT, - mouse_pop: MouseButton = MouseButton.RIGHT, - mouse_stop: MouseButton = MouseButton.MIDDLE, -) -> list[tuple[int, int]]: - return gcf().ginput( - n=n, - timeout=timeout, - show_clicks=show_clicks, - mouse_add=mouse_add, - mouse_pop=mouse_pop, - mouse_stop=mouse_stop, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.subplots_adjust) -def subplots_adjust( - left: float | None = None, - bottom: float | None = None, - right: float | None = None, - top: float | None = None, - wspace: float | None = None, - hspace: float | None = None, -) -> None: - gcf().subplots_adjust( - left=left, bottom=bottom, right=right, top=top, wspace=wspace, hspace=hspace - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.suptitle) -def suptitle(t: str, **kwargs) -> Text: - return gcf().suptitle(t, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.tight_layout) -def tight_layout( - *, - pad: float = 1.08, - h_pad: float | None = None, - w_pad: float | None = None, - rect: tuple[float, float, float, float] | None = None, -) -> None: - gcf().tight_layout(pad=pad, h_pad=h_pad, w_pad=w_pad, rect=rect) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Figure.waitforbuttonpress) -def waitforbuttonpress(timeout: float = -1) -> None | bool: - return gcf().waitforbuttonpress(timeout=timeout) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.acorr) -def acorr( - x: ArrayLike, *, data=None, **kwargs -) -> tuple[np.ndarray, np.ndarray, LineCollection | Line2D, Line2D | None]: - return gca().acorr(x, **({"data": data} if data is not None else {}), **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.angle_spectrum) -def angle_spectrum( - x: ArrayLike, - Fs: float | None = None, - Fc: int | None = None, - window: Callable[[ArrayLike], ArrayLike] | ArrayLike | None = None, - pad_to: int | None = None, - sides: Literal["default", "onesided", "twosided"] | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray, Line2D]: - return gca().angle_spectrum( - x, - Fs=Fs, - Fc=Fc, - window=window, - pad_to=pad_to, - sides=sides, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.annotate) -def annotate( - text: str, - xy: tuple[float, float], - xytext: tuple[float, float] | None = None, - xycoords: str - | Artist - | Transform - | Callable[[RendererBase], Bbox | Transform] - | tuple[float, float] = "data", - textcoords: str - | Artist - | Transform - | Callable[[RendererBase], Bbox | Transform] - | tuple[float, float] - | None = None, - arrowprops: dict[str, Any] | None = None, - annotation_clip: bool | None = None, - **kwargs, -) -> Annotation: - return gca().annotate( - text, - xy, - xytext=xytext, - xycoords=xycoords, - textcoords=textcoords, - arrowprops=arrowprops, - annotation_clip=annotation_clip, - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.arrow) -def arrow(x: float, y: float, dx: float, dy: float, **kwargs) -> FancyArrow: - return gca().arrow(x, y, dx, dy, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.autoscale) -def autoscale( - enable: bool = True, - axis: Literal["both", "x", "y"] = "both", - tight: bool | None = None, -) -> None: - gca().autoscale(enable=enable, axis=axis, tight=tight) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.axhline) -def axhline(y: float = 0, xmin: float = 0, xmax: float = 1, **kwargs) -> Line2D: - return gca().axhline(y=y, xmin=xmin, xmax=xmax, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.axhspan) -def axhspan( - ymin: float, ymax: float, xmin: float = 0, xmax: float = 1, **kwargs -) -> Polygon: - return gca().axhspan(ymin, ymax, xmin=xmin, xmax=xmax, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.axis) -def axis( - arg: tuple[float, float, float, float] | bool | str | None = None, - /, - *, - emit: bool = True, - **kwargs, -) -> tuple[float, float, float, float]: - return gca().axis(arg, emit=emit, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.axline) -def axline( - xy1: tuple[float, float], - xy2: tuple[float, float] | None = None, - *, - slope: float | None = None, - **kwargs, -) -> Line2D: - return gca().axline(xy1, xy2=xy2, slope=slope, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.axvline) -def axvline(x: float = 0, ymin: float = 0, ymax: float = 1, **kwargs) -> Line2D: - return gca().axvline(x=x, ymin=ymin, ymax=ymax, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.axvspan) -def axvspan( - xmin: float, xmax: float, ymin: float = 0, ymax: float = 1, **kwargs -) -> Polygon: - return gca().axvspan(xmin, xmax, ymin=ymin, ymax=ymax, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.bar) -def bar( - x: float | ArrayLike, - height: float | ArrayLike, - width: float | ArrayLike = 0.8, - bottom: float | ArrayLike | None = None, - *, - align: Literal["center", "edge"] = "center", - data=None, - **kwargs, -) -> BarContainer: - return gca().bar( - x, - height, - width=width, - bottom=bottom, - align=align, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.barbs) -def barbs(*args, data=None, **kwargs) -> Barbs: - return gca().barbs(*args, **({"data": data} if data is not None else {}), **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.barh) -def barh( - y: float | ArrayLike, - width: float | ArrayLike, - height: float | ArrayLike = 0.8, - left: float | ArrayLike | None = None, - *, - align: Literal["center", "edge"] = "center", - data=None, - **kwargs, -) -> BarContainer: - return gca().barh( - y, - width, - height=height, - left=left, - align=align, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.bar_label) -def bar_label( - container: BarContainer, - labels: ArrayLike | None = None, - *, - fmt: str | Callable[[float], str] = "%g", - label_type: Literal["center", "edge"] = "edge", - padding: float = 0, - **kwargs, -) -> list[Annotation]: - return gca().bar_label( - container, - labels=labels, - fmt=fmt, - label_type=label_type, - padding=padding, - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.boxplot) -def boxplot( - x: ArrayLike | Sequence[ArrayLike], - notch: bool | None = None, - sym: str | None = None, - vert: bool | None = None, - whis: float | tuple[float, float] | None = None, - positions: ArrayLike | None = None, - widths: float | ArrayLike | None = None, - patch_artist: bool | None = None, - bootstrap: int | None = None, - usermedians: ArrayLike | None = None, - conf_intervals: ArrayLike | None = None, - meanline: bool | None = None, - showmeans: bool | None = None, - showcaps: bool | None = None, - showbox: bool | None = None, - showfliers: bool | None = None, - boxprops: dict[str, Any] | None = None, - labels: Sequence[str] | None = None, - flierprops: dict[str, Any] | None = None, - medianprops: dict[str, Any] | None = None, - meanprops: dict[str, Any] | None = None, - capprops: dict[str, Any] | None = None, - whiskerprops: dict[str, Any] | None = None, - manage_ticks: bool = True, - autorange: bool = False, - zorder: float | None = None, - capwidths: float | ArrayLike | None = None, - *, - data=None, -) -> dict[str, Any]: - return gca().boxplot( - x, - notch=notch, - sym=sym, - vert=vert, - whis=whis, - positions=positions, - widths=widths, - patch_artist=patch_artist, - bootstrap=bootstrap, - usermedians=usermedians, - conf_intervals=conf_intervals, - meanline=meanline, - showmeans=showmeans, - showcaps=showcaps, - showbox=showbox, - showfliers=showfliers, - boxprops=boxprops, - labels=labels, - flierprops=flierprops, - medianprops=medianprops, - meanprops=meanprops, - capprops=capprops, - whiskerprops=whiskerprops, - manage_ticks=manage_ticks, - autorange=autorange, - zorder=zorder, - capwidths=capwidths, - **({"data": data} if data is not None else {}), - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.broken_barh) -def broken_barh( - xranges: Sequence[tuple[float, float]], - yrange: tuple[float, float], - *, - data=None, - **kwargs, -) -> BrokenBarHCollection: - return gca().broken_barh( - xranges, yrange, **({"data": data} if data is not None else {}), **kwargs - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.clabel) -def clabel(CS: ContourSet, levels: ArrayLike | None = None, **kwargs) -> list[Text]: - return gca().clabel(CS, levels=levels, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.cohere) -def cohere( - x: ArrayLike, - y: ArrayLike, - NFFT: int = 256, - Fs: float = 2, - Fc: int = 0, - detrend: Literal["none", "mean", "linear"] - | Callable[[ArrayLike], ArrayLike] = mlab.detrend_none, - window: Callable[[ArrayLike], ArrayLike] | ArrayLike = mlab.window_hanning, - noverlap: int = 0, - pad_to: int | None = None, - sides: Literal["default", "onesided", "twosided"] = "default", - scale_by_freq: bool | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray]: - return gca().cohere( - x, - y, - NFFT=NFFT, - Fs=Fs, - Fc=Fc, - detrend=detrend, - window=window, - noverlap=noverlap, - pad_to=pad_to, - sides=sides, - scale_by_freq=scale_by_freq, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.contour) -def contour(*args, data=None, **kwargs) -> QuadContourSet: - __ret = gca().contour( - *args, **({"data": data} if data is not None else {}), **kwargs - ) - if __ret._A is not None: # type: ignore[attr-defined] - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.contourf) -def contourf(*args, data=None, **kwargs) -> QuadContourSet: - __ret = gca().contourf( - *args, **({"data": data} if data is not None else {}), **kwargs - ) - if __ret._A is not None: # type: ignore[attr-defined] - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.csd) -def csd( - x: ArrayLike, - y: ArrayLike, - NFFT: int | None = None, - Fs: float | None = None, - Fc: int | None = None, - detrend: Literal["none", "mean", "linear"] - | Callable[[ArrayLike], ArrayLike] - | None = None, - window: Callable[[ArrayLike], ArrayLike] | ArrayLike | None = None, - noverlap: int | None = None, - pad_to: int | None = None, - sides: Literal["default", "onesided", "twosided"] | None = None, - scale_by_freq: bool | None = None, - return_line: bool | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray] | tuple[np.ndarray, np.ndarray, Line2D]: - return gca().csd( - x, - y, - NFFT=NFFT, - Fs=Fs, - Fc=Fc, - detrend=detrend, - window=window, - noverlap=noverlap, - pad_to=pad_to, - sides=sides, - scale_by_freq=scale_by_freq, - return_line=return_line, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.ecdf) -def ecdf( - x: ArrayLike, - weights: ArrayLike | None = None, - *, - complementary: bool = False, - orientation: Literal["vertical", "horizonatal"] = "vertical", - compress: bool = False, - data=None, - **kwargs, -) -> Line2D: - return gca().ecdf( - x, - weights=weights, - complementary=complementary, - orientation=orientation, - compress=compress, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.errorbar) -def errorbar( - x: float | ArrayLike, - y: float | ArrayLike, - yerr: float | ArrayLike | None = None, - xerr: float | ArrayLike | None = None, - fmt: str = "", - ecolor: ColorType | None = None, - elinewidth: float | None = None, - capsize: float | None = None, - barsabove: bool = False, - lolims: bool | ArrayLike = False, - uplims: bool | ArrayLike = False, - xlolims: bool | ArrayLike = False, - xuplims: bool | ArrayLike = False, - errorevery: int | tuple[int, int] = 1, - capthick: float | None = None, - *, - data=None, - **kwargs, -) -> ErrorbarContainer: - return gca().errorbar( - x, - y, - yerr=yerr, - xerr=xerr, - fmt=fmt, - ecolor=ecolor, - elinewidth=elinewidth, - capsize=capsize, - barsabove=barsabove, - lolims=lolims, - uplims=uplims, - xlolims=xlolims, - xuplims=xuplims, - errorevery=errorevery, - capthick=capthick, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.eventplot) -def eventplot( - positions: ArrayLike | Sequence[ArrayLike], - orientation: Literal["horizontal", "vertical"] = "horizontal", - lineoffsets: float | Sequence[float] = 1, - linelengths: float | Sequence[float] = 1, - linewidths: float | Sequence[float] | None = None, - colors: ColorType | Sequence[ColorType] | None = None, - alpha: float | Sequence[float] | None = None, - linestyles: LineStyleType | Sequence[LineStyleType] = "solid", - *, - data=None, - **kwargs, -) -> EventCollection: - return gca().eventplot( - positions, - orientation=orientation, - lineoffsets=lineoffsets, - linelengths=linelengths, - linewidths=linewidths, - colors=colors, - alpha=alpha, - linestyles=linestyles, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.fill) -def fill(*args, data=None, **kwargs) -> list[Polygon]: - return gca().fill(*args, **({"data": data} if data is not None else {}), **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.fill_between) -def fill_between( - x: ArrayLike, - y1: ArrayLike | float, - y2: ArrayLike | float = 0, - where: Sequence[bool] | None = None, - interpolate: bool = False, - step: Literal["pre", "post", "mid"] | None = None, - *, - data=None, - **kwargs, -) -> PolyCollection: - return gca().fill_between( - x, - y1, - y2=y2, - where=where, - interpolate=interpolate, - step=step, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.fill_betweenx) -def fill_betweenx( - y: ArrayLike, - x1: ArrayLike | float, - x2: ArrayLike | float = 0, - where: Sequence[bool] | None = None, - step: Literal["pre", "post", "mid"] | None = None, - interpolate: bool = False, - *, - data=None, - **kwargs, -) -> PolyCollection: - return gca().fill_betweenx( - y, - x1, - x2=x2, - where=where, - step=step, - interpolate=interpolate, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.grid) -def grid( - visible: bool | None = None, - which: Literal["major", "minor", "both"] = "major", - axis: Literal["both", "x", "y"] = "both", - **kwargs, -) -> None: - gca().grid(visible=visible, which=which, axis=axis, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.hexbin) -def hexbin( - x: ArrayLike, - y: ArrayLike, - C: ArrayLike | None = None, - gridsize: int | tuple[int, int] = 100, - bins: Literal["log"] | int | Sequence[float] | None = None, - xscale: Literal["linear", "log"] = "linear", - yscale: Literal["linear", "log"] = "linear", - extent: tuple[float, float, float, float] | None = None, - cmap: str | Colormap | None = None, - norm: str | Normalize | None = None, - vmin: float | None = None, - vmax: float | None = None, - alpha: float | None = None, - linewidths: float | None = None, - edgecolors: Literal["face", "none"] | ColorType = "face", - reduce_C_function: Callable[[np.ndarray | list[float]], float] = np.mean, - mincnt: int | None = None, - marginals: bool = False, - *, - data=None, - **kwargs, -) -> PolyCollection: - __ret = gca().hexbin( - x, - y, - C=C, - gridsize=gridsize, - bins=bins, - xscale=xscale, - yscale=yscale, - extent=extent, - cmap=cmap, - norm=norm, - vmin=vmin, - vmax=vmax, - alpha=alpha, - linewidths=linewidths, - edgecolors=edgecolors, - reduce_C_function=reduce_C_function, - mincnt=mincnt, - marginals=marginals, - **({"data": data} if data is not None else {}), - **kwargs, - ) - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.hist) -def hist( - x: ArrayLike | Sequence[ArrayLike], - bins: int | Sequence[float] | str | None = None, - range: tuple[float, float] | None = None, - density: bool = False, - weights: ArrayLike | None = None, - cumulative: bool | float = False, - bottom: ArrayLike | float | None = None, - histtype: Literal["bar", "barstacked", "step", "stepfilled"] = "bar", - align: Literal["left", "mid", "right"] = "mid", - orientation: Literal["vertical", "horizontal"] = "vertical", - rwidth: float | None = None, - log: bool = False, - color: ColorType | Sequence[ColorType] | None = None, - label: str | Sequence[str] | None = None, - stacked: bool = False, - *, - data=None, - **kwargs, -) -> tuple[ - np.ndarray | list[np.ndarray], - np.ndarray, - BarContainer | Polygon | list[BarContainer | Polygon], -]: - return gca().hist( - x, - bins=bins, - range=range, - density=density, - weights=weights, - cumulative=cumulative, - bottom=bottom, - histtype=histtype, - align=align, - orientation=orientation, - rwidth=rwidth, - log=log, - color=color, - label=label, - stacked=stacked, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.stairs) -def stairs( - values: ArrayLike, - edges: ArrayLike | None = None, - *, - orientation: Literal["vertical", "horizontal"] = "vertical", - baseline: float | ArrayLike | None = 0, - fill: bool = False, - data=None, - **kwargs, -) -> StepPatch: - return gca().stairs( - values, - edges=edges, - orientation=orientation, - baseline=baseline, - fill=fill, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.hist2d) -def hist2d( - x: ArrayLike, - y: ArrayLike, - bins: None | int | tuple[int, int] | ArrayLike | tuple[ArrayLike, ArrayLike] = 10, - range: ArrayLike | None = None, - density: bool = False, - weights: ArrayLike | None = None, - cmin: float | None = None, - cmax: float | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray, np.ndarray, QuadMesh]: - __ret = gca().hist2d( - x, - y, - bins=bins, - range=range, - density=density, - weights=weights, - cmin=cmin, - cmax=cmax, - **({"data": data} if data is not None else {}), - **kwargs, - ) - sci(__ret[-1]) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.hlines) -def hlines( - y: float | ArrayLike, - xmin: float | ArrayLike, - xmax: float | ArrayLike, - colors: ColorType | Sequence[ColorType] | None = None, - linestyles: LineStyleType = "solid", - label: str = "", - *, - data=None, - **kwargs, -) -> LineCollection: - return gca().hlines( - y, - xmin, - xmax, - colors=colors, - linestyles=linestyles, - label=label, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.imshow) -def imshow( - X: ArrayLike | PIL.Image.Image, - cmap: str | Colormap | None = None, - norm: str | Normalize | None = None, - *, - aspect: Literal["equal", "auto"] | float | None = None, - interpolation: str | None = None, - alpha: float | ArrayLike | None = None, - vmin: float | None = None, - vmax: float | None = None, - origin: Literal["upper", "lower"] | None = None, - extent: tuple[float, float, float, float] | None = None, - interpolation_stage: Literal["data", "rgba"] | None = None, - filternorm: bool = True, - filterrad: float = 4.0, - resample: bool | None = None, - url: str | None = None, - data=None, - **kwargs, -) -> AxesImage: - __ret = gca().imshow( - X, - cmap=cmap, - norm=norm, - aspect=aspect, - interpolation=interpolation, - alpha=alpha, - vmin=vmin, - vmax=vmax, - origin=origin, - extent=extent, - interpolation_stage=interpolation_stage, - filternorm=filternorm, - filterrad=filterrad, - resample=resample, - url=url, - **({"data": data} if data is not None else {}), - **kwargs, - ) - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.legend) -def legend(*args, **kwargs) -> Legend: - return gca().legend(*args, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.locator_params) -def locator_params( - axis: Literal["both", "x", "y"] = "both", tight: bool | None = None, **kwargs -) -> None: - gca().locator_params(axis=axis, tight=tight, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.loglog) -def loglog(*args, **kwargs) -> list[Line2D]: - return gca().loglog(*args, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.magnitude_spectrum) -def magnitude_spectrum( - x: ArrayLike, - Fs: float | None = None, - Fc: int | None = None, - window: Callable[[ArrayLike], ArrayLike] | ArrayLike | None = None, - pad_to: int | None = None, - sides: Literal["default", "onesided", "twosided"] | None = None, - scale: Literal["default", "linear", "dB"] | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray, Line2D]: - return gca().magnitude_spectrum( - x, - Fs=Fs, - Fc=Fc, - window=window, - pad_to=pad_to, - sides=sides, - scale=scale, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.margins) -def margins( - *margins: float, - x: float | None = None, - y: float | None = None, - tight: bool | None = True, -) -> tuple[float, float] | None: - return gca().margins(*margins, x=x, y=y, tight=tight) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.minorticks_off) -def minorticks_off() -> None: - gca().minorticks_off() - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.minorticks_on) -def minorticks_on() -> None: - gca().minorticks_on() - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.pcolor) -def pcolor( - *args: ArrayLike, - shading: Literal["flat", "nearest", "auto"] | None = None, - alpha: float | None = None, - norm: str | Normalize | None = None, - cmap: str | Colormap | None = None, - vmin: float | None = None, - vmax: float | None = None, - data=None, - **kwargs, -) -> Collection: - __ret = gca().pcolor( - *args, - shading=shading, - alpha=alpha, - norm=norm, - cmap=cmap, - vmin=vmin, - vmax=vmax, - **({"data": data} if data is not None else {}), - **kwargs, - ) - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.pcolormesh) -def pcolormesh( - *args: ArrayLike, - alpha: float | None = None, - norm: str | Normalize | None = None, - cmap: str | Colormap | None = None, - vmin: float | None = None, - vmax: float | None = None, - shading: Literal["flat", "nearest", "gouraud", "auto"] | None = None, - antialiased: bool = False, - data=None, - **kwargs, -) -> QuadMesh: - __ret = gca().pcolormesh( - *args, - alpha=alpha, - norm=norm, - cmap=cmap, - vmin=vmin, - vmax=vmax, - shading=shading, - antialiased=antialiased, - **({"data": data} if data is not None else {}), - **kwargs, - ) - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.phase_spectrum) -def phase_spectrum( - x: ArrayLike, - Fs: float | None = None, - Fc: int | None = None, - window: Callable[[ArrayLike], ArrayLike] | ArrayLike | None = None, - pad_to: int | None = None, - sides: Literal["default", "onesided", "twosided"] | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray, Line2D]: - return gca().phase_spectrum( - x, - Fs=Fs, - Fc=Fc, - window=window, - pad_to=pad_to, - sides=sides, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.pie) -def pie( - x: ArrayLike, - explode: ArrayLike | None = None, - labels: Sequence[str] | None = None, - colors: ColorType | Sequence[ColorType] | None = None, - autopct: str | Callable[[float], str] | None = None, - pctdistance: float = 0.6, - shadow: bool = False, - labeldistance: float | None = 1.1, - startangle: float = 0, - radius: float = 1, - counterclock: bool = True, - wedgeprops: dict[str, Any] | None = None, - textprops: dict[str, Any] | None = None, - center: tuple[float, float] = (0, 0), - frame: bool = False, - rotatelabels: bool = False, - *, - normalize: bool = True, - hatch: str | Sequence[str] | None = None, - data=None, -) -> tuple[list[Wedge], list[Text]] | tuple[list[Wedge], list[Text], list[Text]]: - return gca().pie( - x, - explode=explode, - labels=labels, - colors=colors, - autopct=autopct, - pctdistance=pctdistance, - shadow=shadow, - labeldistance=labeldistance, - startangle=startangle, - radius=radius, - counterclock=counterclock, - wedgeprops=wedgeprops, - textprops=textprops, - center=center, - frame=frame, - rotatelabels=rotatelabels, - normalize=normalize, - hatch=hatch, - **({"data": data} if data is not None else {}), - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.plot) -def plot( - *args: float | ArrayLike | str, - scalex: bool = True, - scaley: bool = True, - data=None, - **kwargs, -) -> list[Line2D]: - return gca().plot( - *args, - scalex=scalex, - scaley=scaley, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.plot_date) -def plot_date( - x: ArrayLike, - y: ArrayLike, - fmt: str = "o", - tz: str | datetime.tzinfo | None = None, - xdate: bool = True, - ydate: bool = False, - *, - data=None, - **kwargs, -) -> list[Line2D]: - return gca().plot_date( - x, - y, - fmt=fmt, - tz=tz, - xdate=xdate, - ydate=ydate, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.psd) -def psd( - x: ArrayLike, - NFFT: int | None = None, - Fs: float | None = None, - Fc: int | None = None, - detrend: Literal["none", "mean", "linear"] - | Callable[[ArrayLike], ArrayLike] - | None = None, - window: Callable[[ArrayLike], ArrayLike] | ArrayLike | None = None, - noverlap: int | None = None, - pad_to: int | None = None, - sides: Literal["default", "onesided", "twosided"] | None = None, - scale_by_freq: bool | None = None, - return_line: bool | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray] | tuple[np.ndarray, np.ndarray, Line2D]: - return gca().psd( - x, - NFFT=NFFT, - Fs=Fs, - Fc=Fc, - detrend=detrend, - window=window, - noverlap=noverlap, - pad_to=pad_to, - sides=sides, - scale_by_freq=scale_by_freq, - return_line=return_line, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.quiver) -def quiver(*args, data=None, **kwargs) -> Quiver: - __ret = gca().quiver( - *args, **({"data": data} if data is not None else {}), **kwargs - ) - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.quiverkey) -def quiverkey( - Q: Quiver, X: float, Y: float, U: float, label: str, **kwargs -) -> QuiverKey: - return gca().quiverkey(Q, X, Y, U, label, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.scatter) -def scatter( - x: float | ArrayLike, - y: float | ArrayLike, - s: float | ArrayLike | None = None, - c: ArrayLike | Sequence[ColorType] | ColorType | None = None, - marker: MarkerType | None = None, - cmap: str | Colormap | None = None, - norm: str | Normalize | None = None, - vmin: float | None = None, - vmax: float | None = None, - alpha: float | None = None, - linewidths: float | Sequence[float] | None = None, - *, - edgecolors: Literal["face", "none"] | ColorType | Sequence[ColorType] | None = None, - plotnonfinite: bool = False, - data=None, - **kwargs, -) -> PathCollection: - __ret = gca().scatter( - x, - y, - s=s, - c=c, - marker=marker, - cmap=cmap, - norm=norm, - vmin=vmin, - vmax=vmax, - alpha=alpha, - linewidths=linewidths, - edgecolors=edgecolors, - plotnonfinite=plotnonfinite, - **({"data": data} if data is not None else {}), - **kwargs, - ) - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.semilogx) -def semilogx(*args, **kwargs) -> list[Line2D]: - return gca().semilogx(*args, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.semilogy) -def semilogy(*args, **kwargs) -> list[Line2D]: - return gca().semilogy(*args, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.specgram) -def specgram( - x: ArrayLike, - NFFT: int | None = None, - Fs: float | None = None, - Fc: int | None = None, - detrend: Literal["none", "mean", "linear"] - | Callable[[ArrayLike], ArrayLike] - | None = None, - window: Callable[[ArrayLike], ArrayLike] | ArrayLike | None = None, - noverlap: int | None = None, - cmap: str | Colormap | None = None, - xextent: tuple[float, float] | None = None, - pad_to: int | None = None, - sides: Literal["default", "onesided", "twosided"] | None = None, - scale_by_freq: bool | None = None, - mode: Literal["default", "psd", "magnitude", "angle", "phase"] | None = None, - scale: Literal["default", "linear", "dB"] | None = None, - vmin: float | None = None, - vmax: float | None = None, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray, np.ndarray, AxesImage]: - __ret = gca().specgram( - x, - NFFT=NFFT, - Fs=Fs, - Fc=Fc, - detrend=detrend, - window=window, - noverlap=noverlap, - cmap=cmap, - xextent=xextent, - pad_to=pad_to, - sides=sides, - scale_by_freq=scale_by_freq, - mode=mode, - scale=scale, - vmin=vmin, - vmax=vmax, - **({"data": data} if data is not None else {}), - **kwargs, - ) - sci(__ret[-1]) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.spy) -def spy( - Z: ArrayLike, - precision: float | Literal["present"] = 0, - marker: str | None = None, - markersize: float | None = None, - aspect: Literal["equal", "auto"] | float | None = "equal", - origin: Literal["upper", "lower"] = "upper", - **kwargs, -) -> AxesImage: - __ret = gca().spy( - Z, - precision=precision, - marker=marker, - markersize=markersize, - aspect=aspect, - origin=origin, - **kwargs, - ) - if isinstance(__ret, cm.ScalarMappable): - sci(__ret) # noqa - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.stackplot) -def stackplot(x, *args, labels=(), colors=None, baseline="zero", data=None, **kwargs): - return gca().stackplot( - x, - *args, - labels=labels, - colors=colors, - baseline=baseline, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.stem) -def stem( - *args: ArrayLike | str, - linefmt: str | None = None, - markerfmt: str | None = None, - basefmt: str | None = None, - bottom: float = 0, - label: str | None = None, - orientation: Literal["vertical", "horizontal"] = "vertical", - data=None, -) -> StemContainer: - return gca().stem( - *args, - linefmt=linefmt, - markerfmt=markerfmt, - basefmt=basefmt, - bottom=bottom, - label=label, - orientation=orientation, - **({"data": data} if data is not None else {}), - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.step) -def step( - x: ArrayLike, - y: ArrayLike, - *args, - where: Literal["pre", "post", "mid"] = "pre", - data=None, - **kwargs, -) -> list[Line2D]: - return gca().step( - x, - y, - *args, - where=where, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.streamplot) -def streamplot( - x, - y, - u, - v, - density=1, - linewidth=None, - color=None, - cmap=None, - norm=None, - arrowsize=1, - arrowstyle="-|>", - minlength=0.1, - transform=None, - zorder=None, - start_points=None, - maxlength=4.0, - integration_direction="both", - broken_streamlines=True, - *, - data=None, -): - __ret = gca().streamplot( - x, - y, - u, - v, - density=density, - linewidth=linewidth, - color=color, - cmap=cmap, - norm=norm, - arrowsize=arrowsize, - arrowstyle=arrowstyle, - minlength=minlength, - transform=transform, - zorder=zorder, - start_points=start_points, - maxlength=maxlength, - integration_direction=integration_direction, - broken_streamlines=broken_streamlines, - **({"data": data} if data is not None else {}), - ) - sci(__ret.lines) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.table) -def table( - cellText=None, - cellColours=None, - cellLoc="right", - colWidths=None, - rowLabels=None, - rowColours=None, - rowLoc="left", - colLabels=None, - colColours=None, - colLoc="center", - loc="bottom", - bbox=None, - edges="closed", - **kwargs, -): - return gca().table( - cellText=cellText, - cellColours=cellColours, - cellLoc=cellLoc, - colWidths=colWidths, - rowLabels=rowLabels, - rowColours=rowColours, - rowLoc=rowLoc, - colLabels=colLabels, - colColours=colColours, - colLoc=colLoc, - loc=loc, - bbox=bbox, - edges=edges, - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.text) -def text( - x: float, y: float, s: str, fontdict: dict[str, Any] | None = None, **kwargs -) -> Text: - return gca().text(x, y, s, fontdict=fontdict, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.tick_params) -def tick_params(axis: Literal["both", "x", "y"] = "both", **kwargs) -> None: - gca().tick_params(axis=axis, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.ticklabel_format) -def ticklabel_format( - *, - axis: Literal["both", "x", "y"] = "both", - style: Literal["", "sci", "scientific", "plain"] = "", - scilimits: tuple[int, int] | None = None, - useOffset: bool | float | None = None, - useLocale: bool | None = None, - useMathText: bool | None = None, -) -> None: - gca().ticklabel_format( - axis=axis, - style=style, - scilimits=scilimits, - useOffset=useOffset, - useLocale=useLocale, - useMathText=useMathText, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.tricontour) -def tricontour(*args, **kwargs): - __ret = gca().tricontour(*args, **kwargs) - if __ret._A is not None: # type: ignore[attr-defined] - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.tricontourf) -def tricontourf(*args, **kwargs): - __ret = gca().tricontourf(*args, **kwargs) - if __ret._A is not None: # type: ignore[attr-defined] - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.tripcolor) -def tripcolor( - *args, - alpha=1.0, - norm=None, - cmap=None, - vmin=None, - vmax=None, - shading="flat", - facecolors=None, - **kwargs, -): - __ret = gca().tripcolor( - *args, - alpha=alpha, - norm=norm, - cmap=cmap, - vmin=vmin, - vmax=vmax, - shading=shading, - facecolors=facecolors, - **kwargs, - ) - sci(__ret) - return __ret - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.triplot) -def triplot(*args, **kwargs): - return gca().triplot(*args, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.violinplot) -def violinplot( - dataset: ArrayLike | Sequence[ArrayLike], - positions: ArrayLike | None = None, - vert: bool = True, - widths: float | ArrayLike = 0.5, - showmeans: bool = False, - showextrema: bool = True, - showmedians: bool = False, - quantiles: Sequence[float | Sequence[float]] | None = None, - points: int = 100, - bw_method: Literal["scott", "silverman"] - | float - | Callable[[GaussianKDE], float] - | None = None, - *, - data=None, -) -> dict[str, Collection]: - return gca().violinplot( - dataset, - positions=positions, - vert=vert, - widths=widths, - showmeans=showmeans, - showextrema=showextrema, - showmedians=showmedians, - quantiles=quantiles, - points=points, - bw_method=bw_method, - **({"data": data} if data is not None else {}), - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.vlines) -def vlines( - x: float | ArrayLike, - ymin: float | ArrayLike, - ymax: float | ArrayLike, - colors: ColorType | Sequence[ColorType] | None = None, - linestyles: LineStyleType = "solid", - label: str = "", - *, - data=None, - **kwargs, -) -> LineCollection: - return gca().vlines( - x, - ymin, - ymax, - colors=colors, - linestyles=linestyles, - label=label, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.xcorr) -def xcorr( - x: ArrayLike, - y: ArrayLike, - normed: bool = True, - detrend: Callable[[ArrayLike], ArrayLike] = mlab.detrend_none, - usevlines: bool = True, - maxlags: int = 10, - *, - data=None, - **kwargs, -) -> tuple[np.ndarray, np.ndarray, LineCollection | Line2D, Line2D | None]: - return gca().xcorr( - x, - y, - normed=normed, - detrend=detrend, - usevlines=usevlines, - maxlags=maxlags, - **({"data": data} if data is not None else {}), - **kwargs, - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes._sci) -def sci(im: ScalarMappable) -> None: - gca()._sci(im) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.set_title) -def title( - label: str, - fontdict: dict[str, Any] | None = None, - loc: Literal["left", "center", "right"] | None = None, - pad: float | None = None, - *, - y: float | None = None, - **kwargs, -) -> Text: - return gca().set_title(label, fontdict=fontdict, loc=loc, pad=pad, y=y, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.set_xlabel) -def xlabel( - xlabel: str, - fontdict: dict[str, Any] | None = None, - labelpad: float | None = None, - *, - loc: Literal["left", "center", "right"] | None = None, - **kwargs, -) -> Text: - return gca().set_xlabel( - xlabel, fontdict=fontdict, labelpad=labelpad, loc=loc, **kwargs - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.set_ylabel) -def ylabel( - ylabel: str, - fontdict: dict[str, Any] | None = None, - labelpad: float | None = None, - *, - loc: Literal["bottom", "center", "top"] | None = None, - **kwargs, -) -> Text: - return gca().set_ylabel( - ylabel, fontdict=fontdict, labelpad=labelpad, loc=loc, **kwargs - ) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.set_xscale) -def xscale(value: str | ScaleBase, **kwargs) -> None: - gca().set_xscale(value, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -@_copy_docstring_and_deprecators(Axes.set_yscale) -def yscale(value: str | ScaleBase, **kwargs) -> None: - gca().set_yscale(value, **kwargs) - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def autumn() -> None: - """ - Set the colormap to 'autumn'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("autumn") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def bone() -> None: - """ - Set the colormap to 'bone'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("bone") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def cool() -> None: - """ - Set the colormap to 'cool'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("cool") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def copper() -> None: - """ - Set the colormap to 'copper'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("copper") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def flag() -> None: - """ - Set the colormap to 'flag'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("flag") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def gray() -> None: - """ - Set the colormap to 'gray'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("gray") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def hot() -> None: - """ - Set the colormap to 'hot'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("hot") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def hsv() -> None: - """ - Set the colormap to 'hsv'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("hsv") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def jet() -> None: - """ - Set the colormap to 'jet'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("jet") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def pink() -> None: - """ - Set the colormap to 'pink'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("pink") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def prism() -> None: - """ - Set the colormap to 'prism'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("prism") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def spring() -> None: - """ - Set the colormap to 'spring'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("spring") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def summer() -> None: - """ - Set the colormap to 'summer'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("summer") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def winter() -> None: - """ - Set the colormap to 'winter'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("winter") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def magma() -> None: - """ - Set the colormap to 'magma'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("magma") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def inferno() -> None: - """ - Set the colormap to 'inferno'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("inferno") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def plasma() -> None: - """ - Set the colormap to 'plasma'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("plasma") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def viridis() -> None: - """ - Set the colormap to 'viridis'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("viridis") - - -# Autogenerated by boilerplate.py. Do not edit as changes will be lost. -def nipy_spectral() -> None: - """ - Set the colormap to 'nipy_spectral'. - - This changes the default colormap as well as the colormap of the current - image if there is one. See ``help(colormaps)`` for more information. - """ - set_cmap("nipy_spectral") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_determinism.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_determinism.py deleted file mode 100644 index fe0fb34e128ac85234e9385217222b16881b2a98..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_determinism.py +++ /dev/null @@ -1,138 +0,0 @@ -""" -Test output reproducibility. -""" - -import os -import subprocess -import sys - -import pytest - -import matplotlib as mpl -import matplotlib.testing.compare -from matplotlib import pyplot as plt -from matplotlib.testing._markers import needs_ghostscript, needs_usetex - - -def _save_figure(objects='mhi', fmt="pdf", usetex=False): - mpl.use(fmt) - mpl.rcParams.update({'svg.hashsalt': 'asdf', 'text.usetex': usetex}) - - fig = plt.figure() - - if 'm' in objects: - # use different markers... - ax1 = fig.add_subplot(1, 6, 1) - x = range(10) - ax1.plot(x, [1] * 10, marker='D') - ax1.plot(x, [2] * 10, marker='x') - ax1.plot(x, [3] * 10, marker='^') - ax1.plot(x, [4] * 10, marker='H') - ax1.plot(x, [5] * 10, marker='v') - - if 'h' in objects: - # also use different hatch patterns - ax2 = fig.add_subplot(1, 6, 2) - bars = (ax2.bar(range(1, 5), range(1, 5)) + - ax2.bar(range(1, 5), [6] * 4, bottom=range(1, 5))) - ax2.set_xticks([1.5, 2.5, 3.5, 4.5]) - - patterns = ('-', '+', 'x', '\\', '*', 'o', 'O', '.') - for bar, pattern in zip(bars, patterns): - bar.set_hatch(pattern) - - if 'i' in objects: - # also use different images - A = [[1, 2, 3], [2, 3, 1], [3, 1, 2]] - fig.add_subplot(1, 6, 3).imshow(A, interpolation='nearest') - A = [[1, 3, 2], [1, 2, 3], [3, 1, 2]] - fig.add_subplot(1, 6, 4).imshow(A, interpolation='bilinear') - A = [[2, 3, 1], [1, 2, 3], [2, 1, 3]] - fig.add_subplot(1, 6, 5).imshow(A, interpolation='bicubic') - - x = range(5) - ax = fig.add_subplot(1, 6, 6) - ax.plot(x, x) - ax.set_title('A string $1+2+\\sigma$') - ax.set_xlabel('A string $1+2+\\sigma$') - ax.set_ylabel('A string $1+2+\\sigma$') - - stdout = getattr(sys.stdout, 'buffer', sys.stdout) - fig.savefig(stdout, format=fmt) - - -@pytest.mark.parametrize( - "objects, fmt, usetex", [ - ("", "pdf", False), - ("m", "pdf", False), - ("h", "pdf", False), - ("i", "pdf", False), - ("mhi", "pdf", False), - ("mhi", "ps", False), - pytest.param( - "mhi", "ps", True, marks=[needs_usetex, needs_ghostscript]), - ("mhi", "svg", False), - pytest.param("mhi", "svg", True, marks=needs_usetex), - ] -) -def test_determinism_check(objects, fmt, usetex): - """ - Output three times the same graphs and checks that the outputs are exactly - the same. - - Parameters - ---------- - objects : str - Objects to be included in the test document: 'm' for markers, 'h' for - hatch patterns, 'i' for images. - fmt : {"pdf", "ps", "svg"} - Output format. - """ - plots = [ - subprocess.check_output( - [sys.executable, "-R", "-c", - f"from matplotlib.tests.test_determinism import _save_figure;" - f"_save_figure({objects!r}, {fmt!r}, {usetex})"], - env={**os.environ, "SOURCE_DATE_EPOCH": "946684800", - "MPLBACKEND": "Agg"}) - for _ in range(3) - ] - for p in plots[1:]: - if fmt == "ps" and usetex: - if p != plots[0]: - pytest.skip("failed, maybe due to ghostscript timestamps") - else: - assert p == plots[0] - - -@pytest.mark.parametrize( - "fmt, string", [ - ("pdf", b"/CreationDate (D:20000101000000Z)"), - # SOURCE_DATE_EPOCH support is not tested with text.usetex, - # because the produced timestamp comes from ghostscript: - # %%CreationDate: D:20000101000000Z00\'00\', and this could change - # with another ghostscript version. - ("ps", b"%%CreationDate: Sat Jan 01 00:00:00 2000"), - ] -) -def test_determinism_source_date_epoch(fmt, string): - """ - Test SOURCE_DATE_EPOCH support. Output a document with the environment - variable SOURCE_DATE_EPOCH set to 2000-01-01 00:00 UTC and check that the - document contains the timestamp that corresponds to this date (given as an - argument). - - Parameters - ---------- - fmt : {"pdf", "ps", "svg"} - Output format. - string : bytes - Timestamp string for 2000-01-01 00:00 UTC. - """ - buf = subprocess.check_output( - [sys.executable, "-R", "-c", - f"from matplotlib.tests.test_determinism import _save_figure; " - f"_save_figure('', {fmt!r})"], - env={**os.environ, "SOURCE_DATE_EPOCH": "946684800", - "MPLBACKEND": "Agg"}) - assert string in buf diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_typing/setup.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_typing/setup.py deleted file mode 100644 index 24022fdaa32708150cd5d1dcfe586eb33fb7175e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_typing/setup.py +++ /dev/null @@ -1,10 +0,0 @@ -def configuration(parent_package='', top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('_typing', parent_package, top_path) - config.add_data_files('*.pyi') - return config - - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/multiarray.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/multiarray.py deleted file mode 100644 index 26cedfd77566e7b7865345e0775af88153e74ffc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/multiarray.py +++ /dev/null @@ -1,76 +0,0 @@ -import numpy as np -import numpy.typing as npt - -AR_f8: npt.NDArray[np.float64] = np.array([1.0]) -AR_i4 = np.array([1], dtype=np.int32) -AR_u1 = np.array([1], dtype=np.uint8) - -AR_LIKE_f = [1.5] -AR_LIKE_i = [1] - -b_f8 = np.broadcast(AR_f8) -b_i4_f8_f8 = np.broadcast(AR_i4, AR_f8, AR_f8) - -next(b_f8) -b_f8.reset() -b_f8.index -b_f8.iters -b_f8.nd -b_f8.ndim -b_f8.numiter -b_f8.shape -b_f8.size - -next(b_i4_f8_f8) -b_i4_f8_f8.reset() -b_i4_f8_f8.ndim -b_i4_f8_f8.index -b_i4_f8_f8.iters -b_i4_f8_f8.nd -b_i4_f8_f8.numiter -b_i4_f8_f8.shape -b_i4_f8_f8.size - -np.inner(AR_f8, AR_i4) - -np.where([True, True, False]) -np.where([True, True, False], 1, 0) - -np.lexsort([0, 1, 2]) - -np.can_cast(np.dtype("i8"), int) -np.can_cast(AR_f8, "f8") -np.can_cast(AR_f8, np.complex128, casting="unsafe") - -np.min_scalar_type([1]) -np.min_scalar_type(AR_f8) - -np.result_type(int, AR_i4) -np.result_type(AR_f8, AR_u1) -np.result_type(AR_f8, np.complex128) - -np.dot(AR_LIKE_f, AR_i4) -np.dot(AR_u1, 1) -np.dot(1.5j, 1) -np.dot(AR_u1, 1, out=AR_f8) - -np.vdot(AR_LIKE_f, AR_i4) -np.vdot(AR_u1, 1) -np.vdot(1.5j, 1) - -np.bincount(AR_i4) - -np.copyto(AR_f8, [1.6]) - -np.putmask(AR_f8, [True], 1.5) - -np.packbits(AR_i4) -np.packbits(AR_u1) - -np.unpackbits(AR_u1) - -np.shares_memory(1, 2) -np.shares_memory(AR_f8, AR_f8, max_work=1) - -np.may_share_memory(1, 2) -np.may_share_memory(AR_f8, AR_f8, max_work=1) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/info.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/info.py deleted file mode 100644 index d20c2a62c61e2f2f419622159d0b6de9604c6ff2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/info.py +++ /dev/null @@ -1,1101 +0,0 @@ -from __future__ import annotations - -from abc import ( - ABC, - abstractmethod, -) -import sys -from textwrap import dedent -from typing import TYPE_CHECKING - -from pandas._config import get_option - -from pandas.io.formats import format as fmt -from pandas.io.formats.printing import pprint_thing - -if TYPE_CHECKING: - from collections.abc import ( - Iterable, - Iterator, - Mapping, - Sequence, - ) - - from pandas._typing import ( - Dtype, - WriteBuffer, - ) - - from pandas import ( - DataFrame, - Index, - Series, - ) - - -frame_max_cols_sub = dedent( - """\ - max_cols : int, optional - When to switch from the verbose to the truncated output. If the - DataFrame has more than `max_cols` columns, the truncated output - is used. By default, the setting in - ``pandas.options.display.max_info_columns`` is used.""" -) - - -show_counts_sub = dedent( - """\ - show_counts : bool, optional - Whether to show the non-null counts. By default, this is shown - only if the DataFrame is smaller than - ``pandas.options.display.max_info_rows`` and - ``pandas.options.display.max_info_columns``. A value of True always - shows the counts, and False never shows the counts.""" -) - - -frame_examples_sub = dedent( - """\ - >>> int_values = [1, 2, 3, 4, 5] - >>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon'] - >>> float_values = [0.0, 0.25, 0.5, 0.75, 1.0] - >>> df = pd.DataFrame({"int_col": int_values, "text_col": text_values, - ... "float_col": float_values}) - >>> df - int_col text_col float_col - 0 1 alpha 0.00 - 1 2 beta 0.25 - 2 3 gamma 0.50 - 3 4 delta 0.75 - 4 5 epsilon 1.00 - - Prints information of all columns: - - >>> df.info(verbose=True) - - RangeIndex: 5 entries, 0 to 4 - Data columns (total 3 columns): - # Column Non-Null Count Dtype - --- ------ -------------- ----- - 0 int_col 5 non-null int64 - 1 text_col 5 non-null object - 2 float_col 5 non-null float64 - dtypes: float64(1), int64(1), object(1) - memory usage: 248.0+ bytes - - Prints a summary of columns count and its dtypes but not per column - information: - - >>> df.info(verbose=False) - - RangeIndex: 5 entries, 0 to 4 - Columns: 3 entries, int_col to float_col - dtypes: float64(1), int64(1), object(1) - memory usage: 248.0+ bytes - - Pipe output of DataFrame.info to buffer instead of sys.stdout, get - buffer content and writes to a text file: - - >>> import io - >>> buffer = io.StringIO() - >>> df.info(buf=buffer) - >>> s = buffer.getvalue() - >>> with open("df_info.txt", "w", - ... encoding="utf-8") as f: # doctest: +SKIP - ... f.write(s) - 260 - - The `memory_usage` parameter allows deep introspection mode, specially - useful for big DataFrames and fine-tune memory optimization: - - >>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6) - >>> df = pd.DataFrame({ - ... 'column_1': np.random.choice(['a', 'b', 'c'], 10 ** 6), - ... 'column_2': np.random.choice(['a', 'b', 'c'], 10 ** 6), - ... 'column_3': np.random.choice(['a', 'b', 'c'], 10 ** 6) - ... }) - >>> df.info() - - RangeIndex: 1000000 entries, 0 to 999999 - Data columns (total 3 columns): - # Column Non-Null Count Dtype - --- ------ -------------- ----- - 0 column_1 1000000 non-null object - 1 column_2 1000000 non-null object - 2 column_3 1000000 non-null object - dtypes: object(3) - memory usage: 22.9+ MB - - >>> df.info(memory_usage='deep') - - RangeIndex: 1000000 entries, 0 to 999999 - Data columns (total 3 columns): - # Column Non-Null Count Dtype - --- ------ -------------- ----- - 0 column_1 1000000 non-null object - 1 column_2 1000000 non-null object - 2 column_3 1000000 non-null object - dtypes: object(3) - memory usage: 165.9 MB""" -) - - -frame_see_also_sub = dedent( - """\ - DataFrame.describe: Generate descriptive statistics of DataFrame - columns. - DataFrame.memory_usage: Memory usage of DataFrame columns.""" -) - - -frame_sub_kwargs = { - "klass": "DataFrame", - "type_sub": " and columns", - "max_cols_sub": frame_max_cols_sub, - "show_counts_sub": show_counts_sub, - "examples_sub": frame_examples_sub, - "see_also_sub": frame_see_also_sub, - "version_added_sub": "", -} - - -series_examples_sub = dedent( - """\ - >>> int_values = [1, 2, 3, 4, 5] - >>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon'] - >>> s = pd.Series(text_values, index=int_values) - >>> s.info() - - Index: 5 entries, 1 to 5 - Series name: None - Non-Null Count Dtype - -------------- ----- - 5 non-null object - dtypes: object(1) - memory usage: 80.0+ bytes - - Prints a summary excluding information about its values: - - >>> s.info(verbose=False) - - Index: 5 entries, 1 to 5 - dtypes: object(1) - memory usage: 80.0+ bytes - - Pipe output of Series.info to buffer instead of sys.stdout, get - buffer content and writes to a text file: - - >>> import io - >>> buffer = io.StringIO() - >>> s.info(buf=buffer) - >>> s = buffer.getvalue() - >>> with open("df_info.txt", "w", - ... encoding="utf-8") as f: # doctest: +SKIP - ... f.write(s) - 260 - - The `memory_usage` parameter allows deep introspection mode, specially - useful for big Series and fine-tune memory optimization: - - >>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6) - >>> s = pd.Series(np.random.choice(['a', 'b', 'c'], 10 ** 6)) - >>> s.info() - - RangeIndex: 1000000 entries, 0 to 999999 - Series name: None - Non-Null Count Dtype - -------------- ----- - 1000000 non-null object - dtypes: object(1) - memory usage: 7.6+ MB - - >>> s.info(memory_usage='deep') - - RangeIndex: 1000000 entries, 0 to 999999 - Series name: None - Non-Null Count Dtype - -------------- ----- - 1000000 non-null object - dtypes: object(1) - memory usage: 55.3 MB""" -) - - -series_see_also_sub = dedent( - """\ - Series.describe: Generate descriptive statistics of Series. - Series.memory_usage: Memory usage of Series.""" -) - - -series_sub_kwargs = { - "klass": "Series", - "type_sub": "", - "max_cols_sub": "", - "show_counts_sub": show_counts_sub, - "examples_sub": series_examples_sub, - "see_also_sub": series_see_also_sub, - "version_added_sub": "\n.. versionadded:: 1.4.0\n", -} - - -INFO_DOCSTRING = dedent( - """ - Print a concise summary of a {klass}. - - This method prints information about a {klass} including - the index dtype{type_sub}, non-null values and memory usage. - {version_added_sub}\ - - Parameters - ---------- - verbose : bool, optional - Whether to print the full summary. By default, the setting in - ``pandas.options.display.max_info_columns`` is followed. - buf : writable buffer, defaults to sys.stdout - Where to send the output. By default, the output is printed to - sys.stdout. Pass a writable buffer if you need to further process - the output. - {max_cols_sub} - memory_usage : bool, str, optional - Specifies whether total memory usage of the {klass} - elements (including the index) should be displayed. By default, - this follows the ``pandas.options.display.memory_usage`` setting. - - True always show memory usage. False never shows memory usage. - A value of 'deep' is equivalent to "True with deep introspection". - Memory usage is shown in human-readable units (base-2 - representation). Without deep introspection a memory estimation is - made based in column dtype and number of rows assuming values - consume the same memory amount for corresponding dtypes. With deep - memory introspection, a real memory usage calculation is performed - at the cost of computational resources. See the - :ref:`Frequently Asked Questions ` for more - details. - {show_counts_sub} - - Returns - ------- - None - This method prints a summary of a {klass} and returns None. - - See Also - -------- - {see_also_sub} - - Examples - -------- - {examples_sub} - """ -) - - -def _put_str(s: str | Dtype, space: int) -> str: - """ - Make string of specified length, padding to the right if necessary. - - Parameters - ---------- - s : Union[str, Dtype] - String to be formatted. - space : int - Length to force string to be of. - - Returns - ------- - str - String coerced to given length. - - Examples - -------- - >>> pd.io.formats.info._put_str("panda", 6) - 'panda ' - >>> pd.io.formats.info._put_str("panda", 4) - 'pand' - """ - return str(s)[:space].ljust(space) - - -def _sizeof_fmt(num: float, size_qualifier: str) -> str: - """ - Return size in human readable format. - - Parameters - ---------- - num : int - Size in bytes. - size_qualifier : str - Either empty, or '+' (if lower bound). - - Returns - ------- - str - Size in human readable format. - - Examples - -------- - >>> _sizeof_fmt(23028, '') - '22.5 KB' - - >>> _sizeof_fmt(23028, '+') - '22.5+ KB' - """ - for x in ["bytes", "KB", "MB", "GB", "TB"]: - if num < 1024.0: - return f"{num:3.1f}{size_qualifier} {x}" - num /= 1024.0 - return f"{num:3.1f}{size_qualifier} PB" - - -def _initialize_memory_usage( - memory_usage: bool | str | None = None, -) -> bool | str: - """Get memory usage based on inputs and display options.""" - if memory_usage is None: - memory_usage = get_option("display.memory_usage") - return memory_usage - - -class BaseInfo(ABC): - """ - Base class for DataFrameInfo and SeriesInfo. - - Parameters - ---------- - data : DataFrame or Series - Either dataframe or series. - memory_usage : bool or str, optional - If "deep", introspect the data deeply by interrogating object dtypes - for system-level memory consumption, and include it in the returned - values. - """ - - data: DataFrame | Series - memory_usage: bool | str - - @property - @abstractmethod - def dtypes(self) -> Iterable[Dtype]: - """ - Dtypes. - - Returns - ------- - dtypes : sequence - Dtype of each of the DataFrame's columns (or one series column). - """ - - @property - @abstractmethod - def dtype_counts(self) -> Mapping[str, int]: - """Mapping dtype - number of counts.""" - - @property - @abstractmethod - def non_null_counts(self) -> Sequence[int]: - """Sequence of non-null counts for all columns or column (if series).""" - - @property - @abstractmethod - def memory_usage_bytes(self) -> int: - """ - Memory usage in bytes. - - Returns - ------- - memory_usage_bytes : int - Object's total memory usage in bytes. - """ - - @property - def memory_usage_string(self) -> str: - """Memory usage in a form of human readable string.""" - return f"{_sizeof_fmt(self.memory_usage_bytes, self.size_qualifier)}\n" - - @property - def size_qualifier(self) -> str: - size_qualifier = "" - if self.memory_usage: - if self.memory_usage != "deep": - # size_qualifier is just a best effort; not guaranteed to catch - # all cases (e.g., it misses categorical data even with object - # categories) - if ( - "object" in self.dtype_counts - or self.data.index._is_memory_usage_qualified() - ): - size_qualifier = "+" - return size_qualifier - - @abstractmethod - def render( - self, - *, - buf: WriteBuffer[str] | None, - max_cols: int | None, - verbose: bool | None, - show_counts: bool | None, - ) -> None: - pass - - -class DataFrameInfo(BaseInfo): - """ - Class storing dataframe-specific info. - """ - - def __init__( - self, - data: DataFrame, - memory_usage: bool | str | None = None, - ) -> None: - self.data: DataFrame = data - self.memory_usage = _initialize_memory_usage(memory_usage) - - @property - def dtype_counts(self) -> Mapping[str, int]: - return _get_dataframe_dtype_counts(self.data) - - @property - def dtypes(self) -> Iterable[Dtype]: - """ - Dtypes. - - Returns - ------- - dtypes - Dtype of each of the DataFrame's columns. - """ - return self.data.dtypes - - @property - def ids(self) -> Index: - """ - Column names. - - Returns - ------- - ids : Index - DataFrame's column names. - """ - return self.data.columns - - @property - def col_count(self) -> int: - """Number of columns to be summarized.""" - return len(self.ids) - - @property - def non_null_counts(self) -> Sequence[int]: - """Sequence of non-null counts for all columns or column (if series).""" - return self.data.count() - - @property - def memory_usage_bytes(self) -> int: - deep = self.memory_usage == "deep" - return self.data.memory_usage(index=True, deep=deep).sum() - - def render( - self, - *, - buf: WriteBuffer[str] | None, - max_cols: int | None, - verbose: bool | None, - show_counts: bool | None, - ) -> None: - printer = DataFrameInfoPrinter( - info=self, - max_cols=max_cols, - verbose=verbose, - show_counts=show_counts, - ) - printer.to_buffer(buf) - - -class SeriesInfo(BaseInfo): - """ - Class storing series-specific info. - """ - - def __init__( - self, - data: Series, - memory_usage: bool | str | None = None, - ) -> None: - self.data: Series = data - self.memory_usage = _initialize_memory_usage(memory_usage) - - def render( - self, - *, - buf: WriteBuffer[str] | None = None, - max_cols: int | None = None, - verbose: bool | None = None, - show_counts: bool | None = None, - ) -> None: - if max_cols is not None: - raise ValueError( - "Argument `max_cols` can only be passed " - "in DataFrame.info, not Series.info" - ) - printer = SeriesInfoPrinter( - info=self, - verbose=verbose, - show_counts=show_counts, - ) - printer.to_buffer(buf) - - @property - def non_null_counts(self) -> Sequence[int]: - return [self.data.count()] - - @property - def dtypes(self) -> Iterable[Dtype]: - return [self.data.dtypes] - - @property - def dtype_counts(self) -> Mapping[str, int]: - from pandas.core.frame import DataFrame - - return _get_dataframe_dtype_counts(DataFrame(self.data)) - - @property - def memory_usage_bytes(self) -> int: - """Memory usage in bytes. - - Returns - ------- - memory_usage_bytes : int - Object's total memory usage in bytes. - """ - deep = self.memory_usage == "deep" - return self.data.memory_usage(index=True, deep=deep) - - -class InfoPrinterAbstract: - """ - Class for printing dataframe or series info. - """ - - def to_buffer(self, buf: WriteBuffer[str] | None = None) -> None: - """Save dataframe info into buffer.""" - table_builder = self._create_table_builder() - lines = table_builder.get_lines() - if buf is None: # pragma: no cover - buf = sys.stdout - fmt.buffer_put_lines(buf, lines) - - @abstractmethod - def _create_table_builder(self) -> TableBuilderAbstract: - """Create instance of table builder.""" - - -class DataFrameInfoPrinter(InfoPrinterAbstract): - """ - Class for printing dataframe info. - - Parameters - ---------- - info : DataFrameInfo - Instance of DataFrameInfo. - max_cols : int, optional - When to switch from the verbose to the truncated output. - verbose : bool, optional - Whether to print the full summary. - show_counts : bool, optional - Whether to show the non-null counts. - """ - - def __init__( - self, - info: DataFrameInfo, - max_cols: int | None = None, - verbose: bool | None = None, - show_counts: bool | None = None, - ) -> None: - self.info = info - self.data = info.data - self.verbose = verbose - self.max_cols = self._initialize_max_cols(max_cols) - self.show_counts = self._initialize_show_counts(show_counts) - - @property - def max_rows(self) -> int: - """Maximum info rows to be displayed.""" - return get_option("display.max_info_rows", len(self.data) + 1) - - @property - def exceeds_info_cols(self) -> bool: - """Check if number of columns to be summarized does not exceed maximum.""" - return bool(self.col_count > self.max_cols) - - @property - def exceeds_info_rows(self) -> bool: - """Check if number of rows to be summarized does not exceed maximum.""" - return bool(len(self.data) > self.max_rows) - - @property - def col_count(self) -> int: - """Number of columns to be summarized.""" - return self.info.col_count - - def _initialize_max_cols(self, max_cols: int | None) -> int: - if max_cols is None: - return get_option("display.max_info_columns", self.col_count + 1) - return max_cols - - def _initialize_show_counts(self, show_counts: bool | None) -> bool: - if show_counts is None: - return bool(not self.exceeds_info_cols and not self.exceeds_info_rows) - else: - return show_counts - - def _create_table_builder(self) -> DataFrameTableBuilder: - """ - Create instance of table builder based on verbosity and display settings. - """ - if self.verbose: - return DataFrameTableBuilderVerbose( - info=self.info, - with_counts=self.show_counts, - ) - elif self.verbose is False: # specifically set to False, not necessarily None - return DataFrameTableBuilderNonVerbose(info=self.info) - elif self.exceeds_info_cols: - return DataFrameTableBuilderNonVerbose(info=self.info) - else: - return DataFrameTableBuilderVerbose( - info=self.info, - with_counts=self.show_counts, - ) - - -class SeriesInfoPrinter(InfoPrinterAbstract): - """Class for printing series info. - - Parameters - ---------- - info : SeriesInfo - Instance of SeriesInfo. - verbose : bool, optional - Whether to print the full summary. - show_counts : bool, optional - Whether to show the non-null counts. - """ - - def __init__( - self, - info: SeriesInfo, - verbose: bool | None = None, - show_counts: bool | None = None, - ) -> None: - self.info = info - self.data = info.data - self.verbose = verbose - self.show_counts = self._initialize_show_counts(show_counts) - - def _create_table_builder(self) -> SeriesTableBuilder: - """ - Create instance of table builder based on verbosity. - """ - if self.verbose or self.verbose is None: - return SeriesTableBuilderVerbose( - info=self.info, - with_counts=self.show_counts, - ) - else: - return SeriesTableBuilderNonVerbose(info=self.info) - - def _initialize_show_counts(self, show_counts: bool | None) -> bool: - if show_counts is None: - return True - else: - return show_counts - - -class TableBuilderAbstract(ABC): - """ - Abstract builder for info table. - """ - - _lines: list[str] - info: BaseInfo - - @abstractmethod - def get_lines(self) -> list[str]: - """Product in a form of list of lines (strings).""" - - @property - def data(self) -> DataFrame | Series: - return self.info.data - - @property - def dtypes(self) -> Iterable[Dtype]: - """Dtypes of each of the DataFrame's columns.""" - return self.info.dtypes - - @property - def dtype_counts(self) -> Mapping[str, int]: - """Mapping dtype - number of counts.""" - return self.info.dtype_counts - - @property - def display_memory_usage(self) -> bool: - """Whether to display memory usage.""" - return bool(self.info.memory_usage) - - @property - def memory_usage_string(self) -> str: - """Memory usage string with proper size qualifier.""" - return self.info.memory_usage_string - - @property - def non_null_counts(self) -> Sequence[int]: - return self.info.non_null_counts - - def add_object_type_line(self) -> None: - """Add line with string representation of dataframe to the table.""" - self._lines.append(str(type(self.data))) - - def add_index_range_line(self) -> None: - """Add line with range of indices to the table.""" - self._lines.append(self.data.index._summary()) - - def add_dtypes_line(self) -> None: - """Add summary line with dtypes present in dataframe.""" - collected_dtypes = [ - f"{key}({val:d})" for key, val in sorted(self.dtype_counts.items()) - ] - self._lines.append(f"dtypes: {', '.join(collected_dtypes)}") - - -class DataFrameTableBuilder(TableBuilderAbstract): - """ - Abstract builder for dataframe info table. - - Parameters - ---------- - info : DataFrameInfo. - Instance of DataFrameInfo. - """ - - def __init__(self, *, info: DataFrameInfo) -> None: - self.info: DataFrameInfo = info - - def get_lines(self) -> list[str]: - self._lines = [] - if self.col_count == 0: - self._fill_empty_info() - else: - self._fill_non_empty_info() - return self._lines - - def _fill_empty_info(self) -> None: - """Add lines to the info table, pertaining to empty dataframe.""" - self.add_object_type_line() - self.add_index_range_line() - self._lines.append(f"Empty {type(self.data).__name__}\n") - - @abstractmethod - def _fill_non_empty_info(self) -> None: - """Add lines to the info table, pertaining to non-empty dataframe.""" - - @property - def data(self) -> DataFrame: - """DataFrame.""" - return self.info.data - - @property - def ids(self) -> Index: - """Dataframe columns.""" - return self.info.ids - - @property - def col_count(self) -> int: - """Number of dataframe columns to be summarized.""" - return self.info.col_count - - def add_memory_usage_line(self) -> None: - """Add line containing memory usage.""" - self._lines.append(f"memory usage: {self.memory_usage_string}") - - -class DataFrameTableBuilderNonVerbose(DataFrameTableBuilder): - """ - Dataframe info table builder for non-verbose output. - """ - - def _fill_non_empty_info(self) -> None: - """Add lines to the info table, pertaining to non-empty dataframe.""" - self.add_object_type_line() - self.add_index_range_line() - self.add_columns_summary_line() - self.add_dtypes_line() - if self.display_memory_usage: - self.add_memory_usage_line() - - def add_columns_summary_line(self) -> None: - self._lines.append(self.ids._summary(name="Columns")) - - -class TableBuilderVerboseMixin(TableBuilderAbstract): - """ - Mixin for verbose info output. - """ - - SPACING: str = " " * 2 - strrows: Sequence[Sequence[str]] - gross_column_widths: Sequence[int] - with_counts: bool - - @property - @abstractmethod - def headers(self) -> Sequence[str]: - """Headers names of the columns in verbose table.""" - - @property - def header_column_widths(self) -> Sequence[int]: - """Widths of header columns (only titles).""" - return [len(col) for col in self.headers] - - def _get_gross_column_widths(self) -> Sequence[int]: - """Get widths of columns containing both headers and actual content.""" - body_column_widths = self._get_body_column_widths() - return [ - max(*widths) - for widths in zip(self.header_column_widths, body_column_widths) - ] - - def _get_body_column_widths(self) -> Sequence[int]: - """Get widths of table content columns.""" - strcols: Sequence[Sequence[str]] = list(zip(*self.strrows)) - return [max(len(x) for x in col) for col in strcols] - - def _gen_rows(self) -> Iterator[Sequence[str]]: - """ - Generator function yielding rows content. - - Each element represents a row comprising a sequence of strings. - """ - if self.with_counts: - return self._gen_rows_with_counts() - else: - return self._gen_rows_without_counts() - - @abstractmethod - def _gen_rows_with_counts(self) -> Iterator[Sequence[str]]: - """Iterator with string representation of body data with counts.""" - - @abstractmethod - def _gen_rows_without_counts(self) -> Iterator[Sequence[str]]: - """Iterator with string representation of body data without counts.""" - - def add_header_line(self) -> None: - header_line = self.SPACING.join( - [ - _put_str(header, col_width) - for header, col_width in zip(self.headers, self.gross_column_widths) - ] - ) - self._lines.append(header_line) - - def add_separator_line(self) -> None: - separator_line = self.SPACING.join( - [ - _put_str("-" * header_colwidth, gross_colwidth) - for header_colwidth, gross_colwidth in zip( - self.header_column_widths, self.gross_column_widths - ) - ] - ) - self._lines.append(separator_line) - - def add_body_lines(self) -> None: - for row in self.strrows: - body_line = self.SPACING.join( - [ - _put_str(col, gross_colwidth) - for col, gross_colwidth in zip(row, self.gross_column_widths) - ] - ) - self._lines.append(body_line) - - def _gen_non_null_counts(self) -> Iterator[str]: - """Iterator with string representation of non-null counts.""" - for count in self.non_null_counts: - yield f"{count} non-null" - - def _gen_dtypes(self) -> Iterator[str]: - """Iterator with string representation of column dtypes.""" - for dtype in self.dtypes: - yield pprint_thing(dtype) - - -class DataFrameTableBuilderVerbose(DataFrameTableBuilder, TableBuilderVerboseMixin): - """ - Dataframe info table builder for verbose output. - """ - - def __init__( - self, - *, - info: DataFrameInfo, - with_counts: bool, - ) -> None: - self.info = info - self.with_counts = with_counts - self.strrows: Sequence[Sequence[str]] = list(self._gen_rows()) - self.gross_column_widths: Sequence[int] = self._get_gross_column_widths() - - def _fill_non_empty_info(self) -> None: - """Add lines to the info table, pertaining to non-empty dataframe.""" - self.add_object_type_line() - self.add_index_range_line() - self.add_columns_summary_line() - self.add_header_line() - self.add_separator_line() - self.add_body_lines() - self.add_dtypes_line() - if self.display_memory_usage: - self.add_memory_usage_line() - - @property - def headers(self) -> Sequence[str]: - """Headers names of the columns in verbose table.""" - if self.with_counts: - return [" # ", "Column", "Non-Null Count", "Dtype"] - return [" # ", "Column", "Dtype"] - - def add_columns_summary_line(self) -> None: - self._lines.append(f"Data columns (total {self.col_count} columns):") - - def _gen_rows_without_counts(self) -> Iterator[Sequence[str]]: - """Iterator with string representation of body data without counts.""" - yield from zip( - self._gen_line_numbers(), - self._gen_columns(), - self._gen_dtypes(), - ) - - def _gen_rows_with_counts(self) -> Iterator[Sequence[str]]: - """Iterator with string representation of body data with counts.""" - yield from zip( - self._gen_line_numbers(), - self._gen_columns(), - self._gen_non_null_counts(), - self._gen_dtypes(), - ) - - def _gen_line_numbers(self) -> Iterator[str]: - """Iterator with string representation of column numbers.""" - for i, _ in enumerate(self.ids): - yield f" {i}" - - def _gen_columns(self) -> Iterator[str]: - """Iterator with string representation of column names.""" - for col in self.ids: - yield pprint_thing(col) - - -class SeriesTableBuilder(TableBuilderAbstract): - """ - Abstract builder for series info table. - - Parameters - ---------- - info : SeriesInfo. - Instance of SeriesInfo. - """ - - def __init__(self, *, info: SeriesInfo) -> None: - self.info: SeriesInfo = info - - def get_lines(self) -> list[str]: - self._lines = [] - self._fill_non_empty_info() - return self._lines - - @property - def data(self) -> Series: - """Series.""" - return self.info.data - - def add_memory_usage_line(self) -> None: - """Add line containing memory usage.""" - self._lines.append(f"memory usage: {self.memory_usage_string}") - - @abstractmethod - def _fill_non_empty_info(self) -> None: - """Add lines to the info table, pertaining to non-empty series.""" - - -class SeriesTableBuilderNonVerbose(SeriesTableBuilder): - """ - Series info table builder for non-verbose output. - """ - - def _fill_non_empty_info(self) -> None: - """Add lines to the info table, pertaining to non-empty series.""" - self.add_object_type_line() - self.add_index_range_line() - self.add_dtypes_line() - if self.display_memory_usage: - self.add_memory_usage_line() - - -class SeriesTableBuilderVerbose(SeriesTableBuilder, TableBuilderVerboseMixin): - """ - Series info table builder for verbose output. - """ - - def __init__( - self, - *, - info: SeriesInfo, - with_counts: bool, - ) -> None: - self.info = info - self.with_counts = with_counts - self.strrows: Sequence[Sequence[str]] = list(self._gen_rows()) - self.gross_column_widths: Sequence[int] = self._get_gross_column_widths() - - def _fill_non_empty_info(self) -> None: - """Add lines to the info table, pertaining to non-empty series.""" - self.add_object_type_line() - self.add_index_range_line() - self.add_series_name_line() - self.add_header_line() - self.add_separator_line() - self.add_body_lines() - self.add_dtypes_line() - if self.display_memory_usage: - self.add_memory_usage_line() - - def add_series_name_line(self) -> None: - self._lines.append(f"Series name: {self.data.name}") - - @property - def headers(self) -> Sequence[str]: - """Headers names of the columns in verbose table.""" - if self.with_counts: - return ["Non-Null Count", "Dtype"] - return ["Dtype"] - - def _gen_rows_without_counts(self) -> Iterator[Sequence[str]]: - """Iterator with string representation of body data without counts.""" - yield from self._gen_dtypes() - - def _gen_rows_with_counts(self) -> Iterator[Sequence[str]]: - """Iterator with string representation of body data with counts.""" - yield from zip( - self._gen_non_null_counts(), - self._gen_dtypes(), - ) - - -def _get_dataframe_dtype_counts(df: DataFrame) -> Mapping[str, int]: - """ - Create mapping between datatypes and their number of occurrences. - """ - # groupby dtype.name to collect e.g. Categorical columns - return df.dtypes.value_counts().groupby(lambda x: x.name).sum() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_nth.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_nth.py deleted file mode 100644 index 1cf4a90e25f1b5f316182fe82d08091a694b4503..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_nth.py +++ /dev/null @@ -1,875 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - DataFrame, - Index, - MultiIndex, - Series, - Timestamp, - isna, -) -import pandas._testing as tm - - -def test_first_last_nth(df): - # tests for first / last / nth - grouped = df.groupby("A") - first = grouped.first() - expected = df.loc[[1, 0], ["B", "C", "D"]] - expected.index = Index(["bar", "foo"], name="A") - expected = expected.sort_index() - tm.assert_frame_equal(first, expected) - - nth = grouped.nth(0) - expected = df.loc[[0, 1]] - tm.assert_frame_equal(nth, expected) - - last = grouped.last() - expected = df.loc[[5, 7], ["B", "C", "D"]] - expected.index = Index(["bar", "foo"], name="A") - tm.assert_frame_equal(last, expected) - - nth = grouped.nth(-1) - expected = df.iloc[[5, 7]] - tm.assert_frame_equal(nth, expected) - - nth = grouped.nth(1) - expected = df.iloc[[2, 3]] - tm.assert_frame_equal(nth, expected) - - # it works! - grouped["B"].first() - grouped["B"].last() - grouped["B"].nth(0) - - df.loc[df["A"] == "foo", "B"] = np.nan - assert isna(grouped["B"].first()["foo"]) - assert isna(grouped["B"].last()["foo"]) - assert isna(grouped["B"].nth(0).iloc[0]) - - # v0.14.0 whatsnew - df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"]) - g = df.groupby("A") - result = g.first() - expected = df.iloc[[1, 2]].set_index("A") - tm.assert_frame_equal(result, expected) - - expected = df.iloc[[1, 2]] - result = g.nth(0, dropna="any") - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("method", ["first", "last"]) -def test_first_last_with_na_object(method, nulls_fixture): - # https://github.com/pandas-dev/pandas/issues/32123 - groups = DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 3, nulls_fixture]}).groupby("a") - result = getattr(groups, method)() - - if method == "first": - values = [1, 3] - else: - values = [2, 3] - - values = np.array(values, dtype=result["b"].dtype) - idx = Index([1, 2], name="a") - expected = DataFrame({"b": values}, index=idx) - - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("index", [0, -1]) -def test_nth_with_na_object(index, nulls_fixture): - # https://github.com/pandas-dev/pandas/issues/32123 - df = DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 3, nulls_fixture]}) - groups = df.groupby("a") - result = groups.nth(index) - expected = df.iloc[[0, 2]] if index == 0 else df.iloc[[1, 3]] - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("method", ["first", "last"]) -def test_first_last_with_None(method): - # https://github.com/pandas-dev/pandas/issues/32800 - # None should be preserved as object dtype - df = DataFrame.from_dict({"id": ["a"], "value": [None]}) - groups = df.groupby("id", as_index=False) - result = getattr(groups, method)() - - tm.assert_frame_equal(result, df) - - -@pytest.mark.parametrize("method", ["first", "last"]) -@pytest.mark.parametrize( - "df, expected", - [ - ( - DataFrame({"id": "a", "value": [None, "foo", np.nan]}), - DataFrame({"value": ["foo"]}, index=Index(["a"], name="id")), - ), - ( - DataFrame({"id": "a", "value": [np.nan]}, dtype=object), - DataFrame({"value": [None]}, index=Index(["a"], name="id")), - ), - ], -) -def test_first_last_with_None_expanded(method, df, expected): - # GH 32800, 38286 - result = getattr(df.groupby("id"), method)() - tm.assert_frame_equal(result, expected) - - -def test_first_last_nth_dtypes(df_mixed_floats): - df = df_mixed_floats.copy() - df["E"] = True - df["F"] = 1 - - # tests for first / last / nth - grouped = df.groupby("A") - first = grouped.first() - expected = df.loc[[1, 0], ["B", "C", "D", "E", "F"]] - expected.index = Index(["bar", "foo"], name="A") - expected = expected.sort_index() - tm.assert_frame_equal(first, expected) - - last = grouped.last() - expected = df.loc[[5, 7], ["B", "C", "D", "E", "F"]] - expected.index = Index(["bar", "foo"], name="A") - expected = expected.sort_index() - tm.assert_frame_equal(last, expected) - - nth = grouped.nth(1) - expected = df.iloc[[2, 3]] - tm.assert_frame_equal(nth, expected) - - # GH 2763, first/last shifting dtypes - idx = list(range(10)) - idx.append(9) - s = Series(data=range(11), index=idx, name="IntCol") - assert s.dtype == "int64" - f = s.groupby(level=0).first() - assert f.dtype == "int64" - - -def test_first_last_nth_nan_dtype(): - # GH 33591 - df = DataFrame({"data": ["A"], "nans": Series([None], dtype=object)}) - grouped = df.groupby("data") - - expected = df.set_index("data").nans - tm.assert_series_equal(grouped.nans.first(), expected) - tm.assert_series_equal(grouped.nans.last(), expected) - - expected = df.nans - tm.assert_series_equal(grouped.nans.nth(-1), expected) - tm.assert_series_equal(grouped.nans.nth(0), expected) - - -def test_first_strings_timestamps(): - # GH 11244 - test = DataFrame( - { - Timestamp("2012-01-01 00:00:00"): ["a", "b"], - Timestamp("2012-01-02 00:00:00"): ["c", "d"], - "name": ["e", "e"], - "aaaa": ["f", "g"], - } - ) - result = test.groupby("name").first() - expected = DataFrame( - [["a", "c", "f"]], - columns=Index([Timestamp("2012-01-01"), Timestamp("2012-01-02"), "aaaa"]), - index=Index(["e"], name="name"), - ) - tm.assert_frame_equal(result, expected) - - -def test_nth(): - df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"]) - g = df.groupby("A") - - tm.assert_frame_equal(g.nth(0), df.iloc[[0, 2]]) - tm.assert_frame_equal(g.nth(1), df.iloc[[1]]) - tm.assert_frame_equal(g.nth(2), df.loc[[]]) - tm.assert_frame_equal(g.nth(-1), df.iloc[[1, 2]]) - tm.assert_frame_equal(g.nth(-2), df.iloc[[0]]) - tm.assert_frame_equal(g.nth(-3), df.loc[[]]) - tm.assert_series_equal(g.B.nth(0), df.B.iloc[[0, 2]]) - tm.assert_series_equal(g.B.nth(1), df.B.iloc[[1]]) - tm.assert_frame_equal(g[["B"]].nth(0), df[["B"]].iloc[[0, 2]]) - - tm.assert_frame_equal(g.nth(0, dropna="any"), df.iloc[[1, 2]]) - tm.assert_frame_equal(g.nth(-1, dropna="any"), df.iloc[[1, 2]]) - - tm.assert_frame_equal(g.nth(7, dropna="any"), df.iloc[:0]) - tm.assert_frame_equal(g.nth(2, dropna="any"), df.iloc[:0]) - - # out of bounds, regression from 0.13.1 - # GH 6621 - df = DataFrame( - { - "color": {0: "green", 1: "green", 2: "red", 3: "red", 4: "red"}, - "food": {0: "ham", 1: "eggs", 2: "eggs", 3: "ham", 4: "pork"}, - "two": { - 0: 1.5456590000000001, - 1: -0.070345000000000005, - 2: -2.4004539999999999, - 3: 0.46206000000000003, - 4: 0.52350799999999997, - }, - "one": { - 0: 0.56573799999999996, - 1: -0.9742360000000001, - 2: 1.033801, - 3: -0.78543499999999999, - 4: 0.70422799999999997, - }, - } - ).set_index(["color", "food"]) - - result = df.groupby(level=0, as_index=False).nth(2) - expected = df.iloc[[-1]] - tm.assert_frame_equal(result, expected) - - result = df.groupby(level=0, as_index=False).nth(3) - expected = df.loc[[]] - tm.assert_frame_equal(result, expected) - - # GH 7559 - # from the vbench - df = DataFrame(np.random.default_rng(2).integers(1, 10, (100, 2)), dtype="int64") - s = df[1] - g = df[0] - expected = s.groupby(g).first() - expected2 = s.groupby(g).apply(lambda x: x.iloc[0]) - tm.assert_series_equal(expected2, expected, check_names=False) - assert expected.name == 1 - assert expected2.name == 1 - - # validate first - v = s[g == 1].iloc[0] - assert expected.iloc[0] == v - assert expected2.iloc[0] == v - - with pytest.raises(ValueError, match="For a DataFrame"): - s.groupby(g, sort=False).nth(0, dropna=True) - - # doc example - df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"]) - g = df.groupby("A") - result = g.B.nth(0, dropna="all") - expected = df.B.iloc[[1, 2]] - tm.assert_series_equal(result, expected) - - # test multiple nth values - df = DataFrame([[1, np.nan], [1, 3], [1, 4], [5, 6], [5, 7]], columns=["A", "B"]) - g = df.groupby("A") - - tm.assert_frame_equal(g.nth(0), df.iloc[[0, 3]]) - tm.assert_frame_equal(g.nth([0]), df.iloc[[0, 3]]) - tm.assert_frame_equal(g.nth([0, 1]), df.iloc[[0, 1, 3, 4]]) - tm.assert_frame_equal(g.nth([0, -1]), df.iloc[[0, 2, 3, 4]]) - tm.assert_frame_equal(g.nth([0, 1, 2]), df.iloc[[0, 1, 2, 3, 4]]) - tm.assert_frame_equal(g.nth([0, 1, -1]), df.iloc[[0, 1, 2, 3, 4]]) - tm.assert_frame_equal(g.nth([2]), df.iloc[[2]]) - tm.assert_frame_equal(g.nth([3, 4]), df.loc[[]]) - - business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B") - df = DataFrame(1, index=business_dates, columns=["a", "b"]) - # get the first, fourth and last two business days for each month - key = [df.index.year, df.index.month] - result = df.groupby(key, as_index=False).nth([0, 3, -2, -1]) - expected_dates = pd.to_datetime( - [ - "2014/4/1", - "2014/4/4", - "2014/4/29", - "2014/4/30", - "2014/5/1", - "2014/5/6", - "2014/5/29", - "2014/5/30", - "2014/6/2", - "2014/6/5", - "2014/6/27", - "2014/6/30", - ] - ) - expected = DataFrame(1, columns=["a", "b"], index=expected_dates) - tm.assert_frame_equal(result, expected) - - -def test_nth_multi_grouper(three_group): - # PR 9090, related to issue 8979 - # test nth on multiple groupers - grouped = three_group.groupby(["A", "B"]) - result = grouped.nth(0) - expected = three_group.iloc[[0, 3, 4, 7]] - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "data, expected_first, expected_last", - [ - ( - { - "id": ["A"], - "time": Timestamp("2012-02-01 14:00:00", tz="US/Central"), - "foo": [1], - }, - { - "id": ["A"], - "time": Timestamp("2012-02-01 14:00:00", tz="US/Central"), - "foo": [1], - }, - { - "id": ["A"], - "time": Timestamp("2012-02-01 14:00:00", tz="US/Central"), - "foo": [1], - }, - ), - ( - { - "id": ["A", "B", "A"], - "time": [ - Timestamp("2012-01-01 13:00:00", tz="America/New_York"), - Timestamp("2012-02-01 14:00:00", tz="US/Central"), - Timestamp("2012-03-01 12:00:00", tz="Europe/London"), - ], - "foo": [1, 2, 3], - }, - { - "id": ["A", "B"], - "time": [ - Timestamp("2012-01-01 13:00:00", tz="America/New_York"), - Timestamp("2012-02-01 14:00:00", tz="US/Central"), - ], - "foo": [1, 2], - }, - { - "id": ["A", "B"], - "time": [ - Timestamp("2012-03-01 12:00:00", tz="Europe/London"), - Timestamp("2012-02-01 14:00:00", tz="US/Central"), - ], - "foo": [3, 2], - }, - ), - ], -) -def test_first_last_tz(data, expected_first, expected_last): - # GH15884 - # Test that the timezone is retained when calling first - # or last on groupby with as_index=False - - df = DataFrame(data) - - result = df.groupby("id", as_index=False).first() - expected = DataFrame(expected_first) - cols = ["id", "time", "foo"] - tm.assert_frame_equal(result[cols], expected[cols]) - - result = df.groupby("id", as_index=False)["time"].first() - tm.assert_frame_equal(result, expected[["id", "time"]]) - - result = df.groupby("id", as_index=False).last() - expected = DataFrame(expected_last) - cols = ["id", "time", "foo"] - tm.assert_frame_equal(result[cols], expected[cols]) - - result = df.groupby("id", as_index=False)["time"].last() - tm.assert_frame_equal(result, expected[["id", "time"]]) - - -@pytest.mark.parametrize( - "method, ts, alpha", - [ - ["first", Timestamp("2013-01-01", tz="US/Eastern"), "a"], - ["last", Timestamp("2013-01-02", tz="US/Eastern"), "b"], - ], -) -def test_first_last_tz_multi_column(method, ts, alpha): - # GH 21603 - category_string = Series(list("abc")).astype("category") - df = DataFrame( - { - "group": [1, 1, 2], - "category_string": category_string, - "datetimetz": pd.date_range("20130101", periods=3, tz="US/Eastern"), - } - ) - result = getattr(df.groupby("group"), method)() - expected = DataFrame( - { - "category_string": pd.Categorical( - [alpha, "c"], dtype=category_string.dtype - ), - "datetimetz": [ts, Timestamp("2013-01-03", tz="US/Eastern")], - }, - index=Index([1, 2], name="group"), - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "values", - [ - pd.array([True, False], dtype="boolean"), - pd.array([1, 2], dtype="Int64"), - pd.to_datetime(["2020-01-01", "2020-02-01"]), - pd.to_timedelta([1, 2], unit="D"), - ], -) -@pytest.mark.parametrize("function", ["first", "last", "min", "max"]) -def test_first_last_extension_array_keeps_dtype(values, function): - # https://github.com/pandas-dev/pandas/issues/33071 - # https://github.com/pandas-dev/pandas/issues/32194 - df = DataFrame({"a": [1, 2], "b": values}) - grouped = df.groupby("a") - idx = Index([1, 2], name="a") - expected_series = Series(values, name="b", index=idx) - expected_frame = DataFrame({"b": values}, index=idx) - - result_series = getattr(grouped["b"], function)() - tm.assert_series_equal(result_series, expected_series) - - result_frame = grouped.agg({"b": function}) - tm.assert_frame_equal(result_frame, expected_frame) - - -def test_nth_multi_index_as_expected(): - # PR 9090, related to issue 8979 - # test nth on MultiIndex - three_group = DataFrame( - { - "A": [ - "foo", - "foo", - "foo", - "foo", - "bar", - "bar", - "bar", - "bar", - "foo", - "foo", - "foo", - ], - "B": [ - "one", - "one", - "one", - "two", - "one", - "one", - "one", - "two", - "two", - "two", - "one", - ], - "C": [ - "dull", - "dull", - "shiny", - "dull", - "dull", - "shiny", - "shiny", - "dull", - "shiny", - "shiny", - "shiny", - ], - } - ) - grouped = three_group.groupby(["A", "B"]) - result = grouped.nth(0) - expected = three_group.iloc[[0, 3, 4, 7]] - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "op, n, expected_rows", - [ - ("head", -1, [0]), - ("head", 0, []), - ("head", 1, [0, 2]), - ("head", 7, [0, 1, 2]), - ("tail", -1, [1]), - ("tail", 0, []), - ("tail", 1, [1, 2]), - ("tail", 7, [0, 1, 2]), - ], -) -@pytest.mark.parametrize("columns", [None, [], ["A"], ["B"], ["A", "B"]]) -@pytest.mark.parametrize("as_index", [True, False]) -def test_groupby_head_tail(op, n, expected_rows, columns, as_index): - df = DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"]) - g = df.groupby("A", as_index=as_index) - expected = df.iloc[expected_rows] - if columns is not None: - g = g[columns] - expected = expected[columns] - result = getattr(g, op)(n) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "op, n, expected_cols", - [ - ("head", -1, [0]), - ("head", 0, []), - ("head", 1, [0, 2]), - ("head", 7, [0, 1, 2]), - ("tail", -1, [1]), - ("tail", 0, []), - ("tail", 1, [1, 2]), - ("tail", 7, [0, 1, 2]), - ], -) -def test_groupby_head_tail_axis_1(op, n, expected_cols): - # GH 9772 - df = DataFrame( - [[1, 2, 3], [1, 4, 5], [2, 6, 7], [3, 8, 9]], columns=["A", "B", "C"] - ) - msg = "DataFrame.groupby with axis=1 is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - g = df.groupby([0, 0, 1], axis=1) - expected = df.iloc[:, expected_cols] - result = getattr(g, op)(n) - tm.assert_frame_equal(result, expected) - - -def test_group_selection_cache(): - # GH 12839 nth, head, and tail should return same result consistently - df = DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"]) - expected = df.iloc[[0, 2]] - - g = df.groupby("A") - result1 = g.head(n=2) - result2 = g.nth(0) - tm.assert_frame_equal(result1, df) - tm.assert_frame_equal(result2, expected) - - g = df.groupby("A") - result1 = g.tail(n=2) - result2 = g.nth(0) - tm.assert_frame_equal(result1, df) - tm.assert_frame_equal(result2, expected) - - g = df.groupby("A") - result1 = g.nth(0) - result2 = g.head(n=2) - tm.assert_frame_equal(result1, expected) - tm.assert_frame_equal(result2, df) - - g = df.groupby("A") - result1 = g.nth(0) - result2 = g.tail(n=2) - tm.assert_frame_equal(result1, expected) - tm.assert_frame_equal(result2, df) - - -def test_nth_empty(): - # GH 16064 - df = DataFrame(index=[0], columns=["a", "b", "c"]) - result = df.groupby("a").nth(10) - expected = df.iloc[:0] - tm.assert_frame_equal(result, expected) - - result = df.groupby(["a", "b"]).nth(10) - expected = df.iloc[:0] - tm.assert_frame_equal(result, expected) - - -def test_nth_column_order(): - # GH 20760 - # Check that nth preserves column order - df = DataFrame( - [[1, "b", 100], [1, "a", 50], [1, "a", np.nan], [2, "c", 200], [2, "d", 150]], - columns=["A", "C", "B"], - ) - result = df.groupby("A").nth(0) - expected = df.iloc[[0, 3]] - tm.assert_frame_equal(result, expected) - - result = df.groupby("A").nth(-1, dropna="any") - expected = df.iloc[[1, 4]] - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("dropna", [None, "any", "all"]) -def test_nth_nan_in_grouper(dropna): - # GH 26011 - df = DataFrame( - { - "a": [np.nan, "a", np.nan, "b", np.nan], - "b": [0, 2, 4, 6, 8], - "c": [1, 3, 5, 7, 9], - } - ) - result = df.groupby("a").nth(0, dropna=dropna) - expected = df.iloc[[1, 3]] - - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("dropna", [None, "any", "all"]) -def test_nth_nan_in_grouper_series(dropna): - # GH 26454 - df = DataFrame( - { - "a": [np.nan, "a", np.nan, "b", np.nan], - "b": [0, 2, 4, 6, 8], - } - ) - result = df.groupby("a")["b"].nth(0, dropna=dropna) - expected = df["b"].iloc[[1, 3]] - - tm.assert_series_equal(result, expected) - - -def test_first_categorical_and_datetime_data_nat(): - # GH 20520 - df = DataFrame( - { - "group": ["first", "first", "second", "third", "third"], - "time": 5 * [np.datetime64("NaT")], - "categories": Series(["a", "b", "c", "a", "b"], dtype="category"), - } - ) - result = df.groupby("group").first() - expected = DataFrame( - { - "time": 3 * [np.datetime64("NaT")], - "categories": Series(["a", "c", "a"]).astype( - pd.CategoricalDtype(["a", "b", "c"]) - ), - } - ) - expected.index = Index(["first", "second", "third"], name="group") - tm.assert_frame_equal(result, expected) - - -def test_first_multi_key_groupby_categorical(): - # GH 22512 - df = DataFrame( - { - "A": [1, 1, 1, 2, 2], - "B": [100, 100, 200, 100, 100], - "C": ["apple", "orange", "mango", "mango", "orange"], - "D": ["jupiter", "mercury", "mars", "venus", "venus"], - } - ) - df = df.astype({"D": "category"}) - result = df.groupby(by=["A", "B"]).first() - expected = DataFrame( - { - "C": ["apple", "mango", "mango"], - "D": Series(["jupiter", "mars", "venus"]).astype( - pd.CategoricalDtype(["jupiter", "mars", "mercury", "venus"]) - ), - } - ) - expected.index = MultiIndex.from_tuples( - [(1, 100), (1, 200), (2, 100)], names=["A", "B"] - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("method", ["first", "last", "nth"]) -def test_groupby_last_first_nth_with_none(method, nulls_fixture): - # GH29645 - expected = Series(["y"]) - data = Series( - [nulls_fixture, nulls_fixture, nulls_fixture, "y", nulls_fixture], - index=[0, 0, 0, 0, 0], - ).groupby(level=0) - - if method == "nth": - result = getattr(data, method)(3) - else: - result = getattr(data, method)() - - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize( - "arg, expected_rows", - [ - [slice(None, 3, 2), [0, 1, 4, 5]], - [slice(None, -2), [0, 2, 5]], - [[slice(None, 2), slice(-2, None)], [0, 1, 2, 3, 4, 6, 7]], - [[0, 1, slice(-2, None)], [0, 1, 2, 3, 4, 6, 7]], - ], -) -def test_slice(slice_test_df, slice_test_grouped, arg, expected_rows): - # Test slices GH #42947 - - result = slice_test_grouped.nth[arg] - equivalent = slice_test_grouped.nth(arg) - expected = slice_test_df.iloc[expected_rows] - - tm.assert_frame_equal(result, expected) - tm.assert_frame_equal(equivalent, expected) - - -def test_nth_indexed(slice_test_df, slice_test_grouped): - # Test index notation GH #44688 - - result = slice_test_grouped.nth[0, 1, -2:] - equivalent = slice_test_grouped.nth([0, 1, slice(-2, None)]) - expected = slice_test_df.iloc[[0, 1, 2, 3, 4, 6, 7]] - - tm.assert_frame_equal(result, expected) - tm.assert_frame_equal(equivalent, expected) - - -def test_invalid_argument(slice_test_grouped): - # Test for error on invalid argument - - with pytest.raises(TypeError, match="Invalid index"): - slice_test_grouped.nth(3.14) - - -def test_negative_step(slice_test_grouped): - # Test for error on negative slice step - - with pytest.raises(ValueError, match="Invalid step"): - slice_test_grouped.nth(slice(None, None, -1)) - - -def test_np_ints(slice_test_df, slice_test_grouped): - # Test np ints work - - result = slice_test_grouped.nth(np.array([0, 1])) - expected = slice_test_df.iloc[[0, 1, 2, 3, 4]] - tm.assert_frame_equal(result, expected) - - -def test_groupby_nth_with_column_axis(): - # GH43926 - df = DataFrame( - [ - [4, 5, 6], - [8, 8, 7], - ], - index=["z", "y"], - columns=["C", "B", "A"], - ) - msg = "DataFrame.groupby with axis=1 is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - gb = df.groupby(df.iloc[1], axis=1) - result = gb.nth(0) - expected = df.iloc[:, [0, 2]] - tm.assert_frame_equal(result, expected) - - -def test_groupby_nth_interval(): - # GH#24205 - idx_result = MultiIndex( - [ - pd.CategoricalIndex([pd.Interval(0, 1), pd.Interval(1, 2)]), - pd.CategoricalIndex([pd.Interval(0, 10), pd.Interval(10, 20)]), - ], - [[0, 0, 0, 1, 1], [0, 1, 1, 0, -1]], - ) - df_result = DataFrame({"col": range(len(idx_result))}, index=idx_result) - result = df_result.groupby(level=[0, 1], observed=False).nth(0) - val_expected = [0, 1, 3] - idx_expected = MultiIndex( - [ - pd.CategoricalIndex([pd.Interval(0, 1), pd.Interval(1, 2)]), - pd.CategoricalIndex([pd.Interval(0, 10), pd.Interval(10, 20)]), - ], - [[0, 0, 1], [0, 1, 0]], - ) - expected = DataFrame(val_expected, index=idx_expected, columns=["col"]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "start, stop, expected_values, expected_columns", - [ - (None, None, [0, 1, 2, 3, 4], list("ABCDE")), - (None, 1, [0, 3], list("AD")), - (None, 9, [0, 1, 2, 3, 4], list("ABCDE")), - (None, -1, [0, 1, 3], list("ABD")), - (1, None, [1, 2, 4], list("BCE")), - (1, -1, [1], list("B")), - (-1, None, [2, 4], list("CE")), - (-1, 2, [4], list("E")), - ], -) -@pytest.mark.parametrize("method", ["call", "index"]) -def test_nth_slices_with_column_axis( - start, stop, expected_values, expected_columns, method -): - df = DataFrame([range(5)], columns=[list("ABCDE")]) - msg = "DataFrame.groupby with axis=1 is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - gb = df.groupby([5, 5, 5, 6, 6], axis=1) - result = { - "call": lambda start, stop: gb.nth(slice(start, stop)), - "index": lambda start, stop: gb.nth[start:stop], - }[method](start, stop) - expected = DataFrame([expected_values], columns=[expected_columns]) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.filterwarnings( - "ignore:invalid value encountered in remainder:RuntimeWarning" -) -def test_head_tail_dropna_true(): - # GH#45089 - df = DataFrame( - [["a", "z"], ["b", np.nan], ["c", np.nan], ["c", np.nan]], columns=["X", "Y"] - ) - expected = DataFrame([["a", "z"]], columns=["X", "Y"]) - - result = df.groupby(["X", "Y"]).head(n=1) - tm.assert_frame_equal(result, expected) - - result = df.groupby(["X", "Y"]).tail(n=1) - tm.assert_frame_equal(result, expected) - - result = df.groupby(["X", "Y"]).nth(n=0) - tm.assert_frame_equal(result, expected) - - -def test_head_tail_dropna_false(): - # GH#45089 - df = DataFrame([["a", "z"], ["b", np.nan], ["c", np.nan]], columns=["X", "Y"]) - expected = DataFrame([["a", "z"], ["b", np.nan], ["c", np.nan]], columns=["X", "Y"]) - - result = df.groupby(["X", "Y"], dropna=False).head(n=1) - tm.assert_frame_equal(result, expected) - - result = df.groupby(["X", "Y"], dropna=False).tail(n=1) - tm.assert_frame_equal(result, expected) - - result = df.groupby(["X", "Y"], dropna=False).nth(n=0) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("selection", ("b", ["b"], ["b", "c"])) -@pytest.mark.parametrize("dropna", ["any", "all", None]) -def test_nth_after_selection(selection, dropna): - # GH#11038, GH#53518 - df = DataFrame( - { - "a": [1, 1, 2], - "b": [np.nan, 3, 4], - "c": [5, 6, 7], - } - ) - gb = df.groupby("a")[selection] - result = gb.nth(0, dropna=dropna) - if dropna == "any" or (dropna == "all" and selection != ["b", "c"]): - locs = [1, 2] - else: - locs = [0, 2] - expected = df.loc[locs, selection] - tm.assert_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/validators.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/validators.py deleted file mode 100644 index 55b0339e9fa69e48e58d2f77395a7cc2a8711d8b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/validators.py +++ /dev/null @@ -1,4 +0,0 @@ -"""The `validators` module is a backport module from V1.""" -from ._migration import getattr_migration - -__getattr__ = getattr_migration(__name__) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/extensions/permessage_deflate.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/extensions/permessage_deflate.py deleted file mode 100644 index b391837c66686678cd1213b4c2b0de278bedc96b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/extensions/permessage_deflate.py +++ /dev/null @@ -1,660 +0,0 @@ -from __future__ import annotations - -import dataclasses -import zlib -from typing import Any, Dict, List, Optional, Sequence, Tuple, Union - -from .. import exceptions, frames -from ..typing import ExtensionName, ExtensionParameter -from .base import ClientExtensionFactory, Extension, ServerExtensionFactory - - -__all__ = [ - "PerMessageDeflate", - "ClientPerMessageDeflateFactory", - "enable_client_permessage_deflate", - "ServerPerMessageDeflateFactory", - "enable_server_permessage_deflate", -] - -_EMPTY_UNCOMPRESSED_BLOCK = b"\x00\x00\xff\xff" - -_MAX_WINDOW_BITS_VALUES = [str(bits) for bits in range(8, 16)] - - -class PerMessageDeflate(Extension): - """ - Per-Message Deflate extension. - - """ - - name = ExtensionName("permessage-deflate") - - def __init__( - self, - remote_no_context_takeover: bool, - local_no_context_takeover: bool, - remote_max_window_bits: int, - local_max_window_bits: int, - compress_settings: Optional[Dict[Any, Any]] = None, - ) -> None: - """ - Configure the Per-Message Deflate extension. - - """ - if compress_settings is None: - compress_settings = {} - - assert remote_no_context_takeover in [False, True] - assert local_no_context_takeover in [False, True] - assert 8 <= remote_max_window_bits <= 15 - assert 8 <= local_max_window_bits <= 15 - assert "wbits" not in compress_settings - - self.remote_no_context_takeover = remote_no_context_takeover - self.local_no_context_takeover = local_no_context_takeover - self.remote_max_window_bits = remote_max_window_bits - self.local_max_window_bits = local_max_window_bits - self.compress_settings = compress_settings - - if not self.remote_no_context_takeover: - self.decoder = zlib.decompressobj(wbits=-self.remote_max_window_bits) - - if not self.local_no_context_takeover: - self.encoder = zlib.compressobj( - wbits=-self.local_max_window_bits, **self.compress_settings - ) - - # To handle continuation frames properly, we must keep track of - # whether that initial frame was encoded. - self.decode_cont_data = False - # There's no need for self.encode_cont_data because we always encode - # outgoing frames, so it would always be True. - - def __repr__(self) -> str: - return ( - f"PerMessageDeflate(" - f"remote_no_context_takeover={self.remote_no_context_takeover}, " - f"local_no_context_takeover={self.local_no_context_takeover}, " - f"remote_max_window_bits={self.remote_max_window_bits}, " - f"local_max_window_bits={self.local_max_window_bits})" - ) - - def decode( - self, - frame: frames.Frame, - *, - max_size: Optional[int] = None, - ) -> frames.Frame: - """ - Decode an incoming frame. - - """ - # Skip control frames. - if frame.opcode in frames.CTRL_OPCODES: - return frame - - # Handle continuation data frames: - # - skip if the message isn't encoded - # - reset "decode continuation data" flag if it's a final frame - if frame.opcode is frames.OP_CONT: - if not self.decode_cont_data: - return frame - if frame.fin: - self.decode_cont_data = False - - # Handle text and binary data frames: - # - skip if the message isn't encoded - # - unset the rsv1 flag on the first frame of a compressed message - # - set "decode continuation data" flag if it's a non-final frame - else: - if not frame.rsv1: - return frame - frame = dataclasses.replace(frame, rsv1=False) - if not frame.fin: - self.decode_cont_data = True - - # Re-initialize per-message decoder. - if self.remote_no_context_takeover: - self.decoder = zlib.decompressobj(wbits=-self.remote_max_window_bits) - - # Uncompress data. Protect against zip bombs by preventing zlib from - # decompressing more than max_length bytes (except when the limit is - # disabled with max_size = None). - data = frame.data - if frame.fin: - data += _EMPTY_UNCOMPRESSED_BLOCK - max_length = 0 if max_size is None else max_size - try: - data = self.decoder.decompress(data, max_length) - except zlib.error as exc: - raise exceptions.ProtocolError("decompression failed") from exc - if self.decoder.unconsumed_tail: - raise exceptions.PayloadTooBig(f"over size limit (? > {max_size} bytes)") - - # Allow garbage collection of the decoder if it won't be reused. - if frame.fin and self.remote_no_context_takeover: - del self.decoder - - return dataclasses.replace(frame, data=data) - - def encode(self, frame: frames.Frame) -> frames.Frame: - """ - Encode an outgoing frame. - - """ - # Skip control frames. - if frame.opcode in frames.CTRL_OPCODES: - return frame - - # Since we always encode messages, there's no "encode continuation - # data" flag similar to "decode continuation data" at this time. - - if frame.opcode is not frames.OP_CONT: - # Set the rsv1 flag on the first frame of a compressed message. - frame = dataclasses.replace(frame, rsv1=True) - # Re-initialize per-message decoder. - if self.local_no_context_takeover: - self.encoder = zlib.compressobj( - wbits=-self.local_max_window_bits, **self.compress_settings - ) - - # Compress data. - data = self.encoder.compress(frame.data) + self.encoder.flush(zlib.Z_SYNC_FLUSH) - if frame.fin and data.endswith(_EMPTY_UNCOMPRESSED_BLOCK): - data = data[:-4] - - # Allow garbage collection of the encoder if it won't be reused. - if frame.fin and self.local_no_context_takeover: - del self.encoder - - return dataclasses.replace(frame, data=data) - - -def _build_parameters( - server_no_context_takeover: bool, - client_no_context_takeover: bool, - server_max_window_bits: Optional[int], - client_max_window_bits: Optional[Union[int, bool]], -) -> List[ExtensionParameter]: - """ - Build a list of ``(name, value)`` pairs for some compression parameters. - - """ - params: List[ExtensionParameter] = [] - if server_no_context_takeover: - params.append(("server_no_context_takeover", None)) - if client_no_context_takeover: - params.append(("client_no_context_takeover", None)) - if server_max_window_bits: - params.append(("server_max_window_bits", str(server_max_window_bits))) - if client_max_window_bits is True: # only in handshake requests - params.append(("client_max_window_bits", None)) - elif client_max_window_bits: - params.append(("client_max_window_bits", str(client_max_window_bits))) - return params - - -def _extract_parameters( - params: Sequence[ExtensionParameter], *, is_server: bool -) -> Tuple[bool, bool, Optional[int], Optional[Union[int, bool]]]: - """ - Extract compression parameters from a list of ``(name, value)`` pairs. - - If ``is_server`` is :obj:`True`, ``client_max_window_bits`` may be - provided without a value. This is only allowed in handshake requests. - - """ - server_no_context_takeover: bool = False - client_no_context_takeover: bool = False - server_max_window_bits: Optional[int] = None - client_max_window_bits: Optional[Union[int, bool]] = None - - for name, value in params: - if name == "server_no_context_takeover": - if server_no_context_takeover: - raise exceptions.DuplicateParameter(name) - if value is None: - server_no_context_takeover = True - else: - raise exceptions.InvalidParameterValue(name, value) - - elif name == "client_no_context_takeover": - if client_no_context_takeover: - raise exceptions.DuplicateParameter(name) - if value is None: - client_no_context_takeover = True - else: - raise exceptions.InvalidParameterValue(name, value) - - elif name == "server_max_window_bits": - if server_max_window_bits is not None: - raise exceptions.DuplicateParameter(name) - if value in _MAX_WINDOW_BITS_VALUES: - server_max_window_bits = int(value) - else: - raise exceptions.InvalidParameterValue(name, value) - - elif name == "client_max_window_bits": - if client_max_window_bits is not None: - raise exceptions.DuplicateParameter(name) - if is_server and value is None: # only in handshake requests - client_max_window_bits = True - elif value in _MAX_WINDOW_BITS_VALUES: - client_max_window_bits = int(value) - else: - raise exceptions.InvalidParameterValue(name, value) - - else: - raise exceptions.InvalidParameterName(name) - - return ( - server_no_context_takeover, - client_no_context_takeover, - server_max_window_bits, - client_max_window_bits, - ) - - -class ClientPerMessageDeflateFactory(ClientExtensionFactory): - """ - Client-side extension factory for the Per-Message Deflate extension. - - Parameters behave as described in `section 7.1 of RFC 7692`_. - - .. _section 7.1 of RFC 7692: https://www.rfc-editor.org/rfc/rfc7692.html#section-7.1 - - Set them to :obj:`True` to include them in the negotiation offer without a - value or to an integer value to include them with this value. - - Args: - server_no_context_takeover: prevent server from using context takeover. - client_no_context_takeover: prevent client from using context takeover. - server_max_window_bits: maximum size of the server's LZ77 sliding window - in bits, between 8 and 15. - client_max_window_bits: maximum size of the client's LZ77 sliding window - in bits, between 8 and 15, or :obj:`True` to indicate support without - setting a limit. - compress_settings: additional keyword arguments for :func:`zlib.compressobj`, - excluding ``wbits``. - - """ - - name = ExtensionName("permessage-deflate") - - def __init__( - self, - server_no_context_takeover: bool = False, - client_no_context_takeover: bool = False, - server_max_window_bits: Optional[int] = None, - client_max_window_bits: Optional[Union[int, bool]] = True, - compress_settings: Optional[Dict[str, Any]] = None, - ) -> None: - """ - Configure the Per-Message Deflate extension factory. - - """ - if not (server_max_window_bits is None or 8 <= server_max_window_bits <= 15): - raise ValueError("server_max_window_bits must be between 8 and 15") - if not ( - client_max_window_bits is None - or client_max_window_bits is True - or 8 <= client_max_window_bits <= 15 - ): - raise ValueError("client_max_window_bits must be between 8 and 15") - if compress_settings is not None and "wbits" in compress_settings: - raise ValueError( - "compress_settings must not include wbits, " - "set client_max_window_bits instead" - ) - - self.server_no_context_takeover = server_no_context_takeover - self.client_no_context_takeover = client_no_context_takeover - self.server_max_window_bits = server_max_window_bits - self.client_max_window_bits = client_max_window_bits - self.compress_settings = compress_settings - - def get_request_params(self) -> List[ExtensionParameter]: - """ - Build request parameters. - - """ - return _build_parameters( - self.server_no_context_takeover, - self.client_no_context_takeover, - self.server_max_window_bits, - self.client_max_window_bits, - ) - - def process_response_params( - self, - params: Sequence[ExtensionParameter], - accepted_extensions: Sequence[Extension], - ) -> PerMessageDeflate: - """ - Process response parameters. - - Return an extension instance. - - """ - if any(other.name == self.name for other in accepted_extensions): - raise exceptions.NegotiationError(f"received duplicate {self.name}") - - # Request parameters are available in instance variables. - - # Load response parameters in local variables. - ( - server_no_context_takeover, - client_no_context_takeover, - server_max_window_bits, - client_max_window_bits, - ) = _extract_parameters(params, is_server=False) - - # After comparing the request and the response, the final - # configuration must be available in the local variables. - - # server_no_context_takeover - # - # Req. Resp. Result - # ------ ------ -------------------------------------------------- - # False False False - # False True True - # True False Error! - # True True True - - if self.server_no_context_takeover: - if not server_no_context_takeover: - raise exceptions.NegotiationError("expected server_no_context_takeover") - - # client_no_context_takeover - # - # Req. Resp. Result - # ------ ------ -------------------------------------------------- - # False False False - # False True True - # True False True - must change value - # True True True - - if self.client_no_context_takeover: - if not client_no_context_takeover: - client_no_context_takeover = True - - # server_max_window_bits - - # Req. Resp. Result - # ------ ------ -------------------------------------------------- - # None None None - # None 8≤M≤15 M - # 8≤N≤15 None Error! - # 8≤N≤15 8≤M≤N M - # 8≤N≤15 N self.server_max_window_bits: - raise exceptions.NegotiationError("unsupported server_max_window_bits") - - # client_max_window_bits - - # Req. Resp. Result - # ------ ------ -------------------------------------------------- - # None None None - # None 8≤M≤15 Error! - # True None None - # True 8≤M≤15 M - # 8≤N≤15 None N - must change value - # 8≤N≤15 8≤M≤N M - # 8≤N≤15 N self.client_max_window_bits: - raise exceptions.NegotiationError("unsupported client_max_window_bits") - - return PerMessageDeflate( - server_no_context_takeover, # remote_no_context_takeover - client_no_context_takeover, # local_no_context_takeover - server_max_window_bits or 15, # remote_max_window_bits - client_max_window_bits or 15, # local_max_window_bits - self.compress_settings, - ) - - -def enable_client_permessage_deflate( - extensions: Optional[Sequence[ClientExtensionFactory]], -) -> Sequence[ClientExtensionFactory]: - """ - Enable Per-Message Deflate with default settings in client extensions. - - If the extension is already present, perhaps with non-default settings, - the configuration isn't changed. - - """ - if extensions is None: - extensions = [] - if not any( - extension_factory.name == ClientPerMessageDeflateFactory.name - for extension_factory in extensions - ): - extensions = list(extensions) + [ - ClientPerMessageDeflateFactory( - compress_settings={"memLevel": 5}, - ) - ] - return extensions - - -class ServerPerMessageDeflateFactory(ServerExtensionFactory): - """ - Server-side extension factory for the Per-Message Deflate extension. - - Parameters behave as described in `section 7.1 of RFC 7692`_. - - .. _section 7.1 of RFC 7692: https://www.rfc-editor.org/rfc/rfc7692.html#section-7.1 - - Set them to :obj:`True` to include them in the negotiation offer without a - value or to an integer value to include them with this value. - - Args: - server_no_context_takeover: prevent server from using context takeover. - client_no_context_takeover: prevent client from using context takeover. - server_max_window_bits: maximum size of the server's LZ77 sliding window - in bits, between 8 and 15. - client_max_window_bits: maximum size of the client's LZ77 sliding window - in bits, between 8 and 15. - compress_settings: additional keyword arguments for :func:`zlib.compressobj`, - excluding ``wbits``. - require_client_max_window_bits: do not enable compression at all if - client doesn't advertise support for ``client_max_window_bits``; - the default behavior is to enable compression without enforcing - ``client_max_window_bits``. - - """ - - name = ExtensionName("permessage-deflate") - - def __init__( - self, - server_no_context_takeover: bool = False, - client_no_context_takeover: bool = False, - server_max_window_bits: Optional[int] = None, - client_max_window_bits: Optional[int] = None, - compress_settings: Optional[Dict[str, Any]] = None, - require_client_max_window_bits: bool = False, - ) -> None: - """ - Configure the Per-Message Deflate extension factory. - - """ - if not (server_max_window_bits is None or 8 <= server_max_window_bits <= 15): - raise ValueError("server_max_window_bits must be between 8 and 15") - if not (client_max_window_bits is None or 8 <= client_max_window_bits <= 15): - raise ValueError("client_max_window_bits must be between 8 and 15") - if compress_settings is not None and "wbits" in compress_settings: - raise ValueError( - "compress_settings must not include wbits, " - "set server_max_window_bits instead" - ) - if client_max_window_bits is None and require_client_max_window_bits: - raise ValueError( - "require_client_max_window_bits is enabled, " - "but client_max_window_bits isn't configured" - ) - - self.server_no_context_takeover = server_no_context_takeover - self.client_no_context_takeover = client_no_context_takeover - self.server_max_window_bits = server_max_window_bits - self.client_max_window_bits = client_max_window_bits - self.compress_settings = compress_settings - self.require_client_max_window_bits = require_client_max_window_bits - - def process_request_params( - self, - params: Sequence[ExtensionParameter], - accepted_extensions: Sequence[Extension], - ) -> Tuple[List[ExtensionParameter], PerMessageDeflate]: - """ - Process request parameters. - - Return response params and an extension instance. - - """ - if any(other.name == self.name for other in accepted_extensions): - raise exceptions.NegotiationError(f"skipped duplicate {self.name}") - - # Load request parameters in local variables. - ( - server_no_context_takeover, - client_no_context_takeover, - server_max_window_bits, - client_max_window_bits, - ) = _extract_parameters(params, is_server=True) - - # Configuration parameters are available in instance variables. - - # After comparing the request and the configuration, the response must - # be available in the local variables. - - # server_no_context_takeover - # - # Config Req. Resp. - # ------ ------ -------------------------------------------------- - # False False False - # False True True - # True False True - must change value to True - # True True True - - if self.server_no_context_takeover: - if not server_no_context_takeover: - server_no_context_takeover = True - - # client_no_context_takeover - # - # Config Req. Resp. - # ------ ------ -------------------------------------------------- - # False False False - # False True True (or False) - # True False True - must change value to True - # True True True (or False) - - if self.client_no_context_takeover: - if not client_no_context_takeover: - client_no_context_takeover = True - - # server_max_window_bits - - # Config Req. Resp. - # ------ ------ -------------------------------------------------- - # None None None - # None 8≤M≤15 M - # 8≤N≤15 None N - must change value - # 8≤N≤15 8≤M≤N M - # 8≤N≤15 N self.server_max_window_bits: - server_max_window_bits = self.server_max_window_bits - - # client_max_window_bits - - # Config Req. Resp. - # ------ ------ -------------------------------------------------- - # None None None - # None True None - must change value - # None 8≤M≤15 M (or None) - # 8≤N≤15 None None or Error! - # 8≤N≤15 True N - must change value - # 8≤N≤15 8≤M≤N M (or None) - # 8≤N≤15 N Sequence[ServerExtensionFactory]: - """ - Enable Per-Message Deflate with default settings in server extensions. - - If the extension is already present, perhaps with non-default settings, - the configuration isn't changed. - - """ - if extensions is None: - extensions = [] - if not any( - ext_factory.name == ServerPerMessageDeflateFactory.name - for ext_factory in extensions - ): - extensions = list(extensions) + [ - ServerPerMessageDeflateFactory( - server_max_window_bits=12, - client_max_window_bits=12, - compress_settings={"memLevel": 5}, - ) - ] - return extensions diff --git a/spaces/qdd319/ChuanhuChatGPT/run_Linux.sh b/spaces/qdd319/ChuanhuChatGPT/run_Linux.sh deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/qdd319/ChuanhuChatGPT/run_Linux.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$0") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Autodata 3.40 German Language 106.md b/spaces/quidiaMuxgu/Expedit-SAM/Autodata 3.40 German Language 106.md deleted file mode 100644 index 2633a6c67811cd08606bd2c5c77fcfef77d2b50b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Autodata 3.40 German Language 106.md +++ /dev/null @@ -1,14 +0,0 @@ -

      autodata 3.40 german language 106


      Download File - https://geags.com/2uCq4h



      -
      -Dec 18, 2021 - ... d9cd945bc9 Hyperspin Project - Sony Playstation Games Part 1 Corepackserial do alone in the dark 2008autodata 3.40 German 106. 4 Oct 2019 ... -Sony PlayStation 4 Review for 2020 - The best PS4 console to buy. -Playstation, PlayStation. -Loading... -Unsubscribe from Playstation? -Cancel ... -18 Dec 2019 ... -Cancel Dec 18, 2021 - ... d9cd945bc9 Hyperspin Project - Sony Playstation Games Part 1 Corepackserial do alone in the dark 2008autodata 3.40 German 106. -26 May 2019 ... 8a78ff9644
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Hyperspin Wheel Pack WORK.md b/spaces/quidiaMuxgu/Expedit-SAM/Hyperspin Wheel Pack WORK.md deleted file mode 100644 index a4012cad49264fe714bff5511e2c1d96c8b3d86a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Hyperspin Wheel Pack WORK.md +++ /dev/null @@ -1,49 +0,0 @@ -
      -

      What is a Hyperspin Wheel Pack and How to Get One?

      -

      A Hyperspin wheel pack is a collection of media files that are used to customize the appearance of the Hyperspin front-end. A wheel pack typically contains images of game logos, system logos, genres, and other categories that are displayed on the wheel menu of Hyperspin. A wheel pack can also include videos, sounds, themes, and other media files that enhance the user experience.

      -

      Hyperspin Wheel Pack


      DOWNLOADhttps://geags.com/2uCqit



      -

      Hyperspin wheel packs are created by the community of Hyperspin users and enthusiasts, who share their work on various websites and forums. Some of the most popular sources of wheel packs are:

      -
        -
      • Main Menu Wheels - HyperSpin Forum: This is a section of the official HyperSpin forum where users can upload and download wheel packs for the main menu of Hyperspin. The main menu is where users can select which system or category they want to play. There are hundreds of wheel packs available for different systems, genres, themes, and custom collections.
      • -
      • Wheel Packs - HyperSpin Forum: This is another section of the official HyperSpin forum where users can upload and download wheel packs for specific systems or categories. There are fewer wheel packs available here than in the main menu section, but they are more focused and detailed.
      • -
      • Hyperspin Lightgun Collection Wheel Media Download Pack: This is a video by YouTube user Arcade Forever that showcases a wheel pack for lightgun games. The video also provides a link to download the wheel pack for free.
      • -
      • Hyperspin Wheel Pack - Collection | OpenSea: This is a collection of wheel packs that are sold as NFTs (non-fungible tokens) on the OpenSea marketplace. NFTs are unique digital assets that can be owned and traded on a blockchain. The collection features wheel packs for various systems and genres, such as arcade, console, handheld, pinball, racing, fighting, and more.
      • -
      -

      To get a Hyperspin wheel pack, users need to download the media files from one of these sources and extract them to the appropriate folder in their Hyperspin directory. The folder structure may vary depending on the type of wheel pack, but generally it follows this pattern:

      -
      Hyperspin
      -  Media
      -    Main Menu
      -      Images
      -        Wheel
      -          [Wheel Pack Files]
      -    [System Name]
      -      Images
      -        Wheel
      -          [Wheel Pack Files]
      -
      -

      After copying the wheel pack files to the correct folder, users need to launch Hyperspin and enjoy their new look.

      - -

      In this section, we will show you how to create your own custom wheel pack for Hyperspin using some simple tools and steps. This tutorial is based on the video by YouTube user colpipes1978[^1^], but you can also find other guides and resources on the HyperSpin forum[^2^] and other websites.

      -

      -

      Step 1: Prepare the tools and files

      -

      To create a custom wheel pack, you will need the following tools and files:

      -
        -
      • Art Tools: This is a zip file that contains some useful programs and templates for creating wheel images. You will need to extract this file to a folder on your computer.
      • -
      • Hyperspin 1.5.1 Full Package: This is the latest version of the Hyperspin front-end that you will need to install and run on your computer.
      • -
      • RocketLauncher Latest Version: This is a program that integrates with Hyperspin and allows you to launch games and emulators.
      • -
      • RocketLauncher Media Pack: This is a zip file that contains some media files for RocketLauncher, such as bezels, fades, pause menus, etc.
      • -
      • Main Menu Wheels: This is a section of the HyperSpin forum where you can download some existing wheel packs for the main menu of Hyperspin. You can use these as a reference or inspiration for your own wheel pack.
      • -
      • Wheel Packs: This is another section of the HyperSpin forum where you can download some existing wheel packs for specific systems or categories. You can use these as a reference or inspiration for your own wheel pack.
      • -
      • Games and Emulators: These are the games and emulators that you want to play on Hyperspin. You will need to find and download these files from other sources, as they are not provided by Hyperspin or RocketLauncher.
      • -
      -

      Step 2: Create the wheel images

      -

      To create the wheel images, you will need to use one of the programs in the Art Tools folder. There are two options: Photoshop or Gimp. Photoshop is a professional image editing software that requires a license, while Gimp is a free and open source alternative. Both programs can open and edit PSD files, which are the templates for creating wheel images.

      -

      To create a wheel image, follow these steps:

      -
        -
      1. Open Photoshop or Gimp and load one of the PSD files in the Art Tools folder. There are different templates for different types of wheels, such as bordered, framed, round, etc. Choose the one that suits your preference.
      2. -
      3. On the PSD file, you will see several layers that represent different elements of the wheel image, such as background, border, logo, etc. You can edit each layer by changing its color, size, position, opacity, etc.
      4. -
      5. To add a logo to your wheel image, you will need to find an image of the game or system that you want to represent. You can search online for logos or use one of the existing ones in the Art Tools folder. You will need to resize and crop the logo image to fit inside the wheel template.
      6. -
      7. To add a logo to your wheel image, drag and drop the logo image onto the PSD file. Then, move it to the appropriate layer and position it inside the wheel template. You can also adjust its opacity, brightness, contrast, etc. to make it look better.
      8. -
      9. Repeat steps 3 and 4 for each game or system that you want to include in your wheel pack. You can also create sub-wheels for genres or categories by using

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/pretrained.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/pretrained.py deleted file mode 100644 index 6aac5db100cc7a9084af96d2cd083f0c8fac473c..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/pretrained.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import logging - -from diffq import DiffQuantizer -import torch.hub - -from .model import Demucs -from .tasnet import ConvTasNet -from .utils import set_state - -logger = logging.getLogger(__name__) -ROOT = "https://dl.fbaipublicfiles.com/demucs/v3.0/" - -PRETRAINED_MODELS = { - 'demucs': 'e07c671f', - 'demucs48_hq': '28a1282c', - 'demucs_extra': '3646af93', - 'demucs_quantized': '07afea75', - 'tasnet': 'beb46fac', - 'tasnet_extra': 'df3777b2', - 'demucs_unittest': '09ebc15f', -} - -SOURCES = ["drums", "bass", "other", "vocals"] - - -def get_url(name): - sig = PRETRAINED_MODELS[name] - return ROOT + name + "-" + sig[:8] + ".th" - - -def is_pretrained(name): - return name in PRETRAINED_MODELS - - -def load_pretrained(name): - if name == "demucs": - return demucs(pretrained=True) - elif name == "demucs48_hq": - return demucs(pretrained=True, hq=True, channels=48) - elif name == "demucs_extra": - return demucs(pretrained=True, extra=True) - elif name == "demucs_quantized": - return demucs(pretrained=True, quantized=True) - elif name == "demucs_unittest": - return demucs_unittest(pretrained=True) - elif name == "tasnet": - return tasnet(pretrained=True) - elif name == "tasnet_extra": - return tasnet(pretrained=True, extra=True) - else: - raise ValueError(f"Invalid pretrained name {name}") - - -def _load_state(name, model, quantizer=None): - url = get_url(name) - state = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=True) - set_state(model, quantizer, state) - if quantizer: - quantizer.detach() - - -def demucs_unittest(pretrained=True): - model = Demucs(channels=4, sources=SOURCES) - if pretrained: - _load_state('demucs_unittest', model) - return model - - -def demucs(pretrained=True, extra=False, quantized=False, hq=False, channels=64): - if not pretrained and (extra or quantized or hq): - raise ValueError("if extra or quantized is True, pretrained must be True.") - model = Demucs(sources=SOURCES, channels=channels) - if pretrained: - name = 'demucs' - if channels != 64: - name += str(channels) - quantizer = None - if sum([extra, quantized, hq]) > 1: - raise ValueError("Only one of extra, quantized, hq, can be True.") - if quantized: - quantizer = DiffQuantizer(model, group_size=8, min_size=1) - name += '_quantized' - if extra: - name += '_extra' - if hq: - name += '_hq' - _load_state(name, model, quantizer) - return model - - -def tasnet(pretrained=True, extra=False): - if not pretrained and extra: - raise ValueError("if extra is True, pretrained must be True.") - model = ConvTasNet(X=10, sources=SOURCES) - if pretrained: - name = 'tasnet' - if extra: - name = 'tasnet_extra' - _load_state(name, model) - return model diff --git a/spaces/r3gm/RVC_HF/julius/__init__.py b/spaces/r3gm/RVC_HF/julius/__init__.py deleted file mode 100644 index 69811b0415a291ca1beb845531785ba03c57099a..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/julius/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 - -# flake8: noqa -""" -.. image:: ../logo.png - -Julius contains different Digital Signal Processing algorithms implemented -with PyTorch, so that they are differentiable and available on CUDA. -Note that all the modules implemented here can be used with TorchScript. - -For now, I have implemented: - -- `julius.resample`: fast sinc resampling. -- `julius.fftconv`: FFT based convolutions. -- `julius.lowpass`: FIR low pass filter banks. -- `julius.filters`: FIR high pass and band pass filters. -- `julius.bands`: Decomposition of a waveform signal over mel-scale frequency bands. - -Along that, you might found useful utilities in: - -- `julius.core`: DSP related functions. -- `julius.utils`: Generic utilities. - - -Please checkout [the Github repository](https://github.com/adefossez/julius) for other informations. -For a verification of the speed and correctness of Julius, check the benchmark module `bench`. - - -This package is named in this honor of -[Julius O. Smith](https://ccrma.stanford.edu/~jos/), -whose books and website were a gold mine of information for me to learn about DSP. Go checkout his website if you want -to learn more about DSP. -""" - -from .bands import SplitBands, split_bands -from .fftconv import fft_conv1d, FFTConv1d -from .filters import bandpass_filter, BandPassFilter -from .filters import highpass_filter, highpass_filters, HighPassFilter, HighPassFilters -from .lowpass import lowpass_filter, lowpass_filters, LowPassFilters, LowPassFilter -from .resample import resample_frac, ResampleFrac diff --git a/spaces/radames/Candle-T5-Generation-Wasm/T5ModelConditionalGeneration.js b/spaces/radames/Candle-T5-Generation-Wasm/T5ModelConditionalGeneration.js deleted file mode 100644 index 5f94c19aab47040c6dab4c4ad941f824a11b73df..0000000000000000000000000000000000000000 --- a/spaces/radames/Candle-T5-Generation-Wasm/T5ModelConditionalGeneration.js +++ /dev/null @@ -1,93 +0,0 @@ -//load Candle Bert Module wasm module -let init, ModelConditionalGeneration; - -async function fetchArrayBuffer(url) { - const cacheName = "t5-candle-cache"; - const cache = await caches.open(cacheName); - const cachedResponse = await cache.match(url); - if (cachedResponse) { - const data = await cachedResponse.arrayBuffer(); - return new Uint8Array(data); - } - const res = await fetch(url, { cache: "force-cache" }); - cache.put(url, res.clone()); - return new Uint8Array(await res.arrayBuffer()); -} -class ConditionalGeneration { - static instance = {}; - - static async getInstance(weightsURL, tokenizerURL, configURL, modelID) { - if (modelID.includes("quantized")) { - ({ default: init, ModelConditionalGeneration } = await import( - "./build/m-quantized.js" - )); - } else { - ({ default: init, ModelConditionalGeneration } = await import( - "./build/m.js" - )); - } - if (!this.instance[modelID]) { - await init(); - - self.postMessage({ status: "loading", message: "Loading Model" }); - const [weightsArrayU8, tokenizerArrayU8, configArrayU8] = - await Promise.all([ - fetchArrayBuffer(weightsURL), - fetchArrayBuffer(tokenizerURL), - fetchArrayBuffer(configURL), - ]); - - this.instance[modelID] = new ModelConditionalGeneration( - weightsArrayU8, - tokenizerArrayU8, - configArrayU8 - ); - } else { - self.postMessage({ status: "ready", message: "Model Already Loaded" }); - } - return this.instance[modelID]; - } -} - -self.addEventListener("message", async (event) => { - const { weightsURL, tokenizerURL, configURL, modelID, prompt, params } = - event.data; - let { - temperature = 0.0, - seed = 299792458, - repeat_penalty = 1.1, - repeat_last_n = 64, - top_p = 1, - } = { ...params }; - try { - self.postMessage({ - status: "ready", - message: "Starting T5 Conditional Generation", - }); - const model = await ConditionalGeneration.getInstance( - weightsURL, - tokenizerURL, - configURL, - modelID - ); - self.postMessage({ - status: "decoding", - message: "Decoding Prompt", - }); - const output = model.decode({ - prompt, - temperature, - seed, - top_p, - repeat_penalty, - repeat_last_n, - }); - self.postMessage({ - status: "complete", - message: "complete", - output: output, - }); - } catch (e) { - self.postMessage({ error: e }); - } -}); diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/glm.py b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/glm.py deleted file mode 100644 index 8be14b50f0d7edcde6328f1f805b392c8e3ab7e2..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/glm.py +++ /dev/null @@ -1,125 +0,0 @@ -import numpy as np - - -def vec3(x, y, z): - return np.array([x, y, z], dtype=np.float32) - - -def radians(v): - return np.radians(v) - - -def identity(): - return np.identity(4, dtype=np.float32) - - -def empty(): - return np.zeros([4, 4], dtype=np.float32) - - -def magnitude(v): - return np.linalg.norm(v) - - -def normalize(v): - m = magnitude(v) - return v if m == 0 else v / m - - -def dot(u, v): - return np.sum(u * v) - - -def cross(u, v): - res = vec3(0, 0, 0) - res[0] = u[1] * v[2] - u[2] * v[1] - res[1] = u[2] * v[0] - u[0] * v[2] - res[2] = u[0] * v[1] - u[1] * v[0] - return res - - -# below functions can be optimized - -def translate(m, v): - res = np.copy(m) - res[:, 3] = m[:, 0] * v[0] + m[:, 1] * v[1] + m[:, 2] * v[2] + m[:, 3] - return res - - -def rotate(m, angle, v): - a = angle - c = np.cos(a) - s = np.sin(a) - - axis = normalize(v) - temp = (1 - c) * axis - - rot = empty() - rot[0][0] = c + temp[0] * axis[0] - rot[0][1] = temp[0] * axis[1] + s * axis[2] - rot[0][2] = temp[0] * axis[2] - s * axis[1] - - rot[1][0] = temp[1] * axis[0] - s * axis[2] - rot[1][1] = c + temp[1] * axis[1] - rot[1][2] = temp[1] * axis[2] + s * axis[0] - - rot[2][0] = temp[2] * axis[0] + s * axis[1] - rot[2][1] = temp[2] * axis[1] - s * axis[0] - rot[2][2] = c + temp[2] * axis[2] - - res = empty() - res[:, 0] = m[:, 0] * rot[0][0] + m[:, 1] * rot[0][1] + m[:, 2] * rot[0][2] - res[:, 1] = m[:, 0] * rot[1][0] + m[:, 1] * rot[1][1] + m[:, 2] * rot[1][2] - res[:, 2] = m[:, 0] * rot[2][0] + m[:, 1] * rot[2][1] + m[:, 2] * rot[2][2] - res[:, 3] = m[:, 3] - return res - - -def perspective(fovy, aspect, zNear, zFar): - tanHalfFovy = np.tan(fovy / 2) - - res = empty() - res[0][0] = 1 / (aspect * tanHalfFovy) - res[1][1] = 1 / (tanHalfFovy) - res[2][3] = -1 - res[2][2] = - (zFar + zNear) / (zFar - zNear) - res[3][2] = -(2 * zFar * zNear) / (zFar - zNear) - - return res.T - - -def ortho(left, right, bottom, top, zNear, zFar): - # res = np.ones([4, 4], dtype=np.float32) - res = identity() - res[0][0] = 2 / (right - left) - res[1][1] = 2 / (top - bottom) - res[2][2] = - 2 / (zFar - zNear) - res[3][0] = - (right + left) / (right - left) - res[3][1] = - (top + bottom) / (top - bottom) - res[3][2] = - (zFar + zNear) / (zFar - zNear) - return res.T - - -def lookat(eye, center, up): - f = normalize(center - eye) - s = normalize(cross(f, up)) - u = cross(s, f) - - res = identity() - res[0][0] = s[0] - res[1][0] = s[1] - res[2][0] = s[2] - res[0][1] = u[0] - res[1][1] = u[1] - res[2][1] = u[2] - res[0][2] = -f[0] - res[1][2] = -f[1] - res[2][2] = -f[2] - res[3][0] = -dot(s, eye) - res[3][1] = -dot(u, eye) - res[3][2] = -dot(f, eye) - return res.T - - -def transform(d, m): - return np.dot(m, d.T).T diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Ample Guitar VST Torrent Download How to Use It in Your DAW and Create Stunning Music.md b/spaces/raedeXanto/academic-chatgpt-beta/Ample Guitar VST Torrent Download How to Use It in Your DAW and Create Stunning Music.md deleted file mode 100644 index 7d75ef79db7fd7526b6f2ff03fb6f89466f0815f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Ample Guitar VST Torrent Download How to Use It in Your DAW and Create Stunning Music.md +++ /dev/null @@ -1,93 +0,0 @@ -
        -

        Ample Guitar VST Torrent Download: A Guide for Guitar Lovers

        -

        If you are a guitar lover who wants to create realistic and expressive guitar tracks in your music production, you might be interested in Ample Guitar VST. Ample Guitar VST is a virtual instrument that simulates various models of acoustic and electric guitars, using high-quality samples and advanced algorithms. In this article, we will introduce you to the features and benefits of Ample Guitar VST, show you how to download it using torrent, and give you some tips on how to use it in your music production.

        -

        ample guitar vst torrent download


        Download ⇒⇒⇒ https://tinourl.com/2uL0gK



        -

        What is Ample Guitar VST?

        -

        Ample Guitar VST is a series of virtual guitar instruments developed by Ample Sound, a company that specializes in creating realistic sampled instruments. Ample Guitar VST covers a wide range of guitar models, such as Fender Stratocaster, Gibson Les Paul, Taylor acoustic guitar, and more. Each guitar model has its own library of samples, recorded from real guitars with multiple microphones and techniques. You can load Ample Guitar VST as a plugin in your digital audio workstation (DAW), such as FL Studio, Ableton Live, Cubase, Logic Pro, etc., and play or program your guitar parts using MIDI keyboard or mouse.

        -

        Features and Benefits of Ample Guitar VST

        -

        Ample Guitar VST has many features and benefits that make it one of the best virtual guitar instruments on the market. Here are some of them:

        -

        High-Quality Samples of Real Guitars

        -

        Ample Guitar VST uses high-quality samples of real guitars, recorded with multiple microphones and techniques. Each guitar model has its own library of samples, ranging from 3 GB to 12 GB in size. The samples capture the nuances and details of each guitar, such as tone, resonance, sustain, hammer-on, pull-off, slide, palm mute, harmonic, etc. You can also choose between different pickup positions and microphone settings to get different sounds.

        -

        Tab Player for All Popular Tablature Formats

        -

        Ample Guitar VST has a built-in tab player that can load and play all popular tablature formats, such as GPX, GP5, GP4, GP3, PTB, MIDI, etc. You can import tabs from your favorite guitar websites or create your own tabs using the tab editor. The tab player can display the tabs in standard notation or guitar fretboard view. You can also adjust the tempo, pitch, loop, metronome, etc., to practice or learn the tabs.

        -

        Realistic Playing Styles and Articulations

        -

        Ample Guitar VST can simulate realistic playing styles and articulations of guitar players, such as strumming, picking, fingerstyle, tapping, sliding, bending, vibrato, etc. You can use key switches or MIDI controllers to change the articulations on the fly. You can also customize the playing styles using the strumming editor or the riff editor. The strumming editor allows you to create your own strumming patterns using various chords and rhythms. The riff editor allows you to create your own riffs using various notes and techniques.

        -

        Various Algorithms for Realistic Playback

        -

        Ample Guitar VST uses various algorithms to ensure realistic playback of the guitar parts. For example, it uses humanization algorithm to add subtle variations to the velocity, timing, pitch, etc., of each note. It uses legato algorithm to smoothly connect notes with different pitches or lengths. It uses resonance algorithm to simulate the natural resonance of the guitar body and strings. It uses polyphonic algorithm to handle chords with multiple notes.

        -

        ample sound guitar vst free torrent download
        -how to install ample guitar vst crack torrent
        -best sites to download ample guitar vst torrent
        -ample guitar vst full version torrent download
        -ample guitar vst mac torrent download
        -ample guitar vst review and demo torrent download
        -ample guitar vst alternative free torrent download
        -ample guitar vst plugin torrent download for fl studio
        -ample guitar vst acoustic torrent download
        -ample guitar vst electric torrent download
        -ample guitar vst bass torrent download
        -ample guitar vst metal torrent download
        -ample guitar vst classical torrent download
        -ample guitar vst ukulele torrent download
        -ample guitar vst mandolin torrent download
        -ample guitar vst banjo torrent download
        -ample guitar vst strummer torrent download
        -ample guitar vst fingerstyle torrent download
        -ample guitar vst riffer torrent download
        -ample guitar vst tab player torrent download
        -ample guitar vst midi library torrent download
        -ample guitar vst presets torrent download
        -ample guitar vst samples torrent download
        -ample guitar vst loops torrent download
        -ample guitar vst kontakt library torrent download
        -ample sound agm ii lite edition guitar vst torrent download
        -ample sound agt ii lite edition guitar vst torrent download
        -ample sound agl ii lite edition guitar vst torrent download
        -ample sound agg ii lite edition guitar vst torrent download
        -ample sound agf ii lite edition guitar vst torrent download
        -ample sound agp ii lite edition guitar vst torrent download
        -ample sound agm iii premium edition guitar vst torrent download
        -ample sound agt iii premium edition guitar vst torrent download
        -ample sound agl iii premium edition guitar vst torrent download
        -ample sound agg iii premium edition guitar vst torrent download
        -ample sound agf iii premium edition guitar vst torrent download
        -ample sound agp iii premium edition guitar vst torrent download
        -how to use ample sound cloud downloader to get free guitar vst torrents
        -how to get unlimited access to all the amplesound.net products with a single account and password for free via torrents
        -how to fix the common errors and issues when downloading and installing the amplesound.net products from torrents

        -

        Customizable Sound and Effects

        -

        Ample Guitar VST allows you to customize the sound and effects of each guitar model according to your preference. You can adjust parameters such as volume, pan, EQ, compressor, reverb, delay, chorus, phaser, flanger, distortion, etc., using the built-in effects rack. You can also use external effects plugins or amp simulators to further enhance or modify the sound.

        -

        How to Download Ample Guitar VST Torrent?

        -

        If you want to download Ample Guitar VST torrent for free, you need to follow these steps:

        -

        Choose a Reliable Torrent Site

        -

        The first step is to choose a reliable torrent site that offers Ample Guitar VST torrent files. There are many torrent sites on the internet, but not all of them are safe or trustworthy. Some torrent sites may contain malware or viruses that can harm your computer or steal your personal information. Some torrent sites may also have fake or incomplete torrent files that can waste your time or bandwidth. Therefore, you need to do some research and find a reputable torrent site that has positive reviews and feedback from other users.

        -

        Download and Install a Torrent Client

        -

        The second step is to download and install a torrent client on your computer. A torrent client is a software that allows you to download torrent files from torrent sites. There are many torrent clients available for free download online, such as uTorrent, BitTorrent, qBittorrent, etc. You need to choose a torrent client that is compatible with your operating system and has good features and performance.

        -

        Search for Ample Guitar VST Torrent File

        -

        The third step is to search for Ample Guitar VST torrent file on the torrent site that you have chosen. You can use keywords such as "ample guitar vst", "ample sound guitar", "ample guitar bundle", etc., to find relevant results. You need to check the details of each result before downloading it, such as file size, seeders, leechers, comments, etc., to ensure that it is authentic and complete.

        -

        Download and Install Ample Guitar VST Library

        -

        The fourth step is to download and install Ample Guitar VST library on your computer using the torrent client that you have installed. You need to follow the instructions provided by the torrent file or the readme file included in it, such as selecting the directory for installing the library, entering the serial number, copying the crack files, etc., to complete the installation process.

        -

        How to Use Ample Guitar VST in Your Music Production?

        -

        If you want to use Ample Guitar VST in your music production, you need to follow these steps:

        -

        Load Ample Guitar VST as a Plugin in Your DAW

        -

        The first step is to load Ample Guitar VST as a plugin in your DAW, such as FL Studio, Ableton Live, Cubase, Logic Pro, etc. You need to scan your plugin folder in your DAW settings and locate Ample Guitar VST plugin file. You can then drag and drop it onto a new track or insert it into an existing track.

        -

        Select a Guitar Model and a Preset

        -

        The second step is to select a guitar model and a preset from the interface of Ample Guitar VST. You can choose between different models of acoustic and electric guitars, such as Fender Stratocaster, Gibson Les Paul, Taylor acoustic guitar, and more. Each model has its own library of samples, sound characteristics, and presets. You can browse through different presets by clicking on the arrows next to the preset name. You can also create your own presets by adjusting various parameters and saving them.

        -

        Play or Program Your Guitar Parts

        -

        Conclusion

        -

        Ample Guitar VST is a virtual instrument that simulates various models of acoustic and electric guitars, using high-quality samples and advanced algorithms. It has many features and benefits that make it one of the best virtual guitar instruments on the market. You can download Ample Guitar VST torrent for free from reliable torrent sites, and use it in your music production to create realistic and expressive guitar tracks. You can also customize the sound and effects of Ample Guitar VST according to your preference.

        -

        If you are a guitar lover who wants to create realistic and expressive guitar tracks in your music production, you should definitely try Ample Guitar VST. It will give you the feeling and sound of playing a real guitar, without the hassle of tuning, changing strings, or carrying a heavy instrument. You can also explore different guitar models and sounds, and create your own guitar parts using tabs, strumming patterns, or riffs. Ample Guitar VST is a must-have tool for any guitar enthusiast or music producer.

        -

        FAQs

        -

        Here are some frequently asked questions about Ample Guitar VST:

        -

        Q: How much does Ample Guitar VST cost?

        -

        A: Ample Guitar VST has different prices for different guitar models, ranging from $89 to $169. You can also buy bundles of multiple guitar models for a discounted price. You can check the official website of Ample Sound for more details.

        -

        Q: What are the system requirements for Ample Guitar VST?

        -

        A: Ample Guitar VST requires Windows 7 or higher, or Mac OS X 10.9 or higher. It also requires at least 4 GB of RAM and 3 GB to 12 GB of free disk space depending on the guitar model. It supports 64-bit and 32-bit plugin formats such as VST, AU, AAX, and standalone.

        -

        Q: Can I use Ample Guitar VST with MIDI guitar?

        -

        A: Yes, you can use Ample Guitar VST with MIDI guitar. Ample Guitar VST supports MIDI guitar mode, which allows you to play Ample Guitar VST using a MIDI guitar controller. You can also use MIDI guitar software such as Jam Origin MIDI Guitar or Fishman TriplePlay to convert your guitar signal into MIDI signal and play Ample Guitar VST.

        -

        Q: Can I use Ample Guitar VST with other instruments?

        -

        A: Yes, you can use Ample Guitar VST with other instruments. You can mix and match Ample Guitar VST with other virtual instruments or real instruments to create your own music. You can also use Ample Guitar VST as a layer or a complement to other guitar instruments to enrich your sound.

        -

        Q: Where can I find more tutorials or tips on how to use Ample Guitar VST?

        -

        A: You can find more tutorials or tips on how to use Ample Guitar VST on the official website of Ample Sound, their YouTube channel, their Facebook page, or their forum. You can also find user reviews, demos, or feedback on various music websites or forums.

        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/rajeev12/rajeev_space/README.md b/spaces/rajeev12/rajeev_space/README.md deleted file mode 100644 index 22a2e223e6eb0b5385db36d9ae8db14737b885ef..0000000000000000000000000000000000000000 --- a/spaces/rajeev12/rajeev_space/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Rajeev Space -emoji: ⚡ -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 4.0.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/events.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/events.d.ts deleted file mode 100644 index 4633df19c8b7a64590cdc32b8f612827d751b8e0..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/events.d.ts +++ /dev/null @@ -1,678 +0,0 @@ -/** - * Much of the Node.js core API is built around an idiomatic asynchronous - * event-driven architecture in which certain kinds of objects (called "emitters") - * emit named events that cause `Function` objects ("listeners") to be called. - * - * For instance: a `net.Server` object emits an event each time a peer - * connects to it; a `fs.ReadStream` emits an event when the file is opened; - * a `stream` emits an event whenever data is available to be read. - * - * All objects that emit events are instances of the `EventEmitter` class. These - * objects expose an `eventEmitter.on()` function that allows one or more - * functions to be attached to named events emitted by the object. Typically, - * event names are camel-cased strings but any valid JavaScript property key - * can be used. - * - * When the `EventEmitter` object emits an event, all of the functions attached - * to that specific event are called _synchronously_. Any values returned by the - * called listeners are _ignored_ and discarded. - * - * The following example shows a simple `EventEmitter` instance with a single - * listener. The `eventEmitter.on()` method is used to register listeners, while - * the `eventEmitter.emit()` method is used to trigger the event. - * - * ```js - * const EventEmitter = require('events'); - * - * class MyEmitter extends EventEmitter {} - * - * const myEmitter = new MyEmitter(); - * myEmitter.on('event', () => { - * console.log('an event occurred!'); - * }); - * myEmitter.emit('event'); - * ``` - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/events.js) - */ -declare module 'events' { - // NOTE: This class is in the docs but is **not actually exported** by Node. - // If https://github.com/nodejs/node/issues/39903 gets resolved and Node - // actually starts exporting the class, uncomment below. - - // import { EventListener, EventListenerObject } from '__dom-events'; - // /** The NodeEventTarget is a Node.js-specific extension to EventTarget that emulates a subset of the EventEmitter API. */ - // interface NodeEventTarget extends EventTarget { - // /** - // * Node.js-specific extension to the `EventTarget` class that emulates the equivalent `EventEmitter` API. - // * The only difference between `addListener()` and `addEventListener()` is that addListener() will return a reference to the EventTarget. - // */ - // addListener(type: string, listener: EventListener | EventListenerObject, options?: { once: boolean }): this; - // /** Node.js-specific extension to the `EventTarget` class that returns an array of event `type` names for which event listeners are registered. */ - // eventNames(): string[]; - // /** Node.js-specific extension to the `EventTarget` class that returns the number of event listeners registered for the `type`. */ - // listenerCount(type: string): number; - // /** Node.js-specific alias for `eventTarget.removeListener()`. */ - // off(type: string, listener: EventListener | EventListenerObject): this; - // /** Node.js-specific alias for `eventTarget.addListener()`. */ - // on(type: string, listener: EventListener | EventListenerObject, options?: { once: boolean }): this; - // /** Node.js-specific extension to the `EventTarget` class that adds a `once` listener for the given event `type`. This is equivalent to calling `on` with the `once` option set to `true`. */ - // once(type: string, listener: EventListener | EventListenerObject): this; - // /** - // * Node.js-specific extension to the `EventTarget` class. - // * If `type` is specified, removes all registered listeners for `type`, - // * otherwise removes all registered listeners. - // */ - // removeAllListeners(type: string): this; - // /** - // * Node.js-specific extension to the `EventTarget` class that removes the listener for the given `type`. - // * The only difference between `removeListener()` and `removeEventListener()` is that `removeListener()` will return a reference to the `EventTarget`. - // */ - // removeListener(type: string, listener: EventListener | EventListenerObject): this; - // } - - interface EventEmitterOptions { - /** - * Enables automatic capturing of promise rejection. - */ - captureRejections?: boolean | undefined; - } - // Any EventTarget with a Node-style `once` function - interface _NodeEventTarget { - once(eventName: string | symbol, listener: (...args: any[]) => void): this; - } - // Any EventTarget with a DOM-style `addEventListener` - interface _DOMEventTarget { - addEventListener( - eventName: string, - listener: (...args: any[]) => void, - opts?: { - once: boolean; - } - ): any; - } - interface StaticEventEmitterOptions { - signal?: AbortSignal | undefined; - } - interface EventEmitter extends NodeJS.EventEmitter {} - /** - * The `EventEmitter` class is defined and exposed by the `events` module: - * - * ```js - * const EventEmitter = require('events'); - * ``` - * - * All `EventEmitter`s emit the event `'newListener'` when new listeners are - * added and `'removeListener'` when existing listeners are removed. - * - * It supports the following option: - * @since v0.1.26 - */ - class EventEmitter { - constructor(options?: EventEmitterOptions); - /** - * Creates a `Promise` that is fulfilled when the `EventEmitter` emits the given - * event or that is rejected if the `EventEmitter` emits `'error'` while waiting. - * The `Promise` will resolve with an array of all the arguments emitted to the - * given event. - * - * This method is intentionally generic and works with the web platform [EventTarget](https://dom.spec.whatwg.org/#interface-eventtarget) interface, which has no special`'error'` event - * semantics and does not listen to the `'error'` event. - * - * ```js - * const { once, EventEmitter } = require('events'); - * - * async function run() { - * const ee = new EventEmitter(); - * - * process.nextTick(() => { - * ee.emit('myevent', 42); - * }); - * - * const [value] = await once(ee, 'myevent'); - * console.log(value); - * - * const err = new Error('kaboom'); - * process.nextTick(() => { - * ee.emit('error', err); - * }); - * - * try { - * await once(ee, 'myevent'); - * } catch (err) { - * console.log('error happened', err); - * } - * } - * - * run(); - * ``` - * - * The special handling of the `'error'` event is only used when `events.once()`is used to wait for another event. If `events.once()` is used to wait for the - * '`error'` event itself, then it is treated as any other kind of event without - * special handling: - * - * ```js - * const { EventEmitter, once } = require('events'); - * - * const ee = new EventEmitter(); - * - * once(ee, 'error') - * .then(([err]) => console.log('ok', err.message)) - * .catch((err) => console.log('error', err.message)); - * - * ee.emit('error', new Error('boom')); - * - * // Prints: ok boom - * ``` - * - * An `AbortSignal` can be used to cancel waiting for the event: - * - * ```js - * const { EventEmitter, once } = require('events'); - * - * const ee = new EventEmitter(); - * const ac = new AbortController(); - * - * async function foo(emitter, event, signal) { - * try { - * await once(emitter, event, { signal }); - * console.log('event emitted!'); - * } catch (error) { - * if (error.name === 'AbortError') { - * console.error('Waiting for the event was canceled!'); - * } else { - * console.error('There was an error', error.message); - * } - * } - * } - * - * foo(ee, 'foo', ac.signal); - * ac.abort(); // Abort waiting for the event - * ee.emit('foo'); // Prints: Waiting for the event was canceled! - * ``` - * @since v11.13.0, v10.16.0 - */ - static once(emitter: _NodeEventTarget, eventName: string | symbol, options?: StaticEventEmitterOptions): Promise; - static once(emitter: _DOMEventTarget, eventName: string, options?: StaticEventEmitterOptions): Promise; - /** - * ```js - * const { on, EventEmitter } = require('events'); - * - * (async () => { - * const ee = new EventEmitter(); - * - * // Emit later on - * process.nextTick(() => { - * ee.emit('foo', 'bar'); - * ee.emit('foo', 42); - * }); - * - * for await (const event of on(ee, 'foo')) { - * // The execution of this inner block is synchronous and it - * // processes one event at a time (even with await). Do not use - * // if concurrent execution is required. - * console.log(event); // prints ['bar'] [42] - * } - * // Unreachable here - * })(); - * ``` - * - * Returns an `AsyncIterator` that iterates `eventName` events. It will throw - * if the `EventEmitter` emits `'error'`. It removes all listeners when - * exiting the loop. The `value` returned by each iteration is an array - * composed of the emitted event arguments. - * - * An `AbortSignal` can be used to cancel waiting on events: - * - * ```js - * const { on, EventEmitter } = require('events'); - * const ac = new AbortController(); - * - * (async () => { - * const ee = new EventEmitter(); - * - * // Emit later on - * process.nextTick(() => { - * ee.emit('foo', 'bar'); - * ee.emit('foo', 42); - * }); - * - * for await (const event of on(ee, 'foo', { signal: ac.signal })) { - * // The execution of this inner block is synchronous and it - * // processes one event at a time (even with await). Do not use - * // if concurrent execution is required. - * console.log(event); // prints ['bar'] [42] - * } - * // Unreachable here - * })(); - * - * process.nextTick(() => ac.abort()); - * ``` - * @since v13.6.0, v12.16.0 - * @param eventName The name of the event being listened for - * @return that iterates `eventName` events emitted by the `emitter` - */ - static on(emitter: NodeJS.EventEmitter, eventName: string, options?: StaticEventEmitterOptions): AsyncIterableIterator; - /** - * A class method that returns the number of listeners for the given `eventName`registered on the given `emitter`. - * - * ```js - * const { EventEmitter, listenerCount } = require('events'); - * const myEmitter = new EventEmitter(); - * myEmitter.on('event', () => {}); - * myEmitter.on('event', () => {}); - * console.log(listenerCount(myEmitter, 'event')); - * // Prints: 2 - * ``` - * @since v0.9.12 - * @deprecated Since v3.2.0 - Use `listenerCount` instead. - * @param emitter The emitter to query - * @param eventName The event name - */ - static listenerCount(emitter: NodeJS.EventEmitter, eventName: string | symbol): number; - /** - * Returns a copy of the array of listeners for the event named `eventName`. - * - * For `EventEmitter`s this behaves exactly the same as calling `.listeners` on - * the emitter. - * - * For `EventTarget`s this is the only way to get the event listeners for the - * event target. This is useful for debugging and diagnostic purposes. - * - * ```js - * const { getEventListeners, EventEmitter } = require('events'); - * - * { - * const ee = new EventEmitter(); - * const listener = () => console.log('Events are fun'); - * ee.on('foo', listener); - * getEventListeners(ee, 'foo'); // [listener] - * } - * { - * const et = new EventTarget(); - * const listener = () => console.log('Events are fun'); - * et.addEventListener('foo', listener); - * getEventListeners(et, 'foo'); // [listener] - * } - * ``` - * @since v15.2.0, v14.17.0 - */ - static getEventListeners(emitter: _DOMEventTarget | NodeJS.EventEmitter, name: string | symbol): Function[]; - /** - * ```js - * const { - * setMaxListeners, - * EventEmitter - * } = require('events'); - * - * const target = new EventTarget(); - * const emitter = new EventEmitter(); - * - * setMaxListeners(5, target, emitter); - * ``` - * @since v15.4.0 - * @param n A non-negative number. The maximum number of listeners per `EventTarget` event. - * @param eventsTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, `n` is set as the default max for all newly created {EventTarget} and {EventEmitter} - * objects. - */ - static setMaxListeners(n?: number, ...eventTargets: Array<_DOMEventTarget | NodeJS.EventEmitter>): void; - /** - * This symbol shall be used to install a listener for only monitoring `'error'` - * events. Listeners installed using this symbol are called before the regular - * `'error'` listeners are called. - * - * Installing a listener using this symbol does not change the behavior once an - * `'error'` event is emitted, therefore the process will still crash if no - * regular `'error'` listener is installed. - */ - static readonly errorMonitor: unique symbol; - static readonly captureRejectionSymbol: unique symbol; - /** - * Sets or gets the default captureRejection value for all emitters. - */ - // TODO: These should be described using static getter/setter pairs: - static captureRejections: boolean; - static defaultMaxListeners: number; - } - import internal = require('node:events'); - namespace EventEmitter { - // Should just be `export { EventEmitter }`, but that doesn't work in TypeScript 3.4 - export { internal as EventEmitter }; - export interface Abortable { - /** - * When provided the corresponding `AbortController` can be used to cancel an asynchronous action. - */ - signal?: AbortSignal | undefined; - } - } - global { - namespace NodeJS { - interface EventEmitter { - /** - * Alias for `emitter.on(eventName, listener)`. - * @since v0.1.26 - */ - addListener(eventName: string | symbol, listener: (...args: any[]) => void): this; - /** - * Adds the `listener` function to the end of the listeners array for the - * event named `eventName`. No checks are made to see if the `listener` has - * already been added. Multiple calls passing the same combination of `eventName`and `listener` will result in the `listener` being added, and called, multiple - * times. - * - * ```js - * server.on('connection', (stream) => { - * console.log('someone connected!'); - * }); - * ``` - * - * Returns a reference to the `EventEmitter`, so that calls can be chained. - * - * By default, event listeners are invoked in the order they are added. The`emitter.prependListener()` method can be used as an alternative to add the - * event listener to the beginning of the listeners array. - * - * ```js - * const myEE = new EventEmitter(); - * myEE.on('foo', () => console.log('a')); - * myEE.prependListener('foo', () => console.log('b')); - * myEE.emit('foo'); - * // Prints: - * // b - * // a - * ``` - * @since v0.1.101 - * @param eventName The name of the event. - * @param listener The callback function - */ - on(eventName: string | symbol, listener: (...args: any[]) => void): this; - /** - * Adds a **one-time**`listener` function for the event named `eventName`. The - * next time `eventName` is triggered, this listener is removed and then invoked. - * - * ```js - * server.once('connection', (stream) => { - * console.log('Ah, we have our first user!'); - * }); - * ``` - * - * Returns a reference to the `EventEmitter`, so that calls can be chained. - * - * By default, event listeners are invoked in the order they are added. The`emitter.prependOnceListener()` method can be used as an alternative to add the - * event listener to the beginning of the listeners array. - * - * ```js - * const myEE = new EventEmitter(); - * myEE.once('foo', () => console.log('a')); - * myEE.prependOnceListener('foo', () => console.log('b')); - * myEE.emit('foo'); - * // Prints: - * // b - * // a - * ``` - * @since v0.3.0 - * @param eventName The name of the event. - * @param listener The callback function - */ - once(eventName: string | symbol, listener: (...args: any[]) => void): this; - /** - * Removes the specified `listener` from the listener array for the event named`eventName`. - * - * ```js - * const callback = (stream) => { - * console.log('someone connected!'); - * }; - * server.on('connection', callback); - * // ... - * server.removeListener('connection', callback); - * ``` - * - * `removeListener()` will remove, at most, one instance of a listener from the - * listener array. If any single listener has been added multiple times to the - * listener array for the specified `eventName`, then `removeListener()` must be - * called multiple times to remove each instance. - * - * Once an event is emitted, all listeners attached to it at the - * time of emitting are called in order. This implies that any`removeListener()` or `removeAllListeners()` calls _after_ emitting and _before_ the last listener finishes execution - * will not remove them from`emit()` in progress. Subsequent events behave as expected. - * - * ```js - * const myEmitter = new MyEmitter(); - * - * const callbackA = () => { - * console.log('A'); - * myEmitter.removeListener('event', callbackB); - * }; - * - * const callbackB = () => { - * console.log('B'); - * }; - * - * myEmitter.on('event', callbackA); - * - * myEmitter.on('event', callbackB); - * - * // callbackA removes listener callbackB but it will still be called. - * // Internal listener array at time of emit [callbackA, callbackB] - * myEmitter.emit('event'); - * // Prints: - * // A - * // B - * - * // callbackB is now removed. - * // Internal listener array [callbackA] - * myEmitter.emit('event'); - * // Prints: - * // A - * ``` - * - * Because listeners are managed using an internal array, calling this will - * change the position indices of any listener registered _after_ the listener - * being removed. This will not impact the order in which listeners are called, - * but it means that any copies of the listener array as returned by - * the `emitter.listeners()` method will need to be recreated. - * - * When a single function has been added as a handler multiple times for a single - * event (as in the example below), `removeListener()` will remove the most - * recently added instance. In the example the `once('ping')`listener is removed: - * - * ```js - * const ee = new EventEmitter(); - * - * function pong() { - * console.log('pong'); - * } - * - * ee.on('ping', pong); - * ee.once('ping', pong); - * ee.removeListener('ping', pong); - * - * ee.emit('ping'); - * ee.emit('ping'); - * ``` - * - * Returns a reference to the `EventEmitter`, so that calls can be chained. - * @since v0.1.26 - */ - removeListener(eventName: string | symbol, listener: (...args: any[]) => void): this; - /** - * Alias for `emitter.removeListener()`. - * @since v10.0.0 - */ - off(eventName: string | symbol, listener: (...args: any[]) => void): this; - /** - * Removes all listeners, or those of the specified `eventName`. - * - * It is bad practice to remove listeners added elsewhere in the code, - * particularly when the `EventEmitter` instance was created by some other - * component or module (e.g. sockets or file streams). - * - * Returns a reference to the `EventEmitter`, so that calls can be chained. - * @since v0.1.26 - */ - removeAllListeners(event?: string | symbol): this; - /** - * By default `EventEmitter`s will print a warning if more than `10` listeners are - * added for a particular event. This is a useful default that helps finding - * memory leaks. The `emitter.setMaxListeners()` method allows the limit to be - * modified for this specific `EventEmitter` instance. The value can be set to`Infinity` (or `0`) to indicate an unlimited number of listeners. - * - * Returns a reference to the `EventEmitter`, so that calls can be chained. - * @since v0.3.5 - */ - setMaxListeners(n: number): this; - /** - * Returns the current max listener value for the `EventEmitter` which is either - * set by `emitter.setMaxListeners(n)` or defaults to {@link defaultMaxListeners}. - * @since v1.0.0 - */ - getMaxListeners(): number; - /** - * Returns a copy of the array of listeners for the event named `eventName`. - * - * ```js - * server.on('connection', (stream) => { - * console.log('someone connected!'); - * }); - * console.log(util.inspect(server.listeners('connection'))); - * // Prints: [ [Function] ] - * ``` - * @since v0.1.26 - */ - listeners(eventName: string | symbol): Function[]; - /** - * Returns a copy of the array of listeners for the event named `eventName`, - * including any wrappers (such as those created by `.once()`). - * - * ```js - * const emitter = new EventEmitter(); - * emitter.once('log', () => console.log('log once')); - * - * // Returns a new Array with a function `onceWrapper` which has a property - * // `listener` which contains the original listener bound above - * const listeners = emitter.rawListeners('log'); - * const logFnWrapper = listeners[0]; - * - * // Logs "log once" to the console and does not unbind the `once` event - * logFnWrapper.listener(); - * - * // Logs "log once" to the console and removes the listener - * logFnWrapper(); - * - * emitter.on('log', () => console.log('log persistently')); - * // Will return a new Array with a single function bound by `.on()` above - * const newListeners = emitter.rawListeners('log'); - * - * // Logs "log persistently" twice - * newListeners[0](); - * emitter.emit('log'); - * ``` - * @since v9.4.0 - */ - rawListeners(eventName: string | symbol): Function[]; - /** - * Synchronously calls each of the listeners registered for the event named`eventName`, in the order they were registered, passing the supplied arguments - * to each. - * - * Returns `true` if the event had listeners, `false` otherwise. - * - * ```js - * const EventEmitter = require('events'); - * const myEmitter = new EventEmitter(); - * - * // First listener - * myEmitter.on('event', function firstListener() { - * console.log('Helloooo! first listener'); - * }); - * // Second listener - * myEmitter.on('event', function secondListener(arg1, arg2) { - * console.log(`event with parameters ${arg1}, ${arg2} in second listener`); - * }); - * // Third listener - * myEmitter.on('event', function thirdListener(...args) { - * const parameters = args.join(', '); - * console.log(`event with parameters ${parameters} in third listener`); - * }); - * - * console.log(myEmitter.listeners('event')); - * - * myEmitter.emit('event', 1, 2, 3, 4, 5); - * - * // Prints: - * // [ - * // [Function: firstListener], - * // [Function: secondListener], - * // [Function: thirdListener] - * // ] - * // Helloooo! first listener - * // event with parameters 1, 2 in second listener - * // event with parameters 1, 2, 3, 4, 5 in third listener - * ``` - * @since v0.1.26 - */ - emit(eventName: string | symbol, ...args: any[]): boolean; - /** - * Returns the number of listeners listening to the event named `eventName`. - * @since v3.2.0 - * @param eventName The name of the event being listened for - */ - listenerCount(eventName: string | symbol): number; - /** - * Adds the `listener` function to the _beginning_ of the listeners array for the - * event named `eventName`. No checks are made to see if the `listener` has - * already been added. Multiple calls passing the same combination of `eventName`and `listener` will result in the `listener` being added, and called, multiple - * times. - * - * ```js - * server.prependListener('connection', (stream) => { - * console.log('someone connected!'); - * }); - * ``` - * - * Returns a reference to the `EventEmitter`, so that calls can be chained. - * @since v6.0.0 - * @param eventName The name of the event. - * @param listener The callback function - */ - prependListener(eventName: string | symbol, listener: (...args: any[]) => void): this; - /** - * Adds a **one-time**`listener` function for the event named `eventName` to the _beginning_ of the listeners array. The next time `eventName` is triggered, this - * listener is removed, and then invoked. - * - * ```js - * server.prependOnceListener('connection', (stream) => { - * console.log('Ah, we have our first user!'); - * }); - * ``` - * - * Returns a reference to the `EventEmitter`, so that calls can be chained. - * @since v6.0.0 - * @param eventName The name of the event. - * @param listener The callback function - */ - prependOnceListener(eventName: string | symbol, listener: (...args: any[]) => void): this; - /** - * Returns an array listing the events for which the emitter has registered - * listeners. The values in the array are strings or `Symbol`s. - * - * ```js - * const EventEmitter = require('events'); - * const myEE = new EventEmitter(); - * myEE.on('foo', () => {}); - * myEE.on('bar', () => {}); - * - * const sym = Symbol('symbol'); - * myEE.on(sym, () => {}); - * - * console.log(myEE.eventNames()); - * // Prints: [ 'foo', 'bar', Symbol(symbol) ] - * ``` - * @since v6.0.0 - */ - eventNames(): Array; - } - } - } - export = EventEmitter; -} -declare module 'node:events' { - import events = require('events'); - export = events; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ab Tumhare Hawale Watan Sathiyo Movie Download Utorrent Kickassl ((LINK)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ab Tumhare Hawale Watan Sathiyo Movie Download Utorrent Kickassl ((LINK)).md deleted file mode 100644 index d2368014f96d30085464e664218f72d3b4939388..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ab Tumhare Hawale Watan Sathiyo Movie Download Utorrent Kickassl ((LINK)).md +++ /dev/null @@ -1,46 +0,0 @@ -

        Ab Tumhare Hawale Watan Sathiyo Movie Download Utorrent Kickassl


        Download »»» https://urlgoal.com/2uCMDW



        - -References - -External links - -Category:Living people - -Category:People from Dhaka - -Category:1967 births - -Category:Bangladeshi male writers - -Category:Bengali writers - -Category:Cinema of Bangladesh - -Category:Recipients of the Ekushey Padak - -Category:Recipients of the Ekushey Padak 2013The G.B.V. generator is an electrical device that helps you cut and shape long strands of wire and connect them together. A test probe is connected to the generator, and you move the probe over the metal you want to bend. - -My question is, I was thinking about moving this generator to another job that requires a good strong bend, but I would be unable to bend the wire with my hands. I was thinking I could use the generator, and the wire would be fine as long as I push the generator toward the wire I would be using? - -I was thinking about using the magnetic attraction of the iron or metal that is attracted to the generator. Is this going to work? Would you recommend anything? - -I want to turn this into an item that could help with cables and wiring for various electronic gadgets. Does anyone know a good place to look for info about it? - -Using a transformer as the "tool" is a pretty good solution. - -I have a sawed off transformer (QM6P0302) and it really does a pretty good job of bending. - -I use 2 but they tend to be very long strands of wire so I actually just double them over using insulated alligator clips. - -I prefer either the flat or a folded style but I have seen some bent ones too. - -I don't know if this is an existing thread but I have done a little research on this. I was watching a home improvement video and they were using a metal plate (and pipe insulation) and one side was pointed and the other side flat. They placed the item they were bending on the flat side and then spun it until the metal was in the shape they wanted it. - -I have a transformer in my work area and I used it on a couple of very large pieces of wire I needed to connect. - -I'm curious what happens to the transformer? I know the wire has no effect on it, but what happens to the coils inside? - -Would this work for an older household device that has a transformer? I was thinking the device would be a bit bulky. 4fefd39f24
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free.md deleted file mode 100644 index b0935fa1ebfa642494ad54b7354e7fdfefde59ab..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

        BMW Diagnostic Tools - Ferrum Collection 2013 setup free


        Download Zip === https://urlgoal.com/2uCMFl



        - -BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free. Download Ferrum College students during the 2014 E-Term: Ireland in the wild. Download Students Ferrum College's students during their course of E-Term 2014: Ireland in the natural environment. Download BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free. Download BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free. Download BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free. Download BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free. Download BMW Diagnostic Tools - Ferrum Collection 2013 Setup Free. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Echolink El 2020 Fta Software Download 18.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Echolink El 2020 Fta Software Download 18.md deleted file mode 100644 index 8ad15fa5b70d5135e3a1731c3a25540558c67d1b..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Echolink El 2020 Fta Software Download 18.md +++ /dev/null @@ -1,6 +0,0 @@ -

        echolink el 2020 fta software download 18


        Download File 🌟 https://urlgoal.com/2uCL7p



        -
        -05 20 2017 18 20 30 May 2020 Mediastar MS 15000 forever receiver is one ... MediaStar MS 15000 All MediaStar Receiver New Software 2020 21. ... Download 382 Dump Echolink 4100 4Mb Download 383 Dump Echolink EL 707 Hd N323D. ... Tags Echostar DSB 880 Echostar DSB 890 Echostar DSB 1220 FTA Echostar ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Elbeyli Cccam Server Hack V.1.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Elbeyli Cccam Server Hack V.1.md deleted file mode 100644 index 8fdbc514abf669cd650fd6ffe905f0887f0dffc9..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Elbeyli Cccam Server Hack V.1.md +++ /dev/null @@ -1,156 +0,0 @@ -
        -

        Elbeyli Cccam Server Hack V.1: A Scam or a Miracle?

        - -

        If you are looking for a way to watch all the channels you want for free, you might have come across a software called Elbeyli Cccam Server Hack V.1. This software claims to allow you to hack any CCcam server and get free access to all channels. But is it really true or just a scam?

        -

        elbeyli cccam server hack v.1


        Download File ✔✔✔ https://urlgoal.com/2uCMEx



        - -

        What is Elbeyli Cccam Server Hack V.1?

        - -

        Elbeyli Cccam Server Hack V.1 is a software that claims to allow you to hack any CCcam server and get free access to all channels. It is supposed to work by generating valid CCcam lines that you can use on your receiver or emulator. The software also claims to be easy to use and compatible with all devices and operating systems.

        - -

        Is Elbeyli Cccam Server Hack V.1 a Scam?

        - -

        The short answer is yes, Elbeyli Cccam Server Hack V.1 is a scam that tries to trick you into downloading a malicious file that can harm your computer or steal your personal information. The file name is cccam server hack elbeyli.exe and it is detected as a virus by many antivirus programs . The file can also contain spyware, adware, ransomware, or other malware that can compromise your security and privacy.

        - -

        Why You Should Avoid Elbeyli Cccam Server Hack V.1?

        - -

        There are many reasons why you should avoid Elbeyli Cccam Server Hack V.1 and any similar software that promises to hack CCcam servers and give you free access to all channels. Here are some of them:

        - -
          -
        • It is illegal to hack CCcam servers and watch channels without paying for them. You can face legal consequences if you are caught using such software.
        • -
        • It is unethical to steal from the CCcam providers who invest time and money to provide quality service to their customers.
        • -
        • It is risky to download and run unknown files from untrusted sources. You can expose your computer and personal data to hackers and cybercriminals who can use them for malicious purposes.
        • -
        • It is unreliable to use hacked CCcam lines that can stop working at any time or have poor quality. You can miss your favorite shows or events due to freezing, buffering, or blackouts.
        • -
        - -

        What are the Alternatives to Elbeyli Cccam Server Hack V.1?

        - -

        If you want to watch all the channels you want without risking your security, privacy, or legality, you should avoid Elbeyli Cccam Server Hack V.1 and any similar software. Instead, you should opt for one of the following alternatives:

        - -
          -
        • Buy a legitimate CCcam subscription from a reputable provider who offers high-quality service, customer support, and fair prices.
        • -
        • Use an IPTV service that streams live TV channels over the internet without requiring a satellite dish or receiver.
        • -
        • Use a VPN service that encrypts your internet traffic and changes your IP address, allowing you to access geo-restricted channels from anywhere in the world.
        • -
        - -

        Conclusion

        - -

        Elbeyli Cccam Server Hack V.1 is a scam that tries to lure you into downloading a malicious file that can harm your computer or steal your personal information. It is also illegal, unethical, and unreliable to hack CCcam servers and watch channels without paying for them. You should avoid Elbeyli Cccam Server Hack V.1 and any similar software and choose one of the alternatives we suggested above.

        -

        - -

        We hope this article helped you understand what Elbeyli Cccam Server Hack V.1 is and why you should avoid it. If you have any questions or comments, feel free to leave them below.

        - -: https://tragvenjackcuwegs.wixsite.com/denelcuset/post/elbeyli-cccam-server-hack-v-1 -: https://bitbucket.org/atlassian/openapi-diff/issues/302/elbeyli-cccam-server-hack-v1-11-work -: https://soundcloud.com/queaniscontha/elbeyli-cccam-server-hack-v1-best

        -

        How to Protect Yourself from Elbeyli Cccam Server Hack V.1 and Similar Scams?

        - -

        Elbeyli Cccam Server Hack V.1 and similar scams are designed to trick you into downloading and running malicious files that can harm your computer or steal your personal information. To protect yourself from these scams, you should follow some basic security tips:

        - -
          -
        • Do not download or run any file that claims to hack CCcam servers or give you free access to all channels. These files are most likely viruses or malware that can damage your system or compromise your security.
        • -
        • Do not trust any website that offers Elbeyli Cccam Server Hack V.1 or similar software. These websites are usually fake and may contain malicious links or ads that can infect your computer.
        • -
        • Do not provide any personal or financial information to any website that claims to offer Elbeyli Cccam Server Hack V.1 or similar software. These websites may try to steal your identity or money by phishing, fraud, or extortion.
        • -
        • Use a reliable antivirus program and keep it updated. Scan your computer regularly for any potential threats and remove them as soon as possible.
        • -
        • Use a firewall and a VPN service to protect your internet connection and prevent hackers from accessing your network or data.
        • -
        - -

        What are the Benefits of Using Legitimate CCcam Services?

        - -

        If you want to watch all the channels you want without risking your security, privacy, or legality, you should use legitimate CCcam services instead of Elbeyli Cccam Server Hack V.1 and similar scams. Legitimate CCcam services offer many benefits, such as:

        - -
          -
        • They are legal and ethical. You pay a fair price for the service and support the CCcam providers who work hard to provide quality service to their customers.
        • -
        • They are safe and secure. You do not have to worry about downloading or running any malicious file that can harm your computer or steal your personal information.
        • -
        • They are reliable and high-quality. You get access to a wide range of channels with excellent picture and sound quality. You do not have to deal with freezing, buffering, or blackouts.
        • -
        • They are easy and convenient. You do not have to waste time and effort trying to hack CCcam servers or find valid CCcam lines. You just need a simple subscription and a compatible receiver or emulator.
        • -
        • They are customer-oriented and supportive. You get 24/7 customer support and technical assistance from the CCcam providers in case you have any problem or question.
        • -
        - -

        Conclusion

        - -

        In conclusion, Elbeyli Cccam Server Hack V.1 is a scam that tries to lure you into downloading a malicious file that can harm your computer or steal your personal information. It is also illegal, unethical, and unreliable to hack CCcam servers and watch channels without paying for them. You should avoid Elbeyli Cccam Server Hack V.1 and any similar software and choose legitimate CCcam services instead.

        - -

        We hope this article helped you understand what Elbeyli Cccam Server Hack V.1 is and why you should avoid it. If you have any questions or comments, feel free to leave them below.

        - -: https://tragvenjackcuwegs.wixsite.com/denelcuset/post/elbeyli-cccam-server-hack-v-1 -: https://bitbucket.org/atlassian/openapi-diff/issues/302/elbeyli-cccam-server-hack-v1-11-work -: https://soundcloud.com/queaniscontha/elbeyli-cccam-server-hack-v1-best -

        How to Install and Configure a Receiver or Emulator for CCcam?

        - -

        If you want to use a legitimate CCcam service instead of Elbeyli Cccam Server Hack V.1 and similar scams, you need to install and configure a receiver or emulator that can decode the CCcam lines and display the channels on your TV. But how do you do that? Here are some steps to help you:

        - -
          -
        • Choose a receiver or emulator that is compatible with CCcam. You can use one of the receivers or emulators we mentioned above, such as Dreambox, Vu+, Openbox, CCcamdroid, or Oscam.
        • -
        • Download and install the CCcam software on your receiver or emulator. You can find the CCcam software on the official website of your receiver or emulator, or on other websites that offer CCcam downloads.
        • -
        • Get a CCcam subscription from a legitimate provider. You can choose one of the providers we suggested above, such as cccambox.com, cccamlux.com, cccamserver.com, cccampowerfull.com, cccamstore.tv, or cccamservice.com.
        • -
        • Enter your CCcam subscription details on your receiver or emulator. You will need to enter your username, password, and server address that you received from your provider. You can enter them manually or by using a file called CCcam.cfg.
        • -
        • Restart your receiver or emulator and enjoy the channels. You should be able to see the channels that are included in your CCcam subscription on your TV screen. You can use your remote control to switch between channels.
        • -
        - -

        What are the Advantages and Disadvantages of CCcam and IPTV?

        - -

        If you want to watch all the channels you want without risking your security, privacy, or legality, you have two main options: CCcam and IPTV. CCcam is a service that uses satellite signals to deliver channels to your receiver or emulator. IPTV is a service that streams live TV channels over the internet to your device. But what are the advantages and disadvantages of each service? Here are some of them:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        CCcamIPTV
        AdvantagesAdvantages
        - It offers a wide range of channels from different satellites and regions.- It offers a wide range of channels from different platforms and providers.
        - It has high picture and sound quality with minimal compression.- It has high picture and sound quality with adaptive bitrate.
        - It does not require a high-speed internet connection or data usage.- It does not require a satellite dish or receiver.
        - It is cheaper than IPTV in terms of subscription and equipment costs.- It is more flexible than CCcam in terms of device compatibility and mobility.
        DisadvantagesDisadvantages
        - It requires a satellite dish and receiver that are properly aligned and configured.- It requires a high-speed internet connection and data usage.
        - It is affected by weather conditions and signal interference.- It is affected by network congestion and buffering issues.
        - It is limited by the availability and coverage of satellites.- It is limited by the availability and legality of IPTV providers.
        - It is less flexible than IPTV in terms of device compatibility and mobility.- It is more expensive than CCcam in terms of subscription and equipment costs.
        - -

        Catchy Title for the Article

        - -

        A possible catchy title for the article is: - -Elbeyli Cccam Server Hack V.1: The Truth Behind the Scam and How to Watch All Channels Safely

        -

        Conclusion

        - -

        In conclusion, Elbeyli Cccam Server Hack V.1 is a scam that tries to lure you into downloading a malicious file that can harm your computer or steal your personal information. It is also illegal, unethical, and unreliable to hack CCcam servers and watch channels without paying for them. You should avoid Elbeyli Cccam Server Hack V.1 and any similar software and choose legitimate CCcam services instead.

        - -

        Legitimate CCcam services offer many benefits such as high-quality service, customer support, fair prices, and legal and ethical access to a wide range of channels. You can use a compatible receiver or emulator to decode the CCcam lines and display the channels on your TV. You can also compare CCcam with IPTV and see which service suits your needs and preferences better.

        - -

        We hope this article helped you understand what Elbeyli Cccam Server Hack V.1 is and why you should avoid it. We also hope it helped you learn how to choose a legitimate CCcam service and how to install and configure a receiver or emulator for CCcam. If you have any questions or comments, feel free to leave them below.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Electric Strings Vst Neocymatics Hybrid Strings Torrentrar.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Electric Strings Vst Neocymatics Hybrid Strings Torrentrar.md deleted file mode 100644 index 350d1f8f2c914cfd564d7a3993add1e812ac09f4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Electric Strings Vst Neocymatics Hybrid Strings Torrentrar.md +++ /dev/null @@ -1,14 +0,0 @@ -

        Electric Strings Vst Neocymatics Hybrid Strings Torrentrar


        Download Ziphttps://urlgoal.com/2uCJYB



        -
        -A wide variety of string instruments and accessories are available. 4.88 stars, 1 reviews. A wide variety of string instruments and accessories are available.Reciprocating piston pumps are known for the purpose of supplying compressed air to an air brake system in an automotive vehicle. One such pump is disclosed in U.S. Pat. No. 3,838,825, dated Sept. 24, 1974. In the past, a number of manufacturers of automotive vehicles have utilized one or more piston pumps of this general type as primary compressed air sources. In the majority of cases, the pumps are individually mounted in the fuel tank of the vehicle and are connected to the air brake system by individual dedicated air lines. - -Reciprocating piston pumps for use as primary compressed air sources generally require a fuel supply to supply fuel for combustion during the reciprocal movement of the piston. In the past, these pumps have been supplied with fuel by a carburetor of the type which uses float bowl action to regulate the fuel level in the bowl. Fuel is supplied to the bowl through a fuel conduit. A float controls the fuel level in the bowl and is mounted for movement with the reciprocating piston. The float has an opening therethrough which allows fuel to enter the bowl. As fuel fills the bowl, the float rises, but only to a certain predetermined level at which fuel then rises in the bowl to create a fuel/air mixture, thus creating a prime for the pump. As fuel is drawn from the bowl by the pump, the float descends. - -A pump of this type is shown in U.S. Pat. No. 3,838,825. The only fuel source in the pump shown in that patent is a carburetor bowl which provides fuel to the reciprocating piston of the pump. - -However, there are many potential problems with such a pump. In many, if not most, automotive vehicles, the fuel tank is also used to hold cooling water. It is a well known fact that the fuel/water mixture in a fuel tank can result in the corrosion and clogging of a pump which is mounted in the tank. - -Furthermore, the typical automotive vehicle fuel tank is relatively large. The pumping chamber of the pump is in communication with the interior of the fuel tank. As a result, the filling of the fuel tank with fuel, even if the fuel is not consumed by the vehicle, results in the filling of the pumping chamber with fuel. This results in the pump becoming contaminated with oil from the fuel pump. 4fefd39f24
        -
        -
        -

        diff --git a/spaces/ritwikbiswas/incoder-complete/style.css b/spaces/ritwikbiswas/incoder-complete/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/ritwikbiswas/incoder-complete/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/robin0307/MMOCR/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py b/spaces/robin0307/MMOCR/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py deleted file mode 100644 index 1183974024cf33d814f635ddb1454895fbd3c02c..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_600e.py', - '../../_base_/det_models/panet_r18_fpem_ffm.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/panet_pipeline.py' -] - -model = {{_base_.model_quad}} - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline_icdar2015 = {{_base_.train_pipeline_icdar2015}} -test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}} - -data = dict( - samples_per_gpu=8, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_icdar2015), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/robin0307/MMOCR/configs/textrecog/abinet/abinet_vision_only_academic.py b/spaces/robin0307/MMOCR/configs/textrecog/abinet/abinet_vision_only_academic.py deleted file mode 100644 index 318144d2418c7e77568d4915d72f01882835ba94..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textrecog/abinet/abinet_vision_only_academic.py +++ /dev/null @@ -1,81 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_20e.py', - '../../_base_/recog_pipelines/abinet_pipeline.py', - '../../_base_/recog_datasets/toy_data.py' - # '../../_base_/recog_datasets/ST_MJ_alphanumeric_train.py', - # '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -# Model -num_chars = 37 -max_seq_len = 26 -label_convertor = dict( - type='ABIConvertor', - dict_type='DICT36', - with_unknown=False, - with_padding=False, - lower=True, -) - -model = dict( - type='ABINet', - backbone=dict(type='ResNetABI'), - encoder=dict( - type='ABIVisionModel', - encoder=dict( - type='TransformerEncoder', - n_layers=3, - n_head=8, - d_model=512, - d_inner=2048, - dropout=0.1, - max_len=8 * 32, - ), - decoder=dict( - type='ABIVisionDecoder', - in_channels=512, - num_channels=64, - attn_height=8, - attn_width=32, - attn_mode='nearest', - use_result='feature', - num_chars=num_chars, - max_seq_len=max_seq_len, - init_cfg=dict(type='Xavier', layer='Conv2d')), - ), - loss=dict( - type='ABILoss', - enc_weight=1.0, - dec_weight=1.0, - fusion_weight=1.0, - num_classes=num_chars), - label_convertor=label_convertor, - max_seq_len=max_seq_len, - iter_size=1) - -data = dict( - samples_per_gpu=192, - workers_per_gpu=8, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py deleted file mode 100644 index 41625a61d6d1c38c633062c24b1e3455bd3ae2df..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/seg_heads/panoptic_fusion_heads/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base_panoptic_fusion_head import \ - BasePanopticFusionHead # noqa: F401,F403 -from .heuristic_fusion_head import HeuristicFusionHead # noqa: F401,F403 -from .maskformer_fusion_head import MaskFormerFusionHead # noqa: F401,F403 diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/__init__.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/__init__.py deleted file mode 100644 index d61ba5d68aa2e0bc27a06a482f59f4fcc78cb0c2..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -from .dino import build_dino diff --git a/spaces/rune-m/age_guesser/app.py b/spaces/rune-m/age_guesser/app.py deleted file mode 100644 index 2d3b93d37493416c00330b1512e8cacf62a78133..0000000000000000000000000000000000000000 --- a/spaces/rune-m/age_guesser/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -from fastai.vision.all import * - -def get_parent_as_int(o): - return int(Path(o).parent.name) - -learner = load_learner('model.pkl') - -def image_to_text(image): - prediction, _, _ = learner.predict(image) - return round(prediction[0], 1) - -inputs = gr.inputs.Image() -output = gr.outputs.Textbox() - -interface = gr.Interface(fn=image_to_text, inputs=inputs, outputs=output, title="Age Guesser") -interface.launch() \ No newline at end of file diff --git a/spaces/safi842/FashionGen/netdissect/runningstats.py b/spaces/safi842/FashionGen/netdissect/runningstats.py deleted file mode 100644 index fe4093e0318edeecf8aebc34771adbde5043e2d4..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/runningstats.py +++ /dev/null @@ -1,773 +0,0 @@ -''' -Running statistics on the GPU using pytorch. - -RunningTopK maintains top-k statistics for a set of channels in parallel. -RunningQuantile maintains (sampled) quantile statistics for a set of channels. -''' - -import torch, math, numpy -from collections import defaultdict - -class RunningTopK: - ''' - A class to keep a running tally of the the top k values (and indexes) - of any number of torch feature components. Will work on the GPU if - the data is on the GPU. - - This version flattens all arrays to avoid crashes. - ''' - def __init__(self, k=100, state=None): - if state is not None: - self.set_state_dict(state) - return - self.k = k - self.count = 0 - # This version flattens all data internally to 2-d tensors, - # to avoid crashes with the current pytorch topk implementation. - # The data is puffed back out to arbitrary tensor shapes on ouput. - self.data_shape = None - self.top_data = None - self.top_index = None - self.next = 0 - self.linear_index = 0 - self.perm = None - - def add(self, data): - ''' - Adds a batch of data to be considered for the running top k. - The zeroth dimension enumerates the observations. All other - dimensions enumerate different features. - ''' - if self.top_data is None: - # Allocation: allocate a buffer of size 5*k, at least 10, for each. - self.data_shape = data.shape[1:] - feature_size = int(numpy.prod(self.data_shape)) - self.top_data = torch.zeros( - feature_size, max(10, self.k * 5), out=data.new()) - self.top_index = self.top_data.clone().long() - self.linear_index = 0 if len(data.shape) == 1 else torch.arange( - feature_size, out=self.top_index.new()).mul_( - self.top_data.shape[-1])[:,None] - size = data.shape[0] - sk = min(size, self.k) - if self.top_data.shape[-1] < self.next + sk: - # Compression: if full, keep topk only. - self.top_data[:,:self.k], self.top_index[:,:self.k] = ( - self.result(sorted=False, flat=True)) - self.next = self.k - free = self.top_data.shape[-1] - self.next - # Pick: copy the top sk of the next batch into the buffer. - # Currently strided topk is slow. So we clone after transpose. - # TODO: remove the clone() if it becomes faster. - cdata = data.contiguous().view(size, -1).t().clone() - td, ti = cdata.topk(sk, sorted=False) - self.top_data[:,self.next:self.next+sk] = td - self.top_index[:,self.next:self.next+sk] = (ti + self.count) - self.next += sk - self.count += size - - def result(self, sorted=True, flat=False): - ''' - Returns top k data items and indexes in each dimension, - with channels in the first dimension and k in the last dimension. - ''' - k = min(self.k, self.next) - # bti are top indexes relative to buffer array. - td, bti = self.top_data[:,:self.next].topk(k, sorted=sorted) - # we want to report top indexes globally, which is ti. - ti = self.top_index.view(-1)[ - (bti + self.linear_index).view(-1) - ].view(*bti.shape) - if flat: - return td, ti - else: - return (td.view(*(self.data_shape + (-1,))), - ti.view(*(self.data_shape + (-1,)))) - - def to_(self, device): - self.top_data = self.top_data.to(device) - self.top_index = self.top_index.to(device) - if isinstance(self.linear_index, torch.Tensor): - self.linear_index = self.linear_index.to(device) - - def state_dict(self): - return dict( - constructor=self.__module__ + '.' + - self.__class__.__name__ + '()', - k=self.k, - count=self.count, - data_shape=tuple(self.data_shape), - top_data=self.top_data.cpu().numpy(), - top_index=self.top_index.cpu().numpy(), - next=self.next, - linear_index=(self.linear_index.cpu().numpy() - if isinstance(self.linear_index, torch.Tensor) - else self.linear_index), - perm=self.perm) - - def set_state_dict(self, dic): - self.k = dic['k'].item() - self.count = dic['count'].item() - self.data_shape = tuple(dic['data_shape']) - self.top_data = torch.from_numpy(dic['top_data']) - self.top_index = torch.from_numpy(dic['top_index']) - self.next = dic['next'].item() - self.linear_index = (torch.from_numpy(dic['linear_index']) - if len(dic['linear_index'].shape) > 0 - else dic['linear_index'].item()) - -class RunningQuantile: - """ - Streaming randomized quantile computation for torch. - - Add any amount of data repeatedly via add(data). At any time, - quantile estimates (or old-style percentiles) can be read out using - quantiles(q) or percentiles(p). - - Accuracy scales according to resolution: the default is to - set resolution to be accurate to better than 0.1%, - while limiting storage to about 50,000 samples. - - Good for computing quantiles of huge data without using much memory. - Works well on arbitrary data with probability near 1. - - Based on the optimal KLL quantile algorithm by Karnin, Lang, and Liberty - from FOCS 2016. http://ieee-focs.org/FOCS-2016-Papers/3933a071.pdf - """ - - def __init__(self, resolution=6 * 1024, buffersize=None, seed=None, - state=None): - if state is not None: - self.set_state_dict(state) - return - self.depth = None - self.dtype = None - self.device = None - self.resolution = resolution - # Default buffersize: 128 samples (and smaller than resolution). - if buffersize is None: - buffersize = min(128, (resolution + 7) // 8) - self.buffersize = buffersize - self.samplerate = 1.0 - self.data = None - self.firstfree = [0] - self.randbits = torch.ByteTensor(resolution) - self.currentbit = len(self.randbits) - 1 - self.extremes = None - self.size = 0 - - def _lazy_init(self, incoming): - self.depth = incoming.shape[1] - self.dtype = incoming.dtype - self.device = incoming.device - self.data = [torch.zeros(self.depth, self.resolution, - dtype=self.dtype, device=self.device)] - self.extremes = torch.zeros(self.depth, 2, - dtype=self.dtype, device=self.device) - self.extremes[:,0] = float('inf') - self.extremes[:,-1] = -float('inf') - - def to_(self, device): - """Switches internal storage to specified device.""" - if device != self.device: - old_data = self.data - old_extremes = self.extremes - self.data = [d.to(device) for d in self.data] - self.extremes = self.extremes.to(device) - self.device = self.extremes.device - del old_data - del old_extremes - - def add(self, incoming): - if self.depth is None: - self._lazy_init(incoming) - assert len(incoming.shape) == 2 - assert incoming.shape[1] == self.depth, (incoming.shape[1], self.depth) - self.size += incoming.shape[0] - # Convert to a flat torch array. - if self.samplerate >= 1.0: - self._add_every(incoming) - return - # If we are sampling, then subsample a large chunk at a time. - self._scan_extremes(incoming) - chunksize = int(math.ceil(self.buffersize / self.samplerate)) - for index in range(0, len(incoming), chunksize): - batch = incoming[index:index+chunksize] - sample = sample_portion(batch, self.samplerate) - if len(sample): - self._add_every(sample) - - def _add_every(self, incoming): - supplied = len(incoming) - index = 0 - while index < supplied: - ff = self.firstfree[0] - available = self.data[0].shape[1] - ff - if available == 0: - if not self._shift(): - # If we shifted by subsampling, then subsample. - incoming = incoming[index:] - if self.samplerate >= 0.5: - # First time sampling - the data source is very large. - self._scan_extremes(incoming) - incoming = sample_portion(incoming, self.samplerate) - index = 0 - supplied = len(incoming) - ff = self.firstfree[0] - available = self.data[0].shape[1] - ff - copycount = min(available, supplied - index) - self.data[0][:,ff:ff + copycount] = torch.t( - incoming[index:index + copycount,:]) - self.firstfree[0] += copycount - index += copycount - - def _shift(self): - index = 0 - # If remaining space at the current layer is less than half prev - # buffer size (rounding up), then we need to shift it up to ensure - # enough space for future shifting. - while self.data[index].shape[1] - self.firstfree[index] < ( - -(-self.data[index-1].shape[1] // 2) if index else 1): - if index + 1 >= len(self.data): - return self._expand() - data = self.data[index][:,0:self.firstfree[index]] - data = data.sort()[0] - if index == 0 and self.samplerate >= 1.0: - self._update_extremes(data[:,0], data[:,-1]) - offset = self._randbit() - position = self.firstfree[index + 1] - subset = data[:,offset::2] - self.data[index + 1][:,position:position + subset.shape[1]] = subset - self.firstfree[index] = 0 - self.firstfree[index + 1] += subset.shape[1] - index += 1 - return True - - def _scan_extremes(self, incoming): - # When sampling, we need to scan every item still to get extremes - self._update_extremes( - torch.min(incoming, dim=0)[0], - torch.max(incoming, dim=0)[0]) - - def _update_extremes(self, minr, maxr): - self.extremes[:,0] = torch.min( - torch.stack([self.extremes[:,0], minr]), dim=0)[0] - self.extremes[:,-1] = torch.max( - torch.stack([self.extremes[:,-1], maxr]), dim=0)[0] - - def _randbit(self): - self.currentbit += 1 - if self.currentbit >= len(self.randbits): - self.randbits.random_(to=2) - self.currentbit = 0 - return self.randbits[self.currentbit] - - def state_dict(self): - return dict( - constructor=self.__module__ + '.' + - self.__class__.__name__ + '()', - resolution=self.resolution, - depth=self.depth, - buffersize=self.buffersize, - samplerate=self.samplerate, - data=[d.cpu().numpy()[:,:f].T - for d, f in zip(self.data, self.firstfree)], - sizes=[d.shape[1] for d in self.data], - extremes=self.extremes.cpu().numpy(), - size=self.size) - - def set_state_dict(self, dic): - self.resolution = int(dic['resolution']) - self.randbits = torch.ByteTensor(self.resolution) - self.currentbit = len(self.randbits) - 1 - self.depth = int(dic['depth']) - self.buffersize = int(dic['buffersize']) - self.samplerate = float(dic['samplerate']) - firstfree = [] - buffers = [] - for d, s in zip(dic['data'], dic['sizes']): - firstfree.append(d.shape[0]) - buf = numpy.zeros((d.shape[1], s), dtype=d.dtype) - buf[:,:d.shape[0]] = d.T - buffers.append(torch.from_numpy(buf)) - self.firstfree = firstfree - self.data = buffers - self.extremes = torch.from_numpy((dic['extremes'])) - self.size = int(dic['size']) - self.dtype = self.extremes.dtype - self.device = self.extremes.device - - def minmax(self): - if self.firstfree[0]: - self._scan_extremes(self.data[0][:,:self.firstfree[0]].t()) - return self.extremes.clone() - - def median(self): - return self.quantiles([0.5])[:,0] - - def mean(self): - return self.integrate(lambda x: x) / self.size - - def variance(self): - mean = self.mean()[:,None] - return self.integrate(lambda x: (x - mean).pow(2)) / (self.size - 1) - - def stdev(self): - return self.variance().sqrt() - - def _expand(self): - cap = self._next_capacity() - if cap > 0: - # First, make a new layer of the proper capacity. - self.data.insert(0, torch.zeros(self.depth, cap, - dtype=self.dtype, device=self.device)) - self.firstfree.insert(0, 0) - else: - # Unless we're so big we are just subsampling. - assert self.firstfree[0] == 0 - self.samplerate *= 0.5 - for index in range(1, len(self.data)): - # Scan for existing data that needs to be moved down a level. - amount = self.firstfree[index] - if amount == 0: - continue - position = self.firstfree[index-1] - # Move data down if it would leave enough empty space there - # This is the key invariant: enough empty space to fit half - # of the previous level's buffer size (rounding up) - if self.data[index-1].shape[1] - (amount + position) >= ( - -(-self.data[index-2].shape[1] // 2) if (index-1) else 1): - self.data[index-1][:,position:position + amount] = ( - self.data[index][:,:amount]) - self.firstfree[index-1] += amount - self.firstfree[index] = 0 - else: - # Scrunch the data if it would not. - data = self.data[index][:,:amount] - data = data.sort()[0] - if index == 1: - self._update_extremes(data[:,0], data[:,-1]) - offset = self._randbit() - scrunched = data[:,offset::2] - self.data[index][:,:scrunched.shape[1]] = scrunched - self.firstfree[index] = scrunched.shape[1] - return cap > 0 - - def _next_capacity(self): - cap = int(math.ceil(self.resolution * (0.67 ** len(self.data)))) - if cap < 2: - return 0 - # Round up to the nearest multiple of 8 for better GPU alignment. - cap = -8 * (-cap // 8) - return max(self.buffersize, cap) - - def _weighted_summary(self, sort=True): - if self.firstfree[0]: - self._scan_extremes(self.data[0][:,:self.firstfree[0]].t()) - size = sum(self.firstfree) + 2 - weights = torch.FloatTensor(size) # Floating point - summary = torch.zeros(self.depth, size, - dtype=self.dtype, device=self.device) - weights[0:2] = 0 - summary[:,0:2] = self.extremes - index = 2 - for level, ff in enumerate(self.firstfree): - if ff == 0: - continue - summary[:,index:index + ff] = self.data[level][:,:ff] - weights[index:index + ff] = 2.0 ** level - index += ff - assert index == summary.shape[1] - if sort: - summary, order = torch.sort(summary, dim=-1) - weights = weights[order.view(-1).cpu()].view(order.shape) - return (summary, weights) - - def quantiles(self, quantiles, old_style=False): - if self.size == 0: - return torch.full((self.depth, len(quantiles)), torch.nan) - summary, weights = self._weighted_summary() - cumweights = torch.cumsum(weights, dim=-1) - weights / 2 - if old_style: - # To be convenient with torch.percentile - cumweights -= cumweights[:,0:1].clone() - cumweights /= cumweights[:,-1:].clone() - else: - cumweights /= torch.sum(weights, dim=-1, keepdim=True) - result = torch.zeros(self.depth, len(quantiles), - dtype=self.dtype, device=self.device) - # numpy is needed for interpolation - if not hasattr(quantiles, 'cpu'): - quantiles = torch.Tensor(quantiles) - nq = quantiles.cpu().numpy() - ncw = cumweights.cpu().numpy() - nsm = summary.cpu().numpy() - for d in range(self.depth): - result[d] = torch.tensor(numpy.interp(nq, ncw[d], nsm[d]), - dtype=self.dtype, device=self.device) - return result - - def integrate(self, fun): - result = None - for level, ff in enumerate(self.firstfree): - if ff == 0: - continue - term = torch.sum( - fun(self.data[level][:,:ff]) * (2.0 ** level), - dim=-1) - if result is None: - result = term - else: - result += term - if result is not None: - result /= self.samplerate - return result - - def percentiles(self, percentiles): - return self.quantiles(percentiles, old_style=True) - - def readout(self, count=1001, old_style=True): - return self.quantiles( - torch.linspace(0.0, 1.0, count), old_style=old_style) - - def normalize(self, data): - ''' - Given input data as taken from the training distirbution, - normalizes every channel to reflect quantile values, - uniformly distributed, within [0, 1]. - ''' - assert self.size > 0 - assert data.shape[0] == self.depth - summary, weights = self._weighted_summary() - cumweights = torch.cumsum(weights, dim=-1) - weights / 2 - cumweights /= torch.sum(weights, dim=-1, keepdim=True) - result = torch.zeros_like(data).float() - # numpy is needed for interpolation - ndata = data.cpu().numpy().reshape((data.shape[0], -1)) - ncw = cumweights.cpu().numpy() - nsm = summary.cpu().numpy() - for d in range(self.depth): - normed = torch.tensor(numpy.interp(ndata[d], nsm[d], ncw[d]), - dtype=torch.float, device=data.device).clamp_(0.0, 1.0) - if len(data.shape) > 1: - normed = normed.view(*(data.shape[1:])) - result[d] = normed - return result - - -class RunningConditionalQuantile: - ''' - Equivalent to a map from conditions (any python hashable type) - to RunningQuantiles. The reason for the type is to allow limited - GPU memory to be exploited while counting quantile stats on many - different conditions, a few of which are common and which benefit - from GPU, but most of which are rare and would not all fit into - GPU RAM. - - To move a set of conditions to a device, use rcq.to_(device, conds). - Then in the future, move the tallied data to the device before - calling rcq.add, that is, rcq.add(cond, data.to(device)). - - To allow the caller to decide which conditions to allow to use GPU, - rcq.most_common_conditions(n) returns a list of the n most commonly - added conditions so far. - ''' - def __init__(self, resolution=6 * 1024, buffersize=None, seed=None, - state=None): - self.first_rq = None - self.call_stats = defaultdict(int) - self.running_quantiles = {} - if state is not None: - self.set_state_dict(state) - return - self.rq_args = dict(resolution=resolution, buffersize=buffersize, - seed=seed) - - def add(self, condition, incoming): - if condition not in self.running_quantiles: - self.running_quantiles[condition] = RunningQuantile(**self.rq_args) - if self.first_rq is None: - self.first_rq = self.running_quantiles[condition] - self.call_stats[condition] += 1 - rq = self.running_quantiles[condition] - # For performance reasons, the caller can move some conditions to - # the CPU if they are not among the most common conditions. - if rq.device is not None and (rq.device != incoming.device): - rq.to_(incoming.device) - self.running_quantiles[condition].add(incoming) - - def most_common_conditions(self, n): - return sorted(self.call_stats.keys(), - key=lambda c: -self.call_stats[c])[:n] - - def collected_add(self, conditions, incoming): - for c in conditions: - self.add(c, incoming) - - def conditional(self, c): - return self.running_quantiles[c] - - def collected_quantiles(self, conditions, quantiles, old_style=False): - result = torch.zeros( - size=(len(conditions), self.first_rq.depth, len(quantiles)), - dtype=self.first_rq.dtype, - device=self.first_rq.device) - for i, c in enumerate(conditions): - if c in self.running_quantiles: - result[i] = self.running_quantiles[c].quantiles( - quantiles, old_style) - return result - - def collected_normalize(self, conditions, values): - result = torch.zeros( - size=(len(conditions), values.shape[0], values.shape[1]), - dtype=torch.float, - device=self.first_rq.device) - for i, c in enumerate(conditions): - if c in self.running_quantiles: - result[i] = self.running_quantiles[c].normalize(values) - return result - - def to_(self, device, conditions=None): - if conditions is None: - conditions = self.running_quantiles.keys() - for cond in conditions: - if cond in self.running_quantiles: - self.running_quantiles[cond].to_(device) - - def state_dict(self): - conditions = sorted(self.running_quantiles.keys()) - result = dict( - constructor=self.__module__ + '.' + - self.__class__.__name__ + '()', - rq_args=self.rq_args, - conditions=conditions) - for i, c in enumerate(conditions): - result.update({ - '%d.%s' % (i, k): v - for k, v in self.running_quantiles[c].state_dict().items()}) - return result - - def set_state_dict(self, dic): - self.rq_args = dic['rq_args'].item() - conditions = list(dic['conditions']) - subdicts = defaultdict(dict) - for k, v in dic.items(): - if '.' in k: - p, s = k.split('.', 1) - subdicts[p][s] = v - self.running_quantiles = { - c: RunningQuantile(state=subdicts[str(i)]) - for i, c in enumerate(conditions)} - if conditions: - self.first_rq = self.running_quantiles[conditions[0]] - - # example usage: - # levels = rqc.conditional(()).quantiles(1 - fracs) - # denoms = 1 - rqc.collected_normalize(cats, levels) - # isects = 1 - rqc.collected_normalize(labels, levels) - # unions = fracs + denoms[cats] - isects - # iou = isects / unions - - - - -class RunningCrossCovariance: - ''' - Running computation. Use this when an off-diagonal block of the - covariance matrix is needed (e.g., when the whole covariance matrix - does not fit in the GPU). - - Chan-style numerically stable update of mean and full covariance matrix. - Chan, Golub. LeVeque. 1983. http://www.jstor.org/stable/2683386 - ''' - def __init__(self, state=None): - if state is not None: - self.set_state_dict(state) - return - self.count = 0 - self._mean = None - self.cmom2 = None - self.v_cmom2 = None - - def add(self, a, b): - if len(a.shape) == 1: - a = a[None, :] - b = b[None, :] - assert(a.shape[0] == b.shape[0]) - if len(a.shape) > 2: - a, b = [d.view(d.shape[0], d.shape[1], -1).permute(0, 2, 1 - ).contiguous().view(-1, d.shape[1]) for d in [a, b]] - batch_count = a.shape[0] - batch_mean = [d.sum(0) / batch_count for d in [a, b]] - centered = [d - bm for d, bm in zip([a, b], batch_mean)] - # If more than 10 billion operations, divide into batches. - sub_batch = -(-(10 << 30) // (a.shape[1] * b.shape[1])) - # Initial batch. - if self._mean is None: - self.count = batch_count - self._mean = batch_mean - self.v_cmom2 = [c.pow(2).sum(0) for c in centered] - self.cmom2 = a.new(a.shape[1], b.shape[1]).zero_() - progress_addbmm(self.cmom2, centered[0][:,:,None], - centered[1][:,None,:], sub_batch) - return - # Update a batch using Chan-style update for numerical stability. - oldcount = self.count - self.count += batch_count - new_frac = float(batch_count) / self.count - # Update the mean according to the batch deviation from the old mean. - delta = [bm.sub_(m).mul_(new_frac) - for bm, m in zip(batch_mean, self._mean)] - for m, d in zip(self._mean, delta): - m.add_(d) - # Update the cross-covariance using the batch deviation - progress_addbmm(self.cmom2, centered[0][:,:,None], - centered[1][:,None,:], sub_batch) - self.cmom2.addmm_(alpha=new_frac * oldcount, - mat1=delta[0][:,None], mat2=delta[1][None,:]) - # Update the variance using the batch deviation - for c, vc2, d in zip(centered, self.v_cmom2, delta): - vc2.add_(c.pow(2).sum(0)) - vc2.add_(d.pow_(2).mul_(new_frac * oldcount)) - - def mean(self): - return self._mean - - def variance(self): - return [vc2 / (self.count - 1) for vc2 in self.v_cmom2] - - def stdev(self): - return [v.sqrt() for v in self.variance()] - - def covariance(self): - return self.cmom2 / (self.count - 1) - - def correlation(self): - covariance = self.covariance() - rstdev = [s.reciprocal() for s in self.stdev()] - cor = rstdev[0][:,None] * covariance * rstdev[1][None,:] - # Remove NaNs - cor[torch.isnan(cor)] = 0 - return cor - - def to_(self, device): - self._mean = [m.to(device) for m in self._mean] - self.v_cmom2 = [vcs.to(device) for vcs in self.v_cmom2] - self.cmom2 = self.cmom2.to(device) - - def state_dict(self): - return dict( - constructor=self.__module__ + '.' + - self.__class__.__name__ + '()', - count=self.count, - mean_a=self._mean[0].cpu().numpy(), - mean_b=self._mean[1].cpu().numpy(), - cmom2_a=self.v_cmom2[0].cpu().numpy(), - cmom2_b=self.v_cmom2[1].cpu().numpy(), - cmom2=self.cmom2.cpu().numpy()) - - def set_state_dict(self, dic): - self.count = dic['count'].item() - self._mean = [torch.from_numpy(dic[k]) for k in ['mean_a', 'mean_b']] - self.v_cmom2 = [torch.from_numpy(dic[k]) - for k in ['cmom2_a', 'cmom2_b']] - self.cmom2 = torch.from_numpy(dic['cmom2']) - -def progress_addbmm(accum, x, y, batch_size): - ''' - Break up very large adbmm operations into batches so progress can be seen. - ''' - from .progress import default_progress - if x.shape[0] <= batch_size: - return accum.addbmm_(x, y) - progress = default_progress(None) - for i in progress(range(0, x.shape[0], batch_size), desc='bmm'): - accum.addbmm_(x[i:i+batch_size], y[i:i+batch_size]) - return accum - - -def sample_portion(vec, p=0.5): - bits = torch.bernoulli(torch.zeros(vec.shape[0], dtype=torch.uint8, - device=vec.device), p) - return vec[bits] - -if __name__ == '__main__': - import warnings - warnings.filterwarnings("error") - import time - import argparse - parser = argparse.ArgumentParser( - description='Test things out') - parser.add_argument('--mode', default='cpu', help='cpu or cuda') - parser.add_argument('--test_size', type=int, default=1000000) - args = parser.parse_args() - - # An adverarial case: we keep finding more numbers in the middle - # as the stream goes on. - amount = args.test_size - quantiles = 1000 - data = numpy.arange(float(amount)) - data[1::2] = data[-1::-2] + (len(data) - 1) - data /= 2 - depth = 50 - test_cuda = torch.cuda.is_available() - alldata = data[:,None] + (numpy.arange(depth) * amount)[None, :] - actual_sum = torch.FloatTensor(numpy.sum(alldata * alldata, axis=0)) - amt = amount // depth - for r in range(depth): - numpy.random.shuffle(alldata[r*amt:r*amt+amt,r]) - if args.mode == 'cuda': - alldata = torch.cuda.FloatTensor(alldata) - dtype = torch.float - device = torch.device('cuda') - else: - alldata = torch.FloatTensor(alldata) - dtype = torch.float - device = None - starttime = time.time() - qc = RunningQuantile(resolution=6 * 1024) - qc.add(alldata) - # Test state dict - saved = qc.state_dict() - # numpy.savez('foo.npz', **saved) - # saved = numpy.load('foo.npz') - qc = RunningQuantile(state=saved) - assert not qc.device.type == 'cuda' - qc.add(alldata) - actual_sum *= 2 - ro = qc.readout(1001).cpu() - endtime = time.time() - gt = torch.linspace(0, amount, quantiles+1)[None,:] + ( - torch.arange(qc.depth, dtype=torch.float) * amount)[:,None] - maxreldev = torch.max(torch.abs(ro - gt) / amount) * quantiles - print("Maximum relative deviation among %d perentiles: %f" % ( - quantiles, maxreldev)) - minerr = torch.max(torch.abs(qc.minmax().cpu()[:,0] - - torch.arange(qc.depth, dtype=torch.float) * amount)) - maxerr = torch.max(torch.abs((qc.minmax().cpu()[:, -1] + 1) - - (torch.arange(qc.depth, dtype=torch.float) + 1) * amount)) - print("Minmax error %f, %f" % (minerr, maxerr)) - interr = torch.max(torch.abs(qc.integrate(lambda x: x * x).cpu() - - actual_sum) / actual_sum) - print("Integral error: %f" % interr) - medianerr = torch.max(torch.abs(qc.median() - - alldata.median(0)[0]) / alldata.median(0)[0]).cpu() - print("Median error: %f" % interr) - meanerr = torch.max( - torch.abs(qc.mean() - alldata.mean(0)) / alldata.mean(0)).cpu() - print("Mean error: %f" % meanerr) - varerr = torch.max( - torch.abs(qc.variance() - alldata.var(0)) / alldata.var(0)).cpu() - print("Variance error: %f" % varerr) - counterr = ((qc.integrate(lambda x: torch.ones(x.shape[-1]).cpu()) - - qc.size) / (0.0 + qc.size)).item() - print("Count error: %f" % counterr) - print("Time %f" % (endtime - starttime)) - # Algorithm is randomized, so some of these will fail with low probability. - assert maxreldev < 1.0 - assert minerr == 0.0 - assert maxerr == 0.0 - assert interr < 0.01 - assert abs(counterr) < 0.001 - print("OK") diff --git a/spaces/sardor97/Classification_demo/README.md b/spaces/sardor97/Classification_demo/README.md deleted file mode 100644 index c2705dc4087adf7f26c4006c78fe9d26d08a51d2..0000000000000000000000000000000000000000 --- a/spaces/sardor97/Classification_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Classification Demo -emoji: 🌖 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sccstandardteam/ChuanhuChatGPT/modules/models/base_model.py b/spaces/sccstandardteam/ChuanhuChatGPT/modules/models/base_model.py deleted file mode 100644 index 995bac5f72a0a1d8cc2eed8ccdfde87928ba2f41..0000000000000000000000000000000000000000 --- a/spaces/sccstandardteam/ChuanhuChatGPT/modules/models/base_model.py +++ /dev/null @@ -1,593 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback -import pathlib - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from ..presets import * -from ..llama_func import * -from ..utils import * -from .. import shared -from ..config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - StableLM = 4 - MOSS = 5 - YuanAI = 6 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - elif "stablelm" in model_name_lower: - model_type = ModelType.StableLM - elif "moss" in model_name_lower: - model_type = ModelType.MOSS - elif "yuanai" in model_name_lower: - model_type = ModelType.YuanAI - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - # logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - response_synthesizer=None - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
      10. {domain_name}
      11. \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
          \n\n" + "".join(display_append) + "
        " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - self.auto_save(chatbot) - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - pathlib.Path(os.path.join(HISTORY_DIR, self.user_identifier, new_auto_history_filename(os.path.join(HISTORY_DIR, self.user_identifier)))).touch() - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def auto_save(self, chatbot): - history_file_path = get_history_filepath(self.user_identifier) - save_file(history_file_path, self.system_prompt, self.history, chatbot, self.user_identifier) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - logging.info(f"filename: {filename}") - if type(filename) != str and filename is not None: - filename = filename.name - try: - if "/" not in filename: - history_file_path = os.path.join(HISTORY_DIR, user_name, filename) - else: - history_file_path = filename - with open(history_file_path, "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return os.path.basename(filename), json_s["system"], json_s["chatbot"] - except: - # 没有对话历史或者对话历史解析失败 - logging.info(f"没有找到对话历史记录 {filename}") - return gr.update(), self.system_prompt, gr.update() - - def auto_load(self): - if self.user_identifier == "": - self.reset() - return self.system_prompt, gr.update() - history_file_path = get_history_filepath(self.user_identifier) - filename, system_prompt, chatbot = self.load_chat_history(history_file_path, self.user_identifier) - return system_prompt, chatbot - - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/scedlatioru/img-to-music/example/HD Online Player (Dilwale Dulhania Le Jayenge 720p Hd ) [HOT].md b/spaces/scedlatioru/img-to-music/example/HD Online Player (Dilwale Dulhania Le Jayenge 720p Hd ) [HOT].md deleted file mode 100644 index 08a21d9f36f66156d00dc20f7c9d06056973bd25..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/HD Online Player (Dilwale Dulhania Le Jayenge 720p Hd ) [HOT].md +++ /dev/null @@ -1,11 +0,0 @@ -

        HD Online Player (Dilwale Dulhania Le Jayenge 720p hd )


        Download >> https://gohhs.com/2uEAmu



        -
        -HD DVD players were much cheaper than Blu-ray devices, but Blu-ray discs have... Dilwale Dulhania Le Jayenge 1995 PROPER 720p BluRay x264- Pahe [1. 47 GB] (x264-Pahe) ... -Blu-ray Disc Blu-ray Disc Blu-ray Disc. -HD DVD Disc HD DVD Disc HD DVD Disc. -HD DVD Disc HD DVD Disc. -HD DVD Disc. -HD DVD Disc. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/The President Movie Mohsen Makhmalbaf Download 17.md b/spaces/scedlatioru/img-to-music/example/The President Movie Mohsen Makhmalbaf Download 17.md deleted file mode 100644 index 59f1bdd3dcb701443e5a876072468f53aeab156c..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/The President Movie Mohsen Makhmalbaf Download 17.md +++ /dev/null @@ -1,5 +0,0 @@ -
        -

        the code, moreover, raises serious questions about the competences of the gendarmerie. because a group with a much higher level of political responsibility is given special privileges in connection with the recruitment of the ministry, it seems clear that its traditional function of policing the boundaries of the state is no longer what these islamic republic men desire. [37] the clerics hope that the ministry will foster a group of special intelligence officers who cannot be fought under civil laws but who could be used to avert political and social crises. the return of political assassinations to the country appears to have met with wide approval. for example, a reporter from vatan-e emruz told voa in december, 1983, that although the badavi was taboo, its writer and editor, mohsen makhmalbaf, ''the scourge of the regime,'' was published in the pan-turkisia newspaper as if he were a celebrity. [38] other journalists and publishers said that they planned to boycott the badavi and the day after the badavi was published it sold out the first day. [39] but the enemies of makhmalbaf were plotting. a day after its publication, islamic revolutionary court judge mohammad-reza khorrami accused makhmalbaf of communicating with the white house of america and the khatami government of the united states, of providing intelligence on iranian activities abroad, and of complicity with baha'is and foreigners who have committed crimes in iran. [40] makhmalbaf was taken into custody on december 13, 1983, while on a trip to scandinavia. [41] he and his wife were sentenced to seven years' imprisonment on september 3, 1984. [42] the president voted that day for their pardon.

        -

        The President Movie Mohsen Makhmalbaf Download 17


        Download Zip ——— https://gohhs.com/2uEAoC



        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet2/text/abs_tokenizer.py b/spaces/segments-tobias/conex/espnet2/text/abs_tokenizer.py deleted file mode 100644 index fc2ccb3c3694fef0fc4d4bc7576c355c7712fee4..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/text/abs_tokenizer.py +++ /dev/null @@ -1,14 +0,0 @@ -from abc import ABC -from abc import abstractmethod -from typing import Iterable -from typing import List - - -class AbsTokenizer(ABC): - @abstractmethod - def text2tokens(self, line: str) -> List[str]: - raise NotImplementedError - - @abstractmethod - def tokens2text(self, tokens: Iterable[str]) -> str: - raise NotImplementedError diff --git a/spaces/senger/AI-TextGenerator/style.css b/spaces/senger/AI-TextGenerator/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/senger/AI-TextGenerator/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/sgxz/bingo/src/components/button-scroll-to-bottom.tsx b/spaces/sgxz/bingo/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/shengyi-qian/3DOI/monoarti/detr/box_ops.py b/spaces/shengyi-qian/3DOI/monoarti/detr/box_ops.py deleted file mode 100644 index 002aef3d25d9322aa3fdc9fa2131360d612f258a..0000000000000000000000000000000000000000 --- a/spaces/shengyi-qian/3DOI/monoarti/detr/box_ops.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Utilities for bounding box manipulation and GIoU. -""" -import torch -from torchvision.ops.boxes import box_area -import pdb - - -def box_cxcywh_to_xyxy(x): - x_c, y_c, w, h = x.unbind(-1) - b = [(x_c - 0.5 * w), (y_c - 0.5 * h), - (x_c + 0.5 * w), (y_c + 0.5 * h)] - return torch.stack(b, dim=-1) - - -def box_xyxy_to_cxcywh(x): - x0, y0, x1, y1 = x.unbind(-1) - b = [(x0 + x1) / 2, (y0 + y1) / 2, - (x1 - x0), (y1 - y0)] - return torch.stack(b, dim=-1) - - -def rescale_bboxes(out_bbox, size): - img_h, img_w = size - b = out_bbox - #b = box_cxcywh_to_xyxy(out_bbox) - b = b * torch.tensor([img_w, img_h, img_w, img_h], dtype=torch.float32, device=b.device) - return b - - -# modified from torchvision to also return the union -def box_iou(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2] - rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2] - - wh = (rb - lt).clamp(min=0) # [N,M,2] - inter = wh[:, :, 0] * wh[:, :, 1] # [N,M] - - union = area1[:, None] + area2 - inter - - iou = inter / union - return iou, union - - -def generalized_box_iou(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - - The boxes should be in [x0, y0, x1, y1] format - - Returns a [N, M] pairwise matrix, where N = len(boxes1) - and M = len(boxes2) - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - iou, union = box_iou(boxes1, boxes2) - - lt = torch.min(boxes1[:, None, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,M,2] - area = wh[:, :, 0] * wh[:, :, 1] - - return iou - (area - union) / area - - -def masks_to_boxes(masks): - """Compute the bounding boxes around the provided masks - - The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions. - - Returns a [N, 4] tensors, with the boxes in xyxy format - """ - if masks.numel() == 0: - return torch.zeros((0, 4), device=masks.device) - - h, w = masks.shape[-2:] - - y = torch.arange(0, h, dtype=torch.float) - x = torch.arange(0, w, dtype=torch.float) - y, x = torch.meshgrid(y, x) - - x_mask = (masks * x.unsqueeze(0)) - x_max = x_mask.flatten(1).max(-1)[0] - x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - y_mask = (masks * y.unsqueeze(0)) - y_max = y_mask.flatten(1).max(-1)[0] - y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - return torch.stack([x_min, y_min, x_max, y_max], 1) \ No newline at end of file diff --git a/spaces/shigel/recipe_0626/app.py b/spaces/shigel/recipe_0626/app.py deleted file mode 100644 index f59bd5c2f5ca6d8439e448c3905e9cb295fcf722..0000000000000000000000000000000000000000 --- a/spaces/shigel/recipe_0626/app.py +++ /dev/null @@ -1,301 +0,0 @@ -import gradio as gr -import openai -import requests -import os -from dotenv import load_dotenv -import io -import sys -import json -import PIL -import time -from stability_sdk import client -import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation -import markdown2 - -title="najimino AI recipe generator" -inputs_label="どんな料理か教えてくれれば,新しいレシピを考えます" -outputs_label="najimino AIが返信をします" -visual_outputs_label="料理のイメージ" -description=""" -- ※入出力の文字数は最大1000文字程度までを目安に入力してください。回答に50秒くらいかかります. -""" - -article = """ -""" - -load_dotenv() -openai.api_key = os.getenv('OPENAI_API_KEY') -os.environ['STABILITY_HOST'] = 'grpc.stability.ai:443' -stability_api = client.StabilityInference( - key=os.getenv('STABILITY_KEY'), - verbose=True, - # engine="stable-diffusion-512-v2-1", - # engine="stable-diffusion-xl-beta-v2-2-2", - # engine="stable-diffusion-xl-1024-v0-9", - engine="stable-diffusion-xl-1024-v1-0", - # Available engines: stable-diffusion-v1 stable-diffusion-v1-5 stable-diffusion-512-v2-0 stable-diffusion-768-v2-0 - # stable-diffusion-512-v2-1 stable-diffusion-768-v2-1 stable-diffusion-xl-beta-v2-2-2 stable-inpainting-v1-0 stable-inpainting-512-v2-0 -) -# MODEL = "gpt-4" -# MODEL = "gpt-3.5-turbo-16k" -# MODEL = "gpt-3.5-turbo-0613" -MODEL = "gpt-3.5-turbo-1106" - -def get_filetext(filename, cache={}): - if filename in cache: - # キャッシュに保存されている場合は、キャッシュからファイル内容を取得する - return cache[filename] - else: - if not os.path.exists(filename): - raise ValueError(f"ファイル '{filename}' が見つかりませんでした") - with open(filename, "r") as f: - text = f.read() - # ファイル内容をキャッシュする - cache[filename] = text - return text - -def get_functions_from_schema(filename): - schema = get_filetext(filename) - schema_json = json.loads(schema) - functions = schema_json.get("functions") - return functions - -class StabilityAI: - @classmethod - def generate_image(cls, visualize_prompt): - - print("visualize_prompt:"+visualize_prompt) - - answers = stability_api.generate( - prompt=visualize_prompt, - ) - - for resp in answers: - for artifact in resp.artifacts: - if artifact.finish_reason == generation.FILTER: - print("NSFW") - if artifact.type == generation.ARTIFACT_IMAGE: - img = PIL.Image.open(io.BytesIO(artifact.binary)) - return img - -class OpenAI: - - @classmethod - def chat_completion(cls, prompt, start_with=""): - constraints = get_filetext(filename = "constraints.md") - template = get_filetext(filename = "template.md") - - # ChatCompletion APIに渡すデータを定義する - data = { - "model": MODEL, - "messages": [ - {"role": "system", "content": constraints} - ,{"role": "system", "content": template} - ,{"role": "assistant", "content": "Sure!"} - ,{"role": "user", "content": prompt} - ,{"role": "assistant", "content": start_with} - ], - } - - # 文章生成にかかる時間を計測する - start = time.time() - # ChatCompletion APIを呼び出す - response = requests.post( - "https://api.openai.com/v1/chat/completions", - headers={ - "Content-Type": "application/json", - "Authorization": f"Bearer {openai.api_key}" - }, - json=data - ) - print("gpt generation time: "+str(time.time() - start)) - - # ChatCompletion APIから返された結果を取得する - result = response.json() - print(result) - - content = result["choices"][0]["message"]["content"].strip() - - visualize_prompt = content.split("### Prompt for Visual Expression\n\n")[1] - - #print("split_content:"+split_content) - - #if len(split_content) > 1: - # visualize_prompt = split_content[1] - #else: - # visualize_prompt = "vacant dish" - - #print("visualize_prompt:"+visualize_prompt) - - answers = stability_api.generate( - prompt=visualize_prompt, - ) - - @classmethod - def chat_completion_with_function(cls, prompt, messages, functions): - print("prompt:"+prompt) - - # 文章生成にかかる時間を計測する - start = time.time() - # ChatCompletion APIを呼び出す - response = openai.ChatCompletion.create( - model=MODEL, - messages=messages, - functions=functions, - function_call={"name": "format_recipe"} - ) - print("gpt generation time: "+str(time.time() - start)) - - # ChatCompletion APIから返された結果を取得する - message = response.choices[0].message - print("chat completion message: " + json.dumps(message, indent=2)) - - return message - -class NajiminoAI: - - def __init__(self, user_message): - self.user_message = user_message - - def generate_recipe_prompt(self): - template = get_filetext(filename="template.md") - prompt = f""" - {self.user_message} - --- - 上記を元に、下記テンプレートを埋めてください。 - --- - {template} - """ - return prompt - - def format_recipe(self, lang, title, description, ingredients, instruction, comment_feelings_taste, explanation_to_blind_person, prompt_for_visual_expression): - - template = get_filetext(filename = "template.md") - debug_message = template.format( - lang=lang, - title=title, - description=description, - ingredients=ingredients, - instruction=instruction, - comment_feelings_taste=comment_feelings_taste, - explanation_to_blind_person=explanation_to_blind_person, - prompt_for_visual_expression=prompt_for_visual_expression - ) - - print("debug_message: "+debug_message) - - return debug_message - - @classmethod - def generate(cls, user_message): - - najiminoai = NajiminoAI(user_message) - - return najiminoai.generate_recipe() - - def generate_recipe(self): - - user_message = self.user_message - constraints = get_filetext(filename = "constraints.md") - - messages = [ - {"role": "system", "content": constraints} - ,{"role": "user", "content": user_message} - ] - - functions = get_functions_from_schema('schema.json') - - message = OpenAI.chat_completion_with_function(prompt=user_message, messages=messages, functions=functions) - - image = None - html = None - if message.get("function_call"): - function_name = message["function_call"]["name"] - - args = json.loads(message["function_call"]["arguments"]) - - lang=args.get("lang") - title=args.get("title") - description=args.get("description") - ingredients=args.get("ingredients") - instruction=args.get("instruction") - comment_feelings_taste=args.get("comment_feelings_taste") - explanation_to_blind_person=args.get("explanation_to_blind_person") - prompt_for_visual_expression_in_en=args.get("prompt_for_visual_expression_in_en") - - prompt_for_visual_expression = \ - prompt_for_visual_expression_in_en \ - + " delicious looking extremely detailed photo f1.2 (50mm|85mm) award winner depth of field bokeh perfect lighting " - - print("prompt_for_visual_expression: "+prompt_for_visual_expression) - - # 画像生成にかかる時間を計測する - start = time.time() - image = StabilityAI.generate_image(prompt_for_visual_expression) - print("image generation time: "+str(time.time() - start)) - - function_response = self.format_recipe( - lang=lang, - title=title, - description=description, - ingredients=ingredients, - instruction=instruction, - comment_feelings_taste=comment_feelings_taste, - explanation_to_blind_person=explanation_to_blind_person, - prompt_for_visual_expression=prompt_for_visual_expression - ) - - html = ( - "
        " - + "

        " - + markdown2.markdown(function_response) - + "

        " - ) - return [image, html] - -def main(): - # インプット例をクリックした時のコールバック関数 - def click_example(example): - # クリックされたインプット例をテキストボックスに自動入力 - inputs.value = example - time.sleep(0.1) # テキストボックスに文字が表示されるまで待機 - # 自動入力後に実行ボタンをクリックして結果を表示 - execute_button.click() - - iface = gr.Interface(fn=NajiminoAI.generate, - examples=[ - ["ラー麺 スイカ かき氷 八ツ橋"], - ["お好み焼き 鯖"], - ["茹でたアスパラガスに合う季節のソース"], - ], - inputs=gr.Textbox(label=inputs_label), - outputs=[ - gr.Image(label="Visual Expression"), - "html" - ], - title=title, - description=description, - article=article - ) - - iface.launch() - -if __name__ == '__main__': - function = '' - if len(sys.argv) > 1: - function = sys.argv[1] - - if function == 'generate': - NajiminoAI.generate("グルテンフリーの香ばしいサバのお好み焼き") - - elif function == 'generate_image': - image = StabilityAI.generate_image("Imagine a delicious gluten-free okonomiyaki with mackerel. The okonomiyaki is crispy on the outside and chewy on the inside. It is topped with savory sauce and creamy mayonnaise, creating a mouthwatering visual. The dish is garnished with finely chopped green onions and red pickled ginger, adding a pop of color. The mackerel fillets are beautifully grilled and placed on top of the okonomiyaki, adding a touch of elegance. The dish is served on a traditional Japanese plate, completing the visual presentation.") - print("image: " + image) - - # imageが何のクラス確認する - if type(image) == PIL.PngImagePlugin.PngImageFile: - #save image - image.save("image.png") - - else: - main() diff --git a/spaces/shikunl/prismer/prismer/demo.py b/spaces/shikunl/prismer/prismer/demo.py deleted file mode 100644 index 53c5a35c0f4806a8d985c178c4ccdaba5fff3bb1..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/demo.py +++ /dev/null @@ -1,77 +0,0 @@ -import os -import argparse -import torch -try: - import ruamel_yaml as yaml -except ModuleNotFoundError: - import ruamel.yaml as yaml - - -from model.prismer_caption import PrismerCaption -from dataset import create_dataset, create_loader -from tqdm import tqdm - -parser = argparse.ArgumentParser() -parser.add_argument('--mode', default='') -parser.add_argument('--port', default='') - -parser.add_argument('--exp_name', default='', type=str) -args = parser.parse_args() - -# load config -config = yaml.load(open('configs/caption.yaml', 'r'), Loader=yaml.Loader)['demo'] - -# generate expert labels -if len(config['experts']) > 0: - script_name = f'python experts/generate_depth.py' - os.system(script_name) - print('***** Generated Depth *****') - - script_name = f'python experts/generate_edge.py' - os.system(script_name) - print('***** Generated Edge *****') - - script_name = f'python experts/generate_normal.py' - os.system(script_name) - print('***** Generated Surface Normals *****') - - script_name = f'python experts/generate_objdet.py' - os.system(script_name) - print('***** Generated Object Detection Labels *****') - - script_name = f'python experts/generate_ocrdet.py' - os.system(script_name) - print('***** Generated OCR Detection Labels *****') - - script_name = f'python experts/generate_segmentation.py' - os.system(script_name) - print('***** Generated Segmentation Labels *****') - -# load datasets -_, test_dataset = create_dataset('caption', config) -test_loader = create_loader(test_dataset, batch_size=1, num_workers=4, train=False) - -# load pre-trained model -model = PrismerCaption(config) -state_dict = torch.load(f'logging/caption_{args.exp_name}/pytorch_model.bin', map_location='cuda:0') -model.load_state_dict(state_dict) -tokenizer = model.tokenizer - -# inference -model.eval() -with torch.no_grad(): - for step, (experts, data_ids) in enumerate(tqdm(test_loader)): - captions = model(experts, train=False, prefix=config['prefix']) - - captions = tokenizer(captions, max_length=30, padding='max_length', return_tensors='pt').input_ids - caption = captions.to(experts['rgb'].device)[0] - - caption = tokenizer.decode(caption, skip_special_tokens=True) - caption = caption.capitalize() + '.' - - # save caption - save_path = test_loader.dataset.data_list[data_ids[0]]['image'].replace('jpg', 'txt') - with open(save_path, 'w') as f: - f.write(caption) - -print('All Done.') diff --git a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/roi_heads/custom_fast_rcnn.py b/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/roi_heads/custom_fast_rcnn.py deleted file mode 100644 index 9ea19c82f2f08968eb824201d34b9494a46374d4..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/roi_heads/custom_fast_rcnn.py +++ /dev/null @@ -1,192 +0,0 @@ -import logging -import math -import json -import os -from typing import Dict, Union -import torch -from fvcore.nn import giou_loss, smooth_l1_loss -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Linear, ShapeSpec, batched_nms, cat, nonzero_tuple -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.structures import Boxes, Instances -from detectron2.utils.events import get_event_storage -from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers -from detectron2.modeling.roi_heads.fast_rcnn import _log_classification_stats - -__all__ = ["CustomFastRCNNOutputLayers"] - - -def _load_class_freq(cfg): - freq_weight = None - if cfg.MODEL.ROI_BOX_HEAD.USE_EQL_LOSS or cfg.MODEL.ROI_BOX_HEAD.USE_FED_LOSS: - # print('Loading', cfg.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH) - if not os.path.exists(cfg.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH): - return - cat_info = json.load(open(cfg.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH, 'r')) - cat_info = torch.tensor( - [c['image_count'] for c in sorted(cat_info, key=lambda x: x['id'])], - device=torch.device(cfg.MODEL.DEVICE)) - if cfg.MODEL.ROI_BOX_HEAD.USE_FED_LOSS and \ - cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT > 0.: - freq_weight = \ - cat_info.float() ** cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT - else: - thresh, _ = torch.kthvalue( - cat_info, - len(cat_info) - cfg.MODEL.ROI_BOX_HEAD.EQL_FREQ_CAT + 1) - freq_weight = (cat_info < thresh.item()).float() - - return freq_weight - - -def _load_class_hierarchy(cfg): - hierarchy_weight = None - if cfg.MODEL.ROI_BOX_HEAD.HIERARCHY_IGNORE: - if not os.path.exists(cfg.MODEL.ROI_BOX_HEAD.HIERARCHY_PATH): - return - # print('Loading', cfg.MODEL.ROI_BOX_HEAD.HIERARCHY_PATH) - hierarchy_data = json.load( - open(cfg.MODEL.ROI_BOX_HEAD.HIERARCHY_PATH, 'r')) - parents = {int(k): v for k, v in hierarchy_data['parents'].items()} - chirlds = {int(k): v for k, v in hierarchy_data['childs'].items()} - categories = hierarchy_data['categories'] - continousid = sorted([x['id'] for x in categories]) - catid2continous = {x['id']: continousid.index(x['id']) \ - for x in categories} - C = len(categories) - is_parents = torch.zeros((C + 1, C), device=torch.device(cfg.MODEL.DEVICE)).float() - is_chirlds = torch.zeros((C + 1, C), device=torch.device(cfg.MODEL.DEVICE)).float() - for c in categories: - cat_id = catid2continous[c['id']] - is_parents[cat_id, [catid2continous[x] for x in parents[c['id']]]] = 1 - is_chirlds[cat_id, [catid2continous[x] for x in chirlds[c['id']]]] = 1 - assert (is_parents * is_chirlds).sum() == 0 - if cfg.MODEL.ROI_BOX_HEAD.HIERARCHY_POS_PARENTS: - hierarchy_weight = (1 - is_chirlds, is_parents[:C]) - else: - hierarchy_weight = 1 - (is_parents + is_chirlds) # (C + 1) x C - - return hierarchy_weight - - -class CustomFastRCNNOutputLayers(FastRCNNOutputLayers): - def __init__( - self, - cfg, - input_shape: ShapeSpec, - **kwargs - ): - super().__init__(cfg, input_shape, **kwargs) - self.use_sigmoid_ce = cfg.MODEL.ROI_BOX_HEAD.USE_SIGMOID_CE - self.use_eql_loss = cfg.MODEL.ROI_BOX_HEAD.USE_EQL_LOSS - self.use_fed_loss = cfg.MODEL.ROI_BOX_HEAD.USE_FED_LOSS - self.fed_loss_num_cat = cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_NUM_CAT - self.pos_parents = cfg.MODEL.ROI_BOX_HEAD.HIERARCHY_POS_PARENTS - self.hierarchy_ignore = cfg.MODEL.ROI_BOX_HEAD.HIERARCHY_IGNORE - - if self.use_sigmoid_ce: - prior_prob = cfg.MODEL.ROI_BOX_HEAD.PRIOR_PROB - bias_value = -math.log((1 - prior_prob) / prior_prob) - nn.init.constant_(self.cls_score.bias, bias_value) - - self.freq_weight = _load_class_freq(cfg) - hierarchy_weight = _load_class_hierarchy(cfg) - if self.pos_parents and (hierarchy_weight is not None): - self.hierarchy_weight = hierarchy_weight[0] # (C + 1) x C - self.is_parents = hierarchy_weight[1] - else: - self.hierarchy_weight = hierarchy_weight # (C + 1) x C - - - def predict_probs(self, predictions, proposals): - scores, _ = predictions - num_inst_per_image = [len(p) for p in proposals] - if self.use_sigmoid_ce: - probs = scores.sigmoid() - else: - probs = F.softmax(scores, dim=-1) - - return probs.split(num_inst_per_image, dim=0) - - - def sigmoid_cross_entropy_loss( - self, pred_class_logits, gt_classes, use_advanced_loss=True): - if pred_class_logits.numel() == 0: - return pred_class_logits.new_zeros([1])[0] # This is more robust than .sum() * 0. - - B = self.pred_class_logits.shape[0] - C = self.pred_class_logits.shape[1] - 1 - - target = self.pred_class_logits.new_zeros(B, C + 1) - target[range(len(gt_classes)), gt_classes] = 1 # B x (C + 1) - target = target[:, :C] # B x C - - weight = 1 - if use_advanced_loss and (self.freq_weight is not None) and \ - self.use_fed_loss: # fedloss - appeared = torch.unique(gt_classes) # C' - prob = appeared.new_ones(C + 1).float() - if len(appeared) < self.fed_loss_num_cat: - if self.fed_loss_freq_weight > 0: - prob[:C] = self.freq_weight.float().clone() - else: - prob[:C] = prob[:C] * (1 - self.freq_weight) - prob[appeared] = 0 - more_appeared = torch.multinomial( - prob, self.fed_loss_num_cat - len(appeared), - replacement=False) - appeared = torch.cat([appeared, more_appeared]) - appeared_mask = appeared.new_zeros(C + 1) - appeared_mask[appeared] = 1 # C + 1 - appeared_mask = appeared_mask[:C] - fed_w = appeared_mask.view(1, C).expand(B, C) - weight = weight * fed_w - - if use_advanced_loss and (self.hierarchy_weight is not None) and \ - self.hierarchy_ignore: - if self.pos_parents: - target = torch.mm(target, self.is_parents) + target # B x C - hierarchy_w = self.hierarchy_weight[gt_classes] # B x C - weight = weight * hierarchy_w - - cls_loss = F.binary_cross_entropy_with_logits( - self.pred_class_logits[:, :-1], target, reduction='none') # B x C - return torch.sum(cls_loss * weight) / B - - - def losses(self, predictions, proposals, use_advanced_loss=True): - """ - enable advanced loss - """ - scores, proposal_deltas = predictions - gt_classes = ( - cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0) - ) - _log_classification_stats(scores, gt_classes) - - - if len(proposals): - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4 - assert not proposal_boxes.requires_grad, "Proposals should not require gradients!" - gt_boxes = cat( - [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals], - dim=0, - ) - else: - proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device) - - - if self.use_sigmoid_ce: - loss_cls = self.sigmoid_cross_entropy_loss( - scores, gt_classes, use_advanced_loss) - else: - assert not use_advanced_loss - loss_cls = self.softmax_cross_entropy_loss(scores, gt_classes) - return { - "loss_cls": loss_cls, - "loss_box_reg": self.box_reg_loss( - proposal_boxes, gt_boxes, proposal_deltas, gt_classes) - } \ No newline at end of file diff --git a/spaces/silencewing/server/youyou/split.py b/spaces/silencewing/server/youyou/split.py deleted file mode 100644 index d3b57fa285d8f433a939b4d12df467299ed9b20a..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/split.py +++ /dev/null @@ -1,22 +0,0 @@ -import re -s = 'book 书 ruler 尺子 pencil 铅笔 eraser 橡皮pencil case 铅笔盒 backpack 书包 school 学校eye 眼睛 hand 手 ear 耳朵 mouth 嘴nose 鼻子 foot(feet) 脚 face 脸 leg 腿 arm 手臂cat 猫 bird 鸟 rabbit 兔 dog 狗 chicken 鸡 duck 鸭monkey 猴子 tiger 虎 panda 熊猫 elephant 大象 fish 鱼one 一 two 二 three 三 four 四 five 五 six 六seven 七 eight 八 nine 九 ten 十red 红色 yellow 黄色 purple 紫色 brown 棕色 orange 橙色 white 白色 green 绿色 pink 粉红色 blue 蓝色 black 黑色 apple 苹果 banana 香蕉 peach 桃 melon 瓜pear 梨 orange 橙子 grape 葡萄 strawberry 草莓 pineapple 菠萝 classroom 教室 door 门 window 窗 blackboard 黑板wall 墙 desk 课桌 chair 椅子 boy 男孩 girl 女孩in 在...里面 on 在...上面 under 在...下面behind 在...后面 next to 下一个 where 哪里 room 房间 closet 关门 telephone 电话 computer 电脑TV 电视 bed 床 picture 图片 table 桌子lamp 台灯 armchair 沙发 toys 玩具 plane 飞机 boat 小船 train 火车ball 球 teddy bear 泰迪熊 bus 公交车 car 汽车doll 玩偶 pinwheel 纸风车 box 箱子 Shapes 形状 circle 圆形 triangle 三角形 rectangle 长方形square 正方形 eleven 十一 twelve 十二 Thirteen 十三fourteen 十四 fifteen 十五 sixteen 十六 seventeen 十七eighteen 十八 nineteen 十九 twenty 二十 clothes 衣服 T-shirt 丁恤 pants 长裤 shorts 短裤jacket 夹克 sweater 毛衣 skirt 短裙dress 连衣裙 shoe 鞋 sock 袜子 food 食物 drink 饮料 rice 米 noodles 面条jiaozi 饺子 tofu 豆腐 vegetables 蔬菜 meat 肉fish 鱼 chicken 鸡肉 bread 面包 milk 牛奶ice-cream 冰激凌 juice 果汁 egg 鸡蛋 salad 色拉hamburger 汉堡包 cake 蛋糕 ' -# result = "".join(i for i in s if ord(i) < 256) -zh_list = [] -en_list = [] -# result = re.match('(([a-z]|[A-Z])+\b+([\u4e00-\u9fa5])+)+', s) -zh_list = re.findall('[\u4e00-\u9fa5...]+',s) -en_list = re.findall('(([a-zA-Z\s\(\)\-])+)',s) - -# print(zh_list) -# print(en_list) -# print(','.join(["'"+g[0]+"'" for g in en_list])) - -result = [] -for z,e in zip(zh_list,en_list): - print(z,e[0]) - result.append(str.strip(e[0]) + ' —— ' + z) - - -print(len(zh_list)) -print(len(en_list)) -print(','.join(["'"+str.strip(g)+"'" for g in result])) \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 3860 Album by NBA YoungBoy Quando Rondo - Full MP3 Tracks.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 3860 Album by NBA YoungBoy Quando Rondo - Full MP3 Tracks.md deleted file mode 100644 index 507ae788ec6562afd89152513c4de6f9ac737222..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 3860 Album by NBA YoungBoy Quando Rondo - Full MP3 Tracks.md +++ /dev/null @@ -1,170 +0,0 @@ - -

        NBA YoungBoy 3860 Album Zip Download: Everything You Need to Know

        -

        If you are a fan of rap music, you might have heard of NBA YoungBoy, one of the most popular and prolific rappers in the industry. The Baton Rouge native has released over 20 projects since 2015, including six studio albums, two compilation albums, and 26 mixtapes. His latest release is 3860, a collaborative mixtape with his friend and fellow rapper Quando Rondo. Here is everything you need to know about NBA YoungBoy 3860 album zip download.

        -

        nba youngboy 3860 album zip download


        DOWNLOADhttps://ssurll.com/2uNZPE



        -

        Who is NBA YoungBoy?

        -

        NBA YoungBoy, whose real name is Kentrell DeSean Gaulden, was born on October 20, 1999, in Baton Rouge, Louisiana. He started rapping and recording at a young age, inspired by local artists like Lil Phat and Boosie Badazz. He dropped out of high school in ninth grade and was arrested for robbery in 2015. While in juvenile detention, he began writing lyrics for his debut project, Life Before Fame.

        -

        After his release, he continued to release independent mixtapes, such as Mind of a Menace, Before I Go, and 38 Baby. He gained a loyal fan base for his raw and honest street narratives, as well as his melodic vocals and aggressive punch. In late 2017, he signed a deal with Atlantic Records and released his breakthrough single "Outside Today", which peaked at number 31 on the Billboard Hot 100 chart.

        -

        Since then, he has released several chart-topping projects, such as AI YoungBoy 2, Top, Sincerely Kentrell, The Last Slimeto, I Rest My Case, and Don't Try This at Home. He has also collaborated with artists like Juice WRLD, Future, Lil Baby, Rich The Kid, Nicki Minaj, Roddy Ricch, Lil Wayne, and more. He is one of the most streamed artists in the world, with over 10 billion streams on Spotify and YouTube combined.

        -

        nba youngboy and quando rondo 3860 album zip download
        -nba youngboy 3860 full album zip download free
        -nba youngboy 3860 album zip download mp3
        -nba youngboy 3860 album zip download leak
        -nba youngboy 3860 album zip download reddit
        -nba youngboy 3860 album zip download audiomack
        -nba youngboy 3860 album zip download fakaza
        -nba youngboy 3860 album zip download m4a
        -nba youngboy 3860 album zip download zippyshare
        -nba youngboy 3860 album zip download rar
        -nba youngboy 3860 album zip download torrent
        -nba youngboy 3860 album zip download mediafire
        -nba youngboy 3860 album zip download google drive
        -nba youngboy 3860 album zip download dropbox
        -nba youngboy 3860 album zip download mega
        -nba youngboy 3860 album zip download stream
        -nba youngboy 3860 album zip download online
        -nba youngboy 3860 album zip download link
        -nba youngboy 3860 album zip download songs
        -nba youngboy 3860 album zip download tracklist
        -nba youngboy 3860 album zip download lyrics
        -nba youngboy 3860 album zip download review
        -nba youngboy 3860 album zip download release date
        -nba youngboy 3860 album zip download cover art
        -nba youngboy 3860 album zip download features
        -nba youngboy 3860 deluxe edition album zip download
        -nba youngboy 3860 instrumental album zip download
        -nba youngboy 3860 clean version album zip download
        -nba youngboy 3860 explicit version album zip download
        -nba youngboy 3860 bonus tracks album zip download
        -quando rondo and nba youngboy 3860 mixtape zip download
        -quando rondo and nba youngboy 3860 collaboration album zip download
        -quando rondo and nba youngboy 3860 joint project album zip download
        -quando rondo and nba youngboy 3860 new music album zip download
        -quando rondo and nba youngboy 3860 latest songs album zip download
        -quando rondo and nba youngboy 3860 best hits album zip download
        -quando rondo and nba youngboy 3860 rap songs album zip download
        -quando rondo and nba youngboy 3860 hip hop songs album zip download
        -quando rondo and nba youngboy 3860 trap songs album zip download
        -quando rondo and nba youngboy 3860 street songs album zip download
        -quando rondo and nba youngboy 3860 gangsta songs album zip download
        -quando rondo and nba youngboy 3860 real songs album zip download
        -quando rondo and nba youngboy 3860 raw songs album zip download
        -quando rondo and nba youngboy 3860 hard songs album zip download
        -quando rondo and nba youngboy 3860 fire songs album zip download
        -quando rondo and nba youngboy 3860 dope songs album zip download
        -quando rondo and nba youngboy 3860 lit songs album zip download

        -

        What is 3860?

        -

        3860 is a collaborative mixtape by NBA YoungBoy and Quando Rondo, released on December 6, 2022. The title refers to the address of a house in Baton Rouge where the two rappers used to hang out and record music. The mixtape features 16 tracks and guest appearances from Lul Timm and Lil Durk.

        -

        The mixtape showcases the chemistry and friendship between NBA YoungBoy and Quando Rondo, who have been working together since 2018. The two rappers share similar backgrounds and musical styles, blending trap beats with melodic hooks and emotional lyrics. The mixtape covers topics such as loyalty, love, violence, fame, money, drugs, and death.

        -

        What are the songs on 3860?

        -

        Here is the tracklist of 3860:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        No.TitleLength
        1"I Swear" (featuring Lul Timm)2:37
        2"It's On"2:29
        3"Casket Talk"3:07
        4"Give Me A Sign"2:49
        5"Want Me Dead"
        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        No.TitleLength
        6"No Love"3:12
        7"Real As It Gets" (featuring Lil Durk)3:02
        8"Never Change"2:54
        9"Soul Reaper"2:46
        10"Life Goes On"3:18
        11"No Cap"2:51
        12"Too Much"2:58
        13"Double Back"3:05
        14"Ride For Me"3:09
        15"Loyalty Over Love"3:15
        < -t d >" Free Smoke"2:43
        -

        How was 3860 received?

        -

        The mixtape received positive reviews from critics and fans alike, who praised the chemistry and versatility of NBA YoungBoy and Quando Rondo. The mixtape debuted at number 4 on the Billboard 200 chart, selling 86,000 units in its first week. It also reached number 1 on the Billboard Top Rap Albums chart and number 2 on the Billboard Top R&B/Hip-Hop Albums chart.

        -

        The mixtape spawned several singles, such as "Real As It Gets", "No Love", and "Life Goes On". The music videos for these songs have amassed millions of views on YouTube. The mixtape also received attention from other artists, such as Drake, who posted a screenshot of him listening to "Casket Talk" on his Instagram story.

        -

        What are some of the controversies surrounding NBA YoungBoy?

        -

        NBA YoungBoy has been involved in several legal issues and controversies throughout his career. He has been arrested multiple times for charges such as attempted murder, assault, kidnapping, drug possession, firearm possession, and probation violation. He has also been accused of domestic violence by some of his ex-girlfriends, who have alleged that he physically and verbally abused them.

        -

        NBA YoungBoy has also been involved in several feuds with other rappers, such as Fredo Bang, Kevin Gates, Kodak Black, JayDaYoungan, and King Von. He has dissed them in his songs and social media posts, and has also been involved in physical altercations with some of them. He has also faced criticism for his reckless lifestyle and his influence on his young fans.

        -

        Conclusion: Is 3860 worth listening to?

        -

        If you are a fan of rap music, especially trap music, you should definitely check out NBA YoungBoy 3860 album zip download. The mixtape showcases the talent and charisma of NBA YoungBoy and Quando Rondo, who deliver catchy hooks, hard-hitting bars, and emotional stories over trap beats. The mixtape is a testament to their friendship and loyalty, as well as their resilience and ambition.

        -

        NBA YoungBoy 3860 album zip download is one of the best rap projects of 2022, and it deserves your attention. You can download it from various platforms, such as Spotify, Apple Music, Tidal, SoundCloud, or YouTube. You can also buy it from online stores, such as Amazon or iTunes. You won't regret it!

        -

        Frequently Asked Questions about NBA YoungBoy and 3860:

        -

        Q: What does NBA stand for in NBA YoungBoy's name?

        -

        A: NBA stands for Never Broke Again, which is a motto that NBA YoungBoy lives by. He also has a label called Never Broke Again LLC, which includes artists like Quando Rondo, NoCap, P Yungin, OG 3Three, and more.

        -

        Q: How many children does NBA YoungBoy have?

        -

        Q: How many children does NBA YoungBoy have?

        -

        A: NBA YoungBoy has seven children from six different women. His children are Kayden (born in 2016), Kamiri (born in 2017), Taylin (born in 2017), Kamron (born in 2018), Kacey (born in 2019), Kodi (born in 2020), and Kentrell Jr. (born in 2020).

        -

        Q: What is the net worth of NBA YoungBoy?

        -

        A: According to Celebrity Net Worth, NBA YoungBoy has an estimated net worth of $6 million as of 2022. He earns most of his income from his music sales, streams, tours, merchandise, and endorsements.

        -

        Q: Is NBA YoungBoy in jail?

        -

        A: Yes, NBA YoungBoy is currently in jail. He was arrested on March 22, 2021, in Los Angeles, after a federal warrant was issued for his arrest. He was charged with one count of illegal possession of a firearm by a felon and one count of possession of a firearm not registered to him. He is facing up to 10 years in prison if convicted.

        -

        Q: How can I contact NBA YoungBoy?

        -

        A: You can follow NBA YoungBoy on his social media accounts, such as Instagram, Twitter, Facebook, and TikTok. You can also send him fan mail to his official address: NBA YoungBoy, Never Broke Again LLC, P.O. Box 64576, Baton Rouge, LA 70896.

        -

        Q: Where can I find more information about NBA YoungBoy and 3860?

        -

        A: You can visit NBA YoungBoy's official website, where you can find his latest news, music, videos, tour dates, merchandise, and more. You can also check out his YouTube channel, where he uploads his music videos and vlogs. You can also read some of his interviews and articles on various online platforms, such as Complex, XXL, Rolling Stone, Billboard, and more.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/clue1.1/data_preprocessing/tnews_preprocessing.py b/spaces/skf15963/summary/fengshen/examples/clue1.1/data_preprocessing/tnews_preprocessing.py deleted file mode 100644 index 9f187fac71b411d77273a1a45544eb9c35151bc9..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/clue1.1/data_preprocessing/tnews_preprocessing.py +++ /dev/null @@ -1,71 +0,0 @@ -import json -from tqdm import tqdm -import argparse - -label2desc={"news_story": "故事", - "news_culture": "文化", - "news_entertainment": "娱乐", - "news_sports": "体育", - "news_finance": "财经", - "news_house": "房产", - "news_car": "汽车", - "news_edu": "教育", - "news_tech": "科技", - "news_military": "军事", - "news_travel": "旅游", - "news_world": "国际", - "news_stock": "股票", - "news_agriculture": "农业", - "news_game": "电竞"} - -def load_data(file_path,is_training=False): - with open(file_path, 'r', encoding='utf8') as f: - lines = f.readlines() - result=[] - for line in tqdm(lines): - data = json.loads(line) - texta = data['sentence'] - textb = '' - question = '下面新闻属于哪一个类别?' - choice = [v for k,v in label2desc.items()] - answer = label2desc[data['label_desc']] if 'label_desc' in data.keys() else '' - label = choice.index(answer) if 'label_desc' in data.keys() else 0 - text_id = data['id'] if 'id' in data.keys() else 0 - result.append({'texta':texta, - 'textb':textb, - 'question':question, - 'choice':choice, - 'answer':answer, - 'label':label, - 'id':text_id}) - print(result[0]) - return result - - -def save_data(data,file_path): - with open(file_path, 'w', encoding='utf8') as f: - for line in data: - json_data=json.dumps(line,ensure_ascii=False) - f.write(json_data+'\n') - -import os - -if __name__=="__main__": - parser = argparse.ArgumentParser(description="train") - parser.add_argument("--data_path", type=str,default="") - parser.add_argument("--save_path", type=str,default="") - - args = parser.parse_args() - - - data_path = args.data_path - save_path = args.save_path - - if not os.path.exists(save_path): - os.makedirs(save_path) - - file_list = ['train','dev','test1.0','test1.1'] - for file in file_list: - file_path = os.path.join(data_path,file+'.json') - output_path = os.path.join(save_path,file+'.json') - save_data(load_data(file_path),output_path) \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/mt5_summary/mt5_summary.py b/spaces/skf15963/summary/fengshen/examples/mt5_summary/mt5_summary.py deleted file mode 100644 index de564026ae7a32873cc39515f421adfb9d7e4568..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/mt5_summary/mt5_summary.py +++ /dev/null @@ -1,233 +0,0 @@ -from fengshen.data.task_dataloader.task_datasets import LCSTSDataModel -from transformers import T5Tokenizer, MT5ForConditionalGeneration -from transformers.optimization import get_linear_schedule_with_warmup -from pytorch_lightning import Trainer, loggers -from pytorch_lightning.callbacks import ModelCheckpoint -from transformers import AutoTokenizer -import pytorch_lightning as pl -import json -import argparse -import torch -import os -import sys -sys.path.append('./') - -# os.environ["CUDA_VISIBLE_DEVICES"] = '4,5,6,7' - - -def test(): - tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") - article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien." - summary = "Weiter Verhandlung in Syrien." - article = "日前,方舟子发文直指林志颖旗下爱碧丽推销假保健品,引起哗然。调查发现,爱碧丽没有自己的生产加工厂。 \ - 其胶原蛋白饮品无核心研发,全部代工生产。号称有“逆生长”功效的爱碧丽“梦幻奇迹限量组”售价>高达1080元,实际成本仅为每瓶4元!" - summary = "林志颖公司疑涉虚假营销无厂房无研发" - inputs = tokenizer(article, rturn_tensors="pt") - tt = tokenizer.encode_plus(summary, max_length=64, - padding='max_length', truncation='longest_first') - print('tt:', tt) - print('inputs:', inputs) - with tokenizer.as_target_tokenizer(): - labels = tokenizer(summary, return_tensors="pt") - print('labels:', labels) - print('origin labels:', tokenizer.decode(labels['input_ids'][0])) - - model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small") - # outputs = model(input_ids=inputs["input_ids"], labels=labels["input_ids"]) - # print(outputs.keys()) - - # evaluation - model.eval() - generated_ids = model.generate( - input_ids=inputs['input_ids'], - attention_mask=inputs['attention_mask'], - max_length=150, - num_beams=2, - repetition_penalty=2.5, - length_penalty=1.0, - early_stopping=True - ) - preds = [tokenizer.decode(g, skip_special_tokens=True, - clean_up_tokenization_spaces=True) for g in generated_ids] - print(preds) - - -class MT5FinetuneSummaryModelCheckpoint: - @staticmethod - def add_argparse_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - - parser.add_argument('--monitor', default='train_loss', type=str) - parser.add_argument('--mode', default='min', type=str) - parser.add_argument('--dirpath', default='./ckpt/', type=str) - parser.add_argument( - '--filename', default='model-{epoch:02d}-{train_loss:.4f}', type=str) - parser.add_argument('--save_last', action='store_true', default=True) - parser.add_argument('--save_top_k', default=3, type=float) - parser.add_argument('--every_n_train_steps', default=100, type=float) - parser.add_argument('--save_weights_only', default=True, type=bool) - - return parent_args - - def __init__(self, args): - self.callbacks = ModelCheckpoint(monitor=args.monitor, - save_top_k=args.save_top_k, - mode=args.mode, - every_n_train_steps=args.every_n_train_steps, - save_weights_only=args.save_weights_only, - dirpath=args.dirpath, - filename=args.filename, - save_last=args.save_last) - - -class MT5FinetuneSummary(pl.LightningModule): - - @staticmethod - def add_model_specific_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - parser.add_argument('--learning_rate', default=1e-4, type=float) - parser.add_argument('--weight_decay', default=0.1, type=float) - parser.add_argument('--warmup', default=0.01, type=float) - return parent_args - - def __init__(self, args, num_data): - super().__init__() - self.args = args - self.num_data = num_data - print('num_data:', num_data) - self.model = MT5ForConditionalGeneration.from_pretrained(args.pretrained_model_path) - - def setup(self, stage) -> None: - if stage == 'fit': - num_gpus = self.trainer.gpus if self.trainer.gpus is not None else 0 - self.total_step = int(self.trainer.max_epochs * self.num_data / - (max(1, num_gpus) * self.trainer.accumulate_grad_batches)) - print('Total training step:', self.total_step) - - def training_step(self, batch, batch_idx): - output = self.model(input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], labels=batch['labels']) - # output = self.model(input_ids=batch['input_ids'], labels=batch['labels']) - # acc = self.comput_metrix(output.logits, batch['labels']) - self.log('train_loss', output.loss) - return output.loss - - def comput_metrix(self, logits, labels): - y_pred = torch.argmax(logits, dim=-1) - y_pred = y_pred.view(size=(-1,)) - y_true = labels.view(size=(-1,)).float() - corr = torch.eq(y_pred, y_true) - acc = torch.sum(corr.float())/labels.size()[0] - return acc - - def validation_step(self, batch, batch_idx): - output = self.model(input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], labels=batch['labels']) - # output = self.model(input_ids=batch['input_ids'], labels=batch['labels']) - # acc = self.comput_metrix(output.logits, batch['labels']) - self.log('val_loss', output.loss) - # self.log('val_acc', acc) - - def predict_step(self, batch, batch_idx): - text = batch['text'] - summary = batch['summary'] - generated_ids = self.model.generate( - input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], - max_length=self.args.max_dec_length - ) - return {"pred": generated_ids, "text": text, "summary": summary} - - def configure_optimizers(self): - no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight'] - paras = list( - filter(lambda p: p[1].requires_grad, self.named_parameters())) - paras = [{ - 'params': - [p for n, p in paras if not any(nd in n for nd in no_decay)], - 'weight_decay': self.args.weight_decay - }, { - 'params': [p for n, p in paras if any(nd in n for nd in no_decay)], - 'weight_decay': 0.0 - }] - optimizer = torch.optim.AdamW(paras, lr=self.args.learning_rate) - scheduler = get_linear_schedule_with_warmup( - optimizer, int(self.total_step * self.args.warmup), - self.total_step) - - return [{ - 'optimizer': optimizer, - 'lr_scheduler': { - 'scheduler': scheduler, - 'interval': 'step', - 'frequency': 1 - } - }] - - -def save_test(data, args, data_model): - tokenizer = AutoTokenizer.from_pretrained(args.pretrained_model_path) - with open(os.path.join(args.output_save_path), 'w', encoding='utf-8') as f: - for _, batch in enumerate(data): - texts = batch['text'] - summarys = batch['summary'] - preds = batch['pred'] - for idx, pred_ids in enumerate(preds): - text = texts[idx] - summary = summarys[idx] - tmp_result = dict() - preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) - for g in pred_ids] - tmp_result['summary'] = ''.join(preds) - tmp_result['label'] = summary - tmp_result['origin_text'] = text - json_data = json.dumps(tmp_result, ensure_ascii=False) - f.write(json_data+'\n') - print('save the result to '+args.output_save_path) - - -def main(): - total_parser = argparse.ArgumentParser("Summary Task") - total_parser.add_argument('--do_eval_only', action='store_true', default=False) - total_parser.add_argument('--pretrained_model_path', default='google/mt5-small', type=str) - total_parser.add_argument('--output_save_path', default='./predict.json', type=str) - # * Args for data preprocessing - total_parser = LCSTSDataModel.add_data_specific_args(total_parser) - # * Args for training - total_parser = Trainer.add_argparse_args(total_parser) - total_parser = MT5FinetuneSummaryModelCheckpoint.add_argparse_args(total_parser) - total_parser = MT5FinetuneSummary.add_model_specific_args(total_parser) - # * Args for base model - args = total_parser.parse_args() - - data_model = LCSTSDataModel(args) - if not args.do_eval_only: - model = MT5FinetuneSummary(args, len(data_model.train_dataloader())) - checkpoint_callback = MT5FinetuneSummaryModelCheckpoint(args).callbacks - logger = loggers.TensorBoardLogger(save_dir=os.path.join( - args.default_root_dir, 'log/'), name='mt5_summary') - trainer = Trainer.from_argparse_args(args, - logger=logger, - callbacks=[checkpoint_callback] - ) - trainer.fit(model, data_model) - else: - trainer = Trainer.from_argparse_args(args) - model = MT5FinetuneSummary.load_from_checkpoint( - args.resume_from_checkpoint, args=args, num_data=len(data_model.predict_dataloader())) - result = trainer.predict(model, data_model) - if torch.distributed.get_rank() == 0: - save_test(result, args, data_model) - - -if __name__ == '__main__': - main() - # test() - -''' -python examples/mt5_summary.py --gpus=1 --test_data=test_public.jsonl ---default_root_dir=/cognitive_comp/ganruyi/fengshen/mt5_summary/eval ---do_eval_only ---resume_from_checkpoint=/cognitive_comp/ganruyi/fengshen/mt5_summary/ckpt/model-epoch=01-train_loss=1.9166.ckpt ---strategy=ddp -''' diff --git a/spaces/sklearn-docs/Lasso-model-aic-bic/app.py b/spaces/sklearn-docs/Lasso-model-aic-bic/app.py deleted file mode 100644 index adf5815cee4a0c8134835d1616724976d6fef258..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Lasso-model-aic-bic/app.py +++ /dev/null @@ -1,133 +0,0 @@ -import gradio as gr -import matplotlib.pyplot as plt -# from skops import hub_utils -import time -import pickle -import numpy as np -from sklearn.preprocessing import StandardScaler -from sklearn.linear_model import LassoLarsIC -from sklearn.pipeline import make_pipeline -from sklearn.datasets import load_diabetes - - - -def load_dataset(): - X, y = load_diabetes(return_X_y=True, as_frame=True) - return X,y - - -def aic_pipeline(X,y): - lasso_lars_ic = make_pipeline(StandardScaler(), LassoLarsIC(criterion="aic")).fit(X, y) - return lasso_lars_ic - - -def zou_et_al_criterion_rescaling(criterion, n_samples, noise_variance): - """Rescale the information criterion to follow the definition of Zou et al.""" - return criterion - n_samples * np.log(2 * np.pi * noise_variance) - n_samples - - -def zou_et_all_aic(lasso_lars_ic): - aic_criterion = zou_et_al_criterion_rescaling( - lasso_lars_ic[-1].criterion_, - n_samples, - lasso_lars_ic[-1].noise_variance_, - ) - - index_alpha_path_aic = np.flatnonzero( - lasso_lars_ic[-1].alphas_ == lasso_lars_ic[-1].alpha_ - )[0] - - return index_alpha_path_aic, aic_criterion - -def zou_et_all_bic(lasso_lars_ic): - lasso_lars_ic.set_params(lassolarsic__criterion="bic").fit(X, y) - bic_criterion = zou_et_al_criterion_rescaling( - lasso_lars_ic[-1].criterion_, - n_samples, - lasso_lars_ic[-1].noise_variance_, - ) - - index_alpha_path_bic = np.flatnonzero( - lasso_lars_ic[-1].alphas_ == lasso_lars_ic[-1].alpha_ - )[0] - - return index_alpha_path_bic, bic_criterion - -def fn_assert_true(): - assert index_alpha_path_bic == index_alpha_path_aic - - - -def visualize_input_data(choice): - fig = plt.figure(1, facecolor="w", figsize=(5, 5)) - if choice == "AIC criterion": - plt.clf () - plt.plot(aic_criterion, color="tab:blue", marker="x", label="AIC criterion") - elif choice == "BIC criterion": - plt.clf () - plt.plot(bic_criterion, color="tab:orange", marker="o", label="BIC criterion") - else: - plt.clf () - plt.plot(aic_criterion, color="tab:blue", marker="*", label="AIC criterion") - plt.plot(bic_criterion, color="tab:orange", marker="o", label="BIC criterion") - - plt.vlines( - index_alpha_path_bic, - aic_criterion.min(), - aic_criterion.max(), - color="black", - linestyle="--", - label="Selected alpha", - ) - plt.legend() - plt.ylabel("Information criterion") - plt.xlabel("Lasso model sequence") - _ = plt.title("Lasso model selection via AIC and BIC") - - - return fig - -title = " Lasso model selection via information criteria" - -with gr.Blocks(title=title,theme=gr.themes.Default(font=[gr.themes.GoogleFont("Oxygen"), "Arial", "sans-serif"])) as demo: - gr.Markdown(f"# {title}") - gr.Markdown( - """ - # Probabilistic model selection using Information Criterion. - This method in statistics is useful because they dont require a hold out set test set(cross validation set). - AIC and BIC are two ways of scoring a model based on its log-likelihood and complexity. - It is important to note that the optimization to find alpha with LassoLarsIC relies on the AIC or BIC criteria that are computed in-sample, - thus on the training set directly. This approach differs from the cross-validation procedure. - Also one of the drawbacks of these kinds of Probabilistic model is that same general statistic cannot be used across models.Instead a careful metric must be deviced - for each of the models seperately.The uncertainity of the model is not taken into account. - """ - - ) - - - - gr.Markdown(" **https://scikit-learn.org/stable/auto_examples/linear_model/plot_lasso_lars_ic.html#sphx-glr-auto-examples-linear-model-plot-lasso-lars-ic-py**") - - ##process - X,y = load_dataset() - lasso_lars_ic = aic_pipeline(X,y) - n_samples = X.shape[0] - index_alpha_path_aic, aic_criterion = zou_et_all_aic(lasso_lars_ic) - - index_alpha_path_bic, bic_criterion = zou_et_all_bic(lasso_lars_ic) - - fn_assert_true() - - with gr.Tab("AIC BIC Criteria"): - radio = gr.Radio( - ["AIC criterion", "BIC criterion", "Both"], label="What model selection criteria would you choose?" - ) - # btn = gr.Button(value="Plot AIC BIC Criteria w Regularization") - # btn.click(visualize_input_data, outputs= gr.Plot(label='AIC BIC Criteria') ) - radio.change(fn=visualize_input_data, inputs=radio, outputs=gr.Plot(label='AIC BIC Criteria')) - - - - - -demo.launch() \ No newline at end of file diff --git a/spaces/skytnt/moe-tts/text/cleaners.py b/spaces/skytnt/moe-tts/text/cleaners.py deleted file mode 100644 index eedbeaee8ad73dd4aaf6c12e3f900fc34a1ee630..0000000000000000000000000000000000000000 --- a/spaces/skytnt/moe-tts/text/cleaners.py +++ /dev/null @@ -1,150 +0,0 @@ -import re -import pyopenjtalk - -pyopenjtalk._lazy_init() - - -def japanese_cleaners(text): - from text.japanese import japanese_to_romaji_with_accent - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - from text.korean import latin_to_hangul, number_to_hangul, divide_hangul - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) - return text - - -def zh_ja_mixture_cleaners(text): - from text.mandarin import chinese_to_romaji - from text.japanese import japanese_to_romaji_with_accent - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…') + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if text[-1] != '।': - text += ' ।' - return text - - -def cjks_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_lazy_ipa - from text.sanskrit import devanagari_to_ipa - from text.english import english_to_lazy_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn') + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz') + ' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u') + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - from text.mandarin import chinese_to_ipa - from text.japanese import japanese_to_ipa2 - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - from text.thai import num_to_thai, latin_to_thai - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - from text.shanghainese import shanghainese_to_ipa - text = shanghainese_to_ipa(text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def chinese_dialect_cleaners(text): - from text.mandarin import chinese_to_ipa2 - from text.japanese import japanese_to_ipa3 - from text.shanghainese import shanghainese_to_ipa - from text.cantonese import cantonese_to_ipa - from text.english import english_to_lazy_ipa2 - from text.ngu_dialect import ngu_dialect_to_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ') + ' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace( - '6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e') + ' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1)) + ' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1)) + ' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ') + ' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git a/spaces/stmnk/pygen/README.md b/spaces/stmnk/pygen/README.md deleted file mode 100644 index b2931876fb08d7f49caf321ca460ba110f28306b..0000000000000000000000000000000000000000 --- a/spaces/stmnk/pygen/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Pygen -emoji: ⚡ -colorFrom: yellow -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/stomexserde/gpt4-ui/Examples/3 Best Websites To Convert SVG To Vector Drawable For Android ((TOP)).md b/spaces/stomexserde/gpt4-ui/Examples/3 Best Websites To Convert SVG To Vector Drawable For Android ((TOP)).md deleted file mode 100644 index 1e807ab9baf8e430df8a146049c9dc117d7e234a..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/3 Best Websites To Convert SVG To Vector Drawable For Android ((TOP)).md +++ /dev/null @@ -1,26 +0,0 @@ - -

        3 Best Websites To Convert SVG To Vector Drawable For Android

        -

        SVG (Scalable Vector Graphics) is a popular format for creating and displaying vector images on the web. However, if you want to use SVG images in your Android app, you need to convert them to vector drawables, which are XML files that describe the shape and color of a vector graphic.

        -

        Vector drawables have many advantages over bitmap images, such as being scalable without losing quality, being smaller in size, and being easier to animate. However, converting SVG to vector drawable can be a tedious and error-prone task if you do it manually.

        -

        3 Best Websites To Convert SVG To Vector Drawable For Android


        Download ❤❤❤ https://urlgoal.com/2uIamk



        -

        Fortunately, there are some websites that can help you with this conversion process. In this article, we will introduce you to 3 best websites to convert SVG to vector drawable for Android. These websites are:

        - -

        Let's take a look at each of them in detail.

        -

        SVG2Android

        -

        SVG2Android is a simple and fast website that can convert your SVG files to vector drawables in one click. All you need to do is upload your SVG file, choose the name and size of the output file, and click on "Convert". You can also preview the result before downloading it.

        -

        SVG2Android supports most of the SVG features, such as paths, circles, ellipses, rectangles, polygons, text, gradients, transforms, and clip paths. However, it does not support filters, masks, patterns, and some other advanced features. It also does not optimize the output file for reducing the number of nodes and commands.

        -

        Vectorizer

        -

        Vectorizer is a powerful and versatile website that can convert not only SVG files but also raster images (such as PNG, JPG, GIF) to vector drawables. You can upload your file or enter a URL of an online image. You can also adjust some settings such as the number of colors, the smoothness of the curves, and the simplification of the shapes.

        -

        Vectorizer supports all the SVG features and can optimize the output file for minimizing the file size and improving the performance. It also provides some additional features such as editing the vector drawable online, exporting it to other formats (such as PDF, EPS, PNG), and applying some effects (such as shadows, strokes, fills).

        -

        Shape Shifter

        -

        Shape Shifter is a website that can not only convert SVG files to vector drawables but also create and edit complex animations using vector graphics. You can import your SVG file or create a new vector drawable from scratch using the online editor. You can also modify the properties of each element such as the color, stroke width, fill type, etc.

        -

        Shape Shifter supports all the SVG features and can optimize the output file for reducing the complexity and improving the readability. It also allows you to create animations using keyframes and morphing between different shapes. You can preview and export your animations as vector drawables or animated GIFs.

        -

        -

        Conclusion

        -

        In this article, we have introduced you to 3 best websites to convert SVG to vector drawable for Android. These websites are SVG2Android, Vectorizer, and Shape Shifter. Each of them has its own strengths and weaknesses depending on your needs and preferences. We hope this article will help you choose the best website for your project.

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ableton Live 9.7 6 Download _HOT_.md b/spaces/stomexserde/gpt4-ui/Examples/Ableton Live 9.7 6 Download _HOT_.md deleted file mode 100644 index 58eb3aa9ce0aedaa22f517c89867b5516d9972be..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Ableton Live 9.7 6 Download _HOT_.md +++ /dev/null @@ -1,30 +0,0 @@ - -

        How to Download and Install Ableton Live 9.7 6

        -

        Ableton Live 9.7 6 is the latest free update for Live 9 users, packed with improvements for Push, new sampling features and workflows, and more. If you own Live 9 Lite, Intro, Standard or Suite, you can download and install Live 9.7 6 for free from your user account. In this article, we will show you how to download and install Ableton Live 9.7 6 on your computer.

        -

        Ableton Live 9.7 6 Download


        Downloadhttps://urlgoal.com/2uI735



        -

        Step 1: Download the Live installer

        -

        To download the Live installer, log in to your User Account, select the Live version and operating system from the drop down menu and click Download. If you have auto-update enabled in Live, it will download automatically next time you open Live.

        -

        If you are using a Mac with an Apple M1 chip, you can choose the Universal Binary build, which adds native support for Apple M1 computers. If you are using a Mac with an Intel chip, you can also use the Universal Binary build, or the Legacy build if you are running an older macOS version.

        -

        Step 2: Install Live

        -

        Once you have downloaded the Live installer, double-click on it to start the installation process. Follow the instructions on the screen to complete the installation. You may need to enter your administrator password or confirm some security prompts during the installation.

        -

        After the installation is finished, you can launch Live from your Applications folder (Mac) or Start menu (Windows). You will be prompted to authorize Live online or offline. You can authorize Live online by logging in to your User Account, or offline by following the steps here.

        -

        Step 3: Enjoy Live 9.7 6

        -

        Congratulations! You have successfully downloaded and installed Ableton Live 9.7 6 on your computer. You can now enjoy the new features and improvements of this update, such as:

        -

        -
          -
        • New slicing options and drum layout for Simpler
        • -
        • New display info and routing options for Push
        • -
        • Control surface support for Arturia KeyLab Essential, Roland FA series and Roli Blocks
        • -
        • Bug fixes and stability improvements
        • -
        -

        To learn more about Live 9.7 6 features and how to use them, you can watch the feature demos here, or check out the video tutorials here. You can also read the release notes here for more details.

        -

        We hope this article was helpful and informative. If you have any questions or feedback, please feel free to contact us at support@ableton.com. Thank you for choosing Ableton Live!

        - -

        What is Ableton Live?

        -

        Ableton Live is a software for creating, performing and producing music. It is designed to be flexible and intuitive, allowing you to work with audio and MIDI in real time or in a linear way. You can use Live to record, edit, mix, master and export your music, or to improvise and jam with other musicians. Live also comes with a collection of instruments, effects and sounds that you can use to create any kind of music you want.

        -

        What is Push?

        -

        Push is a hardware instrument that integrates seamlessly with Live, giving you hands-on control over your music. You can use Push to play and program beats, melodies, chords and samples, or to tweak and automate parameters in Live. Push also lets you browse and load sounds, devices and clips from Live's browser, without looking at your computer screen. Push is the ultimate companion for Live, whether you are in the studio or on the stage.

        -

        Why update to Live 9.7 6?

        -

        Live 9.7 6 is the latest free update for Live 9 users, and it brings many improvements and new features that will enhance your music making experience. If you use Push, you will benefit from the new sampling options and workflows, the new drum layout and the on-screen display improvements. If you use other control surfaces, you will enjoy the new support for Arturia KeyLab Essential, Roland FA series and Roli Blocks. And if you use Live in general, you will appreciate the bug fixes and stability improvements that make Live more reliable and enjoyable.

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Audio Ease Altiverb 7 For Windows Torrentrar.md b/spaces/stomexserde/gpt4-ui/Examples/Audio Ease Altiverb 7 For Windows Torrentrar.md deleted file mode 100644 index 052c846269ee8fb3ab1714947baa7e2b6ee044ad..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Audio Ease Altiverb 7 For Windows Torrentrar.md +++ /dev/null @@ -1,116 +0,0 @@ - -

        Audio Ease Altiverb 7 For Windows Torrentrar: A Comprehensive Guide

        -

        If you are looking for a way to add realistic and high-quality reverb to your music or sound projects, you might have heard of Audio Ease Altiverb 7. This is a convolution reverb plug-in that uses samples of real spaces and studio hardware to create reverb effects. It is compatible with both Mac OS X and Windows, and it is widely used by professionals in the music and audio industry.

        -

        But how can you get Audio Ease Altiverb 7 for Windows? And how can you use it effectively to enhance your audio productions? In this article, we will answer these questions and more. We will explain what Audio Ease Altiverb 7 is, how to download and install it for Windows, how to use it, and what are the advantages and disadvantages of using it. By the end of this article, you will have a clear idea of whether Audio Ease Altiverb 7 for Windows is the right choice for you.

        -

        Audio Ease Altiverb 7 For Windows Torrentrar


        DOWNLOAD ►►► https://urlgoal.com/2uIa8P



        -

        What is Audio Ease Altiverb 7?

        -

        Audio Ease Altiverb 7 is a convolution reverb plug-in that uses impulse responses (IRs) to apply the sonic characteristics of real acoustic spaces and studio hardware to the input signal. An impulse response is a sample of the reverb generated by the space or gear in question in response to a particular sound. By using these IRs, Audio Ease Altiverb 7 can recreate the reverb effect of any space or device that has been sampled.

        -

        A convolution reverb plug-in for music and sound pros

        -

        Convolution reverb is a type of reverb that is based on the mathematical process of convolution. Convolution is a way of combining two signals to produce a third signal that contains information from both. In convolution reverb, one signal is the dry input signal (the sound source), and the other signal is the impulse response (the reverb source). The result is a wet output signal that sounds like the input signal has been placed in the reverb source.

        -

        Convolution reverb is different from algorithmic reverb, which is another type of reverb that uses mathematical formulas to generate synthetic reverb effects. Algorithmic reverb can create various types of reverb effects, such as plate, spring, hall, room, etc., but it cannot capture the nuances and details of real spaces or gear. Convolution reverb, on the other hand, can reproduce the exact sound of any space or gear that has been sampled, but it cannot create new or artificial reverb effects.

        -

        Audio Ease Altiverb 7 is a convolution reverb plug-in that works with most digital audio workstations (DAWs) on Mac OS X and Windows. It supports up to 5.1 surround input and output, up to 384 kHz sampling rates, AAX Native, VST, RTAS, TDM, AU, MAS formats, total recall automation, and 64-bit support. It also has an intuitive and user-friendly interface that allows you to easily load, browse, tweak, and automate IRs.

        -

        Features and benefits of Audio Ease Altiverb 7

        -

        Some of the features and benefits of Audio Ease Altiverb 7 are:

        -

        -
          -
        • It has a large and diverse library of IRs that covers a wide range of spaces and gear, such as concert halls, cathedrals, studios, plates, springs, amps, speakers, etc. You can also create your own IRs using the built-in IR recording tool or import IRs from other sources.
        • -
        • It has a smart and flexible browser that lets you search, sort, filter, and preview IRs by category, name, keyword, size, etc. You can also use the visual browser to select IRs based on photos of the spaces or gear.
        • -
        • It has a powerful and versatile reverb engine that allows you to adjust and automate various reverb parameters, such as decay time, pre-delay, early reflections, late reflections, damping, EQ, modulation, etc. You can also use the reverse mode to create reverse reverb effects.
        • -
        • It has a unique and innovative feature called "Similar" that lets you find IRs that sound similar to the current one. This is useful for finding alternative or complementary IRs for your audio sources.
        • -
        • It has a handy and creative feature called "Keyword Linking" that lets you link keywords to IRs. This way, you can automatically load IRs based on the keywords in the track names or regions of your DAW. For example, if you have a track named "Vocals", you can link it to an IR that suits vocals.
        • -
        • It has a fun and interactive feature called "Picture-in-Picture" that lets you drag and drop photos of spaces or gear onto the plug-in window to load their corresponding IRs. You can also resize and move the photos around the window to create your own custom layout.
        • -
        -

        How to download and install Audio Ease Altiverb 7 for Windows?

        -

        If you want to download and install Audio Ease Altiverb 7 for Windows, you need to follow these steps:

        -

        Requirements and precautions

        -

        Before you download and install Audio Ease Altiverb 7 for Windows, you need to make sure that you meet the following requirements and take the following precautions:

        -
          -
        • You need to have a Windows computer with at least 2 GB of RAM, 1 GB of free disk space, and a compatible DAW that supports AAX Native, VST, RTAS, TDM, AU, or MAS formats.
        • -
        • You need to have an internet connection to download the installer and activate the license.
        • -
        • You need to have a valid license for Audio Ease Altiverb 7. You can purchase one from the official website or from authorized dealers. You can also try a free demo version for 30 days.
        • -
        • You need to be aware that Audio Ease Altiverb 7 is not a standalone application. It is a plug-in that works within your DAW. You cannot use it without a DAW.
        • -
        • You need to be careful when downloading and installing Audio Ease Altiverb 7 for Windows from unofficial sources. These sources may contain viruses, malware, or corrupted files that can harm your computer or compromise your security. Always download and install Audio Ease Altiverb 7 for Windows from the official website or from authorized dealers.
        • -
        -

        Steps to download and install Audio Ease Altiverb 7 for Windows

        -

        Once you have met the requirements and taken the precautions, you can proceed with the following steps to download and install Audio Ease Altiverb 7 for Windows:

        -
          -
        1. Go to the official website of Audio Ease and log in with your account. If you do not have an account yet, you can create one for free.
        2. -
        3. Go to the downloads section and find the installer for Audio Ease Altiverb 7 for Windows. Click on the download button and save the installer file on your computer.
        4. -
        5. Run the installer file and follow the instructions on the screen. You will be asked to choose the installation location, select the plug-in formats you want to install, agree to the terms and conditions, etc.
        6. -
        7. Wait for the installation process to complete. It may take some time depending on your internet speed and computer performance.
        8. -
        9. Launch your DAW and scan for new plug-ins. You should be able to find Audio Ease Altiverb 7 in your plug-in list.
        10. -
        11. Open Audio Ease Altiverb 7 in your DAW and activate your license. You will be asked to enter your serial number or log in with your account. You can also use an iLok dongle if you have one.
        12. -
        -

        Congratulations! You have successfully downloaded and installed Audio Ease Altiverb 7 for Windows. You are now ready to use it to add reverb to your audio sources.

        -

        How to use Audio Ease Altiverb 7 for Windows?

        -

        Now that you have downloaded and installed Audio Ease Altiverb 7 for Windows, you might be wondering how to use it. In this section, we will show you how to load and browse impulse responses, how to tweak and automate reverb parameters, and how to apply reverb to different types of audio sources.

        -

        How to load and browse impulse responses

        -

        The first thing you need to do when using Audio Ease Altiverb 7 for Windows is to load an impulse response (IR) that matches the reverb effect you want to create. An IR is a sample of the reverb generated by a real space or gear in response to a particular sound. By using an IR, Audio Ease Altiverb 7 can recreate the reverb effect of that space or gear.

        -

        To load and browse IRs, you can use the browser window of Audio Ease Altiverb 7. The browser window is divided into two sections: the list section and the preview section. The list section shows the available IRs in different categories, such as spaces, gear, experimental, etc. The preview section shows the photos and information of the selected IRs.

        -

        You can use the following methods to load and browse IRs:

        -
          -
        • You can use the search box at the top of the browser window to type in keywords or names of the IRs you are looking for. For example, if you want to find IRs of churches, you can type in "church" in the search box.
        • -
        • You can use the filters at the bottom of the browser window to narrow down your search results by category, size, keyword, etc. For example, if you want to find IRs of small spaces, you can select "small" in the size filter.
        • -
        • You can use the sort button at the bottom right of the browser window to sort your search results by name, date, size, etc. For example, if you want to find the newest IRs, you can sort by date.
        • -
        • You can use the visual browser button at the top right of the browser window to switch to the visual browser mode. In this mode, you can select IRs based on photos of the spaces or gear. You can also drag and drop photos onto the plug-in window to load their corresponding IRs.
        • -
        • You can use the similar button at the top right of the preview section to find IRs that sound similar to the current one. This is useful for finding alternative or complementary IRs for your audio sources.
        • -
        -

        Once you have found an IR that you like, you can double-click on it or drag and drop it onto the plug-in window to load it. You will see a waveform of the IR on the plug-in window, along with various reverb parameters that you can adjust and automate.

        -

        How to tweak and automate reverb parameters

        -

        After loading an IR, you can tweak and automate various reverb parameters to fine-tune and customize the reverb effect. These parameters include decay time, pre-delay, early reflections, late reflections, damping, EQ, modulation, etc. You can also use the reverse mode to create reverse reverb effects.

        -

        To tweak and automate reverb parameters, you can use the following methods:

        -
          -
        • You can use the knobs and sliders on the plug-in window to adjust and automate reverb parameters. You can also click on the parameter names or values to enter them manually.
        • -
        • You can use the tabs at the bottom of the plug-in window to access more reverb parameters and options. These tabs include global settings, automation settings, EQ settings, modulation settings, etc.
        • -
        • You can use the waveform display on the plug-in window to edit and automate reverb parameters graphically. You can also zoom in and out of the waveform display by using your mouse wheel or trackpad.
        • -
        • You can use your DAW's automation features to record and edit automation data for reverb parameters. You can also use your DAW's MIDI learn features to assign MIDI controllers to reverb parameters.
        • -
        -

        By tweaking and automating reverb parameters, you can create various reverb effects that suit your audio sources and musical styles.

        -

        How to apply reverb to different types of audio sources

        -

        The final step in using Audio Ease Altiverb 7 for Windows is to apply reverb to different types of audio sources, such as vocals, instruments, drums, synths, etc. Depending on the type of audio source, you may want to use different IRs and reverb parameters to achieve the best results.

        -

        Here are some general tips and guidelines on how to apply reverb to different types of audio sources:

        -
          -
        • Vocals: Vocals are usually the most prominent and expressive element in a song, so you want to use reverb to add depth, warmth, and dimension to them. You can use IRs of natural spaces, such as halls, churches, or rooms, to create a realistic and spacious reverb effect. You can also use IRs of studio hardware, such as plates or springs, to create a classic and vintage reverb effect. You may want to use a short to medium decay time, a low to moderate pre-delay, a high early reflections level, a low to moderate late reflections level, a low damping frequency, and a gentle EQ curve to avoid muddiness and harshness.
        • -
        • Instruments: Instruments are the main components of the musical arrangement, so you want to use reverb to blend them together and create a cohesive and balanced mix. You can use IRs of natural spaces or studio hardware that match the genre and style of your music. For example, you can use IRs of concert halls or orchestras for classical music, IRs of clubs or studios for jazz music, IRs of arenas or stadiums for rock music, etc. You may want to use a medium to long decay time, a moderate to high pre-delay, a moderate early reflections level, a moderate late reflections level, a moderate damping frequency, and a neutral EQ curve to preserve the clarity and character of the instruments.
        • -
        • Drums: Drums are the foundation and backbone of the rhythm section, so you want to use reverb to add punch, energy, and groove to them. You can use IRs of natural spaces or studio hardware that enhance the dynamics and impact of the drums. For example, you can use IRs of rooms or chambers for tight and snappy reverb effects, IRs of plates or springs for bright and lively reverb effects, IRs of halls or cathedrals for epic and dramatic reverb effects, etc. You may want to use a short to medium decay time, a low to moderate pre-delay, a low early reflections level, a high late reflections level, a high damping frequency, and a bright EQ curve to emphasize the transients and frequencies of the drums.
        • -
        • Synths: Synths are the most versatile and creative elements in electronic music, so you want to use reverb to add texture, color, and movement to them. You can use IRs of natural spaces or studio hardware that complement or contrast the timbre and tone of the synths. For example, you can use IRs of warm and organic spaces for cold and digital synths, IRs of cold and digital spaces for warm and organic synths, IRs of experimental or unusual spaces for weird and unique synths, etc. You may want to use a long decay time, a high pre-delay, a high early reflections level, a low late reflections level, a low damping frequency, and a creative EQ curve to create rich and complex reverb effects.
        • -
        -

        Of course, these are just some general tips and guidelines, and you can always experiment with different IRs and reverb parameters to find the best reverb effects for your audio sources.

        -

        What are the advantages and disadvantages of using Audio Ease Altiverb 7 for Windows?

        -

        As with any software, Audio Ease Altiverb 7 for Windows has its advantages and disadvantages. Here are some of them:

        -

        Pros of using Audio Ease Altiverb 7 for Windows

        -
          -
        • It offers realistic and high-quality reverb effects that are based on samples of real spaces and gear.
        • -
        • It has a large and diverse library of IRs that covers a wide range of spaces and gear.
        • -
        • It has an intuitive and user-friendly interface that allows you to easily load, browse, tweak, and automate IRs.
        • -
        • It has unique and innovative features such as "Similar", "Keyword Linking", and "Picture-in-Picture" that enhance your workflow and creativity.
        • -
        • It is compatible with most DAWs on Mac OS X and Windows, and supports up to 5.1 surround input and output.
        • -
        -

        Cons of using Audio Ease Altiverb 7 for Windows

        -
          -
        • It is not a standalone application. It is a plug-in that works within your DAW. You cannot use it without a DAW.
        • -
        • It requires an internet connection to download the installer and activate the license. You may encounter problems if your internet connection is slow or unstable.
        • -
        • It requires a valid license for Audio Ease Altiverb 7. You need to purchase one from the official website or from authorized dealers. You can also use an iLok dongle if you have one.
        • -
        • It consumes a lot of CPU and RAM resources. You may experience performance issues if your computer is not powerful enough or if you use too many instances of the plug-in.
        • -
        • It cannot create new or artificial reverb effects. It can only reproduce the reverb effects of existing spaces or gear that have been sampled.
        • -
        -

        Conclusion

        -

        Audio Ease Altiverb 7 for Windows is a convolution reverb plug-in that uses impulse responses to apply the sonic characteristics of real acoustic spaces and studio hardware to the input signal. It offers realistic and high-quality reverb effects that are suitable for music and sound professionals. It has a large and diverse library of IRs, an intuitive and user-friendly interface, and unique and innovative features that enhance your workflow and creativity. It is compatible with most DAWs on Mac OS X and Windows, and supports up to 5.1 surround input and output.

        -

        However, Audio Ease Altiverb 7 for Windows also has some drawbacks. It is not a standalone application, it requires an internet connection and a valid license, it consumes a lot of CPU and RAM resources, and it cannot create new or artificial reverb effects. You need to weigh the pros and cons of using Audio Ease Altiverb 7 for Windows before deciding whether it is the right choice for you.

        -

        We hope this article has given you a comprehensive guide on Audio Ease Altiverb 7 for Windows. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

        -

        FAQs

        -

        Here are some frequently asked questions about Audio Ease Altiverb 7 for Windows:

        -
          -
        1. How much does Audio Ease Altiverb 7 for Windows cost?
        2. -

          Audio Ease Altiverb 7 for Windows costs $595 USD for the regular version, $995 USD for the XL version, and $1995 USD for the surround version. You can purchase it from the official website or from authorized dealers. You can also try a free demo version for 30 days.

          -
        3. What is the difference between the regular, XL, and surround versions of Audio Ease Altiverb 7?
        4. -

          The regular version of Audio Ease Altiverb 7 supports up to stereo input and output, up to 96 kHz sampling rates, AAX Native, VST, RTAS, AU formats, and includes over 3000 IRs. The XL version supports up to 5.1 surround input and output, up to 384 kHz sampling rates, AAX Native, VST, RTAS, TDM, AU, MAS formats, and includes over 5000 IRs. The surround version supports up to 7.1 surround input and output, up to 384 kHz sampling rates, AAX Native, VST, RTAS, TDM, AU formats, and includes over 7000 IRs.

          -
        5. How can I create my own impulse responses for Audio Ease Altiverb 7?
        6. -

          You can create your own impulse responses for Audio Ease Altiverb 7 using the built-in IR recording tool or by importing IRs from other sources. The built-in IR recording tool lets you record IRs using a microphone or a speaker in any space or gear that you want to sample. You can also edit and optimize your IRs using the built-in IR editing tool. Alternatively, you can import IRs from other sources, such as other convolution reverb plug-ins, websites, CDs, DVDs, etc.

          -
        7. How can I update Audio Ease Altiverb 7 for Windows?
        8. -

          You can update Audio Ease Altiverb 7 for Windows by downloading the latest installer from the official website or by using the built-in update checker. The update checker lets you check for updates automatically or manually within the plug-in window. You can also enable or disable notifications for updates in the global settings tab.

          -
        9. How can I contact Audio Ease support?
        10. -

          You can contact Audio Ease support by sending an email to support@audioease.com or by filling out the contact form on the official website. You can also visit the support section on the official website to find answers to common questions, troubleshooting tips, tutorials, manuals, etc.

          -

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ericsson Mini Link Craft 2.2 Download [CRACKED].md b/spaces/stomexserde/gpt4-ui/Examples/Ericsson Mini Link Craft 2.2 Download [CRACKED].md deleted file mode 100644 index 28fef1fb8bc5e71a5edb9ebbc9f59c98de4c5a7c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Ericsson Mini Link Craft 2.2 Download [CRACKED].md +++ /dev/null @@ -1,48 +0,0 @@ - -

        How to Download and Install Ericsson MINI-LINK Craft 2.2

        -

        Ericsson MINI-LINK Craft 2.2 is a software tool that allows you to configure and manage Ericsson TN devices, such as the MINI-LINK 6600 family of microwave radios. With this tool, you can perform various tasks such as setting up radio links, monitoring performance, troubleshooting issues, and upgrading software.

        -

        ericsson mini link craft 2.2 download


        Download Ziphttps://urlgoal.com/2uI9sR



        -

        In this article, we will show you how to download and install Ericsson MINI-LINK Craft 2.2 on your Windows PC.

        -

        Step 1: Download the software

        -

        There are two ways to download Ericsson MINI-LINK Craft 2.2:

        -
          -
        • From the Ericsson website: You can access the Ericsson website and search for the product name "MINI-LINK Craft". You will need to register and log in with your Ericsson account to access the download page. You will also need a valid license key to activate the software.
        • -
        • From the Internet Archive: You can also download the software from the Internet Archive, which is a non-profit digital library that preserves and provides access to various online content. You can find the software under the name "Ericsson Minilink Craft" by Iffat Ahmed. You do not need to register or log in to download the software from this source. However, you may still need a license key to activate it.
        • -
        -

        Step 2: Install the software

        -

        Once you have downloaded the software, you can follow these steps to install it on your PC:

        -
          -
        1. Extract the ZIP file that contains the software files.
        2. -
        3. Run the setup.exe file as an administrator.
        4. -
        5. Follow the instructions on the screen to complete the installation process.
        6. -
        7. Restart your PC if prompted.
        8. -
        -

        Step 3: Launch the software

        -

        After installing the software, you can launch it by doing one of the following:

        -

        -
          -
        • Double-clicking on the MINI-LINK Craft Launcher icon on your desktop.
        • -
        • Navigating to Start > All Programs > Ericsson > MINI-LINK Craft Launcher and clicking on it.
        • -
        -

        You will see a window that asks you to enter your license key. Enter your valid license key and click on OK. If you do not have a license key, you can contact Ericsson support or your local Ericsson representative for assistance.

        -

        You will then see the main window of Ericsson MINI-LINK Craft 2.2, where you can start configuring and managing your Ericsson TN devices.

        -

        Conclusion

        -

        Ericsson MINI-LINK Craft 2.2 is a useful software tool for network operators and engineers who work with Ericsson TN devices. It allows them to set up, monitor, troubleshoot, and upgrade their microwave radio links with ease and efficiency. To download and install this software, you can either visit the Ericsson website or use the Internet Archive as an alternative source. You will also need a valid license key to activate the software and use its features.

        How to Use Ericsson MINI-LINK Craft 2.2

        -

        Ericsson MINI-LINK Craft 2.2 is a user-friendly and intuitive software tool that allows you to perform various tasks on your Ericsson TN devices. Here are some of the main features and functions of this tool:

        -
          -
        • Device discovery: You can discover and connect to your Ericsson TN devices using different methods, such as IP address, IP range, subnet mask, or device name. You can also scan your network for available devices and add them to your device list.
        • -
        • Device configuration: You can configure various parameters and settings on your Ericsson TN devices, such as radio link properties, network interfaces, security options, synchronization sources, and alarms. You can also create and apply configuration templates to multiple devices at once.
        • -
        • Device management: You can monitor the status and performance of your Ericsson TN devices, such as link quality, traffic statistics, error counters, and event logs. You can also perform actions such as rebooting, resetting, locking, or unlocking your devices.
        • -
        • Software upgrade: You can upgrade the software version of your Ericsson TN devices using the built-in software upgrade wizard. You can also download and install software patches and fixes from the Ericsson website.
        • -
        -

        How to Troubleshoot Ericsson MINI-LINK Craft 2.2

        -

        Ericsson MINI-LINK Craft 2.2 is designed to provide you with a smooth and reliable user experience. However, if you encounter any problems or issues while using this tool, you can try some of the following troubleshooting tips:

        -
          -
        • Check your license key: Make sure you have entered a valid license key when launching the software. If you have lost or forgotten your license key, you can contact Ericsson support or your local Ericsson representative for assistance.
        • -
        • Check your network connection: Make sure your PC and your Ericsson TN devices are connected to the same network and have valid IP addresses. You can use the ping command or the network diagnostic tool to test your network connectivity.
        • -
        • Check your device settings: Make sure your Ericsson TN devices are configured correctly and have compatible software versions. You can use the device information tool or the device compatibility tool to check your device settings.
        • -
        • Check the user manual: If you need more information or guidance on how to use Ericsson MINI-LINK Craft 2.2, you can refer to the user manual that is included in the software installation folder or available on the Ericsson website.
        • -
        • Contact Ericsson support: If none of the above tips help you resolve your problem or issue, you can contact Ericsson support or your local Ericsson representative for further assistance. You can also access the online help system or the FAQ section on the Ericsson website for more resources and solutions.
        • -

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fated Love Radclyffe Epub Download UPDATED.md b/spaces/stomexserde/gpt4-ui/Examples/Fated Love Radclyffe Epub Download UPDATED.md deleted file mode 100644 index 690d0d5bfbad4490dec00f927f7b156973fd989e..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Fated Love Radclyffe Epub Download UPDATED.md +++ /dev/null @@ -1,21 +0,0 @@ -
        -

        Fated Love: A Lesbian Romance Novel by Radclyffe

        -

        Fated Love is a lesbian romance novel by Radclyffe, a prolific author of lesbian fiction and romantic intrigue. The book was published in 2004 by Bold Strokes Books, and has received positive reviews from readers and critics alike. Fated Love tells the story of Quinn Maguire and Honor Blake, two doctors who work in a busy emergency room and struggle with their mutual attraction and their personal secrets.

        -

        Fated Love Radclyffe Epub Download


        Download File ✫✫✫ https://urlgoal.com/2uI9cQ



        -

        Quinn Maguire is a trauma surgeon who unexpectedly accepts a position as an ER physician, leaving behind her successful career and her fiancé. Her new boss, Honor Blake, is a senior attending physician who has been widowed for two years and still wears her wedding ring. Honor suspects that Quinn is hiding something from her past, but she can't deny the chemistry between them. As they work together in the chaotic and stressful environment of the ER, they develop a friendship that slowly turns into something more.

        -

        However, both Quinn and Honor have reasons to resist their growing feelings. Quinn is haunted by a tragic event that shattered her life and made her question her choices. Honor is loyal to the memory of her late wife and feels guilty about moving on. They also have to deal with the challenges of their profession, such as ethical dilemmas, life-and-death situations, and hospital politics. Will they be able to overcome their fears and doubts and embrace their fated love?

        -

        Fated Love is a captivating and emotional novel that explores the themes of loss, grief, healing, trust, and destiny. Radclyffe creates realistic and likable characters who face realistic and complex problems. She also writes engaging and steamy scenes that showcase the passion and intimacy between Quinn and Honor. Fated Love is a must-read for fans of lesbian romance and medical drama.

        -

        If you want to read Fated Love by Radclyffe, you can download it in PDF or EPUB format from various online sources. Here are some of them:

        - -

        You can also find more information about Fated Love by Radclyffe on Goodreads:

        -Fated Love by Radclyffe | Goodreads - -

        Fated Love is not only a romance novel, but also a medical drama that gives an insight into the lives and challenges of ER doctors. Radclyffe draws from her own experience as a surgeon and a medical editor to create realistic and accurate descriptions of the medical procedures and situations that Quinn and Honor encounter. She also portrays the ethical and moral dilemmas that they face, such as how to deal with patients who refuse treatment, how to balance their personal and professional lives, and how to cope with the stress and trauma of their work.

        -

        Radclyffe also explores the themes of fate and destiny in Fated Love. She suggests that Quinn and Honor are meant to be together, even though they come from different backgrounds and have different personalities. She shows how their paths cross several times before they meet, and how they have a connection that goes beyond physical attraction. She also hints at a supernatural element that guides them to each other, such as dreams, visions, and coincidences. She makes the reader wonder if there is a higher power or a cosmic plan that brings them together.

        -

        Fated Love is a novel that will appeal to anyone who enjoys a well-written and engaging story of love, friendship, and healing. It is a novel that will make you laugh, cry, and swoon. It is a novel that will make you believe in the power of love and the magic of fate.

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/sunmaiyyyy/combined-GI-RVC-model/infer_pack/models_onnx.py b/spaces/sunmaiyyyy/combined-GI-RVC-model/infer_pack/models_onnx.py deleted file mode 100644 index 3c5be53a572151820de7d82dfce84f2e2979ed56..0000000000000000000000000000000000000000 --- a/spaces/sunmaiyyyy/combined-GI-RVC-model/infer_pack/models_onnx.py +++ /dev/null @@ -1,760 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidO(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/LDSR/scripts/ldsr_model.py b/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/LDSR/scripts/ldsr_model.py deleted file mode 100644 index b8cff29b9f4ca56e3a9f4b1ac8e150abb1a0ff30..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/LDSR/scripts/ldsr_model.py +++ /dev/null @@ -1,69 +0,0 @@ -import os -import sys -import traceback - -from basicsr.utils.download_util import load_file_from_url - -from modules.upscaler import Upscaler, UpscalerData -from ldsr_model_arch import LDSR -from modules import shared, script_callbacks -import sd_hijack_autoencoder, sd_hijack_ddpm_v1 - - -class UpscalerLDSR(Upscaler): - def __init__(self, user_path): - self.name = "LDSR" - self.user_path = user_path - self.model_url = "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" - self.yaml_url = "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1" - super().__init__() - scaler_data = UpscalerData("LDSR", None, self) - self.scalers = [scaler_data] - - def load_model(self, path: str): - # Remove incorrect project.yaml file if too big - yaml_path = os.path.join(self.model_path, "project.yaml") - old_model_path = os.path.join(self.model_path, "model.pth") - new_model_path = os.path.join(self.model_path, "model.ckpt") - safetensors_model_path = os.path.join(self.model_path, "model.safetensors") - if os.path.exists(yaml_path): - statinfo = os.stat(yaml_path) - if statinfo.st_size >= 10485760: - print("Removing invalid LDSR YAML file.") - os.remove(yaml_path) - if os.path.exists(old_model_path): - print("Renaming model from model.pth to model.ckpt") - os.rename(old_model_path, new_model_path) - if os.path.exists(safetensors_model_path): - model = safetensors_model_path - else: - model = load_file_from_url(url=self.model_url, model_dir=self.model_path, - file_name="model.ckpt", progress=True) - yaml = load_file_from_url(url=self.yaml_url, model_dir=self.model_path, - file_name="project.yaml", progress=True) - - try: - return LDSR(model, yaml) - - except Exception: - print("Error importing LDSR:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - return None - - def do_upscale(self, img, path): - ldsr = self.load_model(path) - if ldsr is None: - print("NO LDSR!") - return img - ddim_steps = shared.opts.ldsr_steps - return ldsr.super_resolution(img, ddim_steps, self.scale) - - -def on_ui_settings(): - import gradio as gr - - shared.opts.add_option("ldsr_steps", shared.OptionInfo(100, "LDSR processing steps. Lower = faster", gr.Slider, {"minimum": 1, "maximum": 200, "step": 1}, section=('upscaling', "Upscaling"))) - shared.opts.add_option("ldsr_cached", shared.OptionInfo(False, "Cache LDSR model in memory", gr.Checkbox, {"interactive": True}, section=('upscaling', "Upscaling"))) - - -script_callbacks.on_ui_settings(on_ui_settings) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Medion GoPal 6 0PE 93537 Maps Q2 2011 W O Europa [WORK].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Medion GoPal 6 0PE 93537 Maps Q2 2011 W O Europa [WORK].md deleted file mode 100644 index 4bc686cee869c96f0ae68b533da7f4f7b7b098bc..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Medion GoPal 6 0PE 93537 Maps Q2 2011 W O Europa [WORK].md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

        similar to the gopal 6, the gopal 6 0pe is a dedicated gps receiver with a 3.5in, 320x240 touchscreen. this is a high-end model that is in the market for the us. if you live in europe, you will have to wait until next year.

        -

        with the gopal 6 0pe, the company wants to offer a lightweight, economical gps navigation option that can be used in various applications in addition to personal devices. it comes with a rechargeable battery, and offers 12 months of battery life.

        -

        Medion GoPal 6 0PE 93537 Maps Q2 2011 W O Europa


        Download Filehttps://cinurl.com/2uEXuI



        -

        thedatabase of openstreetmapcontains more than 800.000 routes. theinstallation of your navigation assistantallows you to obtain information directly fromthe database. you can download the information simply by clicking on the appropriate icon. the download will be fast, as much as it is possible for a database of this size. however, thegraphicsquality is not as high as it is in the official maps. on the other hand, the mapping data is more complete, which is a source ofadded value.

        -

        you cansave moneybyupdatingyour navigation system. you can do this for free usingthe gps updatefunction on your navigation system. the update allows you todownloadthe latest maps for free. it is possible to selectdownloading toyour navigation system. however, it is better toupdate the maps on your navigation systemdirectly. after the download, yourassistantwill be updated for free.

        -

        you candelete your mapsanddelete their datawiththe gps updatefunction. there are nolimitationsin this process. you can evendeleteeverything if you want. however, it is better todownload your data fromyour navigation assistant. you will find the icon in the applications menu.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Netzwerk A1 Kursbuch Pdf Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Netzwerk A1 Kursbuch Pdf Download.md deleted file mode 100644 index e40a6e1995690d7dff128ff3a299b01dbb5dfe04..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Netzwerk A1 Kursbuch Pdf Download.md +++ /dev/null @@ -1,111 +0,0 @@ - -

        Netzwerk A1 Kursbuch PDF Download: How to Learn German with a Modern Textbook

        - -

        Are you interested in learning German as a foreign language? Do you want to improve your communication skills, broaden your cultural horizons, and open up new opportunities for your personal and professional life? If so, you need a good textbook that can guide you through the language and prepare you for the exams. One of the textbooks that you might want to consider is Netzwerk A1 Kursbuch.

        - -

        Netzwerk A1 Kursbuch is a textbook that covers the A1 level of the Common European Framework of Reference for Languages (CEFR). The CEFR is a standard that describes the language proficiency of learners of foreign languages. The A1 level is the beginner level, where you can understand and use familiar everyday expressions and basic phrases. The textbook consists of 12 units that cover various topics such as personal information, daily life, hobbies, travel, and culture. The textbook also provides exercises, activities, and tests to practice your listening, speaking, reading, and writing skills. The textbook also comes with an audio CD that contains dialogues, texts, and songs.

        -

        netzwerk a1 kursbuch pdf download


        Download Zip ::: https://cinurl.com/2uEYKd



        - -

        However, Netzwerk A1 Kursbuch is not a free textbook. You need to buy a copy from a bookstore or an online platform to use it legally and access all its features. But what if you want to try it out before buying it? Or what if you can't afford the price? In that case, you might be interested in looking for a PDF download of the textbook.

        - -

        What is a PDF Download of Netzwerk A1 Kursbuch?

        - -

        A PDF download of Netzwerk A1 Kursbuch is a file that contains the digital version of the textbook. A PDF download usually comes with a password that allows you to open and view the file on your computer or mobile device. By using a PDF download of Netzwerk A1 Kursbuch, you can access the textbook without paying for it or carrying a physical copy.

        - -

        How to Get and Use a PDF Download of Netzwerk A1 Kursbuch?

        - -

        If you are looking for a PDF download of Netzwerk A1 Kursbuch, you might find some websites that claim to offer it for free download. However, you should be careful when downloading and using such files, as they might contain viruses, malware, or spyware that can harm your device or steal your personal information. Here are some steps to get and use a PDF download of Netzwerk A1 Kursbuch safely and effectively:

        - -
          -
        1. Make sure you have a reliable antivirus software installed on your device and update it regularly.
        2. -
        3. Search for a reputable website that offers a PDF download of Netzwerk A1 Kursbuch. You can use Google or other search engines to find such websites. You can also check the reviews and ratings of the websites to see if they are trustworthy.
        4. -
        5. Download the PDF file from the website. It might be in a ZIP or RAR format, so you need to extract it using a program like WinRAR or 7-Zip.
        6. -
        7. Open the PDF file using a password that is provided by the website or the file itself. You might need to enter your email address or complete a survey to get the password.
        8. -
        9. View the PDF file using a program like Adobe Reader or Foxit Reader. You can also print the file if you want to have a hard copy.
        10. -
        - -

        What are the Risks and Disadvantages of Using a PDF Download of Netzwerk A1 Kursbuch?

        - -

        While using a PDF download of Netzwerk A1 Kursbuch might seem like a convenient and cost-effective option, it also comes with some risks and disadvantages that you should be aware of. Here are some of them:

        - -
          -
        • You are violating the terms and conditions of the publisher and infringing their intellectual property rights. This is illegal and unethical, and you might face legal consequences if you are caught.
        • -
        • You are exposing your device to potential threats from viruses, malware, or spyware that might be hidden in the PDF file or the website. These threats can damage your system, corrupt your files, or steal your personal data.
        • -
        • You are missing out on the updates and support from the publisher. A PDF download of Netzwerk A1 Kursbuch might not be compatible with the latest versions of Windows or other software. You might also encounter errors or glitches that can affect your learning experience and performance.
        • -
        • You are compromising your learning quality and effectiveness. A PDF download of Netzwerk A1 Kursbuch might not have the same layout, design, and features as the original textbook. You might also miss out on some content or exercises that are only available in the physical copy or online platform.
        • -
        - -

        What are the Alternatives to Using a PDF Download of Netzwerk A1 Kursbuch?

        - -

        If you want to use Netzwerk A1 Kursbuch legally and safely, you have some alternatives to using a PDF download of the textbook. Here are some of them:

        - -
          -
        • Buy a copy from a bookstore or an online platform. This is the best option if you want to enjoy all the features and benefits of Netzwerk A1 Kursbuch without any risks or disadvantages. You can choose from different formats and prices that suit your budget and needs.
        • -
        • Use an online platform that provides access to Netzwerk A1 Kursbuch. This is a good option if you want to use Netzwerk A1 Kursbuch without carrying a physical copy or downloading a file. You can access the textbook online using your computer or mobile device with an internet connection.
        • -
        • Use an alternative textbook to Netzwerk A1 Kursbuch. This is an option if you want to use a similar textbook that has comparable content and features to Netzwerk A1 Kursbuch. You can search for such textbooks online or ask for recommendations from other learners or teachers.
        • -
        - -

        Conclusion

        - -

        Netzwerk A1 Kursbuch is a textbook that can help you learn German as a foreign language and prepare you for the A1 level of the CEFR. However, using a PDF download of this textbook is not advisable, as it can pose legal, ethical, technical, and educational risks and disadvantages. Instead, you should consider buying a copy from a bookstore or an online platform, using an online platform that provides access to the textbook, or using an alternative textbook that meets your needs. If you decide to use a PDF download of Netzwerk A1 Kursbuch, you should make sure that your device meets the requirements for viewing the file, and that you know how to get and use it safely and effectively.

        -

        -

        How to Learn German with Netzwerk A1 Kursbuch?

        - -

        If you have a PDF download of Netzwerk A1 Kursbuch, you can use it to learn German at your own pace and convenience. However, you need to have a good learning strategy and motivation to make the most of your self-study. Here are some tips to help you learn German with Netzwerk A1 Kursbuch:

        - -
          -
        • Set a realistic and specific goal for your learning. For example, you can aim to finish one unit per week or pass the A1 exam by a certain date.
        • -
        • Plan a regular and consistent schedule for your learning. For example, you can dedicate one hour per day or three hours per week to study German with Netzwerk A1 Kursbuch.
        • -
        • Use a variety of resources and methods to complement your learning. For example, you can listen to podcasts, watch videos, read articles, or join online forums that are related to the topics and vocabulary of Netzwerk A1 Kursbuch.
        • -
        • Review and practice what you have learned regularly. For example, you can use flashcards, quizzes, games, or writing exercises to reinforce your memory and understanding of the grammar and vocabulary of Netzwerk A1 Kursbuch.
        • -
        • Seek feedback and guidance from others. For example, you can find a language partner, a tutor, or a teacher who can help you with your pronunciation, grammar, or communication skills.
        • -
        - -

        What are the Benefits of Learning German with Netzwerk A1 Kursbuch?

        - -

        Learning German with Netzwerk A1 Kursbuch can bring you many benefits that can enrich your personal and professional life. Here are some of them:

        - -
          -
        • You can communicate with native speakers and other learners of German. You can make new friends, exchange ideas, share experiences, and learn about different cultures.
        • -
        • You can access more information and opportunities. You can read books, watch movies, listen to music, or browse websites that are in German. You can also travel, study, work, or do business in Germany or other German-speaking countries.
        • -
        • You can develop your cognitive and academic skills. You can improve your memory, attention, creativity, problem-solving, and critical thinking skills by learning a new language. You can also enhance your literacy, numeracy, and general knowledge by studying different topics in German.
        • -
        • You can boost your confidence and self-esteem. You can feel proud of yourself for achieving your learning goals and overcoming your challenges. You can also express yourself more freely and confidently in German.
        • -
        - -

        Conclusion

        - -

        Netzwerk A1 Kursbuch is a textbook that can help you learn German as a foreign language and prepare you for the A1 level of the CEFR. However, using a PDF download of this textbook is not advisable, as it can pose legal, ethical, technical, and educational risks and disadvantages. Instead, you should consider buying a copy from a bookstore or an online platform, using an online platform that provides access to the textbook, or using an alternative textbook that meets your needs. If you decide to use a PDF download of Netzwerk A1 Kursbuch, you should make sure that your device meets the requirements for viewing the file, and that you know how to get and use it safely and effectively. You should also know how to learn German with Netzwerk A1 Kursbuch effectively and enjoyably.

        -

        How to Prepare for the A1 Exam with Netzwerk A1 Kursbuch?

        - -

        If you are using Netzwerk A1 Kursbuch to learn German, you might also want to prepare for the A1 exam that can certify your language level. The A1 exam is a test that assesses your listening, reading, writing, and speaking skills in German. The exam consists of four parts: listening comprehension, reading comprehension, written expression, and oral expression. The exam is administered by various institutions, such as Goethe-Institut, telc, or ÖSD.

        - -

        To prepare for the A1 exam with Netzwerk A1 Kursbuch, you need to review the content and exercises of the textbook and practice your skills with mock tests and sample questions. Here are some tips to help you prepare for the A1 exam with Netzwerk A1 Kursbuch:

        - -
          -
        • Review the grammar and vocabulary of each unit of Netzwerk A1 Kursbuch. Make sure you understand the rules and examples and can use them correctly in sentences.
        • -
        • Practice the listening and reading comprehension exercises of Netzwerk A1 Kursbuch. Pay attention to the main ideas and details of the dialogues and texts and answer the questions accurately.
        • -
        • Practice the written and oral expression exercises of Netzwerk A1 Kursbuch. Write short texts and speak about familiar topics using appropriate words and structures.
        • -
        • Take mock tests and sample questions of the A1 exam that are similar to the format and difficulty of the real exam. You can find such resources online or in books.
        • -
        • Check your answers and correct your mistakes. Identify your strengths and weaknesses and focus on improving your skills.
        • -
        - -

        How to Enjoy Learning German with Netzwerk A1 Kursbuch?

        - -

        Learning German with Netzwerk A1 Kursbuch can be fun and enjoyable if you have a positive attitude and a curious mind. You can discover new aspects of the language and culture that can enrich your knowledge and experience. You can also make your learning more interesting and engaging by using some creative methods and techniques. Here are some ideas to help you enjoy learning German with Netzwerk A1 Kursbuch:

        - -
          -
        • Listen to songs that are related to the topics and vocabulary of Netzwerk A1 Kursbuch. Sing along with the lyrics and learn new words and expressions.
        • -
        • Watch videos that are related to the topics and vocabulary of Netzwerk A1 Kursbuch. Watch with subtitles or without subtitles depending on your level and preference.
        • -
        • Read articles that are related to the topics and vocabulary of Netzwerk A1 Kursbuch. Read for gist or for detail depending on your goal and interest.
        • -
        • Join online forums that are related to the topics and vocabulary of Netzwerk A1 Kursbuch. Share your opinions, ask questions, or answer questions from other learners or native speakers.
        • -
        • Create your own content that is related to the topics and vocabulary of Netzwerk A1 Kursbuch. Write a blog post, record a podcast, or make a video about something that interests you in German.
        • -
        - -

        Conclusion

        - -

        Netzwerk A1 Kursbuch is a textbook that can help you learn German as a foreign language and prepare you for the A1 level of the CEFR. However, using a PDF download of this textbook is not advisable, as it can pose legal, ethical, technical, and educational risks and disadvantages. Instead, you should consider buying a copy from a bookstore or an online platform, using an online platform that provides access to the textbook, or using an alternative textbook that meets your needs. If you decide to use a PDF download of Netzwerk A1 Kursbuch, you should make sure that your device meets the requirements for viewing the file, and that you know how to get and use it safely and effectively. You should also know how to prepare for the A1 exam with Netzwerk A1 Kursbuch effectively and enjoyably.

        -

        Conclusion

        - -

        Netzwerk A1 Kursbuch is a textbook that can help you learn German as a foreign language and prepare you for the A1 level of the CEFR. However, using a PDF download of this textbook is not advisable, as it can pose legal, ethical, technical, and educational risks and disadvantages. Instead, you should consider buying a copy from a bookstore or an online platform, using an online platform that provides access to the textbook, or using an alternative textbook that meets your needs. If you decide to use a PDF download of Netzwerk A1 Kursbuch, you should make sure that your device meets the requirements for viewing the file, and that you know how to get and use it safely and effectively. You should also know how to prepare for the A1 exam with Netzwerk A1 Kursbuch effectively and enjoyably.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Oracle Primavera P6 V7 Sp3 Full Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Oracle Primavera P6 V7 Sp3 Full Torrent.md deleted file mode 100644 index 43db482d8a06c43b57fc36c04921f93dda4d3444..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Oracle Primavera P6 V7 Sp3 Full Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

        oracle primavera p6 v7 sp3 full torrent


        Download Zip >>> https://cinurl.com/2uEYnZ



        - -justin bieber believe movie torrent downloadinstmank · oracle primavera p6 v7 sp3 full torrent · download film dragon nest sub indoinstmank 1fdad05405
        -
        -
        -

        diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/stare.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/stare.py deleted file mode 100644 index cbd14e0920e7f6a73baff1432e5a32ccfdb0dfae..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/stare.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class STAREDataset(CustomDataset): - """STARE dataset. - - In segmentation map annotation for STARE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.ah.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(STAREDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.ah.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/common/utils.py b/spaces/t110-ai-admin/InspectLens/video_llama/common/utils.py deleted file mode 100644 index f1768fcdfd73b057877a7b0a7c1f10a3aa057caa..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/common/utils.py +++ /dev/null @@ -1,424 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import io -import json -import logging -import os -import pickle -import re -import shutil -import urllib -import urllib.error -import urllib.request -from typing import Optional -from urllib.parse import urlparse - -import numpy as np -import pandas as pd -import yaml -from iopath.common.download import download -from iopath.common.file_io import file_lock, g_pathmgr -from video_llama.common.registry import registry -from torch.utils.model_zoo import tqdm -from torchvision.datasets.utils import ( - check_integrity, - download_file_from_google_drive, - extract_archive, -) - - -def now(): - from datetime import datetime - - return datetime.now().strftime("%Y%m%d%H%M")[:-1] - - -def is_url(url_or_filename): - parsed = urlparse(url_or_filename) - return parsed.scheme in ("http", "https") - - -def get_cache_path(rel_path): - return os.path.expanduser(os.path.join(registry.get_path("cache_root"), rel_path)) - - -def get_abs_path(rel_path): - return os.path.join(registry.get_path("library_root"), rel_path) - - -def load_json(filename): - with open(filename, "r") as f: - return json.load(f) - - -# The following are adapted from torchvision and vissl -# torchvision: https://github.com/pytorch/vision -# vissl: https://github.com/facebookresearch/vissl/blob/main/vissl/utils/download.py - - -def makedir(dir_path): - """ - Create the directory if it does not exist. - """ - is_success = False - try: - if not g_pathmgr.exists(dir_path): - g_pathmgr.mkdirs(dir_path) - is_success = True - except BaseException: - print(f"Error creating directory: {dir_path}") - return is_success - - -def get_redirected_url(url: str): - """ - Given a URL, returns the URL it redirects to or the - original URL in case of no indirection - """ - import requests - - with requests.Session() as session: - with session.get(url, stream=True, allow_redirects=True) as response: - if response.history: - return response.url - else: - return url - - -def to_google_drive_download_url(view_url: str) -> str: - """ - Utility function to transform a view URL of google drive - to a download URL for google drive - Example input: - https://drive.google.com/file/d/137RyRjvTBkBiIfeYBNZBtViDHQ6_Ewsp/view - Example output: - https://drive.google.com/uc?export=download&id=137RyRjvTBkBiIfeYBNZBtViDHQ6_Ewsp - """ - splits = view_url.split("/") - assert splits[-1] == "view" - file_id = splits[-2] - return f"https://drive.google.com/uc?export=download&id={file_id}" - - -def download_google_drive_url(url: str, output_path: str, output_file_name: str): - """ - Download a file from google drive - Downloading an URL from google drive requires confirmation when - the file of the size is too big (google drive notifies that - anti-viral checks cannot be performed on such files) - """ - import requests - - with requests.Session() as session: - - # First get the confirmation token and append it to the URL - with session.get(url, stream=True, allow_redirects=True) as response: - for k, v in response.cookies.items(): - if k.startswith("download_warning"): - url = url + "&confirm=" + v - - # Then download the content of the file - with session.get(url, stream=True, verify=True) as response: - makedir(output_path) - path = os.path.join(output_path, output_file_name) - total_size = int(response.headers.get("Content-length", 0)) - with open(path, "wb") as file: - from tqdm import tqdm - - with tqdm(total=total_size) as progress_bar: - for block in response.iter_content( - chunk_size=io.DEFAULT_BUFFER_SIZE - ): - file.write(block) - progress_bar.update(len(block)) - - -def _get_google_drive_file_id(url: str) -> Optional[str]: - parts = urlparse(url) - - if re.match(r"(drive|docs)[.]google[.]com", parts.netloc) is None: - return None - - match = re.match(r"/file/d/(?P[^/]*)", parts.path) - if match is None: - return None - - return match.group("id") - - -def _urlretrieve(url: str, filename: str, chunk_size: int = 1024) -> None: - with open(filename, "wb") as fh: - with urllib.request.urlopen( - urllib.request.Request(url, headers={"User-Agent": "vissl"}) - ) as response: - with tqdm(total=response.length) as pbar: - for chunk in iter(lambda: response.read(chunk_size), ""): - if not chunk: - break - pbar.update(chunk_size) - fh.write(chunk) - - -def download_url( - url: str, - root: str, - filename: Optional[str] = None, - md5: Optional[str] = None, -) -> None: - """Download a file from a url and place it in root. - Args: - url (str): URL to download file from - root (str): Directory to place downloaded file in - filename (str, optional): Name to save the file under. - If None, use the basename of the URL. - md5 (str, optional): MD5 checksum of the download. If None, do not check - """ - root = os.path.expanduser(root) - if not filename: - filename = os.path.basename(url) - fpath = os.path.join(root, filename) - - makedir(root) - - # check if file is already present locally - if check_integrity(fpath, md5): - print("Using downloaded and verified file: " + fpath) - return - - # expand redirect chain if needed - url = get_redirected_url(url) - - # check if file is located on Google Drive - file_id = _get_google_drive_file_id(url) - if file_id is not None: - return download_file_from_google_drive(file_id, root, filename, md5) - - # download the file - try: - print("Downloading " + url + " to " + fpath) - _urlretrieve(url, fpath) - except (urllib.error.URLError, IOError) as e: # type: ignore[attr-defined] - if url[:5] == "https": - url = url.replace("https:", "http:") - print( - "Failed download. Trying https -> http instead." - " Downloading " + url + " to " + fpath - ) - _urlretrieve(url, fpath) - else: - raise e - - # check integrity of downloaded file - if not check_integrity(fpath, md5): - raise RuntimeError("File not found or corrupted.") - - -def download_and_extract_archive( - url: str, - download_root: str, - extract_root: Optional[str] = None, - filename: Optional[str] = None, - md5: Optional[str] = None, - remove_finished: bool = False, -) -> None: - download_root = os.path.expanduser(download_root) - if extract_root is None: - extract_root = download_root - if not filename: - filename = os.path.basename(url) - - download_url(url, download_root, filename, md5) - - archive = os.path.join(download_root, filename) - print("Extracting {} to {}".format(archive, extract_root)) - extract_archive(archive, extract_root, remove_finished) - - -def cache_url(url: str, cache_dir: str) -> str: - """ - This implementation downloads the remote resource and caches it locally. - The resource will only be downloaded if not previously requested. - """ - parsed_url = urlparse(url) - dirname = os.path.join(cache_dir, os.path.dirname(parsed_url.path.lstrip("/"))) - makedir(dirname) - filename = url.split("/")[-1] - cached = os.path.join(dirname, filename) - with file_lock(cached): - if not os.path.isfile(cached): - logging.info(f"Downloading {url} to {cached} ...") - cached = download(url, dirname, filename=filename) - logging.info(f"URL {url} cached in {cached}") - return cached - - -# TODO (prigoyal): convert this into RAII-style API -def create_file_symlink(file1, file2): - """ - Simply create the symlinks for a given file1 to file2. - Useful during model checkpointing to symlinks to the - latest successful checkpoint. - """ - try: - if g_pathmgr.exists(file2): - g_pathmgr.rm(file2) - g_pathmgr.symlink(file1, file2) - except Exception as e: - logging.info(f"Could NOT create symlink. Error: {e}") - - -def save_file(data, filename, append_to_json=True, verbose=True): - """ - Common i/o utility to handle saving data to various file formats. - Supported: - .pkl, .pickle, .npy, .json - Specifically for .json, users have the option to either append (default) - or rewrite by passing in Boolean value to append_to_json. - """ - if verbose: - logging.info(f"Saving data to file: {filename}") - file_ext = os.path.splitext(filename)[1] - if file_ext in [".pkl", ".pickle"]: - with g_pathmgr.open(filename, "wb") as fopen: - pickle.dump(data, fopen, pickle.HIGHEST_PROTOCOL) - elif file_ext == ".npy": - with g_pathmgr.open(filename, "wb") as fopen: - np.save(fopen, data) - elif file_ext == ".json": - if append_to_json: - with g_pathmgr.open(filename, "a") as fopen: - fopen.write(json.dumps(data, sort_keys=True) + "\n") - fopen.flush() - else: - with g_pathmgr.open(filename, "w") as fopen: - fopen.write(json.dumps(data, sort_keys=True) + "\n") - fopen.flush() - elif file_ext == ".yaml": - with g_pathmgr.open(filename, "w") as fopen: - dump = yaml.dump(data) - fopen.write(dump) - fopen.flush() - else: - raise Exception(f"Saving {file_ext} is not supported yet") - - if verbose: - logging.info(f"Saved data to file: {filename}") - - -def load_file(filename, mmap_mode=None, verbose=True, allow_pickle=False): - """ - Common i/o utility to handle loading data from various file formats. - Supported: - .pkl, .pickle, .npy, .json - For the npy files, we support reading the files in mmap_mode. - If the mmap_mode of reading is not successful, we load data without the - mmap_mode. - """ - if verbose: - logging.info(f"Loading data from file: {filename}") - - file_ext = os.path.splitext(filename)[1] - if file_ext == ".txt": - with g_pathmgr.open(filename, "r") as fopen: - data = fopen.readlines() - elif file_ext in [".pkl", ".pickle"]: - with g_pathmgr.open(filename, "rb") as fopen: - data = pickle.load(fopen, encoding="latin1") - elif file_ext == ".npy": - if mmap_mode: - try: - with g_pathmgr.open(filename, "rb") as fopen: - data = np.load( - fopen, - allow_pickle=allow_pickle, - encoding="latin1", - mmap_mode=mmap_mode, - ) - except ValueError as e: - logging.info( - f"Could not mmap {filename}: {e}. Trying without g_pathmgr" - ) - data = np.load( - filename, - allow_pickle=allow_pickle, - encoding="latin1", - mmap_mode=mmap_mode, - ) - logging.info("Successfully loaded without g_pathmgr") - except Exception: - logging.info("Could not mmap without g_pathmgr. Trying without mmap") - with g_pathmgr.open(filename, "rb") as fopen: - data = np.load(fopen, allow_pickle=allow_pickle, encoding="latin1") - else: - with g_pathmgr.open(filename, "rb") as fopen: - data = np.load(fopen, allow_pickle=allow_pickle, encoding="latin1") - elif file_ext == ".json": - with g_pathmgr.open(filename, "r") as fopen: - data = json.load(fopen) - elif file_ext == ".yaml": - with g_pathmgr.open(filename, "r") as fopen: - data = yaml.load(fopen, Loader=yaml.FullLoader) - elif file_ext == ".csv": - with g_pathmgr.open(filename, "r") as fopen: - data = pd.read_csv(fopen) - else: - raise Exception(f"Reading from {file_ext} is not supported yet") - return data - - -def abspath(resource_path: str): - """ - Make a path absolute, but take into account prefixes like - "http://" or "manifold://" - """ - regex = re.compile(r"^\w+://") - if regex.match(resource_path) is None: - return os.path.abspath(resource_path) - else: - return resource_path - - -def makedir(dir_path): - """ - Create the directory if it does not exist. - """ - is_success = False - try: - if not g_pathmgr.exists(dir_path): - g_pathmgr.mkdirs(dir_path) - is_success = True - except BaseException: - logging.info(f"Error creating directory: {dir_path}") - return is_success - - -def is_url(input_url): - """ - Check if an input string is a url. look for http(s):// and ignoring the case - """ - is_url = re.match(r"^(?:http)s?://", input_url, re.IGNORECASE) is not None - return is_url - - -def cleanup_dir(dir): - """ - Utility for deleting a directory. Useful for cleaning the storage space - that contains various training artifacts like checkpoints, data etc. - """ - if os.path.exists(dir): - logging.info(f"Deleting directory: {dir}") - shutil.rmtree(dir) - logging.info(f"Deleted contents of directory: {dir}") - - -def get_file_size(filename): - """ - Given a file, get the size of file in MB - """ - size_in_mb = os.path.getsize(filename) / float(1024**2) - return size_in_mb diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/app.py b/spaces/taesiri/ChatGPT-ImageCaptioner/app.py deleted file mode 100644 index 3a8200a045203d75df96c7412b332ab172733ec5..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/app.py +++ /dev/null @@ -1,179 +0,0 @@ -import os -from langchain.llms import OpenAI, OpenAIChat - -os.system("pip install -U gradio") - -import sys -import gradio as gr - -os.system( - "pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html" -) - -# clone and install Detic -os.system( - "git clone https://github.com/facebookresearch/Detic.git --recurse-submodules" -) -os.chdir("Detic") - -# Install detectron2 -import torch - -# Some basic setup: -# Setup detectron2 logger -import detectron2 -from detectron2.utils.logger import setup_logger - -setup_logger() - -# import some common libraries -import sys -import numpy as np -import os, json, cv2, random - -# import some common detectron2 utilities -from detectron2 import model_zoo -from detectron2.engine import DefaultPredictor -from detectron2.config import get_cfg -from detectron2.utils.visualizer import Visualizer -from detectron2.data import MetadataCatalog, DatasetCatalog - -# Detic libraries -sys.path.insert(0, "third_party/CenterNet2/projects/CenterNet2/") -sys.path.insert(0, "third_party/CenterNet2/") -from centernet.config import add_centernet_config -from detic.config import add_detic_config -from detic.modeling.utils import reset_cls_test - -from PIL import Image - -# Build the detector and download our pretrained weights -cfg = get_cfg() -add_centernet_config(cfg) -add_detic_config(cfg) -cfg.MODEL.DEVICE = "cpu" -cfg.merge_from_file("configs/Detic_LCOCOI21k_CLIP_SwinB_896b32_4x_ft4x_max-size.yaml") -cfg.MODEL.WEIGHTS = "https://dl.fbaipublicfiles.com/detic/Detic_LCOCOI21k_CLIP_SwinB_896b32_4x_ft4x_max-size.pth" -cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model -cfg.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_PATH = "rand" -cfg.MODEL.ROI_HEADS.ONE_CLASS_PER_PROPOSAL = ( - True # For better visualization purpose. Set to False for all classes. -) -predictor = DefaultPredictor(cfg) - -BUILDIN_CLASSIFIER = { - "lvis": "datasets/metadata/lvis_v1_clip_a+cname.npy", - "objects365": "datasets/metadata/o365_clip_a+cnamefix.npy", - "openimages": "datasets/metadata/oid_clip_a+cname.npy", - "coco": "datasets/metadata/coco_clip_a+cname.npy", -} - -BUILDIN_METADATA_PATH = { - "lvis": "lvis_v1_val", - "objects365": "objects365_v2_val", - "openimages": "oid_val_expanded", - "coco": "coco_2017_val", -} - -session_token = os.environ.get("SessionToken") - - -def generate_caption(object_list_str, api_key, temperature): - query = f"You are an intelligent image captioner. I will hand you the objects and their position, and you should give me a detailed description for the photo. In this photo we have the following objects\n{object_list_str}" - llm = OpenAIChat( - model_name="gpt-3.5-turbo", openai_api_key=api_key, temperature=temperature - ) - - try: - caption = llm(query) - caption = caption.strip() - except: - caption = "Sorry, something went wrong!" - - return caption - - -def inference(img, vocabulary, api_key, temperature): - metadata = MetadataCatalog.get(BUILDIN_METADATA_PATH[vocabulary]) - classifier = BUILDIN_CLASSIFIER[vocabulary] - num_classes = len(metadata.thing_classes) - reset_cls_test(predictor.model, classifier, num_classes) - - im = cv2.imread(img) - - outputs = predictor(im) - v = Visualizer(im[:, :, ::-1], metadata) - out = v.draw_instance_predictions(outputs["instances"].to("cpu")) - - detected_objects = [] - object_list_str = [] - - box_locations = outputs["instances"].pred_boxes - box_loc_screen = box_locations.tensor.cpu().numpy() - - for i, box_coord in enumerate(box_loc_screen): - x0, y0, x1, y1 = box_coord - width = x1 - x0 - height = y1 - y0 - predicted_label = metadata.thing_classes[outputs["instances"].pred_classes[i]] - detected_objects.append( - { - "prediction": predicted_label, - "x": int(x0), - "y": int(y0), - "w": int(width), - "h": int(height), - } - ) - object_list_str.append( - f"{predicted_label} - X:({int(x0)} Y: {int(y0)} Width {int(width)} Height: {int(height)})" - ) - - if api_key is not None: - gpt_response = generate_caption(object_list_str, api_key, temperature) - else: - gpt_response = "Please paste your OpenAI key to use" - - return ( - Image.fromarray(np.uint8(out.get_image())).convert("RGB"), - gpt_response, - ) - - -with gr.Blocks() as demo: - with gr.Column(): - gr.Markdown("# Image Captioning using Detic and ChatGPT with LangChain 🦜️🔗") - gr.Markdown( - "Use Detic to detect objects in an image and then use `gpt-3.5-turbo` to describe the image." - ) - - with gr.Row(): - with gr.Column(): - inp = gr.Image(label="Input Image", type="filepath") - with gr.Column(): - openai_api_key_textbox = gr.Textbox( - placeholder="Paste your OpenAI API key (sk-...)", - show_label=False, - lines=1, - type="password", - ) - temperature = gr.Slider(0, 1, 0.1, label="Temperature") - vocab = gr.Dropdown( - ["lvis", "objects365", "openimages", "coco"], - label="Detic Vocabulary", - value="lvis", - ) - - btn_detic = gr.Button("Run Detic and ChatGPT") - with gr.Column(): - output_desc = gr.Textbox(label="Description Description", lines=5) - outviz = gr.Image(label="Visualization", type="pil") - - btn_detic.click( - fn=inference, - inputs=[inp, vocab, openai_api_key_textbox, temperature], - outputs=[outviz, output_desc], - ) - - -demo.launch(debug=False) diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/custom_dataset_mapper.py b/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/custom_dataset_mapper.py deleted file mode 100644 index c7727dded3f93f5eeafdcd72e257197e3fdc817b..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/custom_dataset_mapper.py +++ /dev/null @@ -1,280 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import copy -import logging -import numpy as np -from typing import List, Optional, Union -import torch -import pycocotools.mask as mask_util - -from detectron2.config import configurable - -from detectron2.data import detection_utils as utils -from detectron2.data.detection_utils import transform_keypoint_annotations -from detectron2.data import transforms as T -from detectron2.data.dataset_mapper import DatasetMapper -from detectron2.structures import Boxes, BoxMode, Instances -from detectron2.structures import Keypoints, PolygonMasks, BitMasks -from fvcore.transforms.transform import TransformList -from .custom_build_augmentation import build_custom_augmentation -from .tar_dataset import DiskTarDataset - -__all__ = ["CustomDatasetMapper"] - -class CustomDatasetMapper(DatasetMapper): - @configurable - def __init__(self, is_train: bool, - with_ann_type=False, - dataset_ann=[], - use_diff_bs_size=False, - dataset_augs=[], - is_debug=False, - use_tar_dataset=False, - tarfile_path='', - tar_index_dir='', - **kwargs): - """ - add image labels - """ - self.with_ann_type = with_ann_type - self.dataset_ann = dataset_ann - self.use_diff_bs_size = use_diff_bs_size - if self.use_diff_bs_size and is_train: - self.dataset_augs = [T.AugmentationList(x) for x in dataset_augs] - self.is_debug = is_debug - self.use_tar_dataset = use_tar_dataset - if self.use_tar_dataset: - print('Using tar dataset') - self.tar_dataset = DiskTarDataset(tarfile_path, tar_index_dir) - super().__init__(is_train, **kwargs) - - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - ret = super().from_config(cfg, is_train) - ret.update({ - 'with_ann_type': cfg.WITH_IMAGE_LABELS, - 'dataset_ann': cfg.DATALOADER.DATASET_ANN, - 'use_diff_bs_size': cfg.DATALOADER.USE_DIFF_BS_SIZE, - 'is_debug': cfg.IS_DEBUG, - 'use_tar_dataset': cfg.DATALOADER.USE_TAR_DATASET, - 'tarfile_path': cfg.DATALOADER.TARFILE_PATH, - 'tar_index_dir': cfg.DATALOADER.TAR_INDEX_DIR, - }) - if ret['use_diff_bs_size'] and is_train: - if cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop': - dataset_scales = cfg.DATALOADER.DATASET_INPUT_SCALE - dataset_sizes = cfg.DATALOADER.DATASET_INPUT_SIZE - ret['dataset_augs'] = [ - build_custom_augmentation(cfg, True, scale, size) \ - for scale, size in zip(dataset_scales, dataset_sizes)] - else: - assert cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge' - min_sizes = cfg.DATALOADER.DATASET_MIN_SIZES - max_sizes = cfg.DATALOADER.DATASET_MAX_SIZES - ret['dataset_augs'] = [ - build_custom_augmentation( - cfg, True, min_size=mi, max_size=ma) \ - for mi, ma in zip(min_sizes, max_sizes)] - else: - ret['dataset_augs'] = [] - - return ret - - def __call__(self, dataset_dict): - """ - include image labels - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - # USER: Write your own image loading if it's not from a file - if 'file_name' in dataset_dict: - ori_image = utils.read_image( - dataset_dict["file_name"], format=self.image_format) - else: - ori_image, _, _ = self.tar_dataset[dataset_dict["tar_index"]] - ori_image = utils._apply_exif_orientation(ori_image) - ori_image = utils.convert_PIL_to_numpy(ori_image, self.image_format) - utils.check_image_size(dataset_dict, ori_image) - - # USER: Remove if you don't do semantic/panoptic segmentation. - if "sem_seg_file_name" in dataset_dict: - sem_seg_gt = utils.read_image( - dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2) - else: - sem_seg_gt = None - - if self.is_debug: - dataset_dict['dataset_source'] = 0 - - not_full_labeled = 'dataset_source' in dataset_dict and \ - self.with_ann_type and \ - self.dataset_ann[dataset_dict['dataset_source']] != 'box' - - aug_input = T.AugInput(copy.deepcopy(ori_image), sem_seg=sem_seg_gt) - if self.use_diff_bs_size and self.is_train: - transforms = \ - self.dataset_augs[dataset_dict['dataset_source']](aug_input) - else: - transforms = self.augmentations(aug_input) - image, sem_seg_gt = aug_input.image, aug_input.sem_seg - - image_shape = image.shape[:2] # h, w - dataset_dict["image"] = torch.as_tensor( - np.ascontiguousarray(image.transpose(2, 0, 1))) - - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long")) - - # USER: Remove if you don't use pre-computed proposals. - # Most users would not need this feature. - if self.proposal_topk is not None: - utils.transform_proposals( - dataset_dict, image_shape, transforms, - proposal_topk=self.proposal_topk - ) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - dataset_dict.pop("sem_seg_file_name", None) - return dataset_dict - - if "annotations" in dataset_dict: - # USER: Modify this if you want to keep them for some reason. - for anno in dataset_dict["annotations"]: - if not self.use_instance_mask: - anno.pop("segmentation", None) - if not self.use_keypoint: - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - all_annos = [ - (utils.transform_instance_annotations( - obj, transforms, image_shape, - keypoint_hflip_indices=self.keypoint_hflip_indices, - ), obj.get("iscrowd", 0)) - for obj in dataset_dict.pop("annotations") - ] - annos = [ann[0] for ann in all_annos if ann[1] == 0] - instances = utils.annotations_to_instances( - annos, image_shape, mask_format=self.instance_mask_format - ) - - del all_annos - if self.recompute_boxes: - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - dataset_dict["instances"] = utils.filter_empty_instances(instances) - if self.with_ann_type: - dataset_dict["pos_category_ids"] = dataset_dict.get( - 'pos_category_ids', []) - dataset_dict["ann_type"] = \ - self.dataset_ann[dataset_dict['dataset_source']] - if self.is_debug and (('pos_category_ids' not in dataset_dict) or \ - (dataset_dict['pos_category_ids'] == [])): - dataset_dict['pos_category_ids'] = [x for x in sorted(set( - dataset_dict['instances'].gt_classes.tolist() - ))] - return dataset_dict - -# DETR augmentation -def build_transform_gen(cfg, is_train): - """ - """ - if is_train: - min_size = cfg.INPUT.MIN_SIZE_TRAIN - max_size = cfg.INPUT.MAX_SIZE_TRAIN - sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - sample_style = "choice" - if sample_style == "range": - assert len(min_size) == 2, "more than 2 ({}) min_size(s) are provided for ranges".format(len(min_size)) - - logger = logging.getLogger(__name__) - tfm_gens = [] - if is_train: - tfm_gens.append(T.RandomFlip()) - tfm_gens.append(T.ResizeShortestEdge(min_size, max_size, sample_style)) - if is_train: - logger.info("TransformGens used in training: " + str(tfm_gens)) - return tfm_gens - - -class DetrDatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by DETR. - The callable currently does the following: - 1. Read the image from "file_name" - 2. Applies geometric transforms to the image and annotation - 3. Find and applies suitable cropping to the image and annotation - 4. Prepare image and annotation to Tensors - """ - - def __init__(self, cfg, is_train=True): - if cfg.INPUT.CROP.ENABLED and is_train: - self.crop_gen = [ - T.ResizeShortestEdge([400, 500, 600], sample_style="choice"), - T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE), - ] - else: - self.crop_gen = None - - self.mask_on = cfg.MODEL.MASK_ON - self.tfm_gens = build_transform_gen(cfg, is_train) - logging.getLogger(__name__).info( - "Full TransformGens used in training: {}, crop: {}".format(str(self.tfm_gens), str(self.crop_gen)) - ) - - self.img_format = cfg.INPUT.FORMAT - self.is_train = is_train - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.img_format) - utils.check_image_size(dataset_dict, image) - - if self.crop_gen is None: - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - else: - if np.random.rand() > 0.5: - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - else: - image, transforms = T.apply_transform_gens( - self.tfm_gens[:-1] + self.crop_gen + self.tfm_gens[-1:], image - ) - - image_shape = image.shape[:2] # h, w - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - return dataset_dict - - if "annotations" in dataset_dict: - # USER: Modify this if you want to keep them for some reason. - for anno in dataset_dict["annotations"]: - if not self.mask_on: - anno.pop("segmentation", None) - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations(obj, transforms, image_shape) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = utils.annotations_to_instances(annos, image_shape) - dataset_dict["instances"] = utils.filter_empty_instances(instances) - return dataset_dict \ No newline at end of file diff --git a/spaces/taesiri/DeticChatGPT/detic/modeling/backbone/swintransformer.py b/spaces/taesiri/DeticChatGPT/detic/modeling/backbone/swintransformer.py deleted file mode 100644 index 21cabb37dd87a443e27eeb805f9739bef86540bf..0000000000000000000000000000000000000000 --- a/spaces/taesiri/DeticChatGPT/detic/modeling/backbone/swintransformer.py +++ /dev/null @@ -1,750 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Xingyi Zhou from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py - - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from detectron2.layers import ShapeSpec -from detectron2.modeling.backbone.backbone import Backbone -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.modeling.backbone.fpn import FPN - -from centernet.modeling.backbone.fpn_p5 import LastLevelP6P7_P5 -from centernet.modeling.backbone.bifpn import BiFPN -# from .checkpoint import load_checkpoint - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """ Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(Backbone): - """ Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - self._freeze_stages() - self._out_features = ['swin{}'.format(i) for i in self.out_indices] - self._out_feature_channels = { - 'swin{}'.format(i): self.embed_dim * 2 ** i for i in self.out_indices - } - self._out_feature_strides = { - 'swin{}'.format(i): 2 ** (i + 2) for i in self.out_indices - } - self._size_devisibility = 32 - - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - if isinstance(pretrained, str): - self.apply(_init_weights) - # load_checkpoint(self, pretrained, strict=False) - elif pretrained is None: - self.apply(_init_weights) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic') - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - # outs = [] - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - # outs.append(out) - outs['swin{}'.format(i)] = out - - return outs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - -size2config = { - 'T': { - 'window_size': 7, - 'embed_dim': 96, - 'depth': [2, 2, 6, 2], - 'num_heads': [3, 6, 12, 24], - 'drop_path_rate': 0.2, - 'pretrained': 'models/swin_tiny_patch4_window7_224.pth' - }, - 'S': { - 'window_size': 7, - 'embed_dim': 96, - 'depth': [2, 2, 18, 2], - 'num_heads': [3, 6, 12, 24], - 'drop_path_rate': 0.2, - 'pretrained': 'models/swin_small_patch4_window7_224.pth' - }, - 'B': { - 'window_size': 7, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window7_224.pth' - }, - 'B-22k': { - 'window_size': 7, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window7_224_22k.pth' - }, - 'B-22k-384': { - 'window_size': 12, - 'embed_dim': 128, - 'depth': [2, 2, 18, 2], - 'num_heads': [4, 8, 16, 32], - 'drop_path_rate': 0.3, - 'pretrained': 'models/swin_base_patch4_window12_384_22k.pth' - }, - 'L-22k': { - 'window_size': 7, - 'embed_dim': 192, - 'depth': [2, 2, 18, 2], - 'num_heads': [6, 12, 24, 48], - 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear - 'pretrained': 'models/swin_large_patch4_window7_224_22k.pth' - }, - 'L-22k-384': { - 'window_size': 12, - 'embed_dim': 192, - 'depth': [2, 2, 18, 2], - 'num_heads': [6, 12, 24, 48], - 'drop_path_rate': 0.3, # TODO (xingyi): this is unclear - 'pretrained': 'models/swin_large_patch4_window12_384_22k.pth' - } -} - -@BACKBONE_REGISTRY.register() -def build_swintransformer_backbone(cfg, input_shape): - """ - """ - config = size2config[cfg.MODEL.SWIN.SIZE] - out_indices = cfg.MODEL.SWIN.OUT_FEATURES - model = SwinTransformer( - embed_dim=config['embed_dim'], - window_size=config['window_size'], - depths=config['depth'], - num_heads=config['num_heads'], - drop_path_rate=config['drop_path_rate'], - out_indices=out_indices, - frozen_stages=-1, - use_checkpoint=cfg.MODEL.SWIN.USE_CHECKPOINT - ) - # print('Initializing', config['pretrained']) - model.init_weights(config['pretrained']) - return model - - -@BACKBONE_REGISTRY.register() -def build_swintransformer_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - """ - bottom_up = build_swintransformer_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7_P5(out_channels, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_swintransformer_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - """ - bottom_up = build_swintransformer_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - backbone = BiFPN( - cfg=cfg, - bottom_up=bottom_up, - in_features=in_features, - out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS, - norm=cfg.MODEL.BIFPN.NORM, - num_levels=cfg.MODEL.BIFPN.NUM_LEVELS, - num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN, - separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV, - ) - return backbone \ No newline at end of file diff --git a/spaces/teragron/TinyStories/test_all.py b/spaces/teragron/TinyStories/test_all.py deleted file mode 100644 index a4d09760ac3c8d6cdb8dd48a69acbb94a1b5c290..0000000000000000000000000000000000000000 --- a/spaces/teragron/TinyStories/test_all.py +++ /dev/null @@ -1,89 +0,0 @@ -""" -Run simply with -$ pytest -""" -import os -import pytest # pip install pytest -import requests -import subprocess - - -import torch -from model import ModelArgs, Transformer -from tokenizer import Tokenizer - -# ----------------------------------------------------------------------------- -# test utilities - -test_ckpt_dir = "test" - -def download_file(url, filename): - print(f"Downloading {url} to {filename}") - response = requests.get(url, stream=True) - response.raise_for_status() # Raise an HTTPError on bad status code - with open(filename, 'wb') as file: - for chunk in response.iter_content(chunk_size=8192): - file.write(chunk) - -def attempt_download_files(): - os.makedirs(test_ckpt_dir, exist_ok=True) - root_url = "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories260K" - need = ["stories260K.bin", "stories260K.pt", "tok512.bin", "tok512.model"] - for file in need: - url = root_url + '/' + file #os.path.join inserts \\ on windows - filename = os.path.join(test_ckpt_dir, file) - if not os.path.exists(filename): - download_file(url, filename) - -expected_stdout = b'Once upon a time, there was a little girl named Lily. She loved to play outside in the park. One day, she saw a big, red ball. She wanted to play with it, but it was too high.\nLily\'s mom said, "Lily, let\'s go to the park." Lily was sad and didn\'t know what to do. She said, "I want to play with your ball, but I can\'t find it."\nLily was sad and didn\'t know what to do. She said, "I\'m sorry, Lily. I didn\'t know what to do."\nLily didn\'t want to help her mom, so she' - -# ----------------------------------------------------------------------------- -# actual tests - -def test_runc(): - """ Forwards a model against a known-good desired outcome in run.c for 200 steps""" - attempt_download_files() - - model_path = os.path.join(test_ckpt_dir, "stories260K.bin") - tokenizer_path = os.path.join(test_ckpt_dir, "tok512.bin") - command = ["./run", model_path, "-z", tokenizer_path, "-t", "0.0", "-n", "200"] - with open('err.txt', mode='wb') as fe: - with open('stdout.txt', mode='wb') as fo: - proc = subprocess.Popen(command, stdout=fo, stderr=fe) #pipe in windows terminal does funny things like replacing \n with \r\n - proc.wait() - - with open('stdout.txt', mode='r') as f: - stdout = f.read() - # strip the very last \n that is added by run.c for aesthetic reasons - stdout = stdout[:-1].encode('ascii') - - assert stdout == expected_stdout - -def test_python(): - """ Forwards a model against a known-good desired outcome in sample.py for 200 steps""" - attempt_download_files() - - device = "cpu" # stories260K is small enough to just breeze through it on CPU - checkpoint = os.path.join(test_ckpt_dir, "stories260K.pt") - checkpoint_dict = torch.load(checkpoint, map_location=device) - gptconf = ModelArgs(**checkpoint_dict['model_args']) - model = Transformer(gptconf) - state_dict = checkpoint_dict['model'] - unwanted_prefix = '_orig_mod.' - for k,v in list(state_dict.items()): - if k.startswith(unwanted_prefix): - state_dict[k[len(unwanted_prefix):]] = state_dict.pop(k) - model.load_state_dict(state_dict, strict=False) - model.eval() - model.to(device) - x = torch.tensor([[1]], dtype=torch.long, device=device) # 1 is BOS - with torch.inference_mode(): - y = model.generate(x, max_new_tokens=200, temperature=0.0) - pt_tokens = y[0].tolist() - - tokenizer_model = os.path.join(test_ckpt_dir, "tok512.model") - enc = Tokenizer(tokenizer_model=tokenizer_model) - text = enc.decode(pt_tokens) - text = text.encode('ascii') # turn into bytes - - assert text == expected_stdout diff --git a/spaces/terfces0erbo/CollegeProjectV2/Audi Code Calculator Auz1z1 120 [HOT].md b/spaces/terfces0erbo/CollegeProjectV2/Audi Code Calculator Auz1z1 120 [HOT].md deleted file mode 100644 index 7e18828cb5a810262b1569a126424dd659bd34b9..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Audi Code Calculator Auz1z1 120 [HOT].md +++ /dev/null @@ -1,50 +0,0 @@ -

        audi code calculator auz1z1 120


        Download Zip 🗹 https://bytlly.com/2uGlEF



        - -AUD, INDICATION DES REGIONS DE BELGIQUE, FRANCE, GARAGE DEDICATE SUR PRINTEMPS, 021 370.3932.027MAY, 3560.2.0.0-00, 23.11.2012, 17:15:03, 0, 1, 1,, 0, 1, 1,, 0, 0, 1, 0, 0,, AUTOMATIC,, CODES AT UNIT DEKAD, BCD, 39-9999, 0, 7, 0, SAME,, 39-9999,, 7, 0, 0, 1,, - -A: - -You could use the * to indicate a wildcard. To match a single character, you could use?: - -[0-9]*\.[0-9]*-[0-9]*\.[0-9]*-[0-9]*[0-9]\ - -The basic rule is that a literal. followed by another literal. must match anything. The - is a literal. - -To match anything except a., the regex would be: - -[^.][0-9]*\.[0-9]*-[0-9]*\.[0-9]*-[0-9]*[0-9]\ - -The basics of regex is on Wikipedia. - -Q: - -Bash Script that says what directory is it in - -I want a bash script to output the directory it's in. - -Currently I have this: - -DONE="" - -PWD=`pwd` - -(cd $PWD/../../../etc/../../etc/cntrl.d/ && \ - -(for file in *; do echo "$file"; done)); echo -e "$DONE" - -What I'd like the script to output is like: - -../../../etc/../../etc/cntrl.d/somelabel.txt - -You can use the tilde expansion: - -DONE="$(cd `dirname "$PWD"` && \ - - for file in *; do echo "$file"; done)" - -echo -e "$DONE" - -You can use backticks in case you 4fefd39f24
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/Kniffel Blatt Zum Ausdrucken Pdf Download _HOT_.md b/spaces/terfces0erbo/CollegeProjectV2/Kniffel Blatt Zum Ausdrucken Pdf Download _HOT_.md deleted file mode 100644 index 1c0594340d8d655a6fd74459c76d22b245798257..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Kniffel Blatt Zum Ausdrucken Pdf Download _HOT_.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Kniffel Blatt Zum Ausdrucken Pdf Download


        Download File ✑ ✑ ✑ https://bytlly.com/2uGk7A



        -
        -Download 1 Download 2 ... Kniffel Vorlagen Din A4 Pdf / Kniffel blatt zum ausdrucken pdf downloadbooksks · the history of grid autosport ios ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/thesven/image-to-story/README.md b/spaces/thesven/image-to-story/README.md deleted file mode 100644 index f2f23329763654531f24e790dbafdc976d143236..0000000000000000000000000000000000000000 --- a/spaces/thesven/image-to-story/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image To Story -emoji: 📚 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/thibobo78/stabilityai-stable-diffusion-2-1/app.py b/spaces/thibobo78/stabilityai-stable-diffusion-2-1/app.py deleted file mode 100644 index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000 --- a/spaces/thibobo78/stabilityai-stable-diffusion-2-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch() \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Beats Audio With HP Triple Bass Reflex Subwoofer Driver.md b/spaces/tialenAdioni/chat-gpt-api/logs/Beats Audio With HP Triple Bass Reflex Subwoofer Driver.md deleted file mode 100644 index c89663338cf6151c5543fad9376d777f25632810..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Beats Audio With HP Triple Bass Reflex Subwoofer Driver.md +++ /dev/null @@ -1,30 +0,0 @@ - -Here is the content I generated: - -

        How to Enhance Your Audio Experience with Beats Audio and HP Triple Bass Reflex Subwoofer

        -

        If you are looking for a way to enjoy a rich and immersive sound quality on your HP laptop or desktop, you might want to check out the Beats Audio and HP Triple Bass Reflex Subwoofer features. These are two of the most advanced audio technologies that HP offers, and they can make a big difference in your listening experience.

        -

        Beats Audio With HP Triple Bass Reflex Subwoofer Driver


        Download Zip ->->->-> https://urlcod.com/2uK3DV



        -

        In this article, we will explain what Beats Audio and HP Triple Bass Reflex Subwoofer are, how they work, and how you can use them to customize your sound settings. We will also provide some tips on how to troubleshoot some common issues that might arise with these features.

        -

        What is Beats Audio?

        -

        Beats Audio is an enhanced audio controller that provides a deep, controlled bass while maintaining a clear sound. It was developed by HP in collaboration with Beats by Dr. Dre, a leading brand of headphones and speakers. Beats Audio uses a sophisticated algorithm to optimize the sound quality for different types of audio devices, such as music, movies, and video games.

        -

        Beats Audio is available on select HP laptops and desktops, and it comes with a dedicated control panel that allows you to adjust various settings, such as equalizer, volume, and speaker configuration. You can also choose from different preset modes, such as Music, Movie, or Voice, depending on what you are listening to.

        -

        What is HP Triple Bass Reflex Subwoofer?

        -

        HP Triple Bass Reflex Subwoofer is a feature that enhances the low-frequency response of your laptop or desktop speakers. It consists of three subwoofers that are built into the chassis of your device, creating a powerful and realistic bass effect. HP Triple Bass Reflex Subwoofer works in conjunction with Beats Audio to deliver a balanced and dynamic sound quality.

        -

        HP Triple Bass Reflex Subwoofer is available on select HP laptops and desktops, such as the Pavilion dv6 series. You can access the settings for this feature through the Beats Audio control panel, where you can adjust the bass level and enable or disable the subwoofer.

        -

        How to Use Beats Audio and HP Triple Bass Reflex Subwoofer?

        -

        To use Beats Audio and HP Triple Bass Reflex Subwoofer, you need to have a compatible HP laptop or desktop with these features installed. You also need to have the latest audio driver for your device, which you can download from the HP website or through Windows Update.

        -

        -

        Once you have the driver installed, you can open the Beats Audio control panel by clicking on the Start menu, Control Panel, and Beats Audio Control Panel. Alternatively, you can search for Beats Audio in the Start menu search field.

        -

        In the Beats Audio control panel, you can configure the playback and recording settings for your audio devices. Here are some of the options you can choose from:

        -
          -
        • Playback: This tab allows you to adjust the settings for the integrated speakers and headphones, a line in headset, and headphones that are plugged into the Real Time Communication (RTC) jack. You can change the volume, balance, equalizer, speaker configuration, and preset mode for each device.
        • -
        • Recording: This tab allows you to adjust the settings for a headset microphone, the integrated microphone, line in devices, and stereo mix devices. You can change the volume, balance, noise cancellation, boost level, and default device for each device.
        • -
        • Preferences: This tab allows you to customize the appearance and behavior of the Beats Audio control panel. You can change the background theme, text and slider color, opacity level, and system tray icon.
        • -
        -

        How to Troubleshoot Beats Audio and HP Triple Bass Reflex Subwoofer?

        -

        Sometimes, you might encounter some issues with Beats Audio and HP Triple Bass Reflex Subwoofer that affect your sound quality or performance. Here are some of the common problems and their possible solutions:

        -
          -
        • No sound or low sound: This could be caused by a faulty audio driver, a muted or low volume setting, a disabled audio device, or a loose or damaged cable or jack. To fix this issue, you can try updating or reinstalling your audio driver from the HP website or Windows Update; checking your volume settings in Windows and in the Beats Audio control panel; enabling your audio device in Windows Sound settings; and inspecting your cables and jacks for any damage or dirt.
        • -
        • No bass or low bass: This could be caused by a

          7196e7f11a
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Comptia Network Todd Lammle Pdf Download.md b/spaces/tialenAdioni/chat-gpt-api/logs/Comptia Network Todd Lammle Pdf Download.md deleted file mode 100644 index b6f337de2cb79db55b2e2d7eedfd32170bcbd742..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Comptia Network Todd Lammle Pdf Download.md +++ /dev/null @@ -1,46 +0,0 @@ -
          -

          How to Download CompTIA Network+ Study Guide by Todd Lammle for Free

          -

          If you are preparing for the CompTIA Network+ certification exam, you might be looking for a reliable and comprehensive study guide to help you master the concepts and skills required for the test. One of the most popular and recommended books for this purpose is CompTIA Network+ Study Guide by Todd Lammle, a network expert and bestselling author with over 30 years of experience in the field.

          -

          In this article, we will show you how to download CompTIA Network+ Study Guide by Todd Lammle for free in PDF format, so you can access it anytime and anywhere on your device. We will also give you a brief overview of what the book covers and why it is a valuable resource for your exam preparation.

          -

          Comptia Network Todd Lammle Pdf Download


          Downloadhttps://urlcod.com/2uK5RN



          - -

          What is CompTIA Network+ Study Guide by Todd Lammle?

          -

          CompTIA Network+ Study Guide by Todd Lammle is a comprehensive and updated book that covers all the topics and objectives of the CompTIA Network+ exam N10-008, which is the latest version of the certification as of 2021. The book provides clear and concise explanations of networking fundamentals, implementations, operations, security, and troubleshooting, with real-world examples and scenarios. It also includes review questions, practice exams, flashcards, and a glossary of key terms to help you test your knowledge and reinforce your learning.

          -

          The book is divided into 15 chapters, each covering a specific domain of the exam. The chapters are as follows:

          -
            -
          • Introduction to Networks
          • -
          • The Open Systems Interconnection Specifications
          • -
          • Networking Topologies, Connectors, and Wiring Standards
          • -
          • The Current Ethernet Specifications
          • -
          • Networking Devices
          • -
          • Introduction to the Internet Protocol
          • -
          • IP Addressing
          • -
          • IP Subnetting, Troubleshooting IP, and Introduction to NAT
          • -
          • Introduction to IP Routing
          • -
          • Routing Protocols
          • -
          • Switching and Virtual LANs
          • -
          • Wireless Networking
          • -
          • Authentication and Access Control
          • -
          • Network Threats and Mitigation
          • -
          • Physical Security and Risk
          • -
          - -

          Why should you download CompTIA Network+ Study Guide by Todd Lammle?

          -

          There are many reasons why you should download CompTIA Network+ Study Guide by Todd Lammle for your exam preparation. Here are some of them:

          -
            -
          • The book is written by an experienced and certified network professional who knows what it takes to pass the exam and succeed in the industry.
          • -
          • The book is aligned with the latest version of the exam objectives and reflects the current trends and technologies in networking.
          • -
          • The book is easy to read and understand, with clear diagrams, tables, figures, and screenshots to illustrate the concepts.
          • -
          • The book is comprehensive and thorough, covering all the topics and skills you need to know for the exam.
          • -
          • The book is practical and relevant, with real-world examples and scenarios that show you how to apply your knowledge in different situations.
          • -
          • The book is interactive and engaging, with review questions, practice exams, flashcards, and a glossary of key terms that help you test your knowledge and reinforce your learning.
          • -
          • The book is accessible and convenient, as you can download it for free in PDF format and read it on any device.
          • -
          - -

          How to download CompTIA Network+ Study Guide by Todd Lammle for free?

          -

          If you want to download CompTIA Network+ Study Guide by Todd Lammle for free in PDF format, you have a few options. Here are some of them:

          - -
            -
          1. You can visit archive.org, which is a website that provides free access to millions of books, movies, music, software, and more. You can search for CompTIA Network+ Study Guide by Todd Lammle

            cec2833e83
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Crack PC Software Direct Download Links A Bad Idea for Your PC and Your Wallet.md b/spaces/tialenAdioni/chat-gpt-api/logs/Crack PC Software Direct Download Links A Bad Idea for Your PC and Your Wallet.md deleted file mode 100644 index ecbb24444be1d16e3a51526d02f56191010a09ea..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Crack PC Software Direct Download Links A Bad Idea for Your PC and Your Wallet.md +++ /dev/null @@ -1,24 +0,0 @@ -
            -

            Crack PC Software Direct Download Links: What You Need to Know

            -

            If you are looking for a way to get crack PC software direct download links, you may be tempted by various websites that claim to offer them for free. Crack PC software are modified versions of the original software that bypass the license verification and activation process. By using crack PC software, you can use the software without any limitations or restrictions.

            -

            However, before you click on any crack PC software direct download link, you should be aware of the risks and consequences of doing so. In this article, we will tell you what you need to know about crack PC software direct download links and why you should avoid them.

            -

            crack pc software direct download links


            Downloadhttps://urlcod.com/2uK9Dn



            -

            The Risks of Crack PC Software Direct Download Links

            -

            Crack PC software direct download links are not only illegal but also risky. By using them, you are exposing yourself to various dangers, such as:

            -
              -
            • Legal trouble: By using crack PC software, you are violating the terms and conditions of the software and infringing the intellectual property rights of the software developers. You may face legal actions or penalties from the software developers or authorities.
            • -
            • Viruses and malware: Many websites that offer crack PC software direct download links are not trustworthy or secure. They may contain viruses, malware, or other threats that can infect your computer or data. You may lose your important files, personal information, or money.
            • -
            • Poor performance and quality: Crack PC software are not reliable or stable. They may contain errors, bugs, or glitches that can affect the performance and quality of the software. You may experience crashes, freezes, or errors while using the software.
            • -
            • No updates or support: Crack PC software are not eligible for updates or support from the software developers. You may miss out on important features, improvements, or fixes that are available for the original software. You may also not be able to get help or assistance if you encounter any problems with the software.
            • -
            -

            The Alternatives to Crack PC Software Direct Download Links

            -

            If you want to use PC software without paying for them, there are better and safer alternatives than crack PC software direct download links. Some of them are:

            -
              -
            • Free or open source software: There are many free or open source software that offer similar or better features and functions than crack PC software. You can use them legally and safely without any limitations or restrictions. You can also get updates and support from the developers or communities.
            • -
            • Trial or demo versions: Many PC software offer trial or demo versions that allow you to use them for a limited time or with limited features. You can use them to test the software before buying them. You can also get updates and support from the developers.
            • -
            • Discounts or coupons: Many PC software offer discounts or coupons that allow you to buy them at a lower price. You can look for them online or on the official websites of the software developers. You can also get updates and support from the developers.
            • -
            -

            The Bottom Line

            -

            Crack PC software direct download links are not worth the risk or trouble. They are illegal and risky, and they can harm your computer or data. They also do not offer the best performance and quality of the software. You should avoid them and look for better and safer alternatives instead. Remember, always use legal and safe software for your PC needs.

            ddb901b051
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Honestech TVR 2.5 for Free with Product Key The Benefits and Advantages of the Software.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Honestech TVR 2.5 for Free with Product Key The Benefits and Advantages of the Software.md deleted file mode 100644 index 14488e338cfbf0e60a20be8b42de220291fd9b21..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Honestech TVR 2.5 for Free with Product Key The Benefits and Advantages of the Software.md +++ /dev/null @@ -1,111 +0,0 @@ - -

            How to Get Honestech TVR 2.5 Product Key for Free

            -

            Honestech TVR 2.5 is a software that allows you to capture and edit video from various sources such as TV, VCR, camcorder, or DVD player. It also lets you burn your videos to DVD or CD, or upload them to YouTube or Facebook. Honestech TVR 2.5 is compatible with Windows XP, Vista, 7, 8, and 10.

            -

            honestech tvr 2.5 product key free download


            Download Filehttps://urlcod.com/2uK3S5



            -

            If you are looking for a way to get Honestech TVR 2.5 product key for free, you have come to the right place. In this article, we will show you how to download and install Honestech TVR 2.5 without paying anything. We will also provide you with a working product key that you can use to activate the software.

            -

            Step 1: Download Honestech TVR 2.5

            -

            The first step is to download Honestech TVR 2.5 from a reliable source. You can use the link below to download the software for free.

            -Download Honestech TVR 2.5 -

            Once you click on the link, you will be redirected to the official website of Honestech. There, you will see a button that says "Download". Click on it and save the file on your computer.

            -

            Step 2: Install Honestech TVR 2.5

            -

            The next step is to install Honestech TVR 2.5 on your computer. To do that, follow these steps:

            -

            honestech tvr 2.5 serial number free download
            -honestech tvr 2.5 activation code free download
            -honestech tvr 2.5 crack free download
            -honestech tvr 2.5 license key free download
            -honestech tvr 2.5 full version free download
            -honestech tvr 2.5 software free download
            -honestech tvr 2.5 setup free download
            -honestech tvr 2.5 driver free download
            -honestech tvr 2.5 windows 10 free download
            -honestech tvr 2.5 windows 7 free download
            -how to get honestech tvr 2.5 product key for free
            -how to install honestech tvr 2.5 without product key
            -how to use honestech tvr 2.5 without product key
            -how to activate honestech tvr 2.5 without product key
            -how to crack honestech tvr 2.5 without product key
            -how to register honestech tvr 2.5 without product key
            -how to update honestech tvr 2.5 without product key
            -how to uninstall honestech tvr 2.5 without product key
            -how to fix honestech tvr 2.5 product key error
            -how to recover honestech tvr 2.5 product key
            -where to find honestech tvr 2.5 product key
            -where to buy honestech tvr 2.5 product key
            -where to enter honestech tvr 2.5 product key
            -where to download honestech tvr 2.5 product key generator
            -where to download honestech tvr 2.5 product key finder
            -what is the best alternative to honestech tvr 2.5 product key
            -what is the latest version of honestech tvr 2.5 product key
            -what is the difference between honestech tvr 2.5 and honestech vhs to dvd product key
            -what is the price of honestech tvr 2.5 product key
            -what is the validity of honestech tvr 2.5 product key
            -why do i need honestech tvr 2.5 product key
            -why is my honestech tvr 2.5 product key not working
            -why is my honestech tvr 2.5 product key expired
            -why is my honestech tvr 2.5 product key invalid
            -why is my honestech tvr 2.5 product key blocked
            -who can help me with honestech tvr 2.5 product key issues
            -who can give me a free honestech tvr 2.5 product key
            -who can sell me a cheap honestech tvr 2.5 product key
            -who can share their honestech tvr 2.5 product key with me
            -who can verify my honestech tvr 2.5 product key online
            -when will i receive my honestech tvr 2.5 product key after purchase
            -when will i need to renew my honestech tvr 2.5 product key subscription
            -when will i get an update for my honestech tvr 2.5 product key software
            -when will i lose access to my honestech tvr 2.5 product key features
            -when did i buy my honestech tvr 2.5 product key last time
            -when did i activate my honestech tvr 2.5 product key first time
            -when did i use my honestech tvr 2.5 product key last time
            -when did i change my honestech tvr 2.5 product key password

            -
              -
            1. Locate the downloaded file and double-click on it.
            2. -
            3. Follow the instructions on the screen to complete the installation process.
            4. -
            5. When prompted, enter the product key that we will provide you later in this article.
            6. -
            7. Restart your computer after the installation is finished.
            8. -
            -

            Step 3: Activate Honestech TVR 2.5

            -

            The final step is to activate Honestech TVR 2.5 using the product key that we have provided below. To do that, follow these steps:

            -
              -
            1. Open Honestech TVR 2.5 on your computer.
            2. -
            3. Click on the "Help" menu and select "Enter Product Key".
            4. -
            5. Enter the product key that we have given you below and click "OK".
            6. -
            7. You should see a message that says "Your product has been activated successfully".
            8. -
            -

            Congratulations! You have successfully installed and activated Honestech TVR 2.5 for free.

            -

            Honestech TVR 2.5 Product Key

            -

            Here is the product key that you can use to activate Honestech TVR 2.5:

            -
            HTTV25-8GA6Z-BQZ8W-88Z8E-68W88
            -
            -

            Please note that this product key is only for educational purposes and we do not encourage any illegal use of the software. If you like Honestech TVR 2.5 and want to support the developers, please buy a genuine license from their website.

            - -

            How to Use Honestech TVR 2.5

            -

            Now that you have installed and activated Honestech TVR 2.5, you can start using it to capture and edit your videos. Here are some basic steps to help you get started:

            -

            How to Capture Video

            -

            To capture video from your TV, VCR, camcorder, or DVD player, you need to connect them to your computer using a video capture device. You can use a USB video capture device or a PCI video capture card. Make sure you have the drivers installed for your device before you proceed.

            -

            Once you have connected your device, follow these steps:

            -
              -
            1. Open Honestech TVR 2.5 on your computer.
            2. -
            3. Click on the "Capture" button on the main screen.
            4. -
            5. Select your video capture device from the drop-down menu.
            6. -
            7. Select the video source (TV, VCR, etc.) and the video format (NTSC, PAL, etc.) from the options.
            8. -
            9. Adjust the brightness, contrast, hue, and saturation of the video if needed.
            10. -
            11. Click on the "Record" button to start capturing the video.
            12. -
            13. Click on the "Stop" button when you are done.
            14. -
            15. The captured video will be saved in the folder that you have specified in the settings.
            16. -
            -

            How to Edit Video

            -

            To edit your captured video or any other video file on your computer, follow these steps:

            -
              -
            1. Open Honestech TVR 2.5 on your computer.
            2. -
            3. Click on the "Edit" button on the main screen.
            4. -
            5. Click on the "Import" button and browse for the video file that you want to edit.
            6. -
            7. The video will appear on the timeline at the bottom of the screen.
            8. -
            9. You can use the tools on the left side of the screen to trim, split, crop, rotate, add effects, transitions, text, and audio to your video.
            10. -
            11. You can preview your edited video on the right side of the screen.
            12. -
            13. When you are satisfied with your editing, click on the "Export" button and choose the output format and quality that you want.
            14. -
            15. The edited video will be saved in the folder that you have specified in the settings.
            16. -
            -

            Conclusion

            -

            Honestech TVR 2.5 is a powerful and easy-to-use software that allows you to capture and edit videos from various sources. In this article, we have shown you how to get Honestech TVR 2.5 product key for free and how to use it. We hope you found this article helpful and informative. If you have any questions or feedback, please leave a comment below. Thank you for reading!

            e753bf7129
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blue WhatsApp Plus 9.11 APK Download The Best Alternative to WhatsApp.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blue WhatsApp Plus 9.11 APK Download The Best Alternative to WhatsApp.md deleted file mode 100644 index a411423137400c44e83709bd2c57f948c96b6749..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blue WhatsApp Plus 9.11 APK Download The Best Alternative to WhatsApp.md +++ /dev/null @@ -1,113 +0,0 @@ - -

            Blue WhatsApp Plus 9.11 APK Download: A Powerful Alternative to the Original WhatsApp

            -

            If you're sick of the same old WhatsApp, you've probably heard about the Blue WhatsApp Plus APK. This latest app is being developed by some of the most popular WhatsApp modders. It is a powerful alternative to the original WhatsApp and has a ton of amazing features. To download Blue WhatsApp Plus, you can simply follow the steps below.

            -

            blue whatsapp plus 9.11 apk download


            Download File ––– https://bltlly.com/2uOsq5



            -

            What is Blue WhatsApp Plus?

            -

            Blue WhatsApp Plus is a modified version of the official WhatsApp app that allows you to customize your chats, themes, privacy settings, and more. It is not available on the Google Play Store, so you have to download it from a third-party source. However, it is completely safe and secure to use.

            -

            Features of Blue WhatsApp Plus

            -

            Some of the features that make Blue WhatsApp Plus stand out from the original WhatsApp are:

            -
              -
            • You can change the color and theme of your app according to your preference.
            • -
            • You can hide your online status, last seen, blue ticks, typing indicator, and more.
            • -
            • You can send unlimited messages, images, videos, audio files, documents, and stickers without any restrictions.
            • -
            • You can backup and restore your chats and media files easily.
            • -
            • You can use multiple accounts on the same device.
            • -
            • You can lock your app with a password or fingerprint.
            • -
            • You can use various fonts and emojis to spice up your conversations.
            • -
            • You can make video calls with up to 8 people at once.
            • -
            • You can disable voice calls if you don't want to be disturbed.
            • -
            • You can use anti-revoke feature to prevent others from deleting messages for you.
            • -
            -

            How to download and install Blue WhatsApp Plus

            -

            To download and install Blue WhatsApp Plus on your Android device, you need to follow these steps:

            -
              -
            1. First, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
            2. -
            3. Next, you need to download the Blue WhatsApp Plus APK file from a trusted source. You can use this link to get the latest version of the app.
            4. -
            5. After downloading the APK file, locate it in your file manager and tap on it to start the installation process.
            6. -
            7. Follow the instructions on the screen and grant the necessary permissions to the app.
            8. -
            9. Once the installation is complete, open the app and verify your phone number.
            10. -
            11. Now you can enjoy all the features of Blue WhatsApp Plus on your device.
            12. -
            -

            Why use Blue WhatsApp Plus?

            -

            Blue WhatsApp Plus is a great app for those who want more control and customization over their WhatsApp experience. It offers many benefits that the original WhatsApp does not provide. However, it also has some drawbacks that you should be aware of before using it.

            -

            Advantages of Blue WhatsApp Plus

            -

            Some of the advantages of using Blue WhatsApp Plus are:

            -
              -
            • You can personalize your app with different colors and themes.
            • -
            • You can enhance your privacy and security with various options.
            • -
            • You can send and receive more media files without any limits or compression.
            • -
            • You can backup and restore your data easily.
            • -
            • You can use multiple accounts on one device.
            • -
            • You can access more fonts and emojis for your chats.
            • -
            • You can make group video calls with more people.
            • -
            -

            Disadvantages of Blue WhatsApp Plus

            -

            Some of the disadvantages of using Blue WhatsApp Plus are:

            -

            How to install blue whatsapp plus 9.11 on android
            -Blue whatsapp plus 9.11 latest version free download
            -Blue whatsapp plus 9.11 features and benefits
            -Blue whatsapp plus 9.11 vs original whatsapp comparison
            -Blue whatsapp plus 9.11 mod apk download for pc
            -Blue whatsapp plus 9.11 update issues and solutions
            -Blue whatsapp plus 9.11 review and rating
            -Blue whatsapp plus 9.11 download link from official website
            -Blue whatsapp plus 9.11 alternative apps and clones
            -Blue whatsapp plus 9.11 backup and restore guide
            -Blue whatsapp plus 9.11 privacy and security tips
            -Blue whatsapp plus 9.11 customization and themes
            -Blue whatsapp plus 9.11 support and contact information
            -Blue whatsapp plus 9.11 compatibility and requirements
            -Blue whatsapp plus 9.11 faq and troubleshooting
            -Blue whatsapp plus 9.11 premium apk download with license key
            -Blue whatsapp plus 9.11 anti-ban and anti-revoke tricks
            -Blue whatsapp plus 9.11 hidden features and hacks
            -Blue whatsapp plus 9.11 group chat and video call options
            -Blue whatsapp plus 9.11 emoji and sticker packs
            -Blue whatsapp plus 9.11 status and story saver
            -Blue whatsapp plus 9.11 online and offline mode
            -Blue whatsapp plus 9.11 notifications and sounds settings
            -Blue whatsapp plus 9.11 file sharing and media quality
            -Blue whatsapp plus 9.11 dark mode and night mode
            -Blue whatsapp plus 9.11 app lock and fingerprint unlock
            -Blue whatsapp plus 9.11 dual account and multi device support
            -Blue whatsapp plus 9.11 auto reply and scheduled messages
            -Blue whatsapp plus 9.11 delete for everyone and recall messages
            -Blue whatsapp plus 9.11 message translation and language settings
            -How to uninstall blue whatsapp plus 9.11 from android
            -How to downgrade blue whatsapp plus 9.11 to older version
            -How to upgrade blue whatsapp plus 9.11 to newer version
            -How to transfer blue whatsapp plus 9.11 data to another phone
            -How to use blue whatsapp plus 9.11 on bluestacks emulator[^2^]
            -How to fix blue whatsapp plus 9.11 not working or crashing problem
            -How to enable blue whatsapp plus 9.11 permissions and accessibility
            -How to disable blue whatsapp plus 9.11 ads and pop-ups
            -How to verify blue whatsapp plus 9.11 phone number and account
            -How to change blue whatsapp plus 9.11 font size and style

            -
              -
            • You may face some compatibility issues with the original WhatsApp and other apps.
            • -
            • You may get banned from WhatsApp if they detect that you are using a modded app.
            • -
            • You may not receive timely updates and bug fixes from the developers.
            • -
            • You may expose your device to malware or viruses if you download the app from an untrusted source.
            • -
            -

            Conclusion

            -

            Blue WhatsApp Plus is a powerful alternative to the original WhatsApp that offers many features and customization options. It is a great app for those who want more control and flexibility over their WhatsApp experience. However, it also has some risks and drawbacks that you should consider before using it. You should always download the app from a trusted source and backup your data regularly. You should also be careful not to violate the terms and conditions of WhatsApp and respect the privacy of others.

            -

            FAQs

            -

            What is the difference between Blue WhatsApp Plus and GBWhatsApp?

            -

            Blue WhatsApp Plus and GBWhatsApp are both popular WhatsApp mods that have similar features and functions. However, they are developed by different teams and have different user interfaces and themes. You can choose the one that suits your preference and device compatibility.

            -

            Is Blue WhatsApp Plus legal?

            -

            Blue WhatsApp Plus is not legal as it violates the terms and conditions of WhatsApp. WhatsApp does not allow any third-party apps or mods to use its services or platform. If you use Blue WhatsApp Plus, you may face legal actions or penalties from WhatsApp.

            -

            Is Blue WhatsApp Plus safe?

            -

            Blue WhatsApp Plus is safe to use as long as you download it from a trusted source and scan it for any malware or viruses. You should also avoid sharing any sensitive or personal information on the app as it may not be encrypted or protected. You should also backup your data regularly in case of any data loss or corruption.

            -

            How can I update Blue WhatsApp Plus?

            -

            To update Blue WhatsApp Plus, you need to check for the latest version of the app on the official website or source. You can then download the APK file and install it on your device. You should also backup your data before updating the app to avoid any data loss or issues.

            -

            How can I avoid getting banned from WhatsApp while using Blue WhatsApp Plus?

            -

            To avoid getting banned from WhatsApp while using Blue WhatsApp Plus, you should follow these tips:

            -
              -
            • Do not use multiple accounts on the same device.
            • -
            • Do not spam or abuse other users with messages or calls.
            • -
            • Do not send or receive any illegal or inappropriate content on the app.
            • -
            • Do not use any features that may harm or annoy other users, such as anti-revoke or disable voice calls.
            • -
            • Do not update the app too frequently or too late.
            • -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Build Your Dream Space City with The Final Earth 2 Mod APK - Free Download for Android.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Build Your Dream Space City with The Final Earth 2 Mod APK - Free Download for Android.md deleted file mode 100644 index e7245b350bfa9f125480226b3354f685e9d1732e..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Build Your Dream Space City with The Final Earth 2 Mod APK - Free Download for Android.md +++ /dev/null @@ -1,129 +0,0 @@ - -

            Download The Final Earth 2 Mod Apk: A Vertical Sci-Fi City Builder Game

            -

            If you are looking for a relaxing, creative, and fun city building game, you should try The Final Earth 2. This game lets you build your own futuristic metropolis in space, with thousands of inhabitants, dozens of buildings, and various resources to manage. And if you want to enjoy the game even more, you can download The Final Earth 2 mod apk, which gives you unlimited resources, money, and access to all features. In this article, we will tell you what The Final Earth 2 is, why you should download the mod apk, how to download it, and some tips and tricks to play the game.

            -

            What is The Final Earth 2?

            -

            The Final Earth 2 is a vertical sci-fi city builder game developed by Florian van Strien. It was released in June 2019 for web browsers, and later for iOS, Android, and Steam platforms. The game has received very positive reviews from players and critics alike, who praised its gameplay, graphics, music, and story.

            -

            download the final earth 2 mod apk


            DOWNLOADhttps://bltlly.com/2uOimI



            -

            The game is set in the year 2142, when Earth is unlivable due to climate change. You are one of the survivors who built a spaceship to escape from the dying planet. You find a small rock in space that becomes your new home. You start by building some farms and houses to provide food and shelter for your people. Then, you research advanced technology and expand your city to a huge metropolis, full of skyscrapers, factories, gardens, and more. You can also explore the universe with rockets and teleporters, and colonize other worlds.

            -

            The game offers a relaxing, creative city building experience with exploration elements and an optional story. You can play in different modes, such as scenarios, free play, or creative mode. You can also customize your city with various settings, such as day/night cycle, weather effects, population growth rate, etc. There are also modding tools that allow you to create your own buildings, scenarios, and worlds.

            -

            Features of The Final Earth 2

            -

            The Final Earth 2 has many features that make it a unique and enjoyable city building game. Here are some of them:

            -

            Build a huge city with thousands of inhabitants

            -

            You can build a vertical city that can reach up to hundreds of floors high. You can design your city as you wish, with different layouts, styles, and themes. You can also manage your population's needs, such as food, water, energy, happiness, education, health, etc. Your population is fully simulated, meaning they have individual names, jobs, skills, preferences, etc. You can follow any citizen and see their daily life in detail.

            -

            Discover over 80 different buildings and technologies

            -

            You can unlock over 80 different buildings as you progress through the game. Each building has its own function and effect on your city. For example, farms produce food, houses provide shelter, schools increase education level, factories produce goods, etc. You can also research various technologies that improve your city's efficiency and quality, such as solar panels, water purification, recycling, etc. You can also upgrade your buildings to make them more efficient and productive.

            -

            Explore the universe and colonize new worlds

            -

            You can build rockets and teleporters to explore the vast universe. You can find and colonize new planets, each with its own environment, resources, and challenges. You can also encounter other civilizations and interact with them. You can trade, cooperate, or compete with them. You can also discover ancient secrets and mysteries hidden in the galaxy.

            -

            Enjoy the relaxing music and graphics

            -

            The game has a relaxing and soothing soundtrack that matches the mood of the game. You can listen to over 20 different songs composed by Stijn Cappetijn. The game also has a minimalist and colorful pixel art style that creates a charming and cozy atmosphere. You can zoom in and out to see the details of your city and the surrounding space.

            -

            Why download The Final Earth 2 mod apk?

            -

            The Final Earth 2 is a free-to-play game that you can enjoy without spending any money. However, if you want to enhance your gaming experience, you can download The Final Earth 2 mod apk, which gives you some advantages and benefits over the original game. Here are some reasons why you should download the mod apk:

            -

            Download The Final Earth 2 Mod APK v1.0.16 (Unlocked) - Apkmody
            -How to Download and Install The Final Earth 2 Mod APK on Android
            -The Final Earth 2 Mod APK - Unlimited Money and Resources
            -The Final Earth 2 Mod APK Review - A Fun and Challenging City Building Game
            -The Final Earth 2 Mod APK - Tips and Tricks to Survive the Apocalypse
            -The Final Earth 2 Mod APK - How to Unlock All Buildings and Upgrades
            -The Final Earth 2 Mod APK - Best Mods and Cheats for The Final Earth 2
            -The Final Earth 2 Mod APK - How to Play The Final Earth 2 on PC
            -The Final Earth 2 Mod APK - How to Backup and Restore Your Game Data
            -The Final Earth 2 Mod APK - How to Solve Common Problems and Errors
            -The Final Earth 2 Mod APK - How to Join and Create Multiplayer Games
            -The Final Earth 2 Mod APK - How to Customize Your City and Characters
            -The Final Earth 2 Mod APK - How to Earn More Coins and Gems
            -The Final Earth 2 Mod APK - How to Complete All Quests and Achievements
            -The Final Earth 2 Mod APK - How to Get Free Premium Features and Rewards
            -Download The Final Earth 2 Mod APK Latest Version for Android
            -Download The Final Earth 2 Mod APK with No Ads and No Root Required
            -Download The Final Earth 2 Mod APK with High Graphics and Sound Quality
            -Download The Final Earth 2 Mod APK with All Languages Supported
            -Download The Final Earth 2 Mod APK with Offline Mode and Auto Save
            -Download The Final Earth 2 Original APK from Google Play Store
            -Download The Final Earth 2 Original APK with In-app Purchases and Ads
            -Download The Final Earth 2 Original APK with Regular Updates and Bug Fixes
            -Download The Final Earth 2 Original APK with Online Mode and Cloud Save
            -Download The Final Earth 2 Original APK with User Reviews and Ratings
            -Compare The Final Earth 2 Mod APK and Original APK - Which One is Better?
            -Compare The Final Earth 2 Mod APK and Other City Building Games - Which One is More Fun?
            -Compare The Final Earth 2 Mod APK and Other Apocalyptic Games - Which One is More Realistic?
            -Compare The Final Earth 2 Mod APK and Other Simulation Games - Which One is More Educational?
            -Compare The Final Earth 2 Mod APK and Other Strategy Games - Which One is More Challenging?
            -Why You Should Download The Final Earth 2 Mod APK - Top Reasons to Play The Final Earth 2
            -Why You Should Download The Final Earth 2 Original APK - Top Reasons to Support the Developer of The Final Earth 2
            -Why You Should Download Both The Final Earth 2 Mod APK and Original APK - Top Reasons to Enjoy the Best of Both Worlds
            -What is The Final Earth 2 Game - A Brief Introduction to the Story and Gameplay of The Final Earth 2
            -What is The Final Earth 2 Mod APK - A Brief Introduction to the Features and Benefits of The Final Earth 2 Mod APK
            -What is The Difference Between The Final Earth 2 Mod APK and Original APK - A Brief Comparison of the Advantages and Disadvantages of Both Versions of The Final Earth 2
            -What are the Requirements to Download and Run The Final Earth 2 Mod APK on Your Device - A Brief Guide on the Compatibility and Performance of The Final Earth 2 Mod APK
            -What are the Risks of Downloading and Using The Final Earth 2 Mod APK on Your Device - A Brief Warning on the Potential Dangers and Consequences of Using Unofficial or Modified Versions of The Final Earth 2
            -What are the Alternatives to Downloading and Using The Final Earth 2 Mod APK on Your Device - A Brief Suggestion on Other Ways to Enjoy or Enhance Your Experience of Playing The Final Earth 2
            -What are the Best Sources to Download The Final Earth 2 Mod APK Safely and Securely - A Brief Recommendation on Where to Find Reliable and Trustworthy Websites or Apps that Offer Free or Paid Downloads of The Final Earth 2 Mod APK

            -

            Unlimited resources and money

            -

            The mod apk gives you unlimited resources and money, which means you can build anything you want without any limitations or restrictions. You don't have to worry about running out of food, water, energy, or other resources. You can also buy anything you need without worrying about your budget. You can enjoy the game without any stress or frustration.

            -

            Unlock all buildings and scenarios

            -

            The mod apk also unlocks all the buildings and scenarios that are otherwise locked or require real money to access. You can use any building you want without having to research or upgrade it first. You can also play any scenario you want without having to complete the previous ones. You can access all the features and content of the game without any limitations.

            -

            No ads and no in-app purchases

            -

            The mod apk also removes all the ads and in-app purchases that are present in the original game. You don't have to watch any annoying ads or pop-ups that interrupt your gameplay. You also don't have to spend any real money to buy anything in the game. You can enjoy the game without any distractions or interruptions.

            -

            How to download The Final Earth 2 mod apk?

            -

            If you are interested in downloading The Final Earth 2 mod apk, you need to follow some simple steps. Here is how to download it:

            -

            Step 1: Find a reliable source

            -

            The first step is to find a reliable source that provides The Final Earth 2 mod apk file. There are many websites that offer mod apk files for various games, but not all of them are trustworthy or safe. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading anything from unknown sources.

            -

            One way to find a reliable source is to read reviews and ratings from other users who have downloaded the mod apk file from the same website. You can also check the reputation and credibility of the website by looking at its domain name, design, content, etc. You should avoid websites that have suspicious or misleading domain names, poor design, irrelevant or outdated content, etc.

            -

            Step 2: Download the mod apk file

            -

            The second step is to download the mod apk file from the reliable source that you have found. You need to make sure that the file is compatible with your device's operating system and version. You also need to check the file size and format before downloading it.

            -

            To download the mod apk file, you need to click on the download button or link provided by the website. You may need to complete some verification steps before downloading, such as entering a captcha code, completing a survey, watching a video, etc. These steps are usually done to prevent bots or spam from downloading the file.

            -

            Once you have completed the verification steps, the download will start automatically. You need to wait for a few minutes until the download is finished. You can check the progress of the download by looking at the notification bar or the download manager of your device.

            -

            Step 3: Install the mod apk file

            -

            The third step is to install the mod apk file on your device. Before installing it, you need to make sure that you have enabled the option of "Unknown Sources" in your device's settings. This option allows you to install apps from sources other than the official app store. To enable this option, you need to go to your device's settings, then security, then unknown sources, and then toggle it on.

            -

            After enabling the option, you need to locate the mod apk file that you have downloaded on your device. You can use a file manager app to find the file in your device's storage. You can also check the download folder or the notification bar for the file.

            -

            Once you have found the mod apk file, you need to tap on it to start the installation process. You may see a warning message that says "This type of file can harm your device. Do you want to keep it anyway?". You can ignore this message and tap on "OK" or "Yes" to continue. You may also see a pop-up that asks for your permission to install the app. You need to tap on "Install" or "Allow" to proceed.

            -

            The installation process may take a few seconds or minutes, depending on the size and complexity of the mod apk file. You can check the progress of the installation by looking at the notification bar or the screen of your device. Once the installation is finished, you will see a message that says "App installed" or "Done". You can then tap on "Open" or "Launch" to start the game.

            -

            Step 4: Enjoy the game

            -

            The fourth and final step is to enjoy the game with the mod apk features. You can launch the game from your device's app drawer or home screen. You can also create a shortcut for the game on your device's desktop for easy access.

            -

            When you start the game, you will see that you have unlimited resources and money, and that all the buildings and scenarios are unlocked. You can use these features to build your city as you wish, without any limitations or restrictions. You can also enjoy the game without any ads or in-app purchases.

            -

            You can also customize your game settings, such as sound, music, graphics, language, etc. You can also save and load your game progress, and share it with other players online. You can also access the modding tools and create your own content for the game.

            -

            Tips and tricks for The Final Earth 2

            -

            The Final Earth 2 is a simple and easy game to play, but it also has some challenges and difficulties that require some strategy and planning. Here are some tips and tricks that can help you play the game better:

            -

            Manage your resources and workforce wisely

            -

            You need to balance your resources and workforce in order to keep your city running smoothly. You need to produce enough food, water, energy, and other resources for your population's needs. You also need to assign enough workers for each building and task. You can check your resource and workforce status by tapping on the icons at the top of the screen.

            -

            You should also avoid wasting resources or overproducing them. For example, if you have too much food, you can sell it or store it in warehouses. If you have too little energy, you can build more power plants or use solar panels. You should also recycle your waste and use water purification systems to save resources.

            -

            Use teleporters and rockets to transport people and goods

            -

            You can use teleporters and rockets to transport people and goods between different locations in your city or in other planets. Teleporters are faster and cheaper than rockets, but they have a limited range and capacity. Rockets are slower and more expensive than teleporters, but they have a longer range and capacity.

            -

            You should use teleporters for short-distance transportation within your city or planet. For example, you can use teleporters to move people from residential buildings to workplaces or schools. You can also use teleporters to move goods from farms or factories to warehouses or markets.

            -

            You should use rockets for long-distance transportation between different planets or galaxies. For example, you can use rockets to explore new worlds or colonize them. You can also use rockets to trade with other civilizations or visit them.

            -

            Follow the story or play in sandbox mode

            -

            You can choose to follow the story mode or play in sandbox mode in The Final Earth 2. The story mode has a linear progression that follows a plot and has specific objectives and challenges. The sandbox mode has no plot or objectives, and lets you play freely with unlimited resources and options.

            -

            You should follow the story mode if you want to learn more about the game's background and lore, and experience some exciting events and twists. The story mode also gives you some guidance and tips on how to play the game effectively.

            -

            You should play in sandbox mode if you want to unleash your creativity and imagination, and build your city as you wish. The sandbox mode also lets you experiment with different settings, such as day/night cycle, weather effects, population growth rate, etc. The sandbox mode also lets you access the modding tools and create your own content for the game.

            -

            Use secret codes and cheats for fun

            -

            You can also use some secret codes and cheats to have some fun and spice up your game. These codes and cheats can give you some advantages or disadvantages, or change some aspects of the game. You can enter these codes and cheats by tapping on the menu button at the top right corner of the screen, then tapping on the settings button, then tapping on the secret code button.

            -

            Here are some examples of secret codes and cheats that you can use:

            -
              -
            • cheat: This code gives you unlimited resources and money.
            • -
            • fast: This code speeds up the game by 10 times.
            • -
            • slow: This code slows down the game by 10 times.
            • -
            • tiny: This code makes your city very small.
            • -
            • huge: This code makes your city very big.
            • -
            • rainbow: This code makes your city colorful.
            • -
            • dark: This code makes your city dark.
            • -
            • zombie: This code turns your population into zombies.
            • -
            • alien: This code turns your population into aliens.
            • -
            • pizza: This code makes your population love pizza.
            • -
            -

            You can also combine some codes to create more effects. For example, you can enter tiny zombie to make your city small and full of zombies. You can also enter reset to undo all the codes and cheats.

            -

            Conclusion

            -

            The Final Earth 2 is a vertical sci-fi city builder game that lets you build your own futuristic metropolis in space. You can enjoy the game for free, or download The Final Earth 2 mod apk to get unlimited resources, money, and access to all features. You can also follow the story mode or play in sandbox mode, and use some secret codes and cheats for fun. The game is relaxing, creative, and fun, and it will keep you entertained for hours. If you are interested in downloading The Final Earth 2 mod apk, you can follow the steps that we have provided in this article. We hope you enjoy the game!

            -

            Frequently Asked Questions

            -

            Here are some frequently asked questions about The Final Earth 2 mod apk:

            -

            Q: Is The Final Earth 2 mod apk safe to download?

            -

            A: Yes, The Final Earth 2 mod apk is safe to download if you find a reliable source that provides the file. However, you should always be careful and do some research before downloading anything from unknown sources. You should also scan the file with an antivirus or malware detector before installing it on your device.

            -

            Q: Is The Final Earth 2 mod apk compatible with my device?

            -

            A: The Final Earth 2 mod apk is compatible with most devices that run on Android 4.4 or higher. However, some devices may have different specifications or settings that may affect the performance or compatibility of the game. You should check the file size and format before downloading it, and make sure that your device has enough storage space and memory to run the game smoothly.

            -

            Q: Can I play The Final Earth 2 mod apk online with other players?

            -

            A: Yes, you can play The Final Earth 2 mod apk online with other players who have downloaded the same file. You can share your game progress, chat with other players, and join online communities. However, you may not be able to play with players who have the original game or a different version of the mod apk file. You may also encounter some bugs or glitches when playing online.

            -

            Q: Can I update The Final Earth 2 mod apk when a new version is released?

            -

            A: Yes, you can update The Final Earth 2 mod apk when a new version is released by the developer. However, you may need to download and install the new version of the mod apk file from the same source that you downloaded the previous one. You may also need to uninstall the old version of the mod apk file before installing the new one. You should also backup your game data before updating, in case something goes wrong during the process.

            -

            Q: Can I uninstall The Final Earth 2 mod apk if I don't like it?

            -

            A: Yes, you can uninstall The Final Earth 2 mod apk if you don't like it or want to switch back to the original game. You can uninstall the mod apk file by going to your device's settings, then apps, then The Final Earth 2, then uninstall. You can also delete the mod apk file from your device's storage. However, you may lose your game data and progress when you uninstall the mod apk file. You should backup your game data before uninstalling, in case you want to restore it later.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CR TUNNEL VPN APK Unblock and Bypass IPDomain Based Restrictions.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CR TUNNEL VPN APK Unblock and Bypass IPDomain Based Restrictions.md deleted file mode 100644 index 9181a351e204c008d002969839efa0664d7b4df1..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CR TUNNEL VPN APK Unblock and Bypass IPDomain Based Restrictions.md +++ /dev/null @@ -1,96 +0,0 @@ -
            -

            What is CR Tunnel VPN and How to Download It

            -

            If you are looking for a free, unlimited, and secure VPN app for your Android device, you might want to check out CR Tunnel VPN. This app allows you to encrypt your internet traffic, hide your IP address, and access blocked websites and apps with ease. In this article, we will explain what a VPN is, why you need one, what CR Tunnel VPN is and how it works, how to download and install it on your device, and how to use it for different purposes.

            -

            What is a VPN and Why You Need One

            -

            A VPN, or virtual private network, is a service that creates a secure and encrypted connection between your device and a remote server. This way, your online activity is hidden from anyone who might be snooping on your network, such as hackers, ISPs, advertisers, or government agencies. A VPN also allows you to change your IP address and location, making it seem like you are browsing from another country or region. This can help you access geo-restricted content, such as streaming services, websites, or apps that are not available in your area.

            -

            cr tunnel vpn apk download


            Download Filehttps://bltlly.com/2uOohf



            -

            Benefits of using a VPN

            -

            Some of the main benefits of using a VPN include:

            -
              -
            • Protecting your privacy: A VPN can prevent websites and apps from tracking your online activity, analyzing your data, and targeting you with ads. It can also protect your sensitive information from hackers and identity thieves when you use public Wi-Fi networks.
            • -
            • Enhancing your security: A VPN can encrypt your internet traffic, making it unreadable for anyone who might intercept it. It can also protect you from malware, phishing, and other online threats.
            • -
            • Improving your performance: A VPN can prevent your ISP from throttling or limiting your bandwidth based on your usage or the type of content you access. It can also improve your connection speed and stability by choosing the best server for your needs.
            • -
            • Expanding your options: A VPN can help you access geo-blocked content from anywhere in the world. You can also bypass censorship and firewalls that might restrict your online freedom.
            • -
            -

            Risks of using a VPN

            -

            While using a VPN can offer many advantages, it also comes with some risks that you should be aware of. Some of these risks include:

            -
              -
            • Choosing a bad VPN provider: Not all VPNs are created equal. Some may have poor security features, weak encryption protocols, or shady logging policies. Some may even sell your data or expose you to malware. That's why you should always do your research before choosing a VPN provider and avoid free or cheap ones that may compromise your privacy and security.
            • -
            • Breaking the law or violating terms of service: Using a VPN does not give you a license to do anything illegal or unethical online. You are still subject to the laws and regulations of the country or region you are connecting from and to. You are also bound by the terms of service of the websites and apps you use. Some of them may prohibit or restrict the use of VPNs and may suspend or terminate your account if they detect it.
            • -
            • Experiencing technical issues or compatibility problems: Using a VPN may sometimes cause some glitches or errors in your connection or device. For example, you may experience slower speeds, connection drops, or IP leaks. You may also encounter compatibility issues with some websites or apps that may not work well with a VPN. That's why you should always test your VPN before using it for important tasks and have a sharing, such as Netherlands, Switzerland, or Canada. You can check the server list in the app to see which ones have a P2P icon.
            • -
            • Open your torrent client and add the torrent file or magnet link that you want to download. For example, if you want to download a movie, you can go to a torrent site and find the file or link.
            • -
            • Start the download and enjoy your torrenting without worrying about your privacy or security.
            • -
          -

          How to bypass censorship and firewalls with CR Tunnel VPN

          -

          If you want to access websites or apps that are blocked by your government, school, or workplace, you can use CR Tunnel VPN to bypass the censorship and firewalls. Here is how you can do it:

          -
            -
          1. Open CR Tunnel VPN and connect to a server that is not in the same country or region as the one where the censorship or firewall is applied. For example, if you want to access Facebook in China, you can connect to a server in Hong Kong or Singapore.
          2. -
          3. Open your browser or app and go to the website or app that you want to access. For example, if you want to access Facebook in China, you can go to facebook.com.
          4. -
          5. Enjoy the website or app that was previously blocked for you.
          6. -
          -

          Conclusion

          -

          Summary of the main points

          -

          In conclusion, CR Tunnel VPN is a free, unlimited, and secure VPN app for Android devices that can help you protect your privacy and security online, access geo-blocked content, torrent safely, and bypass censorship and firewalls. It has a simple and user-friendly interface, a large network of servers, a strict no-logs policy, and a high encryption standard. It is easy to download and install on your device and use for different purposes.

          -

          Call to action and recommendation

          -

          If you are looking for a reliable and trustworthy VPN app for your Android device, we recommend you to try CR Tunnel VPN today. You can download it from Google Play Store or from this link: CR Tunnel VPN - Apps on Google Play. You will not regret it!

          -

          FAQs

          -

          Here are some of the frequently asked questions about CR Tunnel VPN:

          -

          cr tunnel vpn apk free download
          -cr tunnel vpn apk latest version
          -cr tunnel vpn apk for android
          -cr tunnel vpn apk mod
          -cr tunnel vpn apk pro
          -cr tunnel vpn apk 2023
          -cr tunnel vpn apk old version
          -cr tunnel vpn apk premium
          -cr tunnel vpn apk cracked
          -cr tunnel vpn apk unlimited
          -cr tunnel vpn apk update
          -cr tunnel vpn apk file
          -cr tunnel vpn apk mirror
          -cr tunnel vpn apk pure
          -cr tunnel vpn apk uptodown
          -cr tunnel vpn apk revdl
          -cr tunnel vpn apk hack
          -cr tunnel vpn apk full
          -cr tunnel vpn apk no ads
          -cr tunnel vpn apk offline
          -cr tunnel vpn apk online
          -cr tunnel vpn apk for pc
          -cr tunnel vpn apk for ios
          -cr tunnel vpn apk for windows
          -cr tunnel vpn apk for mac
          -cr tunnel vpn apk for firestick
          -cr tunnel vpn apk for smart tv
          -cr tunnel vpn apk for chromebook
          -cr tunnel vpn apk for linux
          -cr tunnel vpn apk for iphone
          -cr tunnel vpn apk for ipad
          -cr tunnel vpn apk for laptop
          -cr tunnel vpn apk for tablet
          -cr tunnel vpn apk for samsung
          -cr tunnel vpn apk for huawei
          -cr tunnel vpn apk for xiaomi
          -cr tunnel vpn apk for oppo
          -cr tunnel vpn apk for vivo
          -cr tunnel vpn apk for nokia
          -cr tunnel vpn apk for lg
          -cr tunnel vpn app download
          -download entclass cr tunnel - free unlimited proxy VPN with SSH, HTTP & SSL connections APK
          -download CR TUNNEL VPN APK - Version: 1.9 - com.crtunnel.pro - Hyper Network Devs - App APKCombo
          -download CR TUNNEL VPN APK (Android App) - Free Download APKCombo Apps Entertainment CR TUNNEL VPN
          -download CR TUNNEL VPN APK - Latest Version 2023 - APKCombo

          -
            -
          • Q: Is CR Tunnel VPN safe?
          • -
          • A: Yes, CR Tunnel VPN is safe to use. It uses the OpenVPN protocol and AES-256 encryption to secure your data and traffic. It also has a no-logs policy that means it does not store or share any of your personal information or online activity.
          • -
          • Q: Is CR Tunnel VPN free?
          • -
          • A: Yes, CR Tunnel VPN is free to use. You do not have to pay anything or register an account to use its service. You can also enjoy unlimited bandwidth, speed, and time without any restrictions or limitations.
          • -
          • Q: Does CR Tunnel VPN work with Netflix?
          • -
          • A: Yes, CR Tunnel VPN works with Netflix and other streaming services. You can connect to a server in the country or region where the content is available and watch it without any issues.
          • -
          • Q: Does CR Tunnel VPN support P2P sharing?
          • -
          • A: Yes, CR Tunnel VPN supports P2P sharing. You can connect to a server that has a P2P icon in the app and torrent files without exposing your IP address or risking legal troubles.
          • -
          • Q: How can I contact CR Tunnel VPN support?
          • -
          • A: If you have any questions or issues with CR Tunnel VPN, you can contact its support team by sending an email to crtechvpn@gmail.com. They will respond to you as soon as possible.
          • -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Acronis True Image Serial Key !EXCLUSIVE! Download.md b/spaces/tioseFevbu/cartoon-converter/scripts/Acronis True Image Serial Key !EXCLUSIVE! Download.md deleted file mode 100644 index 0442e8b7e61e7ec3b9b7051f7ed6753e3c1a9d73..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Acronis True Image Serial Key !EXCLUSIVE! Download.md +++ /dev/null @@ -1,40 +0,0 @@ - -

          How to Download and Activate Acronis True Image 2021 with Serial Key

          -

          Acronis True Image 2021 is a powerful and reliable backup and recovery software that allows you to protect your data from any disaster. Whether you want to back up your entire computer, files, disks, or cloud storage, Acronis True Image 2021 can help you do it easily and securely. In this article, we will show you how to download and activate Acronis True Image 2021 with serial key.

          -

          Download Acronis True Image 2021

          -

          To download Acronis True Image 2021, you need to visit the official website of Acronis and choose the edition that suits your needs. There are three editions available: Essential, Advanced, and Premium. Each edition has different features and prices. You can compare them on the website and select the one that meets your requirements.

          -

          acronis true image serial key download


          Download ✦✦✦ https://urlcod.com/2uHxR3



          -

          After choosing the edition, you need to click on the Buy Now button and complete the purchase process. You will receive an email with a download link and a serial key. Alternatively, you can download a free trial version of Acronis True Image 2021 for 30 days from the website.

          -

          Activate Acronis True Image 2021

          -

          To activate Acronis True Image 2021, you need to have a valid serial key. If you purchased the product online, you will find the serial key in your email. If you bought a boxed version of the product, you will find the activation key inside the box. You need to convert the activation key into a full serial number by following these steps:

          -

          - -

          Once you have your serial number, you can activate Acronis True Image 2021 on your computer by following these steps:

          -
            -
          • Install Acronis True Image 2021 on your computer.
          • -
          • Launch the program and click on Account on the left sidebar.
          • -
          • Sign in to your Acronis account or create one if you don't have it.
          • -
          • Enter your serial number and click on Activate.
          • -
          -

          If your computer is not connected to the Internet, you can activate Acronis True Image 2021 offline by following these steps:

          -
            -
          • Install Acronis True Image 2021 on your computer.
          • -
          • Launch the program and click on Account on the left sidebar.
          • -
          • Click on the arrow icon next to Resolve activation problem and select Activate offline.
          • -
          • The program will generate an installation code that identifies your computer and license key.
          • -
          • Save the installation code to a file or write it down on a paper.
          • -
          • On a computer with Internet connection, visit https://www.acronis.com/en-us/support/activation/offline/
          • -
          • Enter your installation code and your email address.
          • -
          • Click on Submit.
          • -
          • You will receive an email with an activation code.
          • -
          • Enter the activation code in Acronis True Image 2021 on your computer and click on Activate.
          • -
          -

          Conclusion

          -

          Acronis True Image 2021 is a comprehensive backup and recovery solution that can help you protect your data from any disaster. To use it, you need to download and activate it with a serial key. You can either activate it online or offline depending on your Internet connection. We hope this article has helped you learn how to download and activate Acronis True Image 2021 with serial key. If you have any questions or problems, please contact Acronis support team or visit their knowledge base for more information.

          81aa517590
          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Best Finance App For Mac.md b/spaces/tioseFevbu/cartoon-converter/scripts/Best Finance App For Mac.md deleted file mode 100644 index 4a2dce932af9ddf2b2ff443dbfec9f2c34bdb68f..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Best Finance App For Mac.md +++ /dev/null @@ -1,27 +0,0 @@ -
          -

          How to Choose the Best Finance App for Mac in 2023

          -

          If you are a Mac user who wants to manage your personal finances better, you might be looking for a finance app that can help you track your income, expenses, budget, investments, and more. But with so many options available, how do you choose the best one for your needs?

          -

          Best Finance App For Mac


          Download File » https://urlcod.com/2uHv9P



          -

          In this article, we will compare some of the most popular and highly rated finance apps for Mac in 2023, and give you some tips on how to choose the best one for you.

          -

          What to Look for in a Finance App for Mac

          -

          Before you start browsing the App Store or downloading any finance app for Mac, you should consider what features and functions you need from a finance app. Here are some questions to ask yourself:

          -
            -
          • Do you want a free or paid app? Free apps might have limited features or ads, while paid apps might offer more advanced tools or support.
          • -
          • Do you want an online or offline app? Online apps might require an internet connection and sync your data across devices, while offline apps might store your data locally and offer more privacy.
          • -
          • Do you want an app that connects to your bank accounts and other financial institutions? This can save you time and hassle by automatically importing your transactions and balances, but it might also pose some security risks.
          • -
          • Do you want an app that helps you create and follow a budget? This can help you control your spending and save more money, but it might also require some discipline and customization.
          • -
          • Do you want an app that tracks your investments and net worth? This can help you monitor your portfolio performance and financial goals, but it might also require some technical knowledge and accuracy.
          • -
          • Do you want an app that offers financial advice and education? This can help you improve your financial literacy and make better decisions, but it might also be biased or generic.
          • -
          -

          Depending on your answers to these questions, you can narrow down your choices and look for a finance app for Mac that meets your specific needs and preferences.

          -

          -

          Some of the Best Finance Apps for Mac in 2023

          -

          To help you get started, here are some of the best finance apps for Mac in 2023 that we have tested and reviewed. We have included both free and paid apps, as well as online and offline apps, so you can find the one that suits you best.

          -

          Empower (Free)

          -

          Empower is one of the best free finance apps for Mac in 2023. It was formerly known as Personal Capital, which was already a popular personal finance software for Mac users. Empower allows you to connect to your bank accounts, credit cards, loans, investments, and more, and gives you a comprehensive overview of your financial situation. You can also create budgets, track your spending habits, monitor your net worth, analyze your investment performance, and get personalized financial advice. Empower has a user-friendly interface and a powerful dashboard that displays all your financial information in one place. Empower also has a mobile app that syncs with your Mac app, so you can access your finances anytime, anywhere.

          -

          Moneyspire (Paid)

          -

          Moneyspire is one of the best paid finance apps for Mac in 2023. It is a simple and easy-to-use app that helps you manage your money without any hassle. You can import your transactions from your bank accounts or enter them manually, and categorize them according to your preferences. You can also create budgets, track your bills, generate reports, reconcile your accounts, and plan for the future. Moneyspire has a clean and intuitive interface that lets you see all your finances at a glance. Moneyspire also has a mobile app that syncs with your Mac app via Dropbox or Wi-Fi.

          -

          Mint (Free)

          -

          Mint is another free finance app for Mac in 2023 that has been around for a long time. Mint allows you to connect to your bank accounts, credit cards, bills, investments, and more, and gives you a complete picture of your financial health. You can also create budgets, track your spending patterns, set goals, get alerts, and receive tips on how to improve your finances. Mint has a colorful and attractive interface that makes managing your money fun and easy. Mint also has a mobile app that syncs with your Mac app, so you can stay on top of your finances on the go. 7b8c122e87
          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack __LINK__.md b/spaces/tioseFevbu/cartoon-converter/scripts/Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack __LINK__.md deleted file mode 100644 index 8f8faa548c12d2fb0309413e3c07996da0469800..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack __LINK__.md +++ /dev/null @@ -1,67 +0,0 @@ - -

          to

          tags to mark the headings and subheadings, depending on their level of importance. The

          tag is the most important and should be used only once for the title of the article. The

          tag is for the main headings, the

          tag is for the subheadings under each main heading, and the

          tag is for the subheadings under each subheading. - Use to tags to define a table and its rows. Each table row starts with a tag and ends with a tag. - Use tags to add additional information or formatting to the table. The tag is for grouping one or more columns together for styling purposes.
          to tags to define the table cells. The tag is for the header cells, which are usually bold and centered. The tag is for the data cells, which contain the content of the table. - Use
          to tag is for adding a title or description to the table. The
          - - - - -
          Article
          -

          Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack

          -

          Are you looking for a way to get the latest version of Microsoft Office Professional Plus for free? Do you want to enjoy all the features and benefits of this powerful software suite without paying a dime? If yes, then you are in the right place. In this article, I will show you how to download and install Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack, which is a fully activated and working version of the software that you can use for any purpose.

          -

          But before we get into the details, let me tell you what Microsoft Office Professional Plus is and why you should get it.

          -

          Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack


          Download Zip ->>->>->> https://urlcod.com/2uHxbU



          -

          What is Microsoft Office Professional Plus?

          -

          Microsoft Office Professional Plus is a premium edition of Microsoft Office, which is a collection of productivity applications that help you create, edit, and share documents, spreadsheets, presentations, databases, and more. Microsoft Office Professional Plus includes all the applications that are available in Microsoft Office Standard, such as Word, Excel, PowerPoint, Outlook, and OneNote. But it also adds some extra applications and features that are not available in the standard edition, such as:

          -
            -
          • Access: A database management system that allows you to create and manage databases for various purposes.
          • -
          • Publisher: A desktop publishing application that allows you to create and design professional-looking publications, such as flyers, brochures, newsletters, and more.
          • -
          • Skype for Business: A communication and collaboration tool that allows you to make voice and video calls, send instant messages, and share your screen with others.
          • -
          • OneDrive for Business: A cloud storage service that allows you to store and access your files online from any device.
          • -
          • SharePoint: A web-based platform that allows you to create and manage websites, intranets, extranets, and online communities.
          • -
          • Teams: A chat-based workspace that allows you to communicate and collaborate with your team members in real time.
          • -
          • Power BI: A business intelligence tool that allows you to analyze and visualize data from various sources.
          • -
          -

          As you can see, Microsoft Office Professional Plus offers a lot of value for anyone who needs to work with different types of documents and data. Whether you are a student, a professional, a business owner, or a hobbyist, you can benefit from using this software suite.

          -

          Why should you get Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack?

          -

          Now that you know what Microsoft Office Professional Plus is and what it can do for you, you might be wondering why you should get the cracked version instead of buying the official one. Well, there are several reasons why getting Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack is a good idea. Here are some of them:

          -
            -
          • You can save money: The official price of Microsoft Office Professional Plus is $499.99 for a one-time purchase or $12.50 per month for an annual subscription. That's quite expensive for most people, especially if you don't use all the applications and features that are included in the package. By getting the cracked version, you can get the same software for free, without spending a single cent.
          • -
          • You can enjoy all the features: The cracked version of Microsoft Office Professional Plus is fully activated and working. That means you can use all the applications and features that are available in the software suite without any limitations or restrictions. You don't have to worry about activation keys, license codes, or expiration dates. You can use the software as much as you want, for any purpose you want.
          • -
          • You can update the software: The cracked version of Microsoft Office Professional Plus is based on the latest version of the software that was released in January 2023. That means it has all the latest updates and improvements that were made by Microsoft. You can also update the software yourself whenever there is a new update available. You don't have to wait for a new crack to be released or worry about compatibility issues.
          • -
          • You can use it on any device: The cracked version of Microsoft Office Professional Plus is compatible with any device that runs on Windows 10 (64-bit) operating system. You can install it on your laptop, desktop, tablet, or even your smartphone. You can also use it offline or online, depending on your preference and availability.
          • -
          -

          As you can see, there are many advantages of getting Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack. It's a great way to get the most out of this software suite without breaking the bank.

          -

          How to download and install Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack?

          -

          Now that you know why you should get Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack, let me show you how to download and install it on your device. It's a very simple and easy process that will only take a few minutes of your time. Just follow these steps:

          -

          -
            -
          1. Download the crack file: The first thing you need to do is to download the crack file from a reliable and trusted source. You can use the link below to download the file from our website. The file is in a compressed format, so you will need to extract it using a tool like WinRAR or 7-Zip.
          2. -
          3. Disable your antivirus software: The next thing you need to do is to disable your antivirus software temporarily. This is because some antivirus programs might detect the crack file as a virus or malware and block it from running. Don't worry, the crack file is safe and clean, and you can enable your antivirus software again after installing the software.
          4. -
          5. Run the setup file: The third thing you need to do is to run the setup file that is inside the extracted folder. This will start the installation process of Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64). You will need to follow the instructions on the screen and choose the options that suit your preferences. You can also customize the installation by selecting or deselecting the applications and features that you want to install.
          6. -
          7. Activate the software: The last thing you need to do is to activate the software using the crack file. To do this, you will need to copy the crack file from the extracted folder and paste it into the installation directory of Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64). The installation directory is usually located at C:\Program Files\Microsoft Office\. You will need to replace the original file with the crack file and confirm the action.
          8. -
          -

          Congratulations! You have successfully downloaded and installed Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack on your device. You can now enjoy using all the applications and features of this software suite for free.

          -

          What are some tips and tricks for using Microsoft Office Professional Plus?

          -

          To help you get the most out of Microsoft Office Professional Plus, here are some tips and tricks that you can use to improve your productivity and efficiency:

          -
            -
          • Use keyboard shortcuts: Keyboard shortcuts are combinations of keys that perform certain actions or commands in an application. They can save you time and effort by allowing you to access various functions without using your mouse or menus. For example, you can use Ctrl+C to copy, Ctrl+V to paste, Ctrl+Z to undo, Ctrl+F to find, Ctrl+P to print, and so on. You can find a list of keyboard shortcuts for each application in Microsoft Office Professional Plus by pressing F1 or clicking on Help.
          • -
          • Use templates: Templates are pre-designed documents that have a specific layout, style, and content. They can help you create professional-looking documents quickly and easily by providing you with a ready-made format and structure. You can choose from a variety of templates for different purposes and occasions, such as resumes, reports, invoices, newsletters, flyers, and more. You can access templates in Microsoft Office Professional Plus by clicking on File > New > Templates.
          • -
          • Use cloud services: Cloud services are online platforms that allow you to store and access your files from anywhere and any device. They can help you backup your files, sync your files across devices, share your files with others, and collaborate with others in real time. You can use cloud services in Microsoft Office Professional Plus by signing in with your Microsoft account and using OneDrive for Business, SharePoint, Teams, or Power BI.
          • -
          • Use add-ins: Add-ins are additional features or functions that enhance or extend the capabilities of an application. They can help you perform specific tasks or integrate with other applications or services that are not part of Microsoft Office Professional Plus. You can find and install add-ins in Microsoft Office Professional Plus by clicking on Insert > Add-ins. Some examples of add-ins are Grammarly, Wikipedia, Translator, and Adobe Sign.
          • -
          -

          These are just some of the tips and tricks that you can use to make the most of Microsoft Office Professional Plus. You can find more tips and tricks by exploring the applications and their features, or by searching online for tutorials and guides.

          -

          Conclusion

          -

          In conclusion, Microsoft Office Professional Plus is a powerful and versatile software suite that can help you create, edit, and share various types of documents and data. It offers a lot of value for anyone who needs to work with different applications and features. However, it can also be quite expensive and inaccessible for some people. That's why getting Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack is a great alternative that allows you to get the same software for free, without any limitations or restrictions.

          -

          In this article, I have shown you what Microsoft Office Professional Plus is, why you should get it, how to download and install it, and how to use it. I hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

          -

          FAQs

          -

          Is Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack safe to use?

          -

          Yes, Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack is safe to use, as long as you download it from a reliable and trusted source. The crack file is clean and does not contain any viruses or malware that might harm your device or data. However, you should always be careful when downloading and installing any software from the internet, and scan the files with your antivirus software before running them.

          -

          Is Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack legal to use?

          -

          No, Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack is not legal to use, as it violates the terms and conditions of Microsoft. By using the cracked version, you are bypassing the activation process and using the software without paying for it. This is considered piracy and theft, and can result in legal consequences if you are caught. Therefore, we do not encourage or endorse the use of the cracked version, and we are not responsible for any damages or liabilities that might arise from using it.

          -

          Can I use Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack on multiple devices?

          -

          Yes, you can use Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack on multiple devices, as long as they run on Windows 10 (64-bit) operating system. You can install the software on your laptop, desktop, tablet, or smartphone, and use it offline or online. However, you should not share the crack file with others or upload it to any online platforms, as this might expose you to security risks or legal issues.

          -

          Can I uninstall Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack if I don't like it?

          -

          Yes, you can uninstall Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack if you don't like it or if you want to switch to another version or edition of Microsoft Office. You can uninstall the software by going to Control Panel > Programs > Uninstall a program > Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64), and following the instructions on the screen.

          -

          Can I get support from Microsoft if I use Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack?

          -

          No, you cannot get support from Microsoft if you use Microsoft Office Professional Plus Version 1901 Build 11231.20130 (x64) Crack, as you are not a legitimate customer of Microsoft. You will not be able to access the official website, forums, or customer service of Microsoft if you have any issues or problems with the software. You will also not be eligible for any updates or upgrades that might be released by Microsoft in the future.

          b2dd77e56b
          -
          -
          \ No newline at end of file diff --git a/spaces/tomofi/MMOCR/mmocr/models/ner/__init__.py b/spaces/tomofi/MMOCR/mmocr/models/ner/__init__.py deleted file mode 100644 index 2d9866e755153cedb20aed79c43aa72a4860933e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/ner/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from . import classifiers, convertors, decoders, encoders, losses -from .classifiers import * # NOQA -from .convertors import * # NOQA -from .decoders import * # NOQA -from .encoders import * # NOQA -from .losses import * # NOQA - -__all__ = ( - classifiers.__all__ + convertors.__all__ + decoders.__all__ + - encoders.__all__ + losses.__all__) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ld/ld_r101_gflv1_r101dcn_fpn_coco_2x.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ld/ld_r101_gflv1_r101dcn_fpn_coco_2x.py deleted file mode 100644 index 37c66a9e1c0c0fd9be181540c749f6c71c01a6fc..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ld/ld_r101_gflv1_r101dcn_fpn_coco_2x.py +++ /dev/null @@ -1,43 +0,0 @@ -_base_ = ['./ld_r18_gflv1_r101_fpn_coco_1x.py'] -teacher_ckpt = 'http://download.openmmlab.com/mmdetection/v2.0/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco_20200630_102002-134b07df.pth' # noqa -model = dict( - pretrained='torchvision://resnet101', - teacher_config='configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py', - teacher_ckpt=teacher_ckpt, - backbone=dict( - type='ResNet', - depth=101, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_output', - num_outs=5)) - -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) -# multi-scale training -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 800)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -data = dict(train=dict(pipeline=train_pipeline)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/losses/accuracy.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/losses/accuracy.py deleted file mode 100644 index 789a2240a491289c5801b6690116e8ca657d004f..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/losses/accuracy.py +++ /dev/null @@ -1,78 +0,0 @@ -import mmcv -import torch.nn as nn - - -@mmcv.jit(coderize=True) -def accuracy(pred, target, topk=1, thresh=None): - """Calculate accuracy according to the prediction and target. - - Args: - pred (torch.Tensor): The model prediction, shape (N, num_class) - target (torch.Tensor): The target of each prediction, shape (N, ) - topk (int | tuple[int], optional): If the predictions in ``topk`` - matches the target, the predictions will be regarded as - correct ones. Defaults to 1. - thresh (float, optional): If not None, predictions with scores under - this threshold are considered incorrect. Default to None. - - Returns: - float | tuple[float]: If the input ``topk`` is a single integer, - the function will return a single float as accuracy. If - ``topk`` is a tuple containing multiple integers, the - function will return a tuple containing accuracies of - each ``topk`` number. - """ - assert isinstance(topk, (int, tuple)) - if isinstance(topk, int): - topk = (topk, ) - return_single = True - else: - return_single = False - - maxk = max(topk) - if pred.size(0) == 0: - accu = [pred.new_tensor(0.) for i in range(len(topk))] - return accu[0] if return_single else accu - assert pred.ndim == 2 and target.ndim == 1 - assert pred.size(0) == target.size(0) - assert maxk <= pred.size(1), \ - f'maxk {maxk} exceeds pred dimension {pred.size(1)}' - pred_value, pred_label = pred.topk(maxk, dim=1) - pred_label = pred_label.t() # transpose to shape (maxk, N) - correct = pred_label.eq(target.view(1, -1).expand_as(pred_label)) - if thresh is not None: - # Only prediction values larger than thresh are counted as correct - correct = correct & (pred_value > thresh).t() - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / pred.size(0))) - return res[0] if return_single else res - - -class Accuracy(nn.Module): - - def __init__(self, topk=(1, ), thresh=None): - """Module to calculate the accuracy. - - Args: - topk (tuple, optional): The criterion used to calculate the - accuracy. Defaults to (1,). - thresh (float, optional): If not None, predictions with scores - under this threshold are considered incorrect. Default to None. - """ - super().__init__() - self.topk = topk - self.thresh = thresh - - def forward(self, pred, target): - """Forward function to calculate accuracy. - - Args: - pred (torch.Tensor): Prediction of models. - target (torch.Tensor): Target for each prediction. - - Returns: - tuple[float]: The accuracies under different topk criterions. - """ - return accuracy(pred, target, self.topk, self.thresh) diff --git a/spaces/trttung1610/musicgen/audiocraft/modules/conditioners.py b/spaces/trttung1610/musicgen/audiocraft/modules/conditioners.py deleted file mode 100644 index d10ac8dc96466375379c883cd62f7c04a1bb0a73..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/modules/conditioners.py +++ /dev/null @@ -1,1411 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import defaultdict -from copy import deepcopy -from dataclasses import dataclass, field -from itertools import chain -import logging -import math -from pathlib import Path -import random -import re -import typing as tp -import warnings - -import einops -from num2words import num2words -import spacy -from transformers import RobertaTokenizer, T5EncoderModel, T5Tokenizer # type: ignore -import torch -from torch import nn -import torch.nn.functional as F -from torch.nn.utils.rnn import pad_sequence - -from .chroma import ChromaExtractor -from .streaming import StreamingModule -from .transformer import create_sin_embedding -from ..data.audio import audio_read -from ..data.audio_dataset import SegmentInfo -from ..data.audio_utils import convert_audio -from ..environment import AudioCraftEnvironment -from ..quantization import ResidualVectorQuantizer -from ..utils.autocast import TorchAutocast -from ..utils.cache import EmbeddingCache -from ..utils.utils import collate, hash_trick, length_to_mask, load_clap_state_dict, warn_once - - -logger = logging.getLogger(__name__) -TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist) -ConditionType = tp.Tuple[torch.Tensor, torch.Tensor] # condition, mask - - -class WavCondition(tp.NamedTuple): - wav: torch.Tensor - length: torch.Tensor - sample_rate: tp.List[int] - path: tp.List[tp.Optional[str]] = [] - seek_time: tp.List[tp.Optional[float]] = [] - - -class JointEmbedCondition(tp.NamedTuple): - wav: torch.Tensor - text: tp.List[tp.Optional[str]] - length: torch.Tensor - sample_rate: tp.List[int] - path: tp.List[tp.Optional[str]] = [] - seek_time: tp.List[tp.Optional[float]] = [] - - -@dataclass -class ConditioningAttributes: - text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict) - wav: tp.Dict[str, WavCondition] = field(default_factory=dict) - joint_embed: tp.Dict[str, JointEmbedCondition] = field(default_factory=dict) - - def __getitem__(self, item): - return getattr(self, item) - - @property - def text_attributes(self): - return self.text.keys() - - @property - def wav_attributes(self): - return self.wav.keys() - - @property - def joint_embed_attributes(self): - return self.joint_embed.keys() - - @property - def attributes(self): - return { - "text": self.text_attributes, - "wav": self.wav_attributes, - "joint_embed": self.joint_embed_attributes, - } - - def to_flat_dict(self): - return { - **{f"text.{k}": v for k, v in self.text.items()}, - **{f"wav.{k}": v for k, v in self.wav.items()}, - **{f"joint_embed.{k}": v for k, v in self.joint_embed.items()} - } - - @classmethod - def from_flat_dict(cls, x): - out = cls() - for k, v in x.items(): - kind, att = k.split(".") - out[kind][att] = v - return out - - -class SegmentWithAttributes(SegmentInfo): - """Base class for all dataclasses that are used for conditioning. - All child classes should implement `to_condition_attributes` that converts - the existing attributes to a dataclass of type ConditioningAttributes. - """ - def to_condition_attributes(self) -> ConditioningAttributes: - raise NotImplementedError() - - -def nullify_condition(condition: ConditionType, dim: int = 1): - """Transform an input condition to a null condition. - The way it is done by converting it to a single zero vector similarly - to how it is done inside WhiteSpaceTokenizer and NoopTokenizer. - - Args: - condition (ConditionType): A tuple of condition and mask (tuple[torch.Tensor, torch.Tensor]) - dim (int): The dimension that will be truncated (should be the time dimension) - WARNING!: dim should not be the batch dimension! - Returns: - ConditionType: A tuple of null condition and mask - """ - assert dim != 0, "dim cannot be the batch dimension!" - assert isinstance(condition, tuple) and \ - isinstance(condition[0], torch.Tensor) and \ - isinstance(condition[1], torch.Tensor), "'nullify_condition' got an unexpected input type!" - cond, mask = condition - B = cond.shape[0] - last_dim = cond.dim() - 1 - out = cond.transpose(dim, last_dim) - out = 0. * out[..., :1] - out = out.transpose(dim, last_dim) - mask = torch.zeros((B, 1), device=out.device).int() - assert cond.dim() == out.dim() - return out, mask - - -def nullify_wav(cond: WavCondition) -> WavCondition: - """Transform a WavCondition to a nullified WavCondition. - It replaces the wav by a null tensor, forces its length to 0, and replaces metadata by dummy attributes. - - Args: - cond (WavCondition): Wav condition with wav, tensor of shape [B, T]. - Returns: - WavCondition: Nullified wav condition. - """ - null_wav, _ = nullify_condition((cond.wav, torch.zeros_like(cond.wav)), dim=cond.wav.dim() - 1) - return WavCondition( - wav=null_wav, - length=torch.tensor([0] * cond.wav.shape[0], device=cond.wav.device), - sample_rate=cond.sample_rate, - path=[None] * cond.wav.shape[0], - seek_time=[None] * cond.wav.shape[0], - ) - - -def nullify_joint_embed(embed: JointEmbedCondition) -> JointEmbedCondition: - """Nullify the joint embedding condition by replacing it by a null tensor, forcing its length to 0, - and replacing metadata by dummy attributes. - - Args: - cond (JointEmbedCondition): Joint embedding condition with wav and text, wav tensor of shape [B, C, T]. - """ - null_wav, _ = nullify_condition((embed.wav, torch.zeros_like(embed.wav)), dim=embed.wav.dim() - 1) - return JointEmbedCondition( - wav=null_wav, text=[None] * len(embed.text), - length=torch.LongTensor([0]).to(embed.wav.device), - sample_rate=embed.sample_rate, - path=[None] * embed.wav.shape[0], - seek_time=[0] * embed.wav.shape[0], - ) - - -class Tokenizer: - """Base tokenizer implementation - (in case we want to introduce more advances tokenizers in the future). - """ - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - raise NotImplementedError() - - -class WhiteSpaceTokenizer(Tokenizer): - """This tokenizer should be used for natural language descriptions. - For example: - ["he didn't, know he's going home.", 'shorter sentence'] => - [[78, 62, 31, 4, 78, 25, 19, 34], - [59, 77, 0, 0, 0, 0, 0, 0]] - """ - PUNCTUATION = "?:!.,;" - - def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm", - lemma: bool = True, stopwords: bool = True) -> None: - self.n_bins = n_bins - self.pad_idx = pad_idx - self.lemma = lemma - self.stopwords = stopwords - try: - self.nlp = spacy.load(language) - except IOError: - spacy.cli.download(language) # type: ignore - self.nlp = spacy.load(language) - - @tp.no_type_check - def __call__(self, texts: tp.List[tp.Optional[str]], - return_text: bool = False) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Take a list of strings and convert them to a tensor of indices. - - Args: - texts (list[str]): List of strings. - return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False. - Returns: - tuple[torch.Tensor, torch.Tensor]: - - Indices of words in the LUT. - - And a mask indicating where the padding tokens are - """ - output, lengths = [], [] - texts = deepcopy(texts) - for i, text in enumerate(texts): - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(torch.Tensor([self.pad_idx])) - lengths.append(0) - continue - - # convert numbers to words - text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore - # normalize text - text = self.nlp(text) # type: ignore - # remove stopwords - if self.stopwords: - text = [w for w in text if not w.is_stop] # type: ignore - # remove punctuation - text = [w for w in text if w.text not in self.PUNCTUATION] # type: ignore - # lemmatize if needed - text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore - - texts[i] = " ".join(text) - lengths.append(len(text)) - # convert to tensor - tokens = torch.Tensor([hash_trick(w, self.n_bins) for w in text]) - output.append(tokens) - - mask = length_to_mask(torch.IntTensor(lengths)).int() - padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t() - if return_text: - return padded_output, mask, texts # type: ignore - return padded_output, mask - - -class NoopTokenizer(Tokenizer): - """This tokenizer should be used for global conditioners such as: artist, genre, key, etc. - The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split - strings, so "Jeff Buckley" will get it's own index. Whereas WhiteSpaceTokenizer will - split it to ["Jeff", "Buckley"] and return an index per word. - - For example: - ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101] - ["Metal", "Rock", "Classical"] => [0, 223, 51] - """ - def __init__(self, n_bins: int, pad_idx: int = 0): - self.n_bins = n_bins - self.pad_idx = pad_idx - - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - output, lengths = [], [] - for text in texts: - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(self.pad_idx) - lengths.append(0) - else: - output.append(hash_trick(text, self.n_bins)) - lengths.append(1) - - tokens = torch.LongTensor(output).unsqueeze(1) - mask = length_to_mask(torch.IntTensor(lengths)).int() - return tokens, mask - - -class BaseConditioner(nn.Module): - """Base model for all conditioner modules. - We allow the output dim to be different than the hidden dim for two reasons: - 1) keep our LUTs small when the vocab is large; - 2) make all condition dims consistent. - - Args: - dim (int): Hidden dim of the model. - output_dim (int): Output dim of the conditioner. - """ - def __init__(self, dim: int, output_dim: int): - super().__init__() - self.dim = dim - self.output_dim = output_dim - self.output_proj = nn.Linear(dim, output_dim) - - def tokenize(self, *args, **kwargs) -> tp.Any: - """Should be any part of the processing that will lead to a synchronization - point, e.g. BPE tokenization with transfer to the GPU. - - The returned value will be saved and return later when calling forward(). - """ - raise NotImplementedError() - - def forward(self, inputs: tp.Any) -> ConditionType: - """Gets input that should be used as conditioning (e.g, genre, description or a waveform). - Outputs a ConditionType, after the input data was embedded as a dense vector. - - Returns: - ConditionType: - - A tensor of size [B, T, D] where B is the batch size, T is the length of the - output embedding and D is the dimension of the embedding. - - And a mask indicating where the padding tokens. - """ - raise NotImplementedError() - - -class TextConditioner(BaseConditioner): - ... - - -class LUTConditioner(TextConditioner): - """Lookup table TextConditioner. - - Args: - n_bins (int): Number of bins. - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - tokenizer (str): Name of the tokenizer. - pad_idx (int, optional): Index for padding token. Defaults to 0. - """ - def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0): - super().__init__(dim, output_dim) - self.embed = nn.Embedding(n_bins, dim) - self.tokenizer: Tokenizer - if tokenizer == 'whitespace': - self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx) - elif tokenizer == 'noop': - self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx) - else: - raise ValueError(f"unrecognized tokenizer `{tokenizer}`.") - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - device = self.embed.weight.device - tokens, mask = self.tokenizer(x) - tokens, mask = tokens.to(device), mask.to(device) - return tokens, mask - - def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType: - tokens, mask = inputs - embeds = self.embed(tokens) - embeds = self.output_proj(embeds) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class T5Conditioner(TextConditioner): - """T5-based TextConditioner. - - Args: - name (str): Name of the T5 model. - output_dim (int): Output dim of the conditioner. - finetune (bool): Whether to fine-tune T5 at train time. - device (str): Device for T5 Conditioner. - autocast_dtype (tp.Optional[str], optional): Autocast dtype. - word_dropout (float, optional): Word dropout probability. - normalize_text (bool, optional): Whether to apply text normalization. - """ - MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b", - "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large", - "google/flan-t5-xl", "google/flan-t5-xxl"] - MODELS_DIMS = { - "t5-small": 512, - "t5-base": 768, - "t5-large": 1024, - "t5-3b": 1024, - "t5-11b": 1024, - "google/flan-t5-small": 512, - "google/flan-t5-base": 768, - "google/flan-t5-large": 1024, - "google/flan-t5-3b": 1024, - "google/flan-t5-11b": 1024, - } - - def __init__(self, name: str, output_dim: int, finetune: bool, device: str, - autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0., - normalize_text: bool = False): - assert name in self.MODELS, f"Unrecognized t5 model name (should in {self.MODELS})" - super().__init__(self.MODELS_DIMS[name], output_dim) - self.device = device - self.name = name - self.finetune = finetune - self.word_dropout = word_dropout - if autocast_dtype is None or self.device == 'cpu': - self.autocast = TorchAutocast(enabled=False) - if self.device != 'cpu': - logger.warning("T5 has no autocast, this might lead to NaN") - else: - dtype = getattr(torch, autocast_dtype) - assert isinstance(dtype, torch.dtype) - logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}") - self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype) - # Let's disable logging temporarily because T5 will vomit some errors otherwise. - # thanks https://gist.github.com/simon-weber/7853144 - previous_level = logging.root.manager.disable - logging.disable(logging.ERROR) - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - try: - self.t5_tokenizer = T5Tokenizer.from_pretrained(name) - t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune) - finally: - logging.disable(previous_level) - if finetune: - self.t5 = t5 - else: - # this makes sure that the t5 models is not part - # of the saved checkpoint - self.__dict__['t5'] = t5.to(device) - - self.normalize_text = normalize_text - if normalize_text: - self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True) - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]: - # if current sample doesn't have a certain attribute, replace with empty string - entries: tp.List[str] = [xi if xi is not None else "" for xi in x] - if self.normalize_text: - _, _, entries = self.text_normalizer(entries, return_text=True) - if self.word_dropout > 0. and self.training: - new_entries = [] - for entry in entries: - words = [word for word in entry.split(" ") if random.random() >= self.word_dropout] - new_entries.append(" ".join(words)) - entries = new_entries - - empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""]) - - inputs = self.t5_tokenizer(entries, return_tensors='pt', padding=True).to(self.device) - mask = inputs['attention_mask'] - mask[empty_idx, :] = 0 # zero-out index where the input is non-existant - return inputs - - def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType: - mask = inputs['attention_mask'] - with torch.set_grad_enabled(self.finetune), self.autocast: - embeds = self.t5(**inputs).last_hidden_state - embeds = self.output_proj(embeds.to(self.output_proj.weight)) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class WaveformConditioner(BaseConditioner): - """Base class for all conditioners that take a waveform as input. - Classes that inherit must implement `_get_wav_embedding` that outputs - a continuous tensor, and `_downsampling_factor` that returns the down-sampling - factor of the embedding model. - - Args: - dim (int): The internal representation dimension. - output_dim (int): Output dimension. - device (tp.Union[torch.device, str]): Device. - """ - def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]): - super().__init__(dim, output_dim) - self.device = device - - def tokenize(self, x: WavCondition) -> WavCondition: - wav, length, sample_rate, path, seek_time = x - assert length is not None - return WavCondition(wav.to(self.device), length.to(self.device), sample_rate, path, seek_time) - - def _get_wav_embedding(self, x: WavCondition) -> torch.Tensor: - """Gets as input a WavCondition and returns a dense embedding.""" - raise NotImplementedError() - - def _downsampling_factor(self): - """Returns the downsampling factor of the embedding model.""" - raise NotImplementedError() - - def forward(self, x: WavCondition) -> ConditionType: - """Extract condition embedding and mask from a waveform and its metadata. - Args: - x (WavCondition): Waveform condition containing raw waveform and metadata. - Returns: - ConditionType: a dense vector representing the conditioning along with its mask - """ - wav, lengths, *_ = x - with torch.no_grad(): - embeds = self._get_wav_embedding(x) - embeds = embeds.to(self.output_proj.weight) - embeds = self.output_proj(embeds) - - if lengths is not None: - lengths = lengths / self._downsampling_factor() - mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore - else: - mask = torch.ones_like(embeds) - embeds = (embeds * mask.unsqueeze(2).to(self.device)) - - return embeds, mask - - -class ChromaStemConditioner(WaveformConditioner): - """Chroma conditioner based on stems. - The ChromaStemConditioner uses DEMUCS to first filter out drums and bass, as - the drums and bass often dominate the chroma leading to the chroma features - not containing information about the melody. - - Args: - output_dim (int): Output dimension for the conditioner. - sample_rate (int): Sample rate for the chroma extractor. - n_chroma (int): Number of chroma bins for the chroma extractor. - radix2_exp (int): Size of stft window for the chroma extractor (power of 2, e.g. 12 -> 2^12). - duration (int): duration used during training. This is later used for correct padding - in case we are using chroma as prefix. - match_len_on_eval (bool, optional): if True then all chromas are padded to the training - duration. Defaults to False. - eval_wavs (str, optional): path to a dataset manifest with waveform, this waveforms are used as - conditions during eval (for cases where we don't want to leak test conditions like MusicCaps). - Defaults to None. - n_eval_wavs (int, optional): limits the number of waveforms used for conditioning. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for the conditioner. - **kwargs: Additional parameters for the chroma extractor. - """ - def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int, - duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None, - n_eval_wavs: int = 0, cache_path: tp.Optional[tp.Union[str, Path]] = None, - device: tp.Union[torch.device, str] = 'cpu', **kwargs): - from demucs import pretrained - super().__init__(dim=n_chroma, output_dim=output_dim, device=device) - self.autocast = TorchAutocast(enabled=device != 'cpu', device_type=self.device, dtype=torch.float32) - self.sample_rate = sample_rate - self.match_len_on_eval = match_len_on_eval - self.duration = duration - self.__dict__['demucs'] = pretrained.get_model('htdemucs').to(device) - stem_sources: list = self.demucs.sources # type: ignore - self.stem_indices = torch.LongTensor([stem_sources.index('vocals'), stem_sources.index('other')]).to(device) - self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, - radix2_exp=radix2_exp, **kwargs).to(device) - self.chroma_len = self._get_chroma_len() - self.eval_wavs: tp.Optional[torch.Tensor] = self._load_eval_wavs(eval_wavs, n_eval_wavs) - self.cache = None - if cache_path is not None: - self.cache = EmbeddingCache(Path(cache_path) / 'wav', self.device, - compute_embed_fn=self._get_full_chroma_for_cache, - extract_embed_fn=self._extract_chroma_chunk) - - def _downsampling_factor(self) -> int: - return self.chroma.winhop - - def _load_eval_wavs(self, path: tp.Optional[str], num_samples: int) -> tp.Optional[torch.Tensor]: - """Load pre-defined waveforms from a json. - These waveforms will be used for chroma extraction during evaluation. - This is done to make the evaluation on MusicCaps fair (we shouldn't see the chromas of MusicCaps). - """ - if path is None: - return None - - logger.info(f"Loading evaluation wavs from {path}") - from audiocraft.data.audio_dataset import AudioDataset - dataset: AudioDataset = AudioDataset.from_meta( - path, segment_duration=self.duration, min_audio_duration=self.duration, - sample_rate=self.sample_rate, channels=1) - - if len(dataset) > 0: - eval_wavs = dataset.collater([dataset[i] for i in range(num_samples)]).to(self.device) - logger.info(f"Using {len(eval_wavs)} evaluation wavs for chroma-stem conditioner") - return eval_wavs - else: - raise ValueError("Could not find evaluation wavs, check lengths of wavs") - - def reset_eval_wavs(self, eval_wavs: tp.Optional[torch.Tensor]) -> None: - self.eval_wavs = eval_wavs - - def has_eval_wavs(self) -> bool: - return self.eval_wavs is not None - - def _sample_eval_wavs(self, num_samples: int) -> torch.Tensor: - """Sample wavs from a predefined list.""" - assert self.eval_wavs is not None, "Cannot sample eval wavs as no eval wavs provided." - total_eval_wavs = len(self.eval_wavs) - out = self.eval_wavs - if num_samples > total_eval_wavs: - out = self.eval_wavs.repeat(num_samples // total_eval_wavs + 1, 1, 1) - return out[torch.randperm(len(out))][:num_samples] - - def _get_chroma_len(self) -> int: - """Get length of chroma during training.""" - dummy_wav = torch.zeros((1, int(self.sample_rate * self.duration)), device=self.device) - dummy_chr = self.chroma(dummy_wav) - return dummy_chr.shape[1] - - @torch.no_grad() - def _get_stemmed_wav(self, wav: torch.Tensor, sample_rate: int) -> torch.Tensor: - """Get parts of the wav that holds the melody, extracting the main stems from the wav.""" - from demucs.apply import apply_model - from demucs.audio import convert_audio - with self.autocast: - wav = convert_audio( - wav, sample_rate, self.demucs.samplerate, self.demucs.audio_channels) # type: ignore - stems = apply_model(self.demucs, wav, device=self.device) - stems = stems[:, self.stem_indices] # extract relevant stems for melody conditioning - mix_wav = stems.sum(1) # merge extracted stems to single waveform - mix_wav = convert_audio(mix_wav, self.demucs.samplerate, self.sample_rate, 1) # type: ignore - return mix_wav - - @torch.no_grad() - def _extract_chroma(self, wav: torch.Tensor) -> torch.Tensor: - """Extract chroma features from the waveform.""" - with self.autocast: - return self.chroma(wav) - - @torch.no_grad() - def _compute_wav_embedding(self, wav: torch.Tensor, sample_rate: int) -> torch.Tensor: - """Compute wav embedding, applying stem and chroma extraction.""" - # avoid 0-size tensors when we are working with null conds - if wav.shape[-1] == 1: - return self._extract_chroma(wav) - stems = self._get_stemmed_wav(wav, sample_rate) - chroma = self._extract_chroma(stems) - return chroma - - @torch.no_grad() - def _get_full_chroma_for_cache(self, path: tp.Union[str, Path], x: WavCondition, idx: int) -> torch.Tensor: - """Extract chroma from the whole audio waveform at the given path.""" - wav, sr = audio_read(path) - wav = wav[None].to(self.device) - wav = convert_audio(wav, sr, self.sample_rate, to_channels=1) - chroma = self._compute_wav_embedding(wav, self.sample_rate)[0] - return chroma - - def _extract_chroma_chunk(self, full_chroma: torch.Tensor, x: WavCondition, idx: int) -> torch.Tensor: - """Extract a chunk of chroma from the full chroma derived from the full waveform.""" - wav_length = x.wav.shape[-1] - seek_time = x.seek_time[idx] - assert seek_time is not None, ( - "WavCondition seek_time is required " - "when extracting chroma chunks from pre-computed chroma.") - full_chroma = full_chroma.float() - frame_rate = self.sample_rate / self._downsampling_factor() - target_length = int(frame_rate * wav_length / self.sample_rate) - index = int(frame_rate * seek_time) - out = full_chroma[index: index + target_length] - out = F.pad(out[None], (0, 0, 0, target_length - out.shape[0]))[0] - return out.to(self.device) - - @torch.no_grad() - def _get_wav_embedding(self, x: WavCondition) -> torch.Tensor: - """Get the wav embedding from the WavCondition. - The conditioner will either extract the embedding on-the-fly computing it from the condition wav directly - or will rely on the embedding cache to load the pre-computed embedding if relevant. - """ - sampled_wav: tp.Optional[torch.Tensor] = None - if not self.training and self.eval_wavs is not None: - warn_once(logger, "Using precomputed evaluation wavs!") - sampled_wav = self._sample_eval_wavs(len(x.wav)) - - no_undefined_paths = all(p is not None for p in x.path) - no_nullified_cond = x.wav.shape[-1] > 1 - if sampled_wav is not None: - chroma = self._compute_wav_embedding(sampled_wav, self.sample_rate) - elif self.cache is not None and no_undefined_paths and no_nullified_cond: - paths = [Path(p) for p in x.path if p is not None] - chroma = self.cache.get_embed_from_cache(paths, x) - else: - assert all(sr == x.sample_rate[0] for sr in x.sample_rate), "All sample rates in batch should be equal." - chroma = self._compute_wav_embedding(x.wav, x.sample_rate[0]) - - if self.match_len_on_eval: - B, T, C = chroma.shape - if T > self.chroma_len: - chroma = chroma[:, :self.chroma_len] - logger.debug(f"Chroma was truncated to match length! ({T} -> {chroma.shape[1]})") - elif T < self.chroma_len: - n_repeat = int(math.ceil(self.chroma_len / T)) - chroma = chroma.repeat(1, n_repeat, 1) - chroma = chroma[:, :self.chroma_len] - logger.debug(f"Chroma was repeated to match length! ({T} -> {chroma.shape[1]})") - - return chroma - - def tokenize(self, x: WavCondition) -> WavCondition: - """Apply WavConditioner tokenization and populate cache if needed.""" - x = super().tokenize(x) - no_undefined_paths = all(p is not None for p in x.path) - if self.cache is not None and no_undefined_paths: - paths = [Path(p) for p in x.path if p is not None] - self.cache.populate_embed_cache(paths, x) - return x - - -class JointEmbeddingConditioner(BaseConditioner): - """Joint embedding conditioning supporting both audio or text conditioning. - - Args: - dim (int): Dimension. - output_dim (int): Output dimension. - device (str): Device. - attribute (str): Attribute used by the conditioner. - autocast_dtype (str): Autocast for the conditioner. - quantize (bool): Whether to quantize the CLAP embedding. - n_q (int): Number of residual quantizers (used if quantize is true). - bins (int): Quantizers' codebooks size (used if quantize is true). - kwargs: Additional parameters for residual vector quantizer. - """ - def __init__(self, dim: int, output_dim: int, device: str, attribute: str, - autocast_dtype: tp.Optional[str] = 'float32', quantize: bool = True, - n_q: int = 12, bins: int = 1024, **kwargs): - super().__init__(dim=dim, output_dim=output_dim) - self.device = device - self.attribute = attribute - if autocast_dtype is None or device == 'cpu': - self.autocast = TorchAutocast(enabled=False) - logger.warning("JointEmbeddingConditioner has no autocast, this might lead to NaN.") - else: - dtype = getattr(torch, autocast_dtype) - assert isinstance(dtype, torch.dtype) - logger.info(f"JointEmbeddingConditioner will be evaluated with autocast as {autocast_dtype}.") - self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype) - # residual vector quantizer to discretize the conditioned embedding - self.quantizer: tp.Optional[ResidualVectorQuantizer] = None - if quantize: - self.quantizer = ResidualVectorQuantizer(dim, n_q=n_q, bins=bins, **kwargs) - - def _get_embed(self, x: JointEmbedCondition) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Get joint embedding in latent space from the inputs. - - Returns: - tuple[torch.Tensor, torch.Tensor]: Tensor for the latent embedding - and corresponding empty indexes. - """ - raise NotImplementedError() - - def forward(self, x: JointEmbedCondition) -> ConditionType: - with self.autocast: - embed, empty_idx = self._get_embed(x) - if self.quantizer is not None: - embed = embed.view(-1, self.dim, 1) - q_res = self.quantizer(embed, frame_rate=1) - out_embed = q_res.x.view(-1, self.dim) - else: - out_embed = embed - out_embed = self.output_proj(out_embed).view(-1, 1, self.output_dim) - mask = torch.ones(*out_embed.shape[:2], device=out_embed.device) - mask[empty_idx, :] = 0 # zero-out index where the input is non-existant - out_embed = (out_embed * mask.unsqueeze(-1)) - return out_embed, mask - - def tokenize(self, x: JointEmbedCondition) -> JointEmbedCondition: - return x - - -class CLAPEmbeddingConditioner(JointEmbeddingConditioner): - """Joint Embedding conditioner based on pre-trained CLAP model. - - This CLAP-based conditioner supports a caching mechanism - over the computed embeddings for faster training. - - Args: - dim (int): Dimension. - output_dim (int): Output dimension. - device (str): Device. - attribute (str): Attribute used by the conditioner. - quantize (bool): Whether to quantize the CLAP embedding. - n_q (int): Number of residual quantizers (used if quantize is true). - bins (int): Quantizers' codebooks size (used if quantize is true). - checkpoint (str): Path to CLAP checkpoint. - model_arch (str): CLAP model architecture. - enable_fusion (bool): Enable fusion for CLAP model. - sample_rate (int): Sample rate used by CLAP model. - max_audio_length (float): Maximum audio length for CLAP model. - audio_stride (float): Stride to use for getting a CLAP embedding on the full sequence. - normalize (bool): Whether to normalize the CLAP embedding. - text_p (float): Probability of using text representation instead of audio at train time. - batch_size (Optional[int]): Batch size for CLAP embedding computation. - autocast_dtype (str): Autocast for the conditioner. - cache_path (Optional[str]): Path for pre-computed embeddings caching. - kwargs: Additional parameters for residual vector quantizer. - """ - def __init__(self, dim: int, output_dim: int, device: str, attribute: str, - quantize: bool, n_q: int, bins: int, checkpoint: tp.Union[str, Path], model_arch: str, - enable_fusion: bool, sample_rate: int, max_audio_length: int, audio_stride: int, - normalize: bool, text_p: bool, batch_size: tp.Optional[int] = None, - autocast_dtype: tp.Optional[str] = 'float32', cache_path: tp.Optional[str] = None, **kwargs): - try: - import laion_clap # type: ignore - except ImportError: - raise ImportError("Please install CLAP to use the CLAPEmbeddingConditioner: 'pip install laion_clap'") - checkpoint = AudioCraftEnvironment.resolve_reference_path(checkpoint) - clap_tokenize = RobertaTokenizer.from_pretrained('roberta-base') - clap_model = laion_clap.CLAP_Module(enable_fusion=enable_fusion, amodel=model_arch) - load_clap_state_dict(clap_model, checkpoint) - clap_model.eval() - clap_model.to(device) - super().__init__(dim=dim, output_dim=output_dim, device=device, attribute=attribute, - autocast_dtype=autocast_dtype, quantize=quantize, n_q=n_q, bins=bins, - **kwargs) - self.checkpoint = checkpoint - self.enable_fusion = enable_fusion - self.model_arch = model_arch - self.clap: laion_clap.CLAP_Module - self.clap_tokenize: RobertaTokenizer - self.clap_sample_rate = sample_rate - self.clap_max_frames = int(self.clap_sample_rate * max_audio_length) - self.clap_stride = int(self.clap_sample_rate * audio_stride) - self.batch_size = batch_size or 1 - self.normalize = normalize - self.text_p = text_p - self.__dict__['clap_tokenize'] = clap_tokenize - self.__dict__['clap'] = clap_model - self.wav_cache, self.text_cache = None, None - if cache_path is not None: - self.wav_cache = EmbeddingCache(Path(cache_path) / 'wav', self.device, - compute_embed_fn=self._get_wav_embedding_for_cache, - extract_embed_fn=self._extract_wav_embedding_chunk) - self.text_cache = EmbeddingCache(Path(cache_path) / 'text', self.device, - compute_embed_fn=self._get_text_embedding_for_cache) - - def _tokenizer(self, texts: tp.Union[str, tp.List[str]]) -> dict: - # we use the default params from CLAP module here as well - return self.clap_tokenize(texts, padding="max_length", truncation=True, max_length=77, return_tensors="pt") - - def _compute_text_embedding(self, text: tp.List[str]) -> torch.Tensor: - """Compute text embedding from CLAP model on a given a batch of text. - - Args: - text (list[str]): List of text for the batch, with B items. - Returns: - torch.Tensor: CLAP embedding derived from text, of shape [B, 1, D], with D the CLAP embedding dimension. - """ - with torch.no_grad(): - embed = self.clap.get_text_embedding(text, tokenizer=self._tokenizer, use_tensor=True) - return embed.view(embed.size(0), 1, embed.size(-1)) - - def _get_text_embedding_for_cache(self, path: tp.Union[Path, str], - x: JointEmbedCondition, idx: int) -> torch.Tensor: - """Get text embedding function for the cache.""" - text = x.text[idx] - text = text if text is not None else "" - return self._compute_text_embedding([text])[0] - - def _preprocess_wav(self, wav: torch.Tensor, length: torch.Tensor, sample_rates: tp.List[int]) -> torch.Tensor: - """Preprocess wav to expected format by CLAP model. - - Args: - wav (torch.Tensor): Audio wav, of shape [B, C, T]. - length (torch.Tensor): Actual length of the audio for each item in the batch, of shape [B]. - sample_rates (list[int]): Sample rates for each sample in the batch - Returns: - torch.Tensor: Audio wav of shape [B, T]. - """ - assert wav.dim() == 3, "Expecting wav to be [B, C, T]" - if sample_rates is not None: - _wav = [] - for i, audio in enumerate(wav): - sr = sample_rates[i] - audio = convert_audio(audio, from_rate=sr, to_rate=self.clap_sample_rate, to_channels=1) - _wav.append(audio) - wav = torch.stack(_wav, dim=0) - wav = wav.mean(dim=1) - return wav - - def _compute_wav_embedding(self, wav: torch.Tensor, length: torch.Tensor, - sample_rates: tp.List[int], reduce_mean: bool = False) -> torch.Tensor: - """Compute audio wave embedding from CLAP model. - - Since CLAP operates on a fixed sequence length audio inputs and we need to process longer audio sequences, - we calculate the wav embeddings on `clap_max_frames` windows with `clap_stride`-second stride and - average the resulting embeddings. - - Args: - wav (torch.Tensor): Audio wav, of shape [B, C, T]. - length (torch.Tensor): Actual length of the audio for each item in the batch, of shape [B]. - sample_rates (list[int]): Sample rates for each sample in the batch. - reduce_mean (bool): Whether to get the average tensor. - Returns: - torch.Tensor: Audio embedding of shape [B, F, D], F being the number of chunks, D the dimension. - """ - with torch.no_grad(): - wav = self._preprocess_wav(wav, length, sample_rates) - B, T = wav.shape - if T >= self.clap_max_frames: - wav = wav.unfold(-1, self.clap_max_frames, self.clap_stride) # [B, F, T] - else: - wav = wav.view(-1, 1, T) # [B, F, T] with F=1 - wav = einops.rearrange(wav, 'b f t -> (b f) t') - embed_list = [] - for i in range(0, wav.size(0), self.batch_size): - _wav = wav[i:i+self.batch_size, ...] - _embed = self.clap.get_audio_embedding_from_data(_wav, use_tensor=True) - embed_list.append(_embed) - embed = torch.cat(embed_list, dim=0) - embed = einops.rearrange(embed, '(b f) d -> b f d', b=B) - if reduce_mean: - embed = embed.mean(dim=1, keepdim=True) - return embed # [B, F, D] with F=1 if reduce_mean is True - - def _get_wav_embedding_for_cache(self, path: tp.Union[str, Path], - x: JointEmbedCondition, idx: int) -> torch.Tensor: - """Compute audio wave embedding for the cache. - The embedding is computed on a given audio read from file. - - Args: - path (str or Path): Path to the full audio file. - Returns: - torch.Tensor: Single-item tensor of shape [F, D], F being the number of chunks, D the dimension. - """ - wav, sr = audio_read(path) # [C, T] - wav = wav.unsqueeze(0).to(self.device) # [1, C, T] - wav_len = torch.LongTensor([wav.shape[-1]]).to(self.device) - embed = self._compute_wav_embedding(wav, wav_len, [sr], reduce_mean=False) # [B, F, D] - return embed.squeeze(0) # [F, D] - - def _extract_wav_embedding_chunk(self, full_embed: torch.Tensor, x: JointEmbedCondition, idx: int) -> torch.Tensor: - """Extract the chunk of embedding matching the seek_time and length from the full CLAP audio embedding. - - Args: - full_embed (torch.Tensor): CLAP embedding computed on the full wave, of shape [F, D]. - x (JointEmbedCondition): Joint embedding condition for the full batch. - idx (int): Index considered for the given embedding to extract. - Returns: - torch.Tensor: Wav embedding averaged on sliding window, of shape [1, D]. - """ - sample_rate = x.sample_rate[idx] - seek_time = x.seek_time[idx] - seek_time = 0. if seek_time is None else seek_time - clap_stride = int(self.clap_stride / self.clap_sample_rate) * sample_rate - end_seek_time = seek_time + self.clap_max_frames / self.clap_sample_rate - start_offset = int(seek_time * sample_rate // clap_stride) - end_offset = int(end_seek_time * sample_rate // clap_stride) - wav_embed = full_embed[start_offset:end_offset, ...] - wav_embed = wav_embed.mean(dim=0, keepdim=True) - return wav_embed.to(self.device) # [F, D] - - def _get_text_embedding(self, x: JointEmbedCondition) -> torch.Tensor: - """Get CLAP embedding from a batch of text descriptions.""" - no_nullified_cond = x.wav.shape[-1] > 1 # we don't want to read from cache when condition dropout - if self.text_cache is not None and no_nullified_cond: - assert all(p is not None for p in x.path), "Cache requires all JointEmbedCondition paths to be provided" - paths = [Path(p) for p in x.path if p is not None] - embed = self.text_cache.get_embed_from_cache(paths, x) - else: - text = [xi if xi is not None else "" for xi in x.text] - embed = self._compute_text_embedding(text) - if self.normalize: - embed = torch.nn.functional.normalize(embed, p=2.0, dim=-1) - return embed - - def _get_wav_embedding(self, x: JointEmbedCondition) -> torch.Tensor: - """Get CLAP embedding from a batch of audio tensors (and corresponding sample rates).""" - no_undefined_paths = all(p is not None for p in x.path) - no_nullified_cond = x.wav.shape[-1] > 1 # we don't want to read from cache when condition dropout - if self.wav_cache is not None and no_undefined_paths and no_nullified_cond: - paths = [Path(p) for p in x.path if p is not None] - embed = self.wav_cache.get_embed_from_cache(paths, x) - else: - embed = self._compute_wav_embedding(x.wav, x.length, x.sample_rate, reduce_mean=True) - if self.normalize: - embed = torch.nn.functional.normalize(embed, p=2.0, dim=-1) - return embed - - def tokenize(self, x: JointEmbedCondition) -> JointEmbedCondition: - # Trying to limit as much as possible sync points when the cache is warm. - no_undefined_paths = all(p is not None for p in x.path) - if self.wav_cache is not None and no_undefined_paths: - assert all([p is not None for p in x.path]), "Cache requires all JointEmbedCondition paths to be provided" - paths = [Path(p) for p in x.path if p is not None] - self.wav_cache.populate_embed_cache(paths, x) - if self.text_cache is not None and no_undefined_paths: - assert all([p is not None for p in x.path]), "Cache requires all JointEmbedCondition paths to be provided" - paths = [Path(p) for p in x.path if p is not None] - self.text_cache.populate_embed_cache(paths, x) - return x - - def _get_embed(self, x: JointEmbedCondition) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Extract shared latent representation from either the wav or the text using CLAP.""" - # decide whether to use text embedding at train time or not - use_text_embed = random.random() < self.text_p - if self.training and not use_text_embed: - embed = self._get_wav_embedding(x) - empty_idx = torch.LongTensor([]) # we assume we always have the audio wav - else: - embed = self._get_text_embedding(x) - empty_idx = torch.LongTensor([i for i, xi in enumerate(x.text) if xi is None or xi == ""]) - return embed, empty_idx - - -def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str) -> ConditioningAttributes: - """Utility function for nullifying an attribute inside an ConditioningAttributes object. - If the condition is of type "wav", then nullify it using `nullify_condition` function. - If the condition is of any other type, set its value to None. - Works in-place. - """ - if condition_type not in ['text', 'wav', 'joint_embed']: - raise ValueError( - "dropout_condition got an unexpected condition type!" - f" expected 'text', 'wav' or 'joint_embed' but got '{condition_type}'" - ) - - if condition not in getattr(sample, condition_type): - raise ValueError( - "dropout_condition received an unexpected condition!" - f" expected wav={sample.wav.keys()} and text={sample.text.keys()}" - f" but got '{condition}' of type '{condition_type}'!" - ) - - if condition_type == 'wav': - wav_cond = sample.wav[condition] - sample.wav[condition] = nullify_wav(wav_cond) - elif condition_type == 'joint_embed': - embed = sample.joint_embed[condition] - sample.joint_embed[condition] = nullify_joint_embed(embed) - else: - sample.text[condition] = None - - return sample - - -class DropoutModule(nn.Module): - """Base module for all dropout modules.""" - def __init__(self, seed: int = 1234): - super().__init__() - self.rng = torch.Generator() - self.rng.manual_seed(seed) - - -class AttributeDropout(DropoutModule): - """Dropout with a given probability per attribute. - This is different from the behavior of ClassifierFreeGuidanceDropout as this allows for attributes - to be dropped out separately. For example, "artist" can be dropped while "genre" remains. - This is in contrast to ClassifierFreeGuidanceDropout where if "artist" is dropped "genre" - must also be dropped. - - Args: - p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example: - ... - "genre": 0.1, - "artist": 0.5, - "wav": 0.25, - ... - active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False. - seed (int, optional): Random seed. - """ - def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234): - super().__init__(seed=seed) - self.active_on_eval = active_on_eval - # construct dict that return the values from p otherwise 0 - self.p = {} - for condition_type, probs in p.items(): - self.p[condition_type] = defaultdict(lambda: 0, probs) - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (list[ConditioningAttributes]): List of conditions. - Returns: - list[ConditioningAttributes]: List of conditions after certain attributes were set to None. - """ - if not self.training and not self.active_on_eval: - return samples - - samples = deepcopy(samples) - for condition_type, ps in self.p.items(): # for condition types [text, wav] - for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre]) - if torch.rand(1, generator=self.rng).item() < p: - for sample in samples: - dropout_condition(sample, condition_type, condition) - return samples - - def __repr__(self): - return f"AttributeDropout({dict(self.p)})" - - -class ClassifierFreeGuidanceDropout(DropoutModule): - """Classifier Free Guidance dropout. - All attributes are dropped with the same probability. - - Args: - p (float): Probability to apply condition dropout during training. - seed (int): Random seed. - """ - def __init__(self, p: float, seed: int = 1234): - super().__init__(seed=seed) - self.p = p - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (list[ConditioningAttributes]): List of conditions. - Returns: - list[ConditioningAttributes]: List of conditions after all attributes were set to None. - """ - if not self.training: - return samples - - # decide on which attributes to drop in a batched fashion - drop = torch.rand(1, generator=self.rng).item() < self.p - if not drop: - return samples - - # nullify conditions of all attributes - samples = deepcopy(samples) - for condition_type in ["wav", "text"]: - for sample in samples: - for condition in sample.attributes[condition_type]: - dropout_condition(sample, condition_type, condition) - return samples - - def __repr__(self): - return f"ClassifierFreeGuidanceDropout(p={self.p})" - - -class ConditioningProvider(nn.Module): - """Prepare and provide conditions given all the supported conditioners. - - Args: - conditioners (dict): Dictionary of conditioners. - device (torch.device or str, optional): Device for conditioners and output condition types. - """ - def __init__(self, conditioners: tp.Dict[str, BaseConditioner], device: tp.Union[torch.device, str] = "cpu"): - super().__init__() - self.device = device - self.conditioners = nn.ModuleDict(conditioners) - - @property - def joint_embed_conditions(self): - return [m.attribute for m in self.conditioners.values() if isinstance(m, JointEmbeddingConditioner)] - - @property - def has_joint_embed_conditions(self): - return len(self.joint_embed_conditions) > 0 - - @property - def text_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)] - - @property - def wav_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)] - - @property - def has_wav_condition(self): - return len(self.wav_conditions) > 0 - - def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]: - """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly. - This should be called before starting any real GPU work to avoid synchronization points. - This will return a dict matching conditioner names to their arbitrary tokenized representations. - - Args: - inputs (list[ConditioningAttributes]): List of ConditioningAttributes objects containing - text and wav conditions. - """ - assert all([isinstance(x, ConditioningAttributes) for x in inputs]), ( - "Got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]", - f" but types were {set([type(x) for x in inputs])}" - ) - - output = {} - text = self._collate_text(inputs) - wavs = self._collate_wavs(inputs) - joint_embeds = self._collate_joint_embeds(inputs) - - assert set(text.keys() | wavs.keys() | joint_embeds.keys()).issubset(set(self.conditioners.keys())), ( - f"Got an unexpected attribute! Expected {self.conditioners.keys()}, ", - f"got {text.keys(), wavs.keys(), joint_embeds.keys()}" - ) - - for attribute, batch in chain(text.items(), wavs.items(), joint_embeds.items()): - output[attribute] = self.conditioners[attribute].tokenize(batch) - return output - - def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]: - """Compute pairs of `(embedding, mask)` using the configured conditioners and the tokenized representations. - The output is for example: - { - "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])), - "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])), - ... - } - - Args: - tokenized (dict): Dict of tokenized representations as returned by `tokenize()`. - """ - output = {} - for attribute, inputs in tokenized.items(): - condition, mask = self.conditioners[attribute](inputs) - output[attribute] = (condition, mask) - return output - - def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]: - """Given a list of ConditioningAttributes objects, compile a dictionary where the keys - are the attributes and the values are the aggregated input per attribute. - For example: - Input: - [ - ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...), - ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...), - ] - Output: - { - "genre": ["Rock", "Hip-hop"], - "description": ["A rock song with a guitar solo", "A hip-hop verse"] - } - - Args: - samples (list of ConditioningAttributes): List of ConditioningAttributes samples. - Returns: - dict[str, list[str, optional]]: A dictionary mapping an attribute name to text batch. - """ - out: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list) - texts = [x.text for x in samples] - for text in texts: - for condition in self.text_conditions: - out[condition].append(text[condition]) - return out - - def _collate_wavs(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, WavCondition]: - """Generate a dict where the keys are attributes by which we fetch similar wavs, - and the values are Tensors of wavs according to said attributes. - - *Note*: by the time the samples reach this function, each sample should have some waveform - inside the "wav" attribute. It should be either: - 1. A real waveform - 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset) - 3. A null waveform due to it being dropped in a dropout module (nullified by dropout) - - Args: - samples (list of ConditioningAttributes): List of ConditioningAttributes samples. - Returns: - dict[str, WavCondition]: A dictionary mapping an attribute name to wavs. - """ - wavs = defaultdict(list) - lengths = defaultdict(list) - sample_rates = defaultdict(list) - paths = defaultdict(list) - seek_times = defaultdict(list) - out: tp.Dict[str, WavCondition] = {} - - for sample in samples: - for attribute in self.wav_conditions: - wav, length, sample_rate, path, seek_time = sample.wav[attribute] - assert wav.dim() == 3, f"Got wav with dim={wav.dim()}, but expected 3 [1, C, T]" - assert wav.size(0) == 1, f"Got wav [B, C, T] with shape={wav.shape}, but expected B == 1" - # mono-channel conditioning - wav = wav.mean(1, keepdim=True) # [1, 1, T] - wavs[attribute].append(wav.flatten()) # [T] - lengths[attribute].append(length) - sample_rates[attribute].extend(sample_rate) - paths[attribute].extend(path) - seek_times[attribute].extend(seek_time) - - # stack all wavs to a single tensor - for attribute in self.wav_conditions: - stacked_wav, _ = collate(wavs[attribute], dim=0) - out[attribute] = WavCondition( - stacked_wav.unsqueeze(1), torch.cat(lengths[attribute]), sample_rates[attribute], - paths[attribute], seek_times[attribute]) - - return out - - def _collate_joint_embeds(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, JointEmbedCondition]: - """Generate a dict where the keys are attributes by which we compute joint embeddings, - and the values are Tensors of pre-computed embeddings and the corresponding text attributes. - - Args: - samples (list[ConditioningAttributes]): List of ConditioningAttributes samples. - Returns: - A dictionary mapping an attribute name to joint embeddings. - """ - texts = defaultdict(list) - wavs = defaultdict(list) - lengths = defaultdict(list) - sample_rates = defaultdict(list) - paths = defaultdict(list) - seek_times = defaultdict(list) - channels: int = 0 - - out = {} - for sample in samples: - for attribute in self.joint_embed_conditions: - wav, text, length, sample_rate, path, seek_time = sample.joint_embed[attribute] - assert wav.dim() == 3 - if channels == 0: - channels = wav.size(1) - else: - assert channels == wav.size(1), "not all audio has same number of channels in batch" - assert wav.size(0) == 1, "Expecting single-wav batch in the collate method" - wav = einops.rearrange(wav, "b c t -> (b c t)") # [1, C, T] => [C * T] - wavs[attribute].append(wav) - texts[attribute].extend(text) - lengths[attribute].append(length) - sample_rates[attribute].extend(sample_rate) - paths[attribute].extend(path) - seek_times[attribute].extend(seek_time) - - for attribute in self.joint_embed_conditions: - stacked_texts = texts[attribute] - stacked_paths = paths[attribute] - stacked_seek_times = seek_times[attribute] - stacked_wavs = pad_sequence(wavs[attribute]).to(self.device) - stacked_wavs = einops.rearrange(stacked_wavs, "(c t) b -> b c t", c=channels) - stacked_sample_rates = sample_rates[attribute] - stacked_lengths = torch.cat(lengths[attribute]).to(self.device) - assert stacked_lengths.size(0) == stacked_wavs.size(0) - assert len(stacked_sample_rates) == stacked_wavs.size(0) - assert len(stacked_texts) == stacked_wavs.size(0) - out[attribute] = JointEmbedCondition( - text=stacked_texts, wav=stacked_wavs, - length=stacked_lengths, sample_rate=stacked_sample_rates, - path=stacked_paths, seek_time=stacked_seek_times) - - return out - - -class ConditionFuser(StreamingModule): - """Condition fuser handles the logic to combine the different conditions - to the actual model input. - - Args: - fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse - each condition. For example: - { - "prepend": ["description"], - "sum": ["genre", "bpm"], - "cross": ["description"], - } - cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention. - cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used. - """ - FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"] - - def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False, - cross_attention_pos_emb_scale: float = 1.0): - super().__init__() - assert all( - [k in self.FUSING_METHODS for k in fuse2cond.keys()] - ), f"Got invalid fuse method, allowed methods: {self.FUSING_METHODS}" - self.cross_attention_pos_emb = cross_attention_pos_emb - self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale - self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond - self.cond2fuse: tp.Dict[str, str] = {} - for fuse_method, conditions in fuse2cond.items(): - for condition in conditions: - self.cond2fuse[condition] = fuse_method - - def forward( - self, - input: torch.Tensor, - conditions: tp.Dict[str, ConditionType] - ) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """Fuse the conditions to the provided model input. - - Args: - input (torch.Tensor): Transformer input. - conditions (dict[str, ConditionType]): Dict of conditions. - Returns: - tuple[torch.Tensor, torch.Tensor]: The first tensor is the transformer input - after the conditions have been fused. The second output tensor is the tensor - used for cross-attention or None if no cross attention inputs exist. - """ - B, T, _ = input.shape - - if 'offsets' in self._streaming_state: - first_step = False - offsets = self._streaming_state['offsets'] - else: - first_step = True - offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device) - - assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \ - f"given conditions contain unknown attributes for fuser, " \ - f"expected {self.cond2fuse.keys()}, got {conditions.keys()}" - cross_attention_output = None - for cond_type, (cond, cond_mask) in conditions.items(): - op = self.cond2fuse[cond_type] - if op == 'sum': - input += cond - elif op == 'input_interpolate': - cond = einops.rearrange(cond, "b t d -> b d t") - cond = F.interpolate(cond, size=input.shape[1]) - input += einops.rearrange(cond, "b d t -> b t d") - elif op == 'prepend': - if first_step: - input = torch.cat([cond, input], dim=1) - elif op == 'cross': - if cross_attention_output is not None: - cross_attention_output = torch.cat([cross_attention_output, cond], dim=1) - else: - cross_attention_output = cond - else: - raise ValueError(f"unknown op ({op})") - - if self.cross_attention_pos_emb and cross_attention_output is not None: - positions = torch.arange( - cross_attention_output.shape[1], - device=cross_attention_output.device - ).view(1, -1, 1) - pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1]) - cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return input, cross_attention_output diff --git a/spaces/trysem/Colorizer_Models/README.md b/spaces/trysem/Colorizer_Models/README.md deleted file mode 100644 index ff506a6b2d392d2a117b9ea74cdd986e10ed9910..0000000000000000000000000000000000000000 --- a/spaces/trysem/Colorizer_Models/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Colorizer Models -emoji: 🌈🎨 -colorFrom: red -colorTo: orange -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false -license: bsd-2-clause -duplicated_from: nightfury/Colorizer_Models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tsi-org/LLaVA/llava/model/language_model/llava_mpt.py b/spaces/tsi-org/LLaVA/llava/model/language_model/llava_mpt.py deleted file mode 100644 index 39dc8807ef8d339fb7cde331c0deabfe5ce7f93e..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/model/language_model/llava_mpt.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright 2023 Haotian Liu -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from typing import List, Optional, Tuple -import warnings - -import torch -import torch.nn.functional as F -import math - -from transformers import AutoConfig, AutoModelForCausalLM -from transformers.modeling_outputs import CausalLMOutputWithPast - -from .mpt.modeling_mpt import MPTConfig, MPTForCausalLM, MPTModel -from llava.model.llava_arch import LlavaMetaModel, LlavaMetaForCausalLM - - -class LlavaMPTConfig(MPTConfig): - model_type = "llava_mpt" - - -class LlavaMPTModel(LlavaMetaModel, MPTModel): - config_class = LlavaMPTConfig - - def __init__(self, config: MPTConfig): - config.hidden_size = config.d_model - super(LlavaMPTModel, self).__init__(config) - - def embed_tokens(self, x): - return self.wte(x) - - -class LlavaMPTForCausalLM(MPTForCausalLM, LlavaMetaForCausalLM): - config_class = LlavaMPTConfig - supports_gradient_checkpointing = True - - def __init__(self, config): - super(MPTForCausalLM, self).__init__(config) - - if not config.tie_word_embeddings: - raise ValueError('MPTForCausalLM only supports tied word embeddings') - self.transformer = LlavaMPTModel(config) - self.logit_scale = None - if config.logit_scale is not None: - logit_scale = config.logit_scale - if isinstance(logit_scale, str): - if logit_scale == 'inv_sqrt_d_model': - logit_scale = 1 / math.sqrt(config.d_model) - else: - raise ValueError(f"logit_scale={logit_scale!r} is not recognized as an option; use numeric value or 'inv_sqrt_d_model'.") - self.logit_scale = logit_scale - - def get_model(self): - return self.transformer - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, LlavaMPTModel): - module.gradient_checkpointing = value - - def forward(self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]]=None, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None, labels: Optional[torch.LongTensor]=None, return_dict: Optional[bool]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, use_cache: Optional[bool]=None, images=None): - return_dict = return_dict if return_dict is not None else self.config.return_dict - use_cache = use_cache if use_cache is not None else self.config.use_cache - - input_ids, attention_mask, past_key_values, inputs_embeds, labels = self.prepare_inputs_labels_for_multimodal(input_ids, attention_mask, past_key_values, labels, images) - outputs = self.transformer(input_ids=input_ids, inputs_embeds=inputs_embeds, past_key_values=past_key_values, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id, return_dict=return_dict, output_attentions=output_attentions, output_hidden_states=output_hidden_states, use_cache=use_cache) - # FIXME: this is a hack to fix the multiple gpu inference issue in https://github.com/haotian-liu/LLaVA/issues/338 - logits = F.linear(outputs.last_hidden_state.to(self.transformer.wte.weight.device), self.transformer.wte.weight) - if self.logit_scale is not None: - if self.logit_scale == 0: - warnings.warn(f'Multiplying logits by self.logit_scale={self.logit_scale!r}. This will produce uniform (uninformative) outputs.') - logits *= self.logit_scale - loss = None - if labels is not None: - labels = torch.roll(labels, shifts=-1) - labels[:, -1] = -100 - loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.to(logits.device).view(-1)) - return CausalLMOutputWithPast(loss=loss, logits=logits, past_key_values=outputs.past_key_values, hidden_states=outputs.hidden_states) - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs): - if inputs_embeds is not None: - raise NotImplementedError('inputs_embeds is not implemented for MPT yet') - attention_mask = kwargs['attention_mask'].bool() - if attention_mask[:, -1].sum() != attention_mask.shape[0]: - raise NotImplementedError('MPT does not support generation with right padding.') - if self.transformer.attn_uses_sequence_id and self.training: - sequence_id = torch.zeros_like(input_ids[:1]) - else: - sequence_id = None - if past_key_values is not None: - input_ids = input_ids[:, -1].unsqueeze(-1) - if self.transformer.prefix_lm: - prefix_mask = torch.ones_like(attention_mask) - if kwargs.get('use_cache') == False: - raise NotImplementedError('MPT with prefix_lm=True does not support use_cache=False.') - else: - prefix_mask = None - return {'input_ids': input_ids, 'attention_mask': attention_mask, 'prefix_mask': prefix_mask, 'sequence_id': sequence_id, 'past_key_values': past_key_values, 'use_cache': kwargs.get('use_cache', True), "images": kwargs.get("images", None)} - - -AutoConfig.register("llava_mpt", LlavaMPTConfig) -AutoModelForCausalLM.register(LlavaMPTConfig, LlavaMPTForCausalLM) diff --git a/spaces/ttt246/brain/Brain/README.md b/spaces/ttt246/brain/Brain/README.md deleted file mode 100644 index 83dea07dda182c950c7b8909c756a28f048f2b85..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Brain/README.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: RisingBrain -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -app_file: app.py -sdk: gradio -sdk_version: 2.9.1 -python_version: 3.10.4 -pinned: false -license: other ---- - -# 🧠 RisingBrain - Powering Your AI Enhanced OS 💡 - -Welcome to the heartbeat of **RisingBrain**, our main backend component. ⚽ Kickstart your **RisingBrain** project right from here. -

          - -

          - -## Running FastAPI Application 🚀 -Our backend runs on a FastAPI application. Here's a quick guide to get it up and running: - -### Step 1: Install all required packages using the provided requirements.txt file. - - ``` bash - pip install -r requirements.txt - ``` - -### Step 2: Start the FastAPI application with hot reloads enabled using Uvicorn. - ``` bash - uvicorn app:app --reload - ``` - -Bravo!👏 You should now see your **Brain Backend** is alive and ready for action, empowering your AI interactions in **RisingBrain**. - -Happy coding! 🎉 - -## Contributing 💪 -We appreciate your interest in enhancing our work! Please respect the style and contribution guidelines of every project when submitting patches and additions. Our general Git workflow of choice is "fork-and-pull". - - 1. **Fork** the repository on GitHub - 2. **Clone** your fork to your machine - 3. **Commit** the changes to your personal branch - 4. **Push** these updates back to your fork - 5. Don't forget to submit a **Pull Request** for us to study your contributions. - -NOTE: Sync with "upstream" to have the latest updates before you make a pull request! diff --git a/spaces/ttt246/brain/Brain/tests/functional/__init__.py b/spaces/ttt246/brain/Brain/tests/functional/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ulysses115/diffsvc_test/modules/hifigan/hifigan.py b/spaces/ulysses115/diffsvc_test/modules/hifigan/hifigan.py deleted file mode 100644 index ae7e61f56b00d60bcc49a18ece3edbe54746f7ea..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/modules/hifigan/hifigan.py +++ /dev/null @@ -1,365 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from modules.parallel_wavegan.layers import UpsampleNetwork, ConvInUpsampleNetwork -from modules.parallel_wavegan.models.source import SourceModuleHnNSF -import numpy as np - -LRELU_SLOPE = 0.1 - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Conv1d1x1(Conv1d): - """1x1 Conv1d with customized initialization.""" - - def __init__(self, in_channels, out_channels, bias): - """Initialize 1x1 Conv1d module.""" - super(Conv1d1x1, self).__init__(in_channels, out_channels, - kernel_size=1, padding=0, - dilation=1, bias=bias) - - -class HifiGanGenerator(torch.nn.Module): - def __init__(self, h, c_out=1): - super(HifiGanGenerator, self).__init__() - self.h = h - self.num_kernels = len(h['resblock_kernel_sizes']) - self.num_upsamples = len(h['upsample_rates']) - - if h['use_pitch_embed']: - self.harmonic_num = 8 - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h['upsample_rates'])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h['audio_sample_rate'], - harmonic_num=self.harmonic_num) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(80, h['upsample_initial_channel'], 7, 1, padding=3)) - resblock = ResBlock1 if h['resblock'] == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h['upsample_rates'], h['upsample_kernel_sizes'])): - c_cur = h['upsample_initial_channel'] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(c_cur * 2, c_cur, k, u, padding=(k - u) // 2))) - if h['use_pitch_embed']: - if i + 1 < len(h['upsample_rates']): - stride_f0 = np.prod(h['upsample_rates'][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h['upsample_initial_channel'] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h['resblock_kernel_sizes'], h['resblock_dilation_sizes'])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, c_out, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x, f0=None): - if f0 is not None: - # harmonic-source signal, noise-source signal, uv flag - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - if f0 is not None: - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False, use_cond=False, c_in=1): - super(DiscriminatorP, self).__init__() - self.use_cond = use_cond - if use_cond: - from utils.hparams import hparams - t = hparams['hop_size'] - self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2) - c_in = 2 - - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(c_in, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x, mel): - fmap = [] - if self.use_cond: - x_mel = self.cond_net(mel) - x = torch.cat([x_mel, x], 1) - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_cond=False, c_in=1): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorP(2, use_cond=use_cond, c_in=c_in), - DiscriminatorP(3, use_cond=use_cond, c_in=c_in), - DiscriminatorP(5, use_cond=use_cond, c_in=c_in), - DiscriminatorP(7, use_cond=use_cond, c_in=c_in), - DiscriminatorP(11, use_cond=use_cond, c_in=c_in), - ]) - - def forward(self, y, y_hat, mel=None): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y, mel) - y_d_g, fmap_g = d(y_hat, mel) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False, use_cond=False, upsample_rates=None, c_in=1): - super(DiscriminatorS, self).__init__() - self.use_cond = use_cond - if use_cond: - t = np.prod(upsample_rates) - self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2) - c_in = 2 - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(c_in, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x, mel): - if self.use_cond: - x_mel = self.cond_net(mel) - x = torch.cat([x_mel, x], 1) - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self, use_cond=False, c_in=1): - super(MultiScaleDiscriminator, self).__init__() - from utils.hparams import hparams - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True, use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 16], - c_in=c_in), - DiscriminatorS(use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 32], - c_in=c_in), - DiscriminatorS(use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 64], - c_in=c_in), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=1), - AvgPool1d(4, 2, padding=1) - ]) - - def forward(self, y, y_hat, mel=None): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y, mel) - y_d_g, fmap_g = d(y_hat, mel) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - r_losses = 0 - g_losses = 0 - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - r_losses += r_loss - g_losses += g_loss - r_losses = r_losses / len(disc_real_outputs) - g_losses = g_losses / len(disc_real_outputs) - return r_losses, g_losses - - -def cond_discriminator_loss(outputs): - loss = 0 - for dg in outputs: - g_loss = torch.mean(dg ** 2) - loss += g_loss - loss = loss / len(outputs) - return loss - - -def generator_loss(disc_outputs): - loss = 0 - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - loss += l - loss = loss / len(disc_outputs) - return loss diff --git a/spaces/ulysses115/diffsvc_test/utils/plot.py b/spaces/ulysses115/diffsvc_test/utils/plot.py deleted file mode 100644 index bdca62a8cd80869c707890cd9febd39966cd3658..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/utils/plot.py +++ /dev/null @@ -1,56 +0,0 @@ -import matplotlib.pyplot as plt -import numpy as np -import torch - -LINE_COLORS = ['w', 'r', 'y', 'cyan', 'm', 'b', 'lime'] - - -def spec_to_figure(spec, vmin=None, vmax=None): - if isinstance(spec, torch.Tensor): - spec = spec.cpu().numpy() - fig = plt.figure(figsize=(12, 6)) - plt.pcolor(spec.T, vmin=vmin, vmax=vmax) - return fig - - -def spec_f0_to_figure(spec, f0s, figsize=None): - max_y = spec.shape[1] - if isinstance(spec, torch.Tensor): - spec = spec.detach().cpu().numpy() - f0s = {k: f0.detach().cpu().numpy() for k, f0 in f0s.items()} - f0s = {k: f0 / 10 for k, f0 in f0s.items()} - fig = plt.figure(figsize=(12, 6) if figsize is None else figsize) - plt.pcolor(spec.T) - for i, (k, f0) in enumerate(f0s.items()): - plt.plot(f0.clip(0, max_y), label=k, c=LINE_COLORS[i], linewidth=1, alpha=0.8) - plt.legend() - return fig - - -def dur_to_figure(dur_gt, dur_pred, txt): - dur_gt = dur_gt.long().cpu().numpy() - dur_pred = dur_pred.long().cpu().numpy() - dur_gt = np.cumsum(dur_gt) - dur_pred = np.cumsum(dur_pred) - fig = plt.figure(figsize=(12, 6)) - for i in range(len(dur_gt)): - shift = (i % 8) + 1 - plt.text(dur_gt[i], shift, txt[i]) - plt.text(dur_pred[i], 10 + shift, txt[i]) - plt.vlines(dur_gt[i], 0, 10, colors='b') # blue is gt - plt.vlines(dur_pred[i], 10, 20, colors='r') # red is pred - return fig - - -def f0_to_figure(f0_gt, f0_cwt=None, f0_pred=None): - fig = plt.figure() - f0_gt = f0_gt.cpu().numpy() - plt.plot(f0_gt, color='r', label='gt') - if f0_cwt is not None: - f0_cwt = f0_cwt.cpu().numpy() - plt.plot(f0_cwt, color='b', label='cwt') - if f0_pred is not None: - f0_pred = f0_pred.cpu().numpy() - plt.plot(f0_pred, color='green', label='pred') - plt.legend() - return fig diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/APR H4S Platinum Hacking Software [PATCHED] Free Download.rar.md b/spaces/usbethFlerru/sovits-modelsV2/example/APR H4S Platinum Hacking Software [PATCHED] Free Download.rar.md deleted file mode 100644 index 8a96326510205bfe43d53c200c1228e6bb9d8d3c..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/APR H4S Platinum Hacking Software [PATCHED] Free Download.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

          APR H4S platinum hacking software free download.rar


          Download Zip >> https://urlcod.com/2uyUYZ



          -
          -Which is the best hacking tool that is capable of hacking an instagram account to ... Free Download Filename: APR H4S Platinum Hacking Software v [New] rar. 1fdad05405
          -
          -
          -

          diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/midas/vit.py b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/vilsonrodrigues/youtube-retrieval-qa/qa/vector_store.py b/spaces/vilsonrodrigues/youtube-retrieval-qa/qa/vector_store.py deleted file mode 100644 index 0e439a88f3ee1eef80b1344bd69373be1bb22f57..0000000000000000000000000000000000000000 --- a/spaces/vilsonrodrigues/youtube-retrieval-qa/qa/vector_store.py +++ /dev/null @@ -1,25 +0,0 @@ -from typing import Callable, List - -def create_vector_store( - docs: List, - metric: str = 'cos', - top_k: int = 4 -) -> Callable: - - from langchain.vectorstores import FAISS - from langchain.embeddings.openai import OpenAIEmbeddings - - embeddings = OpenAIEmbeddings() - - # Embed your documents and combine with the raw text in a pseudo db. - # Note: This will make an API call to OpenAI - docsearch = FAISS.from_documents(docs, embeddings) - - # Retriver object - retriever = docsearch.as_retriever() - - # Retriver configs - retriever.search_kwargs['distance_metric'] = metric - retriever.search_kwargs['k'] = top_k - - return retriever \ No newline at end of file diff --git a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/docs/install.md b/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/docs/install.md deleted file mode 100644 index 6314a40441285e9236438e468caf8b71a407531a..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/docs/install.md +++ /dev/null @@ -1,51 +0,0 @@ -## v1.8.0 -### Linux and Windows -```shell -# CUDA 11.0 -pip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 - -# CPU only -pip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -``` - - -## v1.7.1 -### Linux and Windows -```shell -# CUDA 11.0 -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 - -# CUDA 10.1 -pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html -``` - - -## v1.6.0 - -### Linux and Windows -```shell -# CUDA 10.2 -pip install torch==1.6.0 torchvision==0.7.0 - -# CUDA 10.1 -pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html -``` \ No newline at end of file diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/cnn/bricks/activation.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/cnn/bricks/activation.py deleted file mode 100644 index cab2712287d5ef7be2f079dcb54a94b96394eab5..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/cnn/bricks/activation.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, build_from_cfg, digit_version -from .registry import ACTIVATION_LAYERS - -for module in [ - nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.RReLU, nn.ReLU6, nn.ELU, - nn.Sigmoid, nn.Tanh -]: - ACTIVATION_LAYERS.register_module(module=module) - - -@ACTIVATION_LAYERS.register_module(name='Clip') -@ACTIVATION_LAYERS.register_module() -class Clamp(nn.Module): - """Clamp activation layer. - - This activation function is to clamp the feature map value within - :math:`[min, max]`. More details can be found in ``torch.clamp()``. - - Args: - min (Number | optional): Lower-bound of the range to be clamped to. - Default to -1. - max (Number | optional): Upper-bound of the range to be clamped to. - Default to 1. - """ - - def __init__(self, min=-1., max=1.): - super(Clamp, self).__init__() - self.min = min - self.max = max - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): The input tensor. - - Returns: - torch.Tensor: Clamped tensor. - """ - return torch.clamp(x, min=self.min, max=self.max) - - -class GELU(nn.Module): - r"""Applies the Gaussian Error Linear Units function: - - .. math:: - \text{GELU}(x) = x * \Phi(x) - where :math:`\Phi(x)` is the Cumulative Distribution Function for - Gaussian Distribution. - - Shape: - - Input: :math:`(N, *)` where `*` means, any number of additional - dimensions - - Output: :math:`(N, *)`, same shape as the input - - .. image:: scripts/activation_images/GELU.png - - Examples:: - - >>> m = nn.GELU() - >>> input = torch.randn(2) - >>> output = m(input) - """ - - def forward(self, input): - return F.gelu(input) - - -if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.4')): - ACTIVATION_LAYERS.register_module(module=GELU) -else: - ACTIVATION_LAYERS.register_module(module=nn.GELU) - - -def build_activation_layer(cfg): - """Build activation layer. - - Args: - cfg (dict): The activation layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an activation layer. - - Returns: - nn.Module: Created activation layer. - """ - return build_from_cfg(cfg, ACTIVATION_LAYERS) diff --git a/spaces/vumichien/canvas_controlnet/ldm/modules/image_degradation/__init__.py b/spaces/vumichien/canvas_controlnet/ldm/modules/image_degradation/__init__.py deleted file mode 100644 index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/ldm/modules/image_degradation/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr -from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light diff --git a/spaces/waheedwaqar/Toyota_Youtube_Chatbot/README.md b/spaces/waheedwaqar/Toyota_Youtube_Chatbot/README.md deleted file mode 100644 index df232b75ddd483ef4a7f37af1242e8e4f604cce1..0000000000000000000000000000000000000000 --- a/spaces/waheedwaqar/Toyota_Youtube_Chatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Toyota Youtube Chatbot -emoji: 😻 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wy213/213a/src/components/ui/dialog.tsx b/spaces/wy213/213a/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -
          - {children} -
          -
          -) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/wy213/213a/src/pages/api/blob.ts b/spaces/wy213/213a/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/wydgg/bingo-wyd-ai/next.config.js b/spaces/wydgg/bingo-wyd-ai/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/wydgg/bingo-wyd-ai/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/xswu/HPSv2/src/open_clip/modified_resnet.py b/spaces/xswu/HPSv2/src/open_clip/modified_resnet.py deleted file mode 100644 index 6a8d3aeda91ecb394303becbbfccc8acd8cddcd9..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/src/open_clip/modified_resnet.py +++ /dev/null @@ -1,181 +0,0 @@ -from collections import OrderedDict - -import torch -from torch import nn -from torch.nn import functional as F - -from .utils import freeze_batch_norm_2d - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.act1 = nn.ReLU(inplace=True) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.act2 = nn.ReLU(inplace=True) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.act3 = nn.ReLU(inplace=True) - - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential(OrderedDict([ - ("-1", nn.AvgPool2d(stride)), - ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)), - ("1", nn.BatchNorm2d(planes * self.expansion)) - ])) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.act1(self.bn1(self.conv1(x))) - out = self.act2(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.act3(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None): - super().__init__() - self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x, key=x, value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0., - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False - ) - - return x[0] - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. - - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, image_size=224, width=64): - super().__init__() - self.output_dim = output_dim - self.image_size = image_size - - # the 3-layer stem - self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(width // 2) - self.act1 = nn.ReLU(inplace=True) - self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(width // 2) - self.act2 = nn.ReLU(inplace=True) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.act3 = nn.ReLU(inplace=True) - self.avgpool = nn.AvgPool2d(2) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(image_size // 32, embed_dim, heads, output_dim) - - self.init_parameters() - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def init_parameters(self): - if self.attnpool is not None: - std = self.attnpool.c_proj.in_features ** -0.5 - nn.init.normal_(self.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.attnpool.c_proj.weight, std=std) - - for resnet_block in [self.layer1, self.layer2, self.layer3, self.layer4]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert unlocked_groups == 0, 'partial locking not currently supported for this model' - for param in self.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self) - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - # FIXME support for non-transformer - pass - - def stem(self, x): - x = self.act1(self.bn1(self.conv1(x))) - x = self.act2(self.bn2(self.conv2(x))) - x = self.act3(self.bn3(self.conv3(x))) - x = self.avgpool(x) - return x - - def forward(self, x): - x = self.stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x diff --git a/spaces/xxie92/antibody_visulization/abnumber/alignment.py b/spaces/xxie92/antibody_visulization/abnumber/alignment.py deleted file mode 100644 index 625303d0a310406060120c1cddec10e6e420b398..0000000000000000000000000000000000000000 --- a/spaces/xxie92/antibody_visulization/abnumber/alignment.py +++ /dev/null @@ -1,195 +0,0 @@ -from typing import Union - -from abnumber.common import is_similar_residue, is_integer -from abnumber.position import Position - - -class Alignment: - """Antibody chain alignment of two or more chains - - >>> from abnumber import Chain - >>> - >>> seq1 = 'QVQLQQSGAELARPGASVKMSCKASGYTFTRYTMHWVKQRPGQGLEWIGYINPSRGYTNYNQKFKDKATLTTDKSSSTAYMQLSSLTSEDSAVYYCARYYDDHYCLDYWGQGTTLTVSSAKTTAP' - >>> chain1 = Chain(seq1, scheme='imgt') - >>> - >>> seq2 = 'QVQLVQSGAELDRPGATVKMSCKASGYTTTRYTMHWVKQRPGQGLDWIGYINPSDRSYTNYNQKFKDKATLTTDKSSSTAYMQKTSLTSEDSAVYYCARYYDDYLDRWGQGTTLTVSSAKTTAP' - >>> chain2 = Chain(seq2, scheme='imgt') - >>> alignment = chain1.align(chain2) - - Alignment can be sliced and iterated: - - >>> for pos, (aa, bb) in alignment[:'5']: - >>> print(pos, aa, bb) - H1 Q Q - H2 V V - H3 Q Q - H4 L L - H5 Q V - ... - - """ - def __init__(self, positions, residues, scheme, chain_type): - assert isinstance(positions, list), 'Expected list of positions and residues. ' \ - 'Use chain.align(other) to create an alignment.' - assert len(positions) == len(residues) - unique_cdr_definitions = set(pos.cdr_definition for pos in positions) - assert len(unique_cdr_definitions) <= 1, f'Aligned chains should use the same CDR definitions, got: {unique_cdr_definitions}' - self.positions = positions - self.residues = residues - self.scheme = scheme - self.chain_type = chain_type - self._zipped = list(zip(self.positions, self.residues)) - - def __repr__(self): - return self.format() - - def __iter__(self): - yield from self._zipped.__iter__() - - def __len__(self): - return len(self.positions) - - def __getitem__(self, item): - if isinstance(item, slice): - if item.step is not None and item.step != 1: - raise IndexError(f'Slicing with step != 1 is not implemented, got: {item}') - return self.slice(start=item.start, stop=item.stop) - pos = self._parse_position(item) - raw_pos = self.positions.index(pos) - return self.residues[raw_pos] - - def slice(self, start: Union[str, int, 'Position'] = None, stop: Union[str, int, 'Position'] = None, - stop_inclusive: bool = True, allow_raw: bool = False): - """Create a slice of this alignment - - You can also slice directly using ``alignment['111':'112A']`` or ``alignment.raw[10:20]``. - - :param start: Slice start position (inclusive), :class:`Position` or string (e.g. '111A') - :param stop: Slice stop position (inclusive), :class:`Position` or string (e.g. '112A') - :param stop_inclusive: Include stop position in slice - :param allow_raw: Allow unaligned numeric indexing from 0 to length of sequence - 1 - :return: new sliced Alignment object - """ - - start = self._parse_position(start, allow_raw=allow_raw) if start is not None else None - stop = self._parse_position(stop, allow_raw=allow_raw) if stop is not None else None - - new_positions = [] - new_residues = [] - for pos, residues in zip(self.positions, self.residues): - if start is not None and pos < start: - continue - if stop is not None and (pos > stop or (not stop_inclusive and pos >= stop)): - break - new_positions.append(pos) - new_residues.append(residues) - - return Alignment(positions=new_positions, residues=new_residues, scheme=self.scheme, chain_type=self.chain_type) - - def _parse_position(self, position: Union[int, str, 'Position'], allow_raw=False): - """Create :class:`Position` key object from string or int. - - Note: The position should only be used for indexing, CDR definition is not preserved! - - :param position: Numeric or string position representation - :param allow_raw: Also allow unaligned numeric (int) indexing from 0 to length of sequence - 1 - :return: new Position object, should only be used for indexing, CDR definition is not preserved! - """ - if isinstance(position, str): - return Position.from_string(position, chain_type=self.chain_type, scheme=self.scheme) - if isinstance(position, Position): - return position - try: - position = int(position) - except TypeError: - raise IndexError(f'Invalid position key, expected Position, string or integer, got {type(position)}: "{position}"') - if not allow_raw: - raise IndexError("Use chain.raw[i] for raw numeric indexing or pass allow_raw=True. " - "For named position indexing, use string (e.g. chain['111A'] or chain['H111A'])") - if position >= len(self.positions): - return None - return self.positions[position] - - def format(self, mark_identity=True, mark_cdrs=True): - """Format alignment to string - - :param mark_identity: Add BLAST style middle line showing identity (``|``), similar residue (``+``) or different residue (``.``) - :param mark_cdrs: Add line highlighting CDR regions using ``^`` - :return: formatted string - """ - - def _identity_symbol(a, b): - return '|' if a == b else ('+' if is_similar_residue(a, b) else '.') - - lines = [] - for i in range(len(self.residues[0])): - if mark_identity and i != 0: - lines.append(''.join(_identity_symbol(aas[i], aas[i-1]) for pos, aas in self)) - lines.append(''.join(aas[i] for pos, aas in self)) - if mark_cdrs: - if self.positions[0].cdr_definition == 'kabat': - lines.append(''.join('^' if pos.is_in_cdr() else ("°" if pos.is_in_vernier() else ' ') for pos in self.positions)) - else: - lines.append(''.join('^' if pos.is_in_cdr() else ' ' for pos in self.positions)) - return '\n'.join(lines) - - def print(self, mark_identity=True, mark_cdrs=True): - """Print string representation of alignment created using :meth:`Alignment.format` - - >>> alignment.print() - QVQLQQSGAELARPGASVKMSCKASGYTFTRYTMHWVKQRPGQGLEWIGYINPS-RGYTNYNQKFKDKATLTTDKSSSTAYMQLSSLTSEDSAVYYCARYYDDHYCLDYWGQGTTLTVSS - ||||.||||||.||||+|||||||||||.||||||||||||||||+||||||||.|.||||||||||||||||||||||||||.+|||||||||||||||||....||.||||||||||| - QVQLVQSGAELDRPGATVKMSCKASGYTTTRYTMHWVKQRPGQGLDWIGYINPSDRSYTNYNQKFKDKATLTTDKSSSTAYMQKTSLTSEDSAVYYCARYYD--DYLDRWGQGTTLTVSS - ^^^^^^^^ ^^^^^^^^^ ^^^^^^^^^^^^ - >>> alignment.print(mark_identity=False, mark_cdrs=False) - QVQLQQSGAELARPGASVKMSCKASGYTFTRYTMHWVKQRPGQGLEWIGYINPS-RGYTNYNQKFKDKATLTTDKSSSTAYMQLSSLTSEDSAVYYCARYYDDHYCLDYWGQGTTLTVSS - QVQLVQSGAELDRPGATVKMSCKASGYTTTRYTMHWVKQRPGQGLDWIGYINPSDRSYTNYNQKFKDKATLTTDKSSSTAYMQKTSLTSEDSAVYYCARYYD--DYLDRWGQGTTLTVSS - - :param mark_identity: Add BLAST style middle line showing identity (``|``), similar residue (``+``) or different residue (``.``) - :param mark_cdrs: Add line highlighting CDR regions using ``^`` - """ - print(self.format(mark_identity=mark_identity, mark_cdrs=mark_cdrs)) - - def has_mutation(self): - """Check if there is a mutation in the alignment or not""" - return any(len(set(aas)) != 1 for aas in self.residues) - - def num_mutations(self): - """Get number of mutations (positions with more than one type of residue)""" - return sum(len(set(aas)) != 1 for aas in self.residues) - - @property - def raw(self): - """Access raw representation of this alignment to allow unaligned numeric indexing and slicing - - >>> # Numbering of ``chain.raw`` starts at 0 - >>> alignment.raw[0] - 'H1' - >>> # Slicing with string is based on schema numbering, the end is inclusive - >>> chain['1':'10'] - 'QVQLQQSGAE' - >>> # Slicing with ``chain.raw`` starts at 0, the end is exclusive (Python style) - >>> chain.raw[0:10] - 'QVQLQQSGAE' - :return: Raw alignment accessor that can be sliced or indexed to produce a new :class:`Alignment` object - """ - return RawAlignmentAccessor(self) - - -class RawAlignmentAccessor: - def __init__(self, alignment: Alignment): - self.alignment = alignment - - def __getitem__(self, item): - if isinstance(item, slice): - if item.step is not None and item.step != 1: - raise IndexError(f'Slicing with step != 1 is not implemented, got: {item}') - if item.start is not None and not is_integer(item.start): - raise IndexError(f'Expected int start index for alignment.raw, got {type(item.start)}: {item.start}') - if item.stop is not None and not is_integer(item.stop): - raise IndexError(f'Expected int end index for alignment.raw, got {type(item.stop)}: {item.stop}') - return self.alignment.slice(start=item.start, stop=item.stop, stop_inclusive=False, allow_raw=True) - if not is_integer(item): - raise IndexError(f'Expected int indexing for alignment.raw, got {type(item)}: {item}') - pos = self.alignment.positions[item] - return self.alignment[pos] diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/App/App.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/App/App.tsx deleted file mode 100644 index 8fe055f0cb7228f0c3ceb216d658f6175010055e..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/App/App.tsx +++ /dev/null @@ -1,50 +0,0 @@ -import * as Sentry from "@sentry/react" -import { Integrations } from "@sentry/tracing" -import React from "react" -import { HelmetProvider } from "react-helmet-async" -import { defaultTheme } from "../../../common/theme/Theme" -import { ActionDialog } from "../../../components/ActionDialog" -import { PromptDialog } from "../../../components/PromptDialog" -import { Toast } from "../../../components/Toast" -import { DialogProvider } from "../../hooks/useDialog" -import { PromptProvider } from "../../hooks/usePrompt" -import { StoreContext } from "../../hooks/useStores" -import { ThemeContext } from "../../hooks/useTheme" -import { ToastProvider } from "../../hooks/useToast" -import RootStore from "../../stores/RootStore" -import { GlobalKeyboardShortcut } from "../KeyboardShortcut/GlobalKeyboardShortcut" -import { RootView } from "../RootView/RootView" -import { EmotionThemeProvider } from "../Theme/EmotionThemeProvider" -import { GlobalCSS } from "../Theme/GlobalCSS" - -Sentry.init({ - dsn: process.env.SENTRY_DSN, - release: process.env.VERCEL_GIT_COMMIT_SHA, - environment: process.env.VERCEL_ENV, - integrations: [new Integrations.BrowserTracing()], - tracesSampleRate: 1.0, -}) - -export function App() { - return ( - - - - - - - - - - - - - - - - - - - - ) -} diff --git a/spaces/yiningmao/metaphor-detection-baseline/utils/Logger.py b/spaces/yiningmao/metaphor-detection-baseline/utils/Logger.py deleted file mode 100644 index c09a2810b4c49f603dfd5e4c51e62c0888ae1313..0000000000000000000000000000000000000000 --- a/spaces/yiningmao/metaphor-detection-baseline/utils/Logger.py +++ /dev/null @@ -1,84 +0,0 @@ -import os -from time import strftime -import logging - - -def make_log_dir(log_dir): - """ - Generate directory path to log - - :param log_dir: - - :return: - """ - if not os.path.exists(log_dir): - os.mkdir(log_dir) - - log_dirs = os.listdir(log_dir) - if len(log_dirs) == 0: - idx = 0 - else: - idx_list = sorted([int(d.split("_")[0]) for d in log_dirs]) - idx = idx_list[-1] + 1 - - cur_log_dir = "%d_%s" % (idx, strftime("%Y%m%d-%H%M")) - full_log_dir = os.path.join(log_dir, cur_log_dir) - if not os.path.exists(full_log_dir): - os.mkdir(full_log_dir) - - return full_log_dir - - -class Logger: - def __init__(self, log_dir): - log_file_format = "[%(lineno)d]%(asctime)s: %(message)s" - log_console_format = "%(message)s" - - # Main logger - self.log_dir = log_dir - - self.logger = logging.getLogger(log_dir) - self.logger.setLevel(logging.INFO) - self.logger.propagate = False - - console_handler = logging.StreamHandler() - console_handler.setLevel(logging.INFO) - console_handler.setFormatter(logging.Formatter(log_console_format)) - - file_handler = logging.FileHandler(os.path.join(log_dir, "experiments.log")) - file_handler.setLevel(logging.DEBUG) - file_handler.setFormatter(logging.Formatter(log_file_format)) - - self.logger.addHandler(console_handler) - self.logger.addHandler(file_handler) - - def info(self, msg): - self.logger.info(msg) - - def close(self): - for handle in self.logger.handlers[:]: - self.logger.removeHandler(handle) - logging.shutdown() - - -def setup_logger(log_dir): - log_file_format = "[%(lineno)d]%(asctime)s: %(message)s" - log_console_format = "%(message)s" - - # Main logger - logger = logging.getLogger() - logger.setLevel(logging.INFO) - logger.propagate = False - - console_handler = logging.StreamHandler() - console_handler.setLevel(logging.INFO) - console_handler.setFormatter(logging.Formatter(log_console_format)) - - file_handler = logging.FileHandler(os.path.join(log_dir, "experiments.log")) - file_handler.setLevel(logging.DEBUG) - file_handler.setFormatter(logging.Formatter(log_file_format)) - - logger.addHandler(console_handler) - logger.addHandler(file_handler) - - return logger diff --git a/spaces/yooch/yooch/chat_func.py b/spaces/yooch/yooch/chat_func.py deleted file mode 100644 index 374178f3d22c5c23d1dc2952336cdc298a77315d..0000000000000000000000000000000000000000 --- a/spaces/yooch/yooch/chat_func.py +++ /dev/null @@ -1,456 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import os -import requests -import urllib3 - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp - -from presets import * -from llama_func import * -from utils import * - -# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -def get_response( - openai_api_key, system_prompt, history, temperature, top_p, stream, selected_model -): - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - history = [construct_system(system_prompt), *history] - - payload = { - "model": selected_model, - "messages": history, # [{"role": "user", "content": f"{inputs}"}], - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - if stream: - timeout = timeout_streaming - else: - timeout = timeout_all - - # 获取环境变量中的代理设置 - http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy") - https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy") - - # 如果存在代理设置,使用它们 - proxies = {} - if http_proxy: - logging.info(f"Using HTTP proxy: {http_proxy}") - proxies["http"] = http_proxy - if https_proxy: - logging.info(f"Using HTTPS proxy: {https_proxy}") - proxies["https"] = https_proxy - - # 如果有代理,使用代理发送请求,否则使用默认设置发送请求 - if proxies: - response = requests.post( - API_URL, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - proxies=proxies, - ) - else: - response = requests.post( - API_URL, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - ) - return response - - -def stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - def get_return_value(): - return chatbot, history, status_text, all_token_counts - - logging.info("实时回答模式") - partial_words = "" - counter = 0 - status_text = "开始实时传输回答……" - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - user_token_count = 0 - if len(all_token_counts) == 0: - system_prompt_token_count = count_token(construct_system(system_prompt)) - user_token_count = ( - count_token(construct_user(inputs)) + system_prompt_token_count - ) - else: - user_token_count = count_token(construct_user(inputs)) - all_token_counts.append(user_token_count) - logging.info(f"输入token计数: {user_token_count}") - yield get_return_value() - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - True, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - yield get_return_value() - return - except requests.exceptions.ReadTimeout: - status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt - yield get_return_value() - return - - yield get_return_value() - error_json_str = "" - - for chunk in tqdm(response.iter_lines()): - if counter == 0: - counter += 1 - continue - counter += 1 - # check whether each line is non-empty - if chunk: - chunk = chunk.decode() - chunklength = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - logging.info(chunk) - error_json_str += chunk - status_text = f"JSON解析错误。请重置对话。收到的内容: {error_json_str}" - yield get_return_value() - continue - # decode each line as response data is in bytes - if chunklength > 6 and "delta" in chunk["choices"][0]: - finish_reason = chunk["choices"][0]["finish_reason"] - status_text = construct_token_message( - sum(all_token_counts), stream=True - ) - if finish_reason == "stop": - yield get_return_value() - break - try: - partial_words = ( - partial_words + chunk["choices"][0]["delta"]["content"] - ) - except KeyError: - status_text = ( - standard_error_msg - + "API回复中找不到内容。很可能是Token计数达到上限了。请重置对话。当前Token计数: " - + str(sum(all_token_counts)) - ) - yield get_return_value() - break - history[-1] = construct_assistant(partial_words) - chatbot[-1] = (chatbot[-1][0], partial_words+display_append) - all_token_counts[-1] += 1 - yield get_return_value() - - -def predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - logging.info("一次性回答模式") - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - all_token_counts.append(count_token(construct_user(inputs))) - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - False, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - return chatbot, history, status_text, all_token_counts - except requests.exceptions.ProxyError: - status_text = standard_error_msg + proxy_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - except requests.exceptions.SSLError: - status_text = standard_error_msg + ssl_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - history[-1] = construct_assistant(content) - chatbot[-1] = (chatbot[-1][0], content+display_append) - total_token_count = response["usage"]["total_tokens"] - all_token_counts[-1] = total_token_count - sum(all_token_counts) - status_text = construct_token_message(total_token_count) - return chatbot, history, status_text, all_token_counts - - -def predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - stream=False, - selected_model=MODELS[0], - use_websearch=False, - files = None, - should_check_token_count=True, -): # repetition_penalty, top_k - logging.info("输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL) - if files: - msg = "构建索引中……(这可能需要比较久的时间)" - logging.info(msg) - yield chatbot, history, msg, all_token_counts - index = construct_index(openai_api_key, file_src=files) - msg = "索引构建完成,获取回答中……" - yield chatbot, history, msg, all_token_counts - history, chatbot, status_text = chat_ai(openai_api_key, index, inputs, history, chatbot) - yield chatbot, history, status_text, all_token_counts - return - - old_inputs = "" - link_references = [] - if use_websearch: - search_results = ddg(inputs, max_results=5) - old_inputs = inputs - web_results = [] - for idx, result in enumerate(search_results): - logging.info(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - web_results.append(f'[{idx+1}]"{result["body"]}"\nURL: {result["href"]}') - link_references.append(f"{idx+1}. [{domain_name}]({result['href']})\n") - link_references = "\n\n" + "".join(link_references) - inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", inputs) - .replace("{web_results}", "\n\n".join(web_results)) - ) - else: - link_references = "" - - if len(openai_api_key) != 51: - status_text = standard_error_msg + no_apikey_msg - logging.info(status_text) - chatbot.append((inputs, "")) - if len(history) == 0: - history.append(construct_user(inputs)) - history.append("") - all_token_counts.append(0) - else: - history[-2] = construct_user(inputs) - yield chatbot, history, status_text, all_token_counts - return - - yield chatbot, history, "开始生成回答……", all_token_counts - - if stream: - logging.info("使用流式传输") - iter = stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - for chatbot, history, status_text, all_token_counts in iter: - yield chatbot, history, status_text, all_token_counts - else: - logging.info("不使用流式传输") - chatbot, history, status_text, all_token_counts = predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - yield chatbot, history, status_text, all_token_counts - - logging.info(f"传输完毕。当前token计数为{all_token_counts}") - if len(history) > 1 and history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if stream: - max_token = max_token_streaming - else: - max_token = max_token_all - - if sum(all_token_counts) > max_token and should_check_token_count: - status_text = f"精简token中{all_token_counts}/{max_token}" - logging.info(status_text) - yield chatbot, history, status_text, all_token_counts - iter = reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - all_token_counts, - top_p, - temperature, - max_token//2, - selected_model=selected_model, - ) - for chatbot, history, status_text, all_token_counts in iter: - status_text = f"Token 达到上限,已自动降低Token计数至 {status_text}" - yield chatbot, history, status_text, all_token_counts - - -def retry( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - stream=False, - selected_model=MODELS[0], -): - logging.info("重试中……") - if len(history) == 0: - yield chatbot, history, f"{standard_error_msg}上下文是空的", token_count - return - history.pop() - inputs = history.pop()["content"] - token_count.pop() - iter = predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - token_count, - top_p, - temperature, - stream=stream, - selected_model=selected_model, - ) - logging.info("重试中……") - for x in iter: - yield x - logging.info("重试完毕") - - -def reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - max_token_count, - selected_model=MODELS[0], -): - logging.info("开始减少token数量……") - iter = predict( - openai_api_key, - system_prompt, - history, - summarize_prompt, - chatbot, - token_count, - top_p, - temperature, - selected_model=selected_model, - should_check_token_count=False, - ) - logging.info(f"chatbot: {chatbot}") - flag = False - for chatbot, history, status_text, previous_token_count in iter: - num_chat = find_n(previous_token_count, max_token_count) - if flag: - chatbot = chatbot[:-1] - flag = True - history = history[-2*num_chat:] if num_chat > 0 else [] - token_count = previous_token_count[-num_chat:] if num_chat > 0 else [] - msg = f"保留了最近{num_chat}轮对话" - yield chatbot, history, msg + "," + construct_token_message( - sum(token_count) if len(token_count) > 0 else 0, - ), token_count - logging.info(msg) - logging.info("减少token数量完毕") \ No newline at end of file diff --git a/spaces/yuhanbo/chat-gpt/README.md b/spaces/yuhanbo/chat-gpt/README.md deleted file mode 100644 index bf3087f3c692c3f1bdd2da66928e6ce7300358c9..0000000000000000000000000000000000000000 --- a/spaces/yuhanbo/chat-gpt/README.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -title: ChatGpt Web -emoji: 📊 -colorFrom: green -colorTo: blue -sdk: docker -pinned: false -license: openrail -app_port: 3000 -duplicated_from: fengmuxi/ChatGpt-Web ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -
          -预览 - -

          ChatGPT Next Web

          - -一键免费部署你的私人 ChatGPT 网页应用。 - -One-Click to deploy your own ChatGPT web UI. - -[演示 Demo](https://chat-gpt-next-web.vercel.app/) / [反馈 Issues](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [加入 Discord](https://discord.gg/zrhvHCr79N) / [QQ 群](https://user-images.githubusercontent.com/16968934/228190818-7dd00845-e9b9-4363-97e5-44c507ac76da.jpeg) / [打赏开发者](https://user-images.githubusercontent.com/16968934/227772541-5bcd52d8-61b7-488c-a203-0330d8006e2b.jpg) - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web) - -[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) - -![主界面](../ChatGPT-Next-Web/static/cover.png) - -
          - -## 主要功能 - -- 在 1 分钟内使用 Vercel **免费一键部署** -- 精心设计的 UI,响应式设计,支持深色模式 -- 极快的首屏加载速度(~85kb) -- 海量的内置 prompt 列表,来自[中文](https://github.com/PlexPt/awesome-chatgpt-prompts-zh)和[英文](https://github.com/f/awesome-chatgpt-prompts) -- 自动压缩上下文聊天记录,在节省 Token 的同时支持超长对话 -- 一键导出聊天记录,完整的 Markdown 支持 -- 拥有自己的域名?好上加好,绑定后即可在任何地方**无障碍**快速访问 - -## Features - -- **Deploy for free with one-click** on Vercel in under 1 minute -- Responsive design, and dark mode -- Fast first screen loading speed (~85kb) -- Awesome prompts powered by [awesome-chatgpt-prompts-zh](https://github.com/PlexPt/awesome-chatgpt-prompts-zh) and [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts) -- Automatically compresses chat history to support long conversations while also saving your tokens -- One-click export all chat history with full Markdown support - -## 使用 - -1. 准备好你的 [OpenAI API Key](https://platform.openai.com/account/api-keys)、NewBingCookie、WanJuanToken; -2. 点击右侧按钮开始部署: - [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web),直接使用 Github 账号登陆即可,记得在环境变量页填入 API Key; -3. 部署完毕后,即可开始使用; -4. (可选)[绑定自定义域名](https://vercel.com/docs/concepts/projects/domains/add-a-domain):Vercel 分配的域名 DNS 在某些区域被污染了,绑定自定义域名即可直连。 - -## Get Started - -1. Get [OpenAI API Key](https://platform.openai.com/account/api-keys); -2. Click - [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web); -3. Enjoy :) - -## 保持更新 Keep Updated - -如果你按照上述步骤一键部署了自己的项目,可能会发现总是提示“存在更新”的问题,这是由于 Vercel 会默认为你创建一个新项目而不是 fork 本项目,这会导致无法正确地检测更新。 -推荐你按照下列步骤重新部署: - -- 删除掉原先的 repo; -- fork 本项目; -- 前往 vercel 控制台,删除掉原先的 project,然后新建 project,选择你刚刚 fork 出来的项目重新进行部署即可; -- 在重新部署的过程中,请手动添加名为 `OPENAI_API_KEY` 的环境变量,并填入你的 api key 作为值。 - -本项目会持续更新,如果你想让代码库总是保持更新,可以查看 [Github 的文档](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork) 了解如何让 fork 的项目与上游代码同步,建议定期进行同步操作以获得新功能。 - -你可以 star/watch 本项目或者 follow 作者来及时获得新功能更新通知。 - -If you have deployed your own project with just one click following the steps above, you may encounter the issue of "Updates Available" constantly showing up. This is because Vercel will create a new project for you by default instead of forking this project, resulting in the inability to detect updates correctly. - -We recommend that you follow the steps below to re-deploy: - -- Delete the original repo; -- Fork this project; -- Go to the Vercel dashboard, delete the original project, then create a new project and select the project you just forked to redeploy; -- Please manually add an environment variable named `OPENAI_API_KEY` and enter your API key as the value during the redeploy process. - -This project will be continuously maintained. If you want to keep the code repository up to date, you can check out the [Github documentation](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork) to learn how to synchronize a forked project with upstream code. It is recommended to perform synchronization operations regularly. - -You can star or watch this project or follow author to get release notifictions in time. - -## 访问控制 Access Control - -本项目提供有限的权限控制功能,请在环境变量页增加名为 `CODE` 的环境变量,值为用英文逗号分隔的自定义控制码: - -``` -code1,code2,code3 -``` - -增加或修改该环境变量后,请**重新部署**项目使改动生效。 - -This project provides limited access control. Please add an environment variable named `CODE` on the environment variables page. The value should be a custom control code separated by comma like this: - -``` -code1,code2,code3 -``` - -After adding or modifying this environment variable, please redeploy the project for the changes to take effect. - -## 开发 Development - -点击下方按钮,开始二次开发: - -[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) - -在开始写代码之前,需要在项目根目录新建一个 `.env.local` 文件,里面填入环境变量: - -新必应只需要名为_u的cookie的值 - -Before starting development, you must create a new `.env.local` file at project root, and place your api key into it: - -``` -OPENAI_API_KEY= -CODE= -COOKIES= -WANJUAN_TOKEN= -``` - -### 本地开发 Local Development - -> 如果你是中国大陆用户,不建议在本地进行开发,除非你能够独立解决 OpenAI API 本地代理问题。 - -1. 安装 nodejs 和 yarn,具体细节请询问 ChatGPT; -2. 执行 `yarn install && yarn dev` 即可。 - -### 本地部署 Local Deployment - -```shell -bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/scripts/setup.sh) -``` - -### 容器部署 Docker Deployment - -```shell -docker pull yidadaa/chatgpt-next-web - -docker run -d -p 3000:3000 -e OPENAI_API_KEY="" -e CODE="" yidadaa/chatgpt-next-web -``` - -## 截图 Screenshots - -![设置 Settings](../ChatGPT-Next-Web/static/settings.png) - -![更多展示 More](../ChatGPT-Next-Web/static/more.png) - -## 说明 Attention - -本项目的演示地址所用的 OpenAI 账户的免费额度将于 2023-04-01 过期,届时将无法通过演示地址在线体验。 - -如果你想贡献出自己的 API Key,可以通过作者主页的邮箱发送给作者,并标注过期时间。 - -The free trial of the OpenAI account used by the demo will expire on April 1, 2023, and the demo will not be available at that time. - -If you would like to contribute your API key, you can email it to the author and indicate the expiration date of the API key. - -## 鸣谢 Special Thanks - -### 捐赠者 Sponsor - -[@mushan0x0](https://github.com/mushan0x0) -[@ClarenceDan](https://github.com/ClarenceDan) -[@zhangjia](https://github.com/zhangjia) -[@hoochanlon](https://github.com/hoochanlon) - -### 贡献者 Contributor - -[Contributors](https://github.com/Yidadaa/ChatGPT-Next-Web/graphs/contributors) - -## LICENSE - -- [Anti 996 License](https://github.com/kattgu7/Anti-996-License/blob/master/LICENSE_CN_EN) \ No newline at end of file diff --git a/spaces/zhang-wei-jian/docker/node_modules/is-generator-function/test/uglified.js b/spaces/zhang-wei-jian/docker/node_modules/is-generator-function/test/uglified.js deleted file mode 100644 index fd82b55352c75697654406e686f2b0732291eaf0..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/is-generator-function/test/uglified.js +++ /dev/null @@ -1,8 +0,0 @@ -'use strict'; - -require('uglify-register/api').register({ - exclude: [/\/node_modules\//, /\/test\//], - uglify: { mangle: true } -}); - -require('./'); diff --git a/spaces/zhangyd/bingo/src/components/chat-suggestions.tsx b/spaces/zhangyd/bingo/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
          -
          - - { - currentSuggestions.map(suggestion => ( - - )) - } -
          -
          - ) : null -} diff --git a/spaces/zideliu/styledrop/timm/models/inception_v3.py b/spaces/zideliu/styledrop/timm/models/inception_v3.py deleted file mode 100644 index 9ae7105feeb107cf74978e917cfdc15f1bea733e..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/models/inception_v3.py +++ /dev/null @@ -1,468 +0,0 @@ -""" Inception-V3 - -Originally from torchvision Inception3 model -Licensed BSD-Clause 3 https://github.com/pytorch/vision/blob/master/LICENSE -""" -import torch -import torch.nn as nn -import torch.nn.functional as F - -from timm.data import IMAGENET_DEFAULT_STD, IMAGENET_DEFAULT_MEAN, IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD -from .helpers import build_model_with_cfg -from .registry import register_model -from .layers import trunc_normal_, create_classifier, Linear - - -def _cfg(url='', **kwargs): - return { - 'url': url, - 'num_classes': 1000, 'input_size': (3, 299, 299), 'pool_size': (8, 8), - 'crop_pct': 0.875, 'interpolation': 'bicubic', - 'mean': IMAGENET_INCEPTION_MEAN, 'std': IMAGENET_INCEPTION_STD, - 'first_conv': 'Conv2d_1a_3x3.conv', 'classifier': 'fc', - **kwargs - } - - -default_cfgs = { - # original PyTorch weights, ported from Tensorflow but modified - 'inception_v3': _cfg( - url='https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth', - has_aux=True), # checkpoint has aux logit layer weights - # my port of Tensorflow SLIM weights (http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz) - 'tf_inception_v3': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_inception_v3-e0069de4.pth', - num_classes=1001, has_aux=False), - # my port of Tensorflow adversarially trained Inception V3 from - # http://download.tensorflow.org/models/adv_inception_v3_2017_08_18.tar.gz - 'adv_inception_v3': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/adv_inception_v3-9e27bd63.pth', - num_classes=1001, has_aux=False), - # from gluon pretrained models, best performing in terms of accuracy/loss metrics - # https://gluon-cv.mxnet.io/model_zoo/classification.html - 'gluon_inception_v3': _cfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_inception_v3-9f746940.pth', - mean=IMAGENET_DEFAULT_MEAN, # also works well with inception defaults - std=IMAGENET_DEFAULT_STD, # also works well with inception defaults - has_aux=False, - ) -} - - -class InceptionA(nn.Module): - - def __init__(self, in_channels, pool_features, conv_block=None): - super(InceptionA, self).__init__() - if conv_block is None: - conv_block = BasicConv2d - self.branch1x1 = conv_block(in_channels, 64, kernel_size=1) - - self.branch5x5_1 = conv_block(in_channels, 48, kernel_size=1) - self.branch5x5_2 = conv_block(48, 64, kernel_size=5, padding=2) - - self.branch3x3dbl_1 = conv_block(in_channels, 64, kernel_size=1) - self.branch3x3dbl_2 = conv_block(64, 96, kernel_size=3, padding=1) - self.branch3x3dbl_3 = conv_block(96, 96, kernel_size=3, padding=1) - - self.branch_pool = conv_block(in_channels, pool_features, kernel_size=1) - - def _forward(self, x): - branch1x1 = self.branch1x1(x) - - branch5x5 = self.branch5x5_1(x) - branch5x5 = self.branch5x5_2(branch5x5) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl) - - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool] - return outputs - - def forward(self, x): - outputs = self._forward(x) - return torch.cat(outputs, 1) - - -class InceptionB(nn.Module): - - def __init__(self, in_channels, conv_block=None): - super(InceptionB, self).__init__() - if conv_block is None: - conv_block = BasicConv2d - self.branch3x3 = conv_block(in_channels, 384, kernel_size=3, stride=2) - - self.branch3x3dbl_1 = conv_block(in_channels, 64, kernel_size=1) - self.branch3x3dbl_2 = conv_block(64, 96, kernel_size=3, padding=1) - self.branch3x3dbl_3 = conv_block(96, 96, kernel_size=3, stride=2) - - def _forward(self, x): - branch3x3 = self.branch3x3(x) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl) - - branch_pool = F.max_pool2d(x, kernel_size=3, stride=2) - - outputs = [branch3x3, branch3x3dbl, branch_pool] - return outputs - - def forward(self, x): - outputs = self._forward(x) - return torch.cat(outputs, 1) - - -class InceptionC(nn.Module): - - def __init__(self, in_channels, channels_7x7, conv_block=None): - super(InceptionC, self).__init__() - if conv_block is None: - conv_block = BasicConv2d - self.branch1x1 = conv_block(in_channels, 192, kernel_size=1) - - c7 = channels_7x7 - self.branch7x7_1 = conv_block(in_channels, c7, kernel_size=1) - self.branch7x7_2 = conv_block(c7, c7, kernel_size=(1, 7), padding=(0, 3)) - self.branch7x7_3 = conv_block(c7, 192, kernel_size=(7, 1), padding=(3, 0)) - - self.branch7x7dbl_1 = conv_block(in_channels, c7, kernel_size=1) - self.branch7x7dbl_2 = conv_block(c7, c7, kernel_size=(7, 1), padding=(3, 0)) - self.branch7x7dbl_3 = conv_block(c7, c7, kernel_size=(1, 7), padding=(0, 3)) - self.branch7x7dbl_4 = conv_block(c7, c7, kernel_size=(7, 1), padding=(3, 0)) - self.branch7x7dbl_5 = conv_block(c7, 192, kernel_size=(1, 7), padding=(0, 3)) - - self.branch_pool = conv_block(in_channels, 192, kernel_size=1) - - def _forward(self, x): - branch1x1 = self.branch1x1(x) - - branch7x7 = self.branch7x7_1(x) - branch7x7 = self.branch7x7_2(branch7x7) - branch7x7 = self.branch7x7_3(branch7x7) - - branch7x7dbl = self.branch7x7dbl_1(x) - branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl) - - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool] - return outputs - - def forward(self, x): - outputs = self._forward(x) - return torch.cat(outputs, 1) - - -class InceptionD(nn.Module): - - def __init__(self, in_channels, conv_block=None): - super(InceptionD, self).__init__() - if conv_block is None: - conv_block = BasicConv2d - self.branch3x3_1 = conv_block(in_channels, 192, kernel_size=1) - self.branch3x3_2 = conv_block(192, 320, kernel_size=3, stride=2) - - self.branch7x7x3_1 = conv_block(in_channels, 192, kernel_size=1) - self.branch7x7x3_2 = conv_block(192, 192, kernel_size=(1, 7), padding=(0, 3)) - self.branch7x7x3_3 = conv_block(192, 192, kernel_size=(7, 1), padding=(3, 0)) - self.branch7x7x3_4 = conv_block(192, 192, kernel_size=3, stride=2) - - def _forward(self, x): - branch3x3 = self.branch3x3_1(x) - branch3x3 = self.branch3x3_2(branch3x3) - - branch7x7x3 = self.branch7x7x3_1(x) - branch7x7x3 = self.branch7x7x3_2(branch7x7x3) - branch7x7x3 = self.branch7x7x3_3(branch7x7x3) - branch7x7x3 = self.branch7x7x3_4(branch7x7x3) - - branch_pool = F.max_pool2d(x, kernel_size=3, stride=2) - outputs = [branch3x3, branch7x7x3, branch_pool] - return outputs - - def forward(self, x): - outputs = self._forward(x) - return torch.cat(outputs, 1) - - -class InceptionE(nn.Module): - - def __init__(self, in_channels, conv_block=None): - super(InceptionE, self).__init__() - if conv_block is None: - conv_block = BasicConv2d - self.branch1x1 = conv_block(in_channels, 320, kernel_size=1) - - self.branch3x3_1 = conv_block(in_channels, 384, kernel_size=1) - self.branch3x3_2a = conv_block(384, 384, kernel_size=(1, 3), padding=(0, 1)) - self.branch3x3_2b = conv_block(384, 384, kernel_size=(3, 1), padding=(1, 0)) - - self.branch3x3dbl_1 = conv_block(in_channels, 448, kernel_size=1) - self.branch3x3dbl_2 = conv_block(448, 384, kernel_size=3, padding=1) - self.branch3x3dbl_3a = conv_block(384, 384, kernel_size=(1, 3), padding=(0, 1)) - self.branch3x3dbl_3b = conv_block(384, 384, kernel_size=(3, 1), padding=(1, 0)) - - self.branch_pool = conv_block(in_channels, 192, kernel_size=1) - - def _forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return outputs - - def forward(self, x): - outputs = self._forward(x) - return torch.cat(outputs, 1) - - -class InceptionAux(nn.Module): - - def __init__(self, in_channels, num_classes, conv_block=None): - super(InceptionAux, self).__init__() - if conv_block is None: - conv_block = BasicConv2d - self.conv0 = conv_block(in_channels, 128, kernel_size=1) - self.conv1 = conv_block(128, 768, kernel_size=5) - self.conv1.stddev = 0.01 - self.fc = Linear(768, num_classes) - self.fc.stddev = 0.001 - - def forward(self, x): - # N x 768 x 17 x 17 - x = F.avg_pool2d(x, kernel_size=5, stride=3) - # N x 768 x 5 x 5 - x = self.conv0(x) - # N x 128 x 5 x 5 - x = self.conv1(x) - # N x 768 x 1 x 1 - # Adaptive average pooling - x = F.adaptive_avg_pool2d(x, (1, 1)) - # N x 768 x 1 x 1 - x = torch.flatten(x, 1) - # N x 768 - x = self.fc(x) - # N x 1000 - return x - - -class BasicConv2d(nn.Module): - - def __init__(self, in_channels, out_channels, **kwargs): - super(BasicConv2d, self).__init__() - self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs) - self.bn = nn.BatchNorm2d(out_channels, eps=0.001) - - def forward(self, x): - x = self.conv(x) - x = self.bn(x) - return F.relu(x, inplace=True) - - -class InceptionV3(nn.Module): - """Inception-V3 with no AuxLogits - FIXME two class defs are redundant, but less screwing around with torchsript fussyness and inconsistent returns - """ - - def __init__(self, num_classes=1000, in_chans=3, drop_rate=0., global_pool='avg', aux_logits=False): - super(InceptionV3, self).__init__() - self.num_classes = num_classes - self.drop_rate = drop_rate - self.aux_logits = aux_logits - - self.Conv2d_1a_3x3 = BasicConv2d(in_chans, 32, kernel_size=3, stride=2) - self.Conv2d_2a_3x3 = BasicConv2d(32, 32, kernel_size=3) - self.Conv2d_2b_3x3 = BasicConv2d(32, 64, kernel_size=3, padding=1) - self.Pool1 = nn.MaxPool2d(kernel_size=3, stride=2) - self.Conv2d_3b_1x1 = BasicConv2d(64, 80, kernel_size=1) - self.Conv2d_4a_3x3 = BasicConv2d(80, 192, kernel_size=3) - self.Pool2 = nn.MaxPool2d(kernel_size=3, stride=2) - self.Mixed_5b = InceptionA(192, pool_features=32) - self.Mixed_5c = InceptionA(256, pool_features=64) - self.Mixed_5d = InceptionA(288, pool_features=64) - self.Mixed_6a = InceptionB(288) - self.Mixed_6b = InceptionC(768, channels_7x7=128) - self.Mixed_6c = InceptionC(768, channels_7x7=160) - self.Mixed_6d = InceptionC(768, channels_7x7=160) - self.Mixed_6e = InceptionC(768, channels_7x7=192) - if aux_logits: - self.AuxLogits = InceptionAux(768, num_classes) - else: - self.AuxLogits = None - self.Mixed_7a = InceptionD(768) - self.Mixed_7b = InceptionE(1280) - self.Mixed_7c = InceptionE(2048) - self.feature_info = [ - dict(num_chs=64, reduction=2, module='Conv2d_2b_3x3'), - dict(num_chs=192, reduction=4, module='Conv2d_4a_3x3'), - dict(num_chs=288, reduction=8, module='Mixed_5d'), - dict(num_chs=768, reduction=16, module='Mixed_6e'), - dict(num_chs=2048, reduction=32, module='Mixed_7c'), - ] - - self.num_features = 2048 - self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) - - for m in self.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear): - stddev = m.stddev if hasattr(m, 'stddev') else 0.1 - trunc_normal_(m.weight, std=stddev) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - def forward_preaux(self, x): - # N x 3 x 299 x 299 - x = self.Conv2d_1a_3x3(x) - # N x 32 x 149 x 149 - x = self.Conv2d_2a_3x3(x) - # N x 32 x 147 x 147 - x = self.Conv2d_2b_3x3(x) - # N x 64 x 147 x 147 - x = self.Pool1(x) - # N x 64 x 73 x 73 - x = self.Conv2d_3b_1x1(x) - # N x 80 x 73 x 73 - x = self.Conv2d_4a_3x3(x) - # N x 192 x 71 x 71 - x = self.Pool2(x) - # N x 192 x 35 x 35 - x = self.Mixed_5b(x) - # N x 256 x 35 x 35 - x = self.Mixed_5c(x) - # N x 288 x 35 x 35 - x = self.Mixed_5d(x) - # N x 288 x 35 x 35 - x = self.Mixed_6a(x) - # N x 768 x 17 x 17 - x = self.Mixed_6b(x) - # N x 768 x 17 x 17 - x = self.Mixed_6c(x) - # N x 768 x 17 x 17 - x = self.Mixed_6d(x) - # N x 768 x 17 x 17 - x = self.Mixed_6e(x) - # N x 768 x 17 x 17 - return x - - def forward_postaux(self, x): - x = self.Mixed_7a(x) - # N x 1280 x 8 x 8 - x = self.Mixed_7b(x) - # N x 2048 x 8 x 8 - x = self.Mixed_7c(x) - # N x 2048 x 8 x 8 - return x - - def forward_features(self, x): - x = self.forward_preaux(x) - x = self.forward_postaux(x) - return x - - def get_classifier(self): - return self.fc - - def reset_classifier(self, num_classes, global_pool='avg'): - self.num_classes = num_classes - self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) - - def forward(self, x): - x = self.forward_features(x) - x = self.global_pool(x) - if self.drop_rate > 0: - x = F.dropout(x, p=self.drop_rate, training=self.training) - x = self.fc(x) - return x - - -class InceptionV3Aux(InceptionV3): - """InceptionV3 with AuxLogits - """ - - def __init__(self, num_classes=1000, in_chans=3, drop_rate=0., global_pool='avg', aux_logits=True): - super(InceptionV3Aux, self).__init__( - num_classes, in_chans, drop_rate, global_pool, aux_logits) - - def forward_features(self, x): - x = self.forward_preaux(x) - aux = self.AuxLogits(x) if self.training else None - x = self.forward_postaux(x) - return x, aux - - def forward(self, x): - x, aux = self.forward_features(x) - x = self.global_pool(x) - if self.drop_rate > 0: - x = F.dropout(x, p=self.drop_rate, training=self.training) - x = self.fc(x) - return x, aux - - -def _create_inception_v3(variant, pretrained=False, **kwargs): - default_cfg = default_cfgs[variant] - aux_logits = kwargs.pop('aux_logits', False) - if aux_logits: - assert not kwargs.pop('features_only', False) - model_cls = InceptionV3Aux - load_strict = default_cfg['has_aux'] - else: - model_cls = InceptionV3 - load_strict = not default_cfg['has_aux'] - return build_model_with_cfg( - model_cls, variant, pretrained, default_cfg=default_cfgs[variant], - pretrained_strict=load_strict, **kwargs) - - -@register_model -def inception_v3(pretrained=False, **kwargs): - # original PyTorch weights, ported from Tensorflow but modified - model = _create_inception_v3('inception_v3', pretrained=pretrained, **kwargs) - return model - - -@register_model -def tf_inception_v3(pretrained=False, **kwargs): - # my port of Tensorflow SLIM weights (http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz) - model = _create_inception_v3('tf_inception_v3', pretrained=pretrained, **kwargs) - return model - - -@register_model -def adv_inception_v3(pretrained=False, **kwargs): - # my port of Tensorflow adversarially trained Inception V3 from - # http://download.tensorflow.org/models/adv_inception_v3_2017_08_18.tar.gz - model = _create_inception_v3('adv_inception_v3', pretrained=pretrained, **kwargs) - return model - - -@register_model -def gluon_inception_v3(pretrained=False, **kwargs): - # from gluon pretrained models, best performing in terms of accuracy/loss metrics - # https://gluon-cv.mxnet.io/model_zoo/classification.html - model = _create_inception_v3('gluon_inception_v3', pretrained=pretrained, **kwargs) - return model diff --git a/spaces/zomehwh/sovits-xiaoke/losses.py b/spaces/zomehwh/sovits-xiaoke/losses.py deleted file mode 100644 index 41f9be6980713a46824ae9ec5eb8fd7c515d89c5..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-xiaoke/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - #print(logs_p) - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/zomehwh/vits-models-ow2/models.py b/spaces/zomehwh/vits-models-ow2/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/vits-models-ow2/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/zxy666/bingo-chatai666/postcss.config.js b/spaces/zxy666/bingo-chatai666/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/zxy666/bingo-chatai666/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -}