diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargarsigmakeyfullcrackmega Los beneficios de usar SigmaKey la herramienta segura y confiable para el servicio de MTK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargarsigmakeyfullcrackmega Los beneficios de usar SigmaKey la herramienta segura y confiable para el servicio de MTK.md deleted file mode 100644 index 0fa3c8ed2cb558419e617ba094de259d244ff7f2..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargarsigmakeyfullcrackmega Los beneficios de usar SigmaKey la herramienta segura y confiable para el servicio de MTK.md +++ /dev/null @@ -1,171 +0,0 @@ - -

Descargar SigmaKey Full Crack Mega: A Complete Guide

-

If you are looking for a professional and powerful tool to flash, unlock, and repair your mobile devices, you might have heard of SigmaKey. SigmaKey is software that works with a dongle and allows you to service various types of cell phones, especially Huawei, MTK, Qualcomm, HiSilicon, and Spreadtrum devices. In this article, we will show you how to download SigmaKey full crack mega, a cracked version of the software that does not require a dongle or activation. We will also explain how to use SigmaKey full crack mega to perform different operations on your devices.

-

What is SigmaKey?

-

SigmaKey is software developed by the GSM Server Team, a group of experts in mobile unlocking and flashing. SigmaKey works with a hardware dongle that connects to your PC via a USB port and provides security and authentication for the software. SigmaKey allows you to perform various operations on your mobile devices, such as flashing, unlocking, and repairs.

-

descargarsigmakeyfullcrackmega


Download File 🗸🗸🗸 https://byltly.com/2uKvm4



- -

Features and benefits of SigmaKey

-

SigmaKey has many features and benefits that make it one of the best tools for mobile servicing, including fast direct unlocking, firmware flashing and repair, and support for a wide range of brands and chipsets.

- -

Supported devices and platforms

-

SigmaKey supports thousands of devices from various brands, such as Huawei, Motorola, ZTE, Lenovo, Alcatel, Sony, LG, Samsung, Xiaomi, Oppo, Vivo, etc. You can check the full list of supported devices on the official website of SigmaKey. SigmaKey also supports Windows OS versions such as Win XP/Vista/7/Server 2008 for both 32-bit and 64-bit architecture.

-

How to download SigmaKey full crack mega?

-

If you want to use SigmaKey without buying a dongle or activating it online, you can download SigmaKey full crack mega. This is a cracked version of the software that bypasses the security and authentication of the dongle. However, you should be aware that downloading and using SigmaKey full crack mega is illegal and risky. You might face problems such as virus or malware infection, data loss, dongle detection, lack of updates, or even legal action.

- -

If you still want to download SigmaKey full crack mega at your own risk, you should follow these steps:

-

descargar sigmakey full crack mega gratis
-descargar sigmakey full crack mega 2021
-descargar sigmakey full crack mega sin box
-descargar sigmakey full crack mega huawei
-descargar sigmakey full crack mega android
-descargar sigmakey full crack mega windows 10
-descargar sigmakey full crack mega ultima version
-descargar sigmakey full crack mega para pc
-descargar sigmakey full crack mega sin dongle
-descargar sigmakey full crack mega mediafire
-descargar sigmakey full crack mega 64 bits
-descargar sigmakey full crack mega 32 bits
-descargar sigmakey full crack mega sin virus
-descargar sigmakey full crack mega mtk
-descargar sigmakey full crack mega qualcomm
-descargar sigmakey full crack mega español
-descargar sigmakey full crack mega portable
-descargar sigmakey full crack mega tutorial
-descargar sigmakey full crack mega link directo
-descargar sigmakey full crack mega reparar imei
-descargar sigmakey full crack mega frp
-descargar sigmakey full crack mega bootloader
-descargar sigmakey full crack mega firmware
-descargar sigmakey full crack mega update.app
-descargar sigmakey full crack mega kirin
-descargar sigmakey full crack mega hisilicon
-descargar sigmakey full crack mega spreadtrum
-descargar sigmakey full crack mega mediatek
-descargar sigmakey full crack mega alcatel
-descargar sigmakey full crack mega motorola
-descargar sigmakey full crack mega lg
-descargar sigmakey full crack mega zte
-descargar sigmakey full crack mega lenovo
-descargar sigmakey full crack mega sony
-descargar sigmakey full crack mega vtelca
-descargar sigmakey full crack mega lanix
-descargar sigmakey full crack mega blu
-descargar sigmakey full crack mega azumi
-descargar sigmakey full crack mega verykool
-descargar sigmakey full crack mega avvio
-descargar sigmakey full crack mega bitel
-descargar sigmakey full crack mega bmobile
-descargar sigakeyfullcrackmega.exe (not recommended)

-

Requirements and precautions

- -

Steps to download and install SigmaKey full crack mega

-
  1. Go to this link https://www.getdroidtips.com/download-sigmakey-huawei-crack/ and click on the download button at the bottom of the page.
  2. You will be redirected to another page where you have to complete some surveys or offers to get the download link. Follow the instructions on the screen and complete the tasks.
  3. Once you get the download link, click on it and save the file on your PC. The file name is Sigmakey_Huawei_Edition_Crack_Version_2.40.02.zip and it has a size of about 100 MB.
  4. Extract the zip file using WinRAR or any other extraction tool. You will get a folder named Sigmakey_Huawei_Edition_Crack_Version_2.40.02 with several files inside it.
  5. Open the folder and run the file named Setup.exe as administrator. Follow the installation wizard and accept the terms and conditions. Choose a destination folder for the software and click on install.
  6. Wait for the installation process to finish. Do not disconnect your device or close the program during this process.
  7. After the installation is done, do not run the software yet. Go back to the folder where you extracted the zip file and open another folder named Loader_Sigma_Key_Huawei_Edition_Crack_Version_2.40.02.
  8. In this folder, you will find two files named Loader.exe and Patch.exe. Copy both files and paste them into the destination folder where you installed the software. Replace any existing files if prompted.
  9. Now run the file named Loader.exe as administrator. This will launch the software with full crack features enabled.
-

Troubleshooting tips

-

If you encounter any problems while downloading or installing SigmaKey full crack mega, you can try these tips:

- -

How to use SigmaKey full crack mega?

-

Once you have successfully downloaded and installed SigmaKey full crack mega, you can start using it to service your mobile devices. Here are some examples of how to use SigmaKey full crack mega for different operations:

-

Unlocking Huawei devices with SigmaKey

-
  1. Connect your Huawei device to your PC via a USB cable in fastboot mode. To enter fastboot mode, power off your device and press the volume down + power buttons simultaneously until you see the fastboot logo on your screen.
  2. Launch SigmaKey full crack mega on your PC and select the Huawei tab from the top menu bar.
  3. Select ADB Interface from the Port Selection drop-down menu in the top left corner of the screen.
  4. Select Fastboot Mode from the Service Mode drop-down menu in the top right corner of the screen.
  5. Select the Unlock Bootloader option from the Service Operations section in the bottom left corner of the screen.
  6. The software will read your device information and generate an unlock code for your bootloader. Write down this code somewhere safe, as you will need it later.
  7. The software will then ask you to enter the unlock code on your device. Follow the instructions on your device screen and enter the unlock code when prompted.
  8. Your device bootloader will be unlocked and your device will reboot automatically. You can disconnect your device from your PC.
-

Flashing and repairing MTK cell phones with SigmaKey

-
  1. Connect your MTK device to your PC via a USB cable in flash mode. To enter flash mode, power off your device and press the volume up + power buttons simultaneously until you see the flash logo on your screen.
  2. Launch SigmaKey full crack mega on your PC and select the MTK tab from the top menu bar.
  3. Select USB Mode from the Port Selection drop-down menu in the top left corner of the screen.
  4. Select Flash Mode from the Service Mode drop-down menu in the top right corner of the screen.
  5. Select the Flash Firmware option from the Service Operations section in the bottom left corner of the screen.
  6. The software will ask you to select a firmware file for your device. You can download firmware files from various online sources or use the ones provided by SigmaKey. Click on the Browse button and locate the firmware file on your PC.
  7. The software will verify the firmware file and show you some information about it. Make sure the firmware file matches your device model and version. Click on the Write Firmware button to start the flashing process.
  8. The software will flash the firmware file to your device and show you a progress bar. Do not disconnect your device or close the program during this process.
  9. After the flashing process is done, the software will show you a success message and your device will reboot automatically. You can disconnect your device from your PC.
-

Other operations with SigmaKey

-

SigmaKey full crack mega can also perform other operations on your devices, such as:

- -

To perform these operations, you need to select the appropriate tab, port, mode, and option from the software interface. You can also refer to the user manual or customer guide for more details and instructions.

-

Conclusion

-

In this article, we have shown you how to download SigmaKey full crack mega, a cracked version of the software that allows you to flash, unlock, and repair your mobile devices without a dongle or activation. We have also explained how to use SigmaKey full crack mega for different operations on Huawei and MTK devices, and we have warned you about the risks and consequences of using it, as it is illegal and unsafe. We recommend using the original SigmaKey software with a dongle and activation for a better and safer experience.

-

Summary of the article

-

SigmaKey is a professional and powerful tool for mobile servicing that works with a dongle and activation. SigmaKey full crack mega is a cracked version of the software that does not require a dongle or activation. SigmaKey full crack mega allows you to perform various operations on your devices, such as unlocking, flashing, repairing, etc. However, SigmaKey full crack mega is illegal and risky to use, as it might lead to virus infection, data loss, dongle detection, lack of updates, or even a lawsuit. Therefore, it is better to use the original SigmaKey software with a dongle and activation for a safer and better experience.

-

FAQs

-
  1. What is SigmaKey?

    SigmaKey is software that works with a dongle and allows you to service various types of cell phones, especially Huawei, MTK, Qualcomm, HiSilicon, and Spreadtrum devices.

  2. What is SigmaKey full crack mega?

    SigmaKey full crack mega is a cracked version of the software that does not require a dongle or activation. It bypasses the security and authentication of the dongle.

  3. How do I download SigmaKey full crack mega?

    You can download SigmaKey full crack mega from this link: https://www.getdroidtips.com/download-sigmakey-huawei-crack/. You have to complete some surveys or offers to get the download link. Then you have to install the software and copy the loader and patch files into the installation folder.

  4. How do I use SigmaKey full crack mega?

    You can use SigmaKey full crack mega to perform various operations on your devices, such as unlocking, flashing, repairing, etc. You have to select the appropriate tab, port, mode, and option from the software interface. You can also refer to the user manual or customer guide for more details and instructions.

  5. What are the risks of using SigmaKey full crack mega?

    Using SigmaKey full crack mega is illegal and risky. You might face problems such as virus infection, data loss, dongle detection, lack of updates, or legal action. Therefore, it is better to use the original SigmaKey software with a dongle and activation for a safer and better experience.
-

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Deadly Assassin in Sniper Killer 3D The Best Offline Sniper Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Deadly Assassin in Sniper Killer 3D The Best Offline Sniper Game.md deleted file mode 100644 index 0d14a0a004d620c440e36926df7289386fe276ca..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Deadly Assassin in Sniper Killer 3D The Best Offline Sniper Game.md +++ /dev/null @@ -1,103 +0,0 @@ - -

Sniper Killer 3D: The Ultimate Shooting Game

-

If you are looking for a shooting game that will test your skills as a sniper, look no further than Sniper Killer 3D. This game is the ultimate sniper adventure that will immerse you in high-intensity missions and action-packed scenarios. Whether you want to play offline or online, Sniper Killer 3D has something for everyone. Here is everything you need to know about this amazing game.

-

sniper killer 3d


DOWNLOAD --->>> https://urlin.us/2uSUWO



-

What is Sniper Killer 3D?

-

Sniper Killer 3D is a shooting game where you play as a sniper who must eliminate high-profile targets and criminals. You will travel to different locations around the world, taking on various challenges and objectives. You will also have access to a huge arsenal of sniper rifles, assault rifles, and other guns that you can upgrade and customize. Sniper Killer 3D is a game that combines realism, variety, and fun in one package.

-

A thrilling and realistic sniper game

-

One of the best features of Sniper Killer 3D is its realistic physics and ballistics. You will have to take into account factors such as wind, distance, gravity, and movement when aiming and shooting your target. You will also have to deal with different weather conditions, such as rain, fog, snow, and night. You will feel like a real sniper as you pull the trigger and watch your bullet hit the mark.

-

A variety of weapons and missions

-

Sniper Killer 3D offers you more than 180 authentic weapons to choose from. You can unlock different sniper rifles, each with its own characteristics and advantages. You can also upgrade your weapons with scopes, silencers, magazines, and other attachments. You will need to use the right weapon for the right mission, as some targets may require more power, accuracy, or stealth than others.

-

The game also has hundreds of thrilling missions that will keep you entertained for hours. You will have to eliminate terrorists, kidnappers, drug lords, assassins, and other enemies. You will also have to protect innocent civilians, rescue hostages, defuse bombs, and more. Each mission has its own objectives and rewards that you can use to buy new weapons or upgrade your existing ones.

-

A free and offline gameplay

-

Another great feature of Sniper Killer 3D is that it is free to play. You can download the game from the Google Play Store or play it on your web browser without spending a dime. The game also has an offline mode that allows you to play without an internet connection or data. You can enjoy the game anytime and anywhere you want.

-

How to play Sniper Killer 3D?

-

Sniper Killer 3D is easy to play but hard to master. Here are some tips on how to play the game:

-

sniper killer 3d gun shooting games
-sniper 3d wildlife studios
-sniper 3d piercing bullet
-sniper 3d stout assault rifle
-sniper 3d offline mode
-sniper 3d free to play
-sniper 3d action adventure
-sniper 3d realistic ballistics
-sniper 3d variety of guns
-sniper 3d diverse locations
-sniper killer 3d download
-sniper killer 3d mod apk
-sniper killer 3d cheats
-sniper killer 3d hack
-sniper killer 3d unlimited money
-sniper killer 3d review
-sniper killer 3d gameplay
-sniper killer 3d trailer
-sniper killer 3d tips and tricks
-sniper killer 3d best weapons
-sniper killer 3d online multiplayer
-sniper killer 3d pvp mode
-sniper killer 3d special bullets
-sniper killer 3d elite shooter
-sniper killer 3d high-profile targets
-sniper killer 3d missions and challenges
-sniper killer 3d fun games for free
-sniper killer 3d android app
-sniper killer 3d ios app
-sniper killer 3d pc game
-sniper killer 3d mac game
-sniper killer 3d windows game
-sniper killer 3d linux game
-sniper killer 3d steam game
-sniper killer 3d epic games store game
-sniper killer 3d google play store game
-sniper killer 3d app store game
-sniper killer 3d amazon appstore game
-sniper killer 3d microsoft store game
-sniper killer 3d data privacy and security
-sniper killer 3d ratings and reviews
-sniper killer 3d customer support
-sniper killer 3d updates and news
-sniper killer 3d blog and community
-sniper killer 3d social media accounts
-sniper killer 3d youtube channel
-sniper killer 3d twitch channel
-sniper killer 3d discord server
-sniper killer 3d reddit forum
-sniper killer 3d wiki and guide

-

Choose your sniper rifle and scope

-

Before each mission, you will have to select your weapon and scope. You can browse through the available weapons and see their stats, such as damage, range, stability, fire rate, and capacity. You can also see the available scopes and their zoom levels. Choose the weapon and scope that suit your mission and preference.

-

Aim and shoot your target

-

Once you start the mission, you will have to locate your target using your scope. You can use the mouse scroll wheel or the right mouse button to zoom in or out, and click and drag with the left mouse button to move your aim. You will see a red dot on your target, which indicates the bullet trajectory. You will have to adjust your aim according to the wind, distance, and movement of your target; the wind indicator and the range finder help you do this. When you are ready, press the space bar or the left mouse button to shoot.

-

Complete the objectives and earn rewards

-

After you shoot your target, you will see a slow-motion replay of your shot. You will also see if you completed the mission objectives, such as killing the target, avoiding collateral damage, or achieving a headshot. You will earn coins and diamonds based on your performance. You can use these rewards to buy new weapons or upgrade your existing ones.

-

Why play Sniper Killer 3D?

-

Sniper Killer 3D is not just a game, it is an experience. Here are some reasons why you should play this game:

-

Improve your shooting skills and accuracy

-

Sniper Killer 3D is a game that will challenge your shooting skills and accuracy. You will have to be precise and patient as you aim and shoot your target. You will also have to be strategic and tactical as you choose your weapon and scope. You will learn how to handle different situations and scenarios as a sniper. You will become a better shooter as you play this game.

-

Enjoy stunning 3D graphics and animations

-

Sniper Killer 3D is a game that will impress you with its stunning 3D graphics and animations. You will see realistic environments, such as cities, mountains, deserts, and islands. You will also see lifelike characters, such as your targets, civilians, and enemies. You will feel the impact of your shots as you see blood splatter, bullet holes, and explosions. You will be amazed by the quality and detail of this game.

-

Challenge yourself with different levels of difficulty

-

Sniper Killer 3D is a game that will test your limits with different levels of difficulty. You can choose from easy, normal, hard, or expert modes depending on your skill level. You will face more challenging targets, objectives, and conditions as you progress through the game. You will also have to deal with limited ammo, time, and health. You will have to prove yourself as a sniper killer in this game.

-

Where to download Sniper Killer 3D?

-

Sniper Killer 3D is a game that is available on multiple platforms. Here are some options on where to download this game:

-

Available on Google Play Store for Android devices

-

If you have an Android device, such as a smartphone or tablet, you can download Sniper Killer 3D from the Google Play Store for free. You can also enjoy the game without any ads or in-app purchases. You can access the game from this link: [Sniper Killer 3D].

-

Compatible with web browsers for desktop computers

-

If you have a desktop computer, such as a PC or Mac, you can play Sniper Killer 3D on your web browser for free. You can also enjoy the game without any downloads or installations. You can access the game from this link: [Sniper Killer 3D].

-

Conclusion

-

Sniper Killer 3D is a game that will give you an unforgettable shooting experience. It is a game that combines realism, variety, and fun in one package. It is a game that will improve your shooting skills and accuracy, enjoy stunning 3D graphics and animations, and challenge yourself with different levels of difficulty. It is a game that is free to play and available on multiple platforms. It is a game that you should not miss.

-

If you are ready to become a sniper killer, download Sniper Killer 3D today and start your adventure!

-

Frequently Asked Questions

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/FIFA Chino APK disfruta de la emocin del ftbol con grficos increbles.md b/spaces/1phancelerku/anime-remove-background/FIFA Chino APK disfruta de la emocin del ftbol con grficos increbles.md deleted file mode 100644 index 7a78027238e887413e72283091158fa0d9e73f90..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/FIFA Chino APK disfruta de la emocin del ftbol con grficos increbles.md +++ /dev/null @@ -1,154 +0,0 @@ - -

FIFA Mobile Chino APK Actualizado: Todo lo que necesitas saber

-

Si eres un fanático del fútbol y te gusta jugar a los juegos de EA Sports, seguramente habrás oído hablar de FIFA Mobile, el juego oficial para dispositivos móviles que te permite crear tu propio equipo, competir en diferentes modos y eventos, y disfrutar de la emoción del deporte rey. Pero, ¿sabías que existe una versión alternativa de este juego, llamada FIFA Mobile Chino APK, que tiene algunas características y opciones diferentes a la versión original?

-

fifa mobile chino apk actualizado


DOWNLOADhttps://jinyurl.com/2uNS53



-

En este artículo, te vamos a contar todo lo que necesitas saber sobre FIFA Mobile Chino APK, qué es, cómo descargarlo e instalarlo, qué ventajas y desventajas tiene, cómo se compara con FIFA Mobile APK, qué opinan los usuarios que lo han probado, y algunas preguntas frecuentes que te pueden surgir. ¡Sigue leyendo y descubre si este juego es para ti!

-

¿Qué es FIFA Mobile Chino APK?

-

FIFA Mobile Chino APK es una versión modificada de FIFA Mobile, el juego oficial de EA Sports para dispositivos móviles Android e iOS. Esta versión está desarrollada por Tencent, una empresa china que tiene los derechos de distribución de FIFA en China. Por lo tanto, esta versión está pensada principalmente para el público chino, aunque también se puede jugar desde otros países.

-

FIFA Mobile Chino APK tiene algunas características y opciones diferentes a la versión original de FIFA Mobile, como por ejemplo:

-

Características principales de FIFA Mobile Chino APK

- -

Cómo descargar e instalar FIFA Mobile Chino APK

-

Para descargar e instalar FIFA Mobile Chino APK en tu dispositivo Android, debes seguir estos pasos:

-
  1. Accede a un sitio web seguro y confiable que ofrezca el archivo APK de FIFA Mobile Chino. Por ejemplo, puedes usar este enlace: .
  2. Descarga el archivo APK en tu dispositivo. Puede que tengas que habilitar la opción de instalar aplicaciones de fuentes desconocidas en los ajustes de seguridad de tu dispositivo.
  3. Abre el archivo APK y sigue las instrucciones que aparecen en la pantalla para completar la instalación.
  4. Una vez instalado, abre el juego y espera a que se descarguen los datos adicionales necesarios para su funcionamiento.
  5. Disfruta de FIFA Mobile Chino APK en tu dispositivo Android.
-
-

Para descargar e instalar FIFA Mobile Chino APK en tu dispositivo iOS, debes seguir estos pasos:

-
  1. Accede a un sitio web seguro y confiable que ofrezca el archivo IPA de FIFA Mobile Chino. Por ejemplo, puedes usar este enlace: .
  2. Descarga el archivo IPA en tu dispositivo. Puede que tengas que usar una aplicación de gestión de archivos como iFile o Filza para mover el archivo a la carpeta adecuada.
  3. Abre el archivo IPA y sigue las instrucciones que aparecen en la pantalla para completar la instalación.
  4. Una vez instalado, abre el juego y espera a que se descarguen los datos adicionales necesarios para su funcionamiento.
  5. Disfruta de FIFA Mobile Chino APK en tu dispositivo iOS.
-
-

Ventajas y desventajas de FIFA Mobile Chino APK

-

Como todo juego, FIFA Mobile Chino APK tiene sus pros y sus contras. Aquí te resumimos algunas de las ventajas y desventajas de este juego:

-

Ventajas de FIFA Mobile Chino APK

- -

Desventajas de FIFA Mobile Chino APK

- -

¿Qué diferencia hay entre FIFA Mobile Chino APK y FIFA Mobile APK?

-

Ahora que ya sabes qué es FIFA Mobile Chino APK, te preguntarás qué diferencia hay con FIFA Mobile APK, la versión original del juego. Pues bien, aunque ambos juegos comparten el mismo concepto y objetivo, hay algunas similitudes y diferencias entre ellos que te vamos a explicar a continuación:

-

Similitudes entre ambos juegos

- -

Diferencias entre ambos juegos

- -

¿Qué opinan los usuarios de FIFA Mobile Chino APK?

-

Si te preguntas qué opinan los usuarios que han probado FIFA Mobile Chino APK, te podemos decir que hay opiniones de todo tipo. Algunos usuarios están muy satisfechos con este juego y lo prefieren a la versión original de FIFA Mobile, mientras que otros usuarios están muy decepcionados con este juego y lo consideran una copia barata de FIFA Mobile. Aquí te mostramos algunas de las reseñas positivas y negativas que hemos encontrado en internet:

-

descargar fifa mobile chino apk
-fifa mobile chino apk 2023
-fifa mobile chino apk ultima version
-fifa mobile chino apk mod
-fifa mobile chino apk hack
-fifa mobile chino apk mega
-fifa mobile chino apk mediafire
-fifa mobile chino apk sin licencia
-fifa mobile chino apk android
-fifa mobile chino apk gratis
-fifa mobile chino apk full
-fifa mobile chino apk offline
-fifa mobile chino apk obb
-fifa mobile chino apk datos
-fifa mobile chino apk gameplay
-fifa mobile chino apk descargar gratis
-fifa mobile chino apk 2023 ultima version
-fifa mobile chino apk 2023 mod
-fifa mobile chino apk 2023 hack
-fifa mobile chino apk 2023 mega
-fifa mobile chino apk 2023 mediafire
-fifa mobile chino apk 2023 sin licencia
-fifa mobile chino apk 2023 android
-fifa mobile chino apk 2023 gratis
-fifa mobile chino apk 2023 full
-fifa mobile chino apk 2023 offline
-fifa mobile chino apk 2023 obb
-fifa mobile chino apk 2023 datos
-fifa mobile chino apk 2023 gameplay
-fifa mobile chino apk 2023 descargar gratis
-como descargar fifa mobile chino apk
-como instalar fifa mobile chino apk
-como jugar fifa mobile chino apk
-como actualizar fifa mobile chino apk
-como hackear fifa mobile chino apk
-como tener monedas en fifa mobile chino apk
-como tener jugadores en fifa mobile chino apk
-como tener licencia en fifa mobile chino apk
-como solucionar error en fifa mobile chino apk
-como quitar publicidad en fifa mobile chino apk

-

Reseñas positivas de FIFA Mobile Chino APK

- -

Reseñas negativas de FIFA Mobile Chino APK

- -

Conclusión

-

En conclusión, podemos decir que FIFA Mobile Chino APK es una versión alternativa de FIFA Mobile, el juego oficial de EA Sports para dispositivos móviles. Esta versión está desarrollada por Tencent, una empresa china que tiene los derechos de distribución de FIFA en China.

-

FIFA Mobile Chino APK tiene algunas características y opciones diferentes a la versión original de FIFA Mobile, como una interfaz más colorida, más modos de juego disponibles, más opciones de personalización para tu equipo, más eventos y actividades especiales, más jugadores y leyendas disponibles para fichar, un sistema de recompensas más generoso y variado, y un mercado de transferencias más dinámico y competitivo.

-

FIFA Mobile Chino APK también tiene algunas vent ajas y desventajas, como un idioma diferente al español, un mayor riesgo de virus o malware, un mayor consumo de recursos y datos, y un mayor nivel de dificultad y competencia.

-

FIFA Mobile Chino APK se puede descargar e instalar en dispositivos Android e iOS, siguiendo unos sencillos pasos que te hemos explicado en este artículo. Sin embargo, debes tener en cuenta que no se trata de una versión oficial ni está disponible en las tiendas oficiales de aplicaciones, por lo que debes tomar algunas precauciones al usarla.

-

FIFA Mobile Chino APK se diferencia de FIFA Mobile APK, la versión original del juego, en algunos aspectos que también te hemos detallado en este artículo. Ambos juegos tienen sus similitudes y diferencias, y depende de tu gusto y preferencia el elegir uno u otro.

-

¿Por qué deberías probar FIFA Mobile Chino APK?

-

Si te gustan los juegos de fútbol y quieres probar algo diferente al FIFA Mobile original, puedes darle una oportunidad a FIFA Mobile Chino APK. Este juego te ofrece más contenido y opciones que la versión original, lo que lo hace más divertido y variado. Además, tiene una mejor calidad gráfica y sonora, lo que lo hace más atractivo y realista. También tiene una mayor compatibilidad con diferentes dispositivos y sistemas operativos, lo que lo hace más accesible y fácil de usar. Y por si fuera poco, tiene una comunidad más activa y participativa, lo que lo hace más social e interactivo.

-

¿Qué precauciones debes tomar al usar FIFA Mobile Chino APK?

-

Si decides probar FIFA Mobile Chino APK, debes tener en cuenta algunas precauciones para evitar problemas o inconvenientes. Algunas de estas precauciones son:

- -

Preguntas frecuentes sobre FIFA Mobile Chino APK

-

Para terminar este artículo, te vamos a responder algunas de las preguntas frecuentes que pueden surgirte sobre FIFA Mobile Chino APK. Esperamos que te sean útiles y te ayuden a resolver tus dudas.

-

¿FIFA Mobile Chino APK es gratis?

-

Sí, FIFA Mobile Chino APK es gratis. No tienes que pagar nada para descargarlo e instalarlo en tu dispositivo. Sin embargo, el juego tiene compras integradas que te permiten obtener monedas, puntos o sobres con dinero real. Estas compras son opcionales y no son necesarias para jugar.

-

¿FIFA Mobile Chino APK es seguro?

-

No podemos garantizar al 100% que FIFA Mobile Chino APK sea seguro. Al no ser una versión oficial ni estar disponible en las tiendas oficiales de aplicaciones, existe el riesgo de que el archivo APK o IPA contenga virus o malware que puedan dañar tu dispositivo o comprometer tu seguridad. Por eso, te recomendamos que verifiques la fuente de descarga del archivo y que uses un antivirus o un firewall para proteger tu dispositivo.

-

¿FIFA Mobile Chino APK está en español?

-

No, FIFA Mobile Chino APK no está en español. El idioma principal del juego es el chino mandarín, aunque también tiene algunos elementos en inglés. No hay opción para cambiar el idioma del juego al español u otro idioma. Por eso, si no entiendes el chino o el inglés, puede que tengas dificultades para jugar o disfrutar del juego.

-

¿FIFA Mobile Chino APK se puede jugar con otros usuarios?

-

Sí, FIFA Mobile Chino APK se puede jugar con otros usuarios. El juego tiene un modo multijugador que te permite enfrentarte a otros jugadores en partidos online, ya sea en el modo versus, el modo ataque o el modo torneo. También puedes unirte a una liga o un club para cooperar o competir con otros usuarios, y participar en eventos y actividades especiales que te dan la oportunidad de ganar recompensas y reconocimientos.

-

¿FIFA Mobile Chino APK se actualiza con frecuencia?

-

Sí, FIFA Mobile Chino APK se actualiza con frecuencia. Los desarrolladores del juego suelen lanzar nuevas versiones del archivo APK o IPA cada cierto tiempo, para añadir nuevas características, opciones, eventos, jugadores y correcciones de errores. Por eso, te recomendamos que estés atento a las novedades y que descargues la última versión disponible para disfrutar de la mejor experiencia de juego.

-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_unclip.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_unclip.py deleted file mode 100644 index e87536bce3e43212288b4f7aa710b49dec97bf8d..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_unclip.py +++ /dev/null @@ -1,303 +0,0 @@ -# Copyright 2022 Kakao Brain and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import numpy as np -import paddle - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .scheduling_utils import SchedulerMixin - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->UnCLIP -class UnCLIPSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: paddle.Tensor - pred_original_sample: Optional[paddle.Tensor] = None - - -def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return paddle.to_tensor(betas, dtype=paddle.float32) - - -class UnCLIPScheduler(SchedulerMixin, ConfigMixin): - """ - This is a modified DDPM Scheduler specifically for the karlo unCLIP model. - - This scheduler has some minor variations in how it calculates the learned range variance and dynamically - re-calculates betas based off the timesteps it is skipping. - - The scheduler also uses a slightly different step ratio when computing timesteps to use for inference. 
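    For instance, taking 25 inference steps as an arbitrary example with the default 1000 training timesteps, the spacing used here is (1000 - 1) / (25 - 1) ≈ 41.6, which rounds to the schedule [999, 957, ..., 42, 0] and so covers both endpoints, whereas a plain `num_train_timesteps // num_inference_steps` ratio of 40 would top out at timestep 960.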
- - See [`~DDPMScheduler`] for more information on DDPM scheduling - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - variance_type (`str`): - options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small_log` - or `learned_range`. - clip_sample (`bool`, default `True`): - option to clip predicted sample between `-clip_sample_range` and `clip_sample_range` for numerical - stability. - clip_sample_range (`float`, default `1.0`): - The range to clip the sample between. See `clip_sample`. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion process) - or `sample` (directly predicting the noisy sample`) - """ - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - variance_type: str = "fixed_small_log", - clip_sample: bool = True, - clip_sample_range: Optional[float] = 1.0, - prediction_type: str = "epsilon", - ): - # beta scheduler is "squaredcos_cap_v2" - self.betas = betas_for_alpha_bar(num_train_timesteps) - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = paddle.cumprod(self.alphas, 0) - self.one = paddle.to_tensor(1.0) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # setable values - self.num_inference_steps = None - self.timesteps = paddle.to_tensor(np.arange(0, num_train_timesteps)[::-1].copy()) - - self.variance_type = variance_type - - def scale_model_input(self, sample: paddle.Tensor, timestep: Optional[int] = None) -> paddle.Tensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`paddle.Tensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `paddle.Tensor`: scaled input sample - """ - return sample - - def set_timesteps(self, num_inference_steps: int): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Note that this scheduler uses a slightly different step ratio than the other diffusers schedulers. The - different step ratio is to mimic the original karlo implementation and does not affect the quality or accuracy - of the results. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. 
- """ - self.num_inference_steps = num_inference_steps - step_ratio = (self.config.num_train_timesteps - 1) / (self.num_inference_steps - 1) - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64) - self.timesteps = paddle.to_tensor(timesteps) - - def _get_variance(self, t, prev_timestep=None, predicted_variance=None, variance_type=None): - if prev_timestep is None: - prev_timestep = t - 1 - - alpha_prod_t = self.alphas_cumprod[t] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.one - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - if prev_timestep == t - 1: - beta = self.betas[t] - else: - beta = 1 - alpha_prod_t / alpha_prod_t_prev - - # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf) - # and sample from it to get previous sample - # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample - variance = beta_prod_t_prev / beta_prod_t * beta - - if variance_type is None: - variance_type = self.config.variance_type - - # hacks - were probably added for training stability - if variance_type == "fixed_small_log": - variance = paddle.log(paddle.clip(variance, min=1e-20)) - variance = paddle.exp(0.5 * variance) - elif variance_type == "learned_range": - # NOTE difference with DDPM scheduler - min_log = variance.log() - max_log = beta.log() - - frac = (predicted_variance + 1) / 2 - variance = frac * max_log + (1 - frac) * min_log - - return variance - - def step( - self, - model_output: paddle.Tensor, - timestep: int, - sample: paddle.Tensor, - prev_timestep: Optional[int] = None, - generator=None, - return_dict: bool = True, - ) -> Union[UnCLIPSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - prev_timestep (`int`, *optional*): The previous timestep to predict the previous sample at. - Used to dynamically compute beta. If not given, `t-1` is used and the pre-computed beta is used. - generator: random number generator. - return_dict (`bool`): option for returning tuple rather than UnCLIPSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.UnCLIPSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.UnCLIPSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - - """ - - t = timestep - - if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type == "learned_range": - model_output, predicted_variance = model_output.split( - [sample.shape[1], model_output.shape[1] - sample.shape[1]], axis=1 - ) - else: - predicted_variance = None - - # 1. compute alphas, betas - if prev_timestep is None: - prev_timestep = t - 1 - - alpha_prod_t = self.alphas_cumprod[t] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.one - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - if prev_timestep == t - 1: - beta = self.betas[t] - alpha = self.alphas[t] - else: - beta = 1 - alpha_prod_t / alpha_prod_t_prev - alpha = 1 - beta - - # 2. 
compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf - if self.config.prediction_type == "epsilon": - pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5) - elif self.config.prediction_type == "sample": - pred_original_sample = model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon` or `sample`" - " for the UnCLIPScheduler." - ) - - # 3. Clip "predicted x_0" - if self.config.clip_sample: - pred_original_sample = paddle.clip( - pred_original_sample, -self.config.clip_sample_range, self.config.clip_sample_range - ) - - # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t - # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf - pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * beta) / beta_prod_t - current_sample_coeff = alpha ** (0.5) * beta_prod_t_prev / beta_prod_t - - # 5. Compute predicted previous sample µ_t - # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf - pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample - - # 6. Add noise - variance = 0 - if t > 0: - variance_noise = paddle.randn(model_output.shape, generator=generator, dtype=model_output.dtype) - - variance = self._get_variance( - t, - predicted_variance=predicted_variance, - prev_timestep=prev_timestep, - ) - - if self.variance_type == "fixed_small_log": - variance = variance - elif self.variance_type == "learned_range": - variance = (0.5 * variance).exp() - else: - raise ValueError( - f"variance_type given as {self.variance_type} must be one of `fixed_small_log` or `learned_range`" - " for the UnCLIPScheduler." 
- ) - - variance = variance * variance_noise - - pred_prev_sample = pred_prev_sample + variance - - if not return_dict: - return (pred_prev_sample,) - - return UnCLIPSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample) diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py deleted file mode 100644 index 201359c4e743aed285694668e13da6dd5a40b621..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py +++ /dev/null @@ -1,187 +0,0 @@ - -from __future__ import absolute_import - -import sys -import torch -import torch.nn as nn -import torch.nn.init as init -from torch.autograd import Variable -import numpy as np -from pdb import set_trace as st -from skimage import color -from IPython import embed -from model.stylegan.lpips import pretrained_networks as pn - -import model.stylegan.lpips as util - -def spatial_average(in_tens, keepdim=True): - return in_tens.mean([2,3],keepdim=keepdim) - -def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W - in_H = in_tens.shape[2] - scale_factor = 1.*out_H/in_H - - return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens) - -# Learned perceptual metric -class PNetLin(nn.Module): - def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, version='0.1', lpips=True): - super(PNetLin, self).__init__() - - self.pnet_type = pnet_type - self.pnet_tune = pnet_tune - self.pnet_rand = pnet_rand - self.spatial = spatial - self.lpips = lpips - self.version = version - self.scaling_layer = ScalingLayer() - - if(self.pnet_type in ['vgg','vgg16']): - net_type = pn.vgg16 - self.chns = [64,128,256,512,512] - elif(self.pnet_type=='alex'): - net_type = pn.alexnet - self.chns = [64,192,384,256,256] - elif(self.pnet_type=='squeeze'): - net_type = pn.squeezenet - self.chns = [64,128,256,384,384,512,512] - self.L = len(self.chns) - - self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune) - - if(lpips): - self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout) - self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout) - self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout) - self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout) - self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout) - self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4] - if(self.pnet_type=='squeeze'): # 7 layers for squeezenet - self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout) - self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout) - self.lins+=[self.lin5,self.lin6] - - def forward(self, in0, in1, retPerLayer=False): - # v0.0 - original release had a bug, where input was not scaled - in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1) - outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input) - feats0, feats1, diffs = {}, {}, {} - - for kk in range(self.L): - feats0[kk], feats1[kk] = util.normalize_tensor(outs0[kk]), util.normalize_tensor(outs1[kk]) - diffs[kk] = (feats0[kk]-feats1[kk])**2 - - if(self.lpips): - if(self.spatial): - res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)] - else: - 
if(self.spatial): - res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)] - - val = res[0] - for l in range(1,self.L): - val += res[l] - - if(retPerLayer): - return (val, res) - else: - return val - -class ScalingLayer(nn.Module): - def __init__(self): - super(ScalingLayer, self).__init__() - self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None]) - self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None]) - - def forward(self, inp): - return (inp - self.shift) / self.scale - - -class NetLinLayer(nn.Module): - ''' A single linear layer which does a 1x1 conv ''' - def __init__(self, chn_in, chn_out=1, use_dropout=False): - super(NetLinLayer, self).__init__() - - layers = [nn.Dropout(),] if(use_dropout) else [] - layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False),] - self.model = nn.Sequential(*layers) - - -class Dist2LogitLayer(nn.Module): - ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) ''' - def __init__(self, chn_mid=32, use_sigmoid=True): - super(Dist2LogitLayer, self).__init__() - - layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True),] - layers += [nn.LeakyReLU(0.2,True),] - layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True),] - layers += [nn.LeakyReLU(0.2,True),] - layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True),] - if(use_sigmoid): - layers += [nn.Sigmoid(),] - self.model = nn.Sequential(*layers) - - def forward(self,d0,d1,eps=0.1): - return self.model.forward(torch.cat((d0,d1,d0-d1,d0/(d1+eps),d1/(d0+eps)),dim=1)) - -class BCERankingLoss(nn.Module): - def __init__(self, chn_mid=32): - super(BCERankingLoss, self).__init__() - self.net = Dist2LogitLayer(chn_mid=chn_mid) - # self.parameters = list(self.net.parameters()) - self.loss = torch.nn.BCELoss() - - def forward(self, d0, d1, judge): - per = (judge+1.)/2. 
- self.logit = self.net.forward(d0,d1) - return self.loss(self.logit, per) - -# L2, DSSIM metrics -class FakeNet(nn.Module): - def __init__(self, use_gpu=True, colorspace='Lab'): - super(FakeNet, self).__init__() - self.use_gpu = use_gpu - self.colorspace=colorspace - -class L2(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert(in0.size()[0]==1) # currently only supports batchSize 1 - - if(self.colorspace=='RGB'): - (N,C,X,Y) = in0.size() - value = torch.mean(torch.mean(torch.mean((in0-in1)**2,dim=1).view(N,1,X,Y),dim=2).view(N,1,1,Y),dim=3).view(N) - return value - elif(self.colorspace=='Lab'): - value = util.l2(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)), - util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float') - ret_var = Variable( torch.Tensor((value,) ) ) - if(self.use_gpu): - ret_var = ret_var.cuda() - return ret_var - -class DSSIM(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert(in0.size()[0]==1) # currently only supports batchSize 1 - - if(self.colorspace=='RGB'): - value = util.dssim(1.*util.tensor2im(in0.data), 1.*util.tensor2im(in1.data), range=255.).astype('float') - elif(self.colorspace=='Lab'): - value = util.dssim(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)), - util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float') - ret_var = Variable( torch.Tensor((value,) ) ) - if(self.use_gpu): - ret_var = ret_var.cuda() - return ret_var - -def print_network(net): - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - print('Network',net) - print('Total number of parameters: %d' % num_params) diff --git a/spaces/A00001/bingothoo/src/components/chat-attachments.tsx b/spaces/A00001/bingothoo/src/components/chat-attachments.tsx deleted file mode 100644 index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/components/chat-attachments.tsx +++ /dev/null @@ -1,37 +0,0 @@ -import Image from 'next/image' -import ClearIcon from '@/assets/images/clear.svg' -import RefreshIcon from '@/assets/images/refresh.svg' -import { FileItem } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' -import { useBing } from '@/lib/hooks/use-bing' - -type ChatAttachmentsProps = Pick, 'attachmentList' | 'setAttachmentList' | 'uploadImage'> - -export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) { - return attachmentList.length ? ( -
- {attachmentList.map(file => ( -
- {file.status === 'loading' && ( -
-
-
) - } - {file.status !== 'error' && ( -
- -
) - } - {file.status === 'error' && ( -
- refresh uploadImage(file.url)} /> -
- )} - -
- ))} -
- ) : null -} diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/rvm.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/rvm.py deleted file mode 100644 index 028324529531dd7ee97210dfd890fed717447be0..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/rvm.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp -import torch -from torch import nn -import torchaudio - - -def db_to_scale(volume: tp.Union[float, torch.Tensor]): - return 10 ** (volume / 20) - - -def scale_to_db(scale: torch.Tensor, min_volume: float = -120): - min_scale = db_to_scale(min_volume) - return 20 * torch.log10(scale.clamp(min=min_scale)) - - -class RelativeVolumeMel(nn.Module): - """Relative volume melspectrogram measure. - - Computes a measure of distance over two mel spectrogram that is interpretable in terms - of decibels. Given `x_ref` and `x_est` two waveforms of shape `[*, T]`, it will - first renormalize both by the ground truth of `x_ref`. - - Then it computes the mel spectrogram `z_ref` and `z_est` and compute volume of the difference - relative to the volume of `z_ref` for each time-frequency bin. It further adds some limits, e.g. - clamping the values between -25 and 25 dB (controlled by `min_relative_volume` and `max_relative_volume`) - with the goal of avoiding the loss being dominated by parts where the reference is almost silent. - Indeed, volumes in dB can take unbounded values both towards -oo and +oo, which can make the final - average metric harder to interpret. Besides, anything below -30 dB of attenuation would sound extremely - good (for a neural network output, although sound engineers typically aim for much lower attenuations). - Similarly, anything above +30 dB would just be completely missing the target, and there is no point - in measuring by exactly how much it missed it. -25, 25 is a more conservative range, but also more - in line with what neural nets currently can achieve. - - For instance, a Relative Volume Mel (RVM) score of -10 dB means that on average, the delta between - the target and reference mel-spec is 10 dB lower than the reference mel-spec value. - - The metric can be aggregated over a given frequency band in order have different insights for - different region of the spectrum. `num_aggregated_bands` controls the number of bands. - - ..Warning:: While this function is optimized for interpretability, nothing was done to ensure it - is numerically stable when computing its gradient. We thus advise against using it as a training loss. - - Args: - sample_rate (int): Sample rate of the input audio. - n_mels (int): Number of mel bands to use. - n_fft (int): Number of frequency bins for the STFT. - hop_length (int): Hop length of the STFT and the mel-spectrogram. - min_relative_volume (float): The error `z_ref - z_est` volume is given relative to - the volume of `z_ref`. If error is smaller than -25 dB of `z_ref`, then it is clamped. - max_relative_volume (float): Same as `min_relative_volume` but clamping if the error is larger than that. - max_initial_gain (float): When rescaling the audio at the very beginning, we will limit the gain - to that amount, to avoid rescaling near silence. Given in dB. - min_activity_volume (float): When computing the reference level from `z_ref`, will clamp low volume - bins to that amount. 
This is effectively our "zero" level for the reference mel-spectrogram, - and anything below that will be considered equally. - num_aggregated_bands (int): Number of bands to keep when computing the average RVM value. - For instance, a value of 3 would give 3 scores, roughly for low, mid and high freqs. - """ - def __init__(self, sample_rate: int = 24000, n_mels: int = 80, n_fft: int = 512, - hop_length: int = 128, min_relative_volume: float = -25, - max_relative_volume: float = 25, max_initial_gain: float = 25, - min_activity_volume: float = -25, - num_aggregated_bands: int = 4) -> None: - super().__init__() - self.melspec = torchaudio.transforms.MelSpectrogram( - n_mels=n_mels, n_fft=n_fft, hop_length=hop_length, - normalized=True, sample_rate=sample_rate, power=2) - self.min_relative_volume = min_relative_volume - self.max_relative_volume = max_relative_volume - self.max_initial_gain = max_initial_gain - self.min_activity_volume = min_activity_volume - self.num_aggregated_bands = num_aggregated_bands - - def forward(self, estimate: torch.Tensor, ground_truth: torch.Tensor) -> tp.Dict[str, torch.Tensor]: - """Compute RVM metric between estimate and reference samples. - - Args: - estimate (torch.Tensor): Estimate sample. - ground_truth (torch.Tensor): Reference sample. - - Returns: - dict[str, torch.Tensor]: Metrics with keys `rvm` for the overall average, and `rvm_{k}` - for the RVM over the k-th band (k=0..num_aggregated_bands - 1). - """ - min_scale = db_to_scale(-self.max_initial_gain) - std = ground_truth.pow(2).mean().sqrt().clamp(min=min_scale) - z_gt = self.melspec(ground_truth / std).sqrt() - z_est = self.melspec(estimate / std).sqrt() - - delta = z_gt - z_est - ref_db = scale_to_db(z_gt, self.min_activity_volume) - delta_db = scale_to_db(delta.abs(), min_volume=-120) - relative_db = (delta_db - ref_db).clamp(self.min_relative_volume, self.max_relative_volume) - dims = list(range(relative_db.dim())) - dims.remove(dims[-2]) - losses_per_band = relative_db.mean(dim=dims) - aggregated = [chunk.mean() for chunk in losses_per_band.chunk(self.num_aggregated_bands, dim=0)] - metrics = {f'rvm_{index}': value for index, value in enumerate(aggregated)} - metrics['rvm'] = losses_per_band.mean() - return metrics diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/readme.md b/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/readme.md deleted file mode 100644 index 7cffcfc72069ff9a098d292f9e37035031e19081..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/readme.md +++ /dev/null @@ -1,12 +0,0 @@ -Code from [rosinality-stylegan2-pytorch-cp](https://github.com/senior-sigan/rosinality-stylegan2-pytorch-cpu) - -Scripts to convert rosinality/stylegan2-pytorch to the CPU compatible format - -If you would like to use CPU for testing or have a problem regarding the cpp extention (fused and upfirdn2d), please make the following changes: - -Change `model.stylegan.op` to `model.stylegan.op_cpu` -https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/util.py#L14 - -https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/simple_augment.py#L12 - -https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/stylegan/model.py#L11 diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/models.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/models.py deleted file mode 100644 index 
3cf5456d1ee9a26a4afe58cea2b11ad78033e01e..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/models.py +++ /dev/null @@ -1,951 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchlibrosa.stft import Spectrogram, LogmelFilterBank -from torchlibrosa.augmentation import SpecAugmentation - -from audio_infer.pytorch.pytorch_utils import do_mixup, interpolate, pad_framewise_output -import os -import sys -import math -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.parameter import Parameter -from torchlibrosa.stft import Spectrogram, LogmelFilterBank -from torchlibrosa.augmentation import SpecAugmentation -from audio_infer.pytorch.pytorch_utils import do_mixup -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ -import warnings -from functools import partial -#from mmdet.models.builder import BACKBONES -from mmdet.utils import get_root_logger -from mmcv.runner import load_checkpoint -os.environ['TORCH_HOME'] = '../pretrained_models' -from copy import deepcopy -from timm.models.helpers import load_pretrained -from torch.cuda.amp import autocast -from collections import OrderedDict -import io -import re -from mmcv.runner import _load_checkpoint, load_state_dict -import mmcv.runner -import copy -import random -from einops import rearrange -from einops.layers.torch import Rearrange, Reduce -from torch import nn, einsum - - -def load_checkpoint(model, - filename, - map_location=None, - strict=False, - logger=None, - revise_keys=[(r'^module\.', '')]): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
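    Note:
        Besides the standard ``mmcv`` loading logic, this variant also adapts the
        pretrained ``patch_embed1.proj.weight`` from 3-channel (RGB) input to the
        single-channel log-mel spectrogram used here, by summing the kernel over the
        input-channel dimension, and it strips any ``backbone.`` prefix from the
        checkpoint keys before calling ``load_state_dict``.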
- """ - - checkpoint = _load_checkpoint(filename, map_location, logger) - new_proj = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(4, 4), padding=(2, 2)) - new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=1).unsqueeze(1)) - checkpoint['patch_embed1.proj.weight'] = new_proj.weight - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - state_dict = OrderedDict({k.replace('backbone.',''):v for k,v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - -def init_layer(layer): - """Initialize a Linear or Convolutional layer. """ - nn.init.xavier_uniform_(layer.weight) - - if hasattr(layer, 'bias'): - if layer.bias is not None: - layer.bias.data.fill_(0.) - - -def init_bn(bn): - """Initialize a Batchnorm layer. """ - bn.bias.data.fill_(0.) - bn.weight.data.fill_(1.) - - - - -class TimeShift(nn.Module): - def __init__(self, mean, std): - super().__init__() - self.mean = mean - self.std = std - - def forward(self, x): - if self.training: - shift = torch.empty(1).normal_(self.mean, self.std).int().item() - x = torch.roll(x, shift, dims=2) - return x - -class LinearSoftPool(nn.Module): - """LinearSoftPool - Linear softmax, takes logits and returns a probability, near to the actual maximum value. 
- Taken from the paper: - A Comparison of Five Multiple Instance Learning Pooling Functions for Sound Event Detection with Weak Labeling - https://arxiv.org/abs/1810.09050 - """ - def __init__(self, pooldim=1): - super().__init__() - self.pooldim = pooldim - - def forward(self, logits, time_decision): - return (time_decision**2).sum(self.pooldim) / time_decision.sum( - self.pooldim) - -class PVT(nn.Module): - def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, classes_num): - - super(PVT, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.time_shift = TimeShift(0, 10) - # Spec augmenter - self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2, - freq_drop_width=8, freq_stripes_num=2) - - self.bn0 = nn.BatchNorm2d(64) - self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001, - fdim=64, - patch_size=7, - stride=4, - in_chans=1, - num_classes=classes_num, - embed_dims=[64, 128, 320, 512], - depths=[3, 4, 6, 3], - num_heads=[1, 2, 5, 8], - mlp_ratios=[8, 8, 4, 4], - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - drop_path_rate=0.1, - sr_ratios=[8, 4, 2, 1], - norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_stages=4, - #pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth' - ) - #self.temp_pool = LinearSoftPool() - self.avgpool = nn.AdaptiveAvgPool1d(1) - self.fc_audioset = nn.Linear(512, classes_num, bias=True) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn0) - init_layer(self.fc_audioset) - - def forward(self, input, mixup_lambda=None): - """Input: (batch_size, times_steps, freq_bins)""" - - interpolate_ratio = 32 - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - frames_num = x.shape[2] - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - if self.training: - x = self.time_shift(x) - x = self.spec_augmenter(x) - - # Mixup on spectrogram - if self.training and mixup_lambda is not None: - x = do_mixup(x, mixup_lambda) - #print(x.shape) #torch.Size([10, 1, 1001, 64]) - x = self.pvt_transformer(x) - #print(x.shape) #torch.Size([10, 800, 128]) - x = torch.mean(x, dim=3) - - x = x.transpose(1, 2).contiguous() - framewise_output = torch.sigmoid(self.fc_audioset(x)) - #clipwise_output = torch.mean(framewise_output, dim=1) - #clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1) - x = framewise_output.transpose(1, 2).contiguous() - x = self.avgpool(x) - clipwise_output = torch.flatten(x, 1) - #print(framewise_output.shape) #torch.Size([10, 100, 17]) - framewise_output = interpolate(framewise_output, interpolate_ratio) - #framewise_output = framewise_output[:,:1000,:] - #framewise_output = pad_framewise_output(framewise_output, frames_num) - output_dict = {'framewise_output': framewise_output, - 'clipwise_output': clipwise_output} - - return output_dict - -class PVT2(nn.Module): - def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, 
classes_num): - - super(PVT2, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.time_shift = TimeShift(0, 10) - # Spec augmenter - self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2, - freq_drop_width=8, freq_stripes_num=2) - - self.bn0 = nn.BatchNorm2d(64) - self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001, - fdim=64, - patch_size=7, - stride=4, - in_chans=1, - num_classes=classes_num, - embed_dims=[64, 128, 320, 512], - depths=[3, 4, 6, 3], - num_heads=[1, 2, 5, 8], - mlp_ratios=[8, 8, 4, 4], - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - drop_path_rate=0.1, - sr_ratios=[8, 4, 2, 1], - norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_stages=4, - pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth' - ) - #self.temp_pool = LinearSoftPool() - self.fc_audioset = nn.Linear(512, classes_num, bias=True) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn0) - init_layer(self.fc_audioset) - - def forward(self, input, mixup_lambda=None): - """Input: (batch_size, times_steps, freq_bins)""" - - interpolate_ratio = 32 - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - frames_num = x.shape[2] - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - if self.training: - #x = self.time_shift(x) - x = self.spec_augmenter(x) - - # Mixup on spectrogram - if self.training and mixup_lambda is not None: - x = do_mixup(x, mixup_lambda) - #print(x.shape) #torch.Size([10, 1, 1001, 64]) - x = self.pvt_transformer(x) - #print(x.shape) #torch.Size([10, 800, 128]) - x = torch.mean(x, dim=3) - - x = x.transpose(1, 2).contiguous() - framewise_output = torch.sigmoid(self.fc_audioset(x)) - clipwise_output = torch.mean(framewise_output, dim=1) - #clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1) - #print(framewise_output.shape) #torch.Size([10, 100, 17]) - framewise_output = interpolate(framewise_output, interpolate_ratio) - #framewise_output = framewise_output[:,:1000,:] - #framewise_output = pad_framewise_output(framewise_output, frames_num) - output_dict = {'framewise_output': framewise_output, - 'clipwise_output': clipwise_output} - - return output_dict - -class PVT_2layer(nn.Module): - def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, classes_num): - - super(PVT_2layer, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.time_shift = TimeShift(0, 10) - # Spec augmenter - 
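        # (SpecAugment-style regularization: during training, random time stripes and
        #  frequency stripes of the log-mel input are masked out)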
self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2, - freq_drop_width=8, freq_stripes_num=2) - - self.bn0 = nn.BatchNorm2d(64) - self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001, - fdim=64, - patch_size=7, - stride=4, - in_chans=1, - num_classes=classes_num, - embed_dims=[64, 128], - depths=[3, 4], - num_heads=[1, 2], - mlp_ratios=[8, 8], - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - drop_path_rate=0.1, - sr_ratios=[8, 4], - norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_stages=2, - pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth' - ) - #self.temp_pool = LinearSoftPool() - self.avgpool = nn.AdaptiveAvgPool1d(1) - self.fc_audioset = nn.Linear(128, classes_num, bias=True) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn0) - init_layer(self.fc_audioset) - - def forward(self, input, mixup_lambda=None): - """Input: (batch_size, times_steps, freq_bins)""" - - interpolate_ratio = 8 - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - frames_num = x.shape[2] - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - if self.training: - x = self.time_shift(x) - x = self.spec_augmenter(x) - - # Mixup on spectrogram - if self.training and mixup_lambda is not None: - x = do_mixup(x, mixup_lambda) - #print(x.shape) #torch.Size([10, 1, 1001, 64]) - x = self.pvt_transformer(x) - #print(x.shape) #torch.Size([10, 800, 128]) - x = torch.mean(x, dim=3) - - x = x.transpose(1, 2).contiguous() - framewise_output = torch.sigmoid(self.fc_audioset(x)) - #clipwise_output = torch.mean(framewise_output, dim=1) - #clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1) - x = framewise_output.transpose(1, 2).contiguous() - x = self.avgpool(x) - clipwise_output = torch.flatten(x, 1) - #print(framewise_output.shape) #torch.Size([10, 100, 17]) - framewise_output = interpolate(framewise_output, interpolate_ratio) - #framewise_output = framewise_output[:,:1000,:] - #framewise_output = pad_framewise_output(framewise_output, frames_num) - output_dict = {'framewise_output': framewise_output, - 'clipwise_output': clipwise_output} - - return output_dict - -class PVT_lr(nn.Module): - def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, classes_num): - - super(PVT_lr, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.time_shift = TimeShift(0, 10) - # Spec augmenter - self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2, - freq_drop_width=8, freq_stripes_num=2) - - self.bn0 = nn.BatchNorm2d(64) - self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001, - fdim=64, - patch_size=7, - stride=4, - in_chans=1, - num_classes=classes_num, - embed_dims=[64, 128, 320, 512], - depths=[3, 4, 6, 3], - num_heads=[1, 2, 5, 8], - mlp_ratios=[8, 8, 4, 4], - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - drop_path_rate=0.1, - sr_ratios=[8, 4, 2, 1], - 
norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_stages=4, - pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth' - ) - self.temp_pool = LinearSoftPool() - self.fc_audioset = nn.Linear(512, classes_num, bias=True) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn0) - init_layer(self.fc_audioset) - - def forward(self, input, mixup_lambda=None): - """Input: (batch_size, times_steps, freq_bins)""" - - interpolate_ratio = 32 - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - frames_num = x.shape[2] - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - if self.training: - x = self.time_shift(x) - x = self.spec_augmenter(x) - - # Mixup on spectrogram - if self.training and mixup_lambda is not None: - x = do_mixup(x, mixup_lambda) - #print(x.shape) #torch.Size([10, 1, 1001, 64]) - x = self.pvt_transformer(x) - #print(x.shape) #torch.Size([10, 800, 128]) - x = torch.mean(x, dim=3) - - x = x.transpose(1, 2).contiguous() - framewise_output = torch.sigmoid(self.fc_audioset(x)) - clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1) - #print(framewise_output.shape) #torch.Size([10, 100, 17]) - framewise_output = interpolate(framewise_output, interpolate_ratio) - #framewise_output = framewise_output[:,:1000,:] - #framewise_output = pad_framewise_output(framewise_output, frames_num) - output_dict = {'framewise_output': framewise_output, - 'clipwise_output': clipwise_output} - - return output_dict - - -class PVT_nopretrain(nn.Module): - def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, classes_num): - - super(PVT_nopretrain, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.time_shift = TimeShift(0, 10) - # Spec augmenter - self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2, - freq_drop_width=8, freq_stripes_num=2) - - self.bn0 = nn.BatchNorm2d(64) - self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001, - fdim=64, - patch_size=7, - stride=4, - in_chans=1, - num_classes=classes_num, - embed_dims=[64, 128, 320, 512], - depths=[3, 4, 6, 3], - num_heads=[1, 2, 5, 8], - mlp_ratios=[8, 8, 4, 4], - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - drop_path_rate=0.1, - sr_ratios=[8, 4, 2, 1], - norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_stages=4, - #pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth' - ) - self.temp_pool = LinearSoftPool() - self.fc_audioset = nn.Linear(512, classes_num, bias=True) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn0) - init_layer(self.fc_audioset) - - def forward(self, input, mixup_lambda=None): - """Input: (batch_size, times_steps, freq_bins)""" - - interpolate_ratio = 32 - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - frames_num = x.shape[2] - x = x.transpose(1, 3) - x = 
self.bn0(x) - x = x.transpose(1, 3) - - if self.training: - x = self.time_shift(x) - x = self.spec_augmenter(x) - - # Mixup on spectrogram - if self.training and mixup_lambda is not None: - x = do_mixup(x, mixup_lambda) - #print(x.shape) #torch.Size([10, 1, 1001, 64]) - x = self.pvt_transformer(x) - #print(x.shape) #torch.Size([10, 800, 128]) - x = torch.mean(x, dim=3) - - x = x.transpose(1, 2).contiguous() - framewise_output = torch.sigmoid(self.fc_audioset(x)) - clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1) - #print(framewise_output.shape) #torch.Size([10, 100, 17]) - framewise_output = interpolate(framewise_output, interpolate_ratio) - framewise_output = framewise_output[:,:1000,:] - #framewise_output = pad_framewise_output(framewise_output, frames_num) - output_dict = {'framewise_output': framewise_output, - 'clipwise_output': clipwise_output} - - return output_dict - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0., linear=False): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.dwconv = DWConv(hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - self.linear = linear - if self.linear: - self.relu = nn.ReLU() - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - fan_out //= m.groups - m.weight.data.normal_(0, math.sqrt(2.0 / fan_out)) - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x, H, W): - x = self.fc1(x) - if self.linear: - x = self.relu(x) - x = self.dwconv(x, H, W) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., sr_ratio=1, linear=False): - super().__init__() - assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}." 
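        # Spatial-reduction attention (SRA) as in PVT/PVTv2: queries are formed at full
        # resolution, while keys/values come from a spatially reduced feature map. With
        # sr_ratio > 1 a strided conv shrinks the (H, W) grid by sr_ratio per side, so
        # N queries attend to roughly N / sr_ratio**2 tokens (e.g. a 56x56 grid with
        # sr_ratio=8 gives 3136 queries against 49 keys/values); the `linear` variant
        # instead pools the map down to a fixed 7x7 grid.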
- - self.dim = dim - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - self.q = nn.Linear(dim, dim, bias=qkv_bias) - self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - self.linear = linear - self.sr_ratio = sr_ratio - if not linear: - if sr_ratio > 1: - self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio) - self.norm = nn.LayerNorm(dim) - else: - self.pool = nn.AdaptiveAvgPool2d(7) - self.sr = nn.Conv2d(dim, dim, kernel_size=1, stride=1) - self.norm = nn.LayerNorm(dim) - self.act = nn.GELU() - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - fan_out //= m.groups - m.weight.data.normal_(0, math.sqrt(2.0 / fan_out)) - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x, H, W): - B, N, C = x.shape - q = self.q(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3) - - if not self.linear: - if self.sr_ratio > 1: - x_ = x.permute(0, 2, 1).reshape(B, C, H, W) - x_ = self.sr(x_).reshape(B, C, -1).permute(0, 2, 1) - x_ = self.norm(x_) - kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - else: - kv = self.kv(x).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - else: - x_ = x.permute(0, 2, 1).reshape(B, C, H, W) - x_ = self.sr(self.pool(x_)).reshape(B, C, -1).permute(0, 2, 1) - x_ = self.norm(x_) - x_ = self.act(x_) - kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - k, v = kv[0], kv[1] - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - - return x - - -class Pooling(nn.Module): - """ - Implementation of pooling for PoolFormer - --pool_size: pooling size - """ - def __init__(self, pool_size=3): - super().__init__() - self.pool = nn.AvgPool2d( - pool_size, stride=1, padding=pool_size//2, count_include_pad=False) - - def forward(self, x): - return self.pool(x) - x - -class Block(nn.Module): - - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, sr_ratio=1, linear=False): - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop, sr_ratio=sr_ratio, linear=linear) - #self.norm3 = norm_layer(dim) - #self.token_mixer = Pooling(pool_size=3) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop, linear=linear) - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - fan_out //= m.groups - m.weight.data.normal_(0, math.sqrt(2.0 / fan_out)) - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x, H, W): - x = x + self.drop_path(self.attn(self.norm1(x), H, W)) - x = x + self.drop_path(self.mlp(self.norm2(x), H, W)) - return x - - -class OverlapPatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - - def __init__(self, tdim, fdim, patch_size=7, stride=4, in_chans=3, embed_dim=768): - super().__init__() - img_size = (tdim, fdim) - patch_size = to_2tuple(patch_size) - - self.img_size = img_size - self.patch_size = patch_size - self.H, self.W = img_size[0] // stride, img_size[1] // stride - self.num_patches = self.H * self.W - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=stride, - padding=(patch_size[0] // 3, patch_size[1] // 3)) - self.norm = nn.LayerNorm(embed_dim) - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - fan_out //= m.groups - m.weight.data.normal_(0, math.sqrt(2.0 / fan_out)) - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x): - x = self.proj(x) - _, _, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - - return x, H, W - - -class PyramidVisionTransformerV2(nn.Module): - def __init__(self, tdim=1001, fdim=64, patch_size=16, stride=4, in_chans=3, num_classes=1000, embed_dims=[64, 128, 256, 512], - num_heads=[1, 2, 4, 8], mlp_ratios=[4, 4, 4, 4], qkv_bias=False, qk_scale=None, drop_rate=0., - attn_drop_rate=0., drop_path_rate=0.1, norm_layer=partial(nn.LayerNorm, eps=1e-6), depths=[3, 4, 6, 3], - sr_ratios=[8, 4, 2, 1], num_stages=2, linear=False, pretrained=None): - super().__init__() - # self.num_classes = num_classes - self.depths = depths - self.num_stages = num_stages - self.linear = linear - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - cur = 0 - - for i in range(num_stages): - patch_embed = OverlapPatchEmbed(tdim=tdim if i == 0 else tdim // (2 ** (i + 1)), - fdim=fdim if i == 0 else tdim // (2 ** (i + 1)), - patch_size=7 if i == 0 else 3, - stride=stride if i == 0 else 2, - in_chans=in_chans if i == 0 else embed_dims[i - 1], - embed_dim=embed_dims[i]) - block = nn.ModuleList([Block( - dim=embed_dims[i], num_heads=num_heads[i], mlp_ratio=mlp_ratios[i], qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[cur + j], norm_layer=norm_layer, - sr_ratio=sr_ratios[i], linear=linear) - for j in range(depths[i])]) - norm = norm_layer(embed_dims[i]) - cur += depths[i] - - setattr(self, 
f"patch_embed{i + 1}", patch_embed) - setattr(self, f"block{i + 1}", block) - setattr(self, f"norm{i + 1}", norm) - #self.n = nn.Linear(125, 250, bias=True) - # classification head - # self.head = nn.Linear(embed_dims[3], num_classes) if num_classes > 0 else nn.Identity() - self.apply(self._init_weights) - self.init_weights(pretrained) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - elif isinstance(m, nn.Conv2d): - fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - fan_out //= m.groups - m.weight.data.normal_(0, math.sqrt(2.0 / fan_out)) - if m.bias is not None: - m.bias.data.zero_() - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger) - - def freeze_patch_emb(self): - self.patch_embed1.requires_grad = False - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed1', 'pos_embed2', 'pos_embed3', 'pos_embed4', 'cls_token'} # has pos_embed may be better - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - B = x.shape[0] - - for i in range(self.num_stages): - patch_embed = getattr(self, f"patch_embed{i + 1}") - block = getattr(self, f"block{i + 1}") - norm = getattr(self, f"norm{i + 1}") - x, H, W = patch_embed(x) - #print(x.shape) - for blk in block: - x = blk(x, H, W) - #print(x.shape) - x = norm(x) - #if i != self.num_stages - 1: - x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() - #print(x.shape) - return x - - def forward(self, x): - x = self.forward_features(x) - # x = self.head(x) - - return x - -class DWConv(nn.Module): - def __init__(self, dim=768): - super(DWConv, self).__init__() - self.dwconv = nn.Conv2d(dim, dim, 3, 1, 1, bias=True, groups=dim) - - def forward(self, x, H, W): - B, N, C = x.shape - x = x.transpose(1, 2).view(B, C, H, W) - x = self.dwconv(x) - x = x.flatten(2).transpose(1, 2) - - return x - - -def _conv_filter(state_dict, patch_size=16): - """ convert patch embedding weight from manual patchify + linear proj to conv""" - out_dict = {} - for k, v in state_dict.items(): - if 'patch_embed.proj.weight' in k: - v = v.reshape((v.shape[0], 3, patch_size, patch_size)) - out_dict[k] = v - - return out_dict diff --git a/spaces/AIGText/GlyphControl/ldm/models/diffusion/ddpm.py b/spaces/AIGText/GlyphControl/ldm/models/diffusion/ddpm.py deleted file mode 100644 index 506d5759df5065ea545037cafb9af82c91e75bd2..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,1954 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR 
-from einops import rearrange, repeat -from contextlib import contextmanager, nullcontext -from functools import partial -import itertools -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only -from omegaconf import ListConfig - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler - - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - make_it_fit=False, - ucg_training=None, - reset_ema=False, - reset_num_ema_updates=False, - keep_num_ema_updates=False, - textemb_merge_config=None, - merge_textemb = False, - log_all_grad_norm = False, - ): - super().__init__() - assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? 
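        # The denoising UNet below is wrapped in DiffusionWrapper, which (in the standard
        # LDM setup) routes conditioning according to `conditioning_key`: 'concat' appends
        # extra input channels, 'crossattn' feeds a cross-attention context, and 'adm'
        # passes a class-embedding style `y`. When use_ema is set, an EMA copy of the
        # wrapped weights is kept for evaluation and sampling.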
- self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key, textemb_merge_config=textemb_merge_config, merge_textemb=merge_textemb) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - self.make_it_fit = make_it_fit - if reset_ema: assert exists(ckpt_path) - if ckpt_path is not None: - ema_num_updates = self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - if reset_ema: - assert self.use_ema - print(f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model, init_num_updates= ema_num_updates if keep_num_ema_updates else 0) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - # else: - # self.register_buffer('logvar', self.logvar) - - self.ucg_training = ucg_training or dict() - if self.ucg_training: - self.ucg_prng = np.random.RandomState() - self.log_all_grad_norm = log_all_grad_norm - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) following IDDPM - posterior_variance = (1 - self.v_posterior) * betas * (1. 
- alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - # weights before the simple loss - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) - elif self.parameterization == "v": - lvlb_weights = torch.ones_like(self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))) - else: - raise NotImplementedError("mu not supported") - lvlb_weights[0] = lvlb_weights[1] #? - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - @torch.no_grad() - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - if self.make_it_fit: - n_params = len([name for name, _ in - itertools.chain(self.named_parameters(), - self.named_buffers())]) - for name, param in tqdm( - itertools.chain(self.named_parameters(), - self.named_buffers()), - desc="Fitting old weights to new weights", - total=n_params - ): - if not name in sd: - continue - old_shape = sd[name].shape - new_shape = param.shape - assert len(old_shape) == len(new_shape) - if len(new_shape) > 2: - # we only modify first two axes - assert new_shape[2:] == old_shape[2:] - # assumes first axis corresponds to output dim - if not new_shape == old_shape: - new_param = param.clone() - old_param = sd[name] - if len(new_shape) == 1: - for i in range(new_param.shape[0]): - new_param[i] = old_param[i % old_shape[0]] - elif len(new_shape) >= 2: - for i in range(new_param.shape[0]): - for j in range(new_param.shape[1]): - new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]] - - n_used_old = torch.ones(old_shape[1]) - for j in range(new_param.shape[1]): - n_used_old[j % old_shape[1]] += 1 - n_used_new = torch.zeros(new_shape[1]) - for j in range(new_param.shape[1]): - n_used_new[j] = n_used_old[j % old_shape[1]] - - n_used_new = n_used_new[None, :] - while len(n_used_new.shape) < len(new_shape): - n_used_new = n_used_new.unsqueeze(-1) - new_param /= n_used_new - - sd[name] = new_param - - # missing, 
unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - # sd, strict=False) - if not only_model: - missing, unexpected = self.load_state_dict(sd, strict=False) - elif path.endswith(".bin"): - missing, unexpected = self.model.diffusion_model.load_state_dict(sd, strict=False) - elif path.endswith(".ckpt"): - missing, unexpected = self.model.load_state_dict(sd, strict=False) - - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys:\n {missing}") - if len(unexpected) > 0: - print(f"\nUnexpected Keys:\n {unexpected}") - - if "model_ema.num_updates" in sd and "model_ema.num_updates" not in unexpected: - return sd["model_ema.num_updates"].item() - else: - return 0 - # q(x_t | x_0) - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def predict_start_from_z_and_v(self, x_t, t, v): - # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v - ) - - def predict_eps_from_z_and_v(self, x_t, t, v): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t - ) - # q(x_(t-1) | x_t, x_0) - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - # p(x_(t-1) | x_t) - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: # static thresholding - x_recon.clamp_(-1., 1.) 
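        # With the predicted x_0 in hand, the reverse step p(x_{t-1} | x_t) reduces to the
        # closed-form Gaussian posterior q(x_{t-1} | x_t, x_0), whose mean is
        #   posterior_mean_coef1 * x_0 + posterior_mean_coef2 * x_t
        # with the coefficients and (log-)variance registered in register_schedule().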
- - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - # one sampling step ancestral sampling - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - # sampling loop - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - # sampling from q(x_t | x_0) - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - # get v from x and noise - def get_v(self, x, noise, t): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x - ) - # loss type - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - # training loss - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported") - # L_simple - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - # L_vlb - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - # L_simple + lambda * L_vlb following IDDPM - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - # using 
during training - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - # if self.trainer.precision == 16: - # x = x.type(torch.float16) - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - # main training step - # def training_step(self, batch, batch_idx): - # change - def training_step(self, batch, batch_idx, optimizer_idx=0): - for k in self.ucg_training: - p = self.ucg_training[k]["p"] - val = self.ucg_training[k]["val"] - if val is None: - val = "" - for i in range(len(batch[k])): - if self.ucg_prng.choice(2, p=[1 - p, p]): - batch[k][i] = val - - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - # if self.global_step == 19: - # aa = 1 - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - ac_loss_str = self.trainer.progress_bar_dict["loss"] - ac_loss = eval(ac_loss_str) if ac_loss_str!= "nan" else 0 - log_prefix = 'train' if self.training else 'val' - self.log("{}/loss_accumulated".format(log_prefix), - ac_loss, - prog_bar=False, logger=True, on_step=True, on_epoch=False - ) - # if ac_loss > 0.012: - # assert self.cond_stage_key - # print(batch[self.cond_stage_key][:15]) - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - # ema - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - if self.log_all_grad_norm: - gradnorm_list = [] - for name, p in self.named_parameters(): - if p.requires_grad: - grad_norm_v = p.grad.detach().norm().item() - gradnorm_list.append(grad_norm_v) - if "textemb_merge_model" in name: - self.log("all_gradients/{}_norm".format(name), - gradnorm_list[-1], - prog_bar=False, logger=True, on_step=True, on_epoch=False - ) - if grad_norm_v > 0.1: - print("the norm of gradient w.r.t {} > 0.1: {:.2f}".format - ( - name, grad_norm_v - )) - - self.log("all_gradients/grad_norm_mean", - np.mean(gradnorm_list), - prog_bar=False, logger=True, on_step=True, on_epoch=False - ) - self.log("all_gradients/grad_norm_max", - np.max(gradnorm_list), - prog_bar=False, logger=True, on_step=True, on_epoch=False - ) - self.log("all_gradients/grad_norm_min", - np.min(gradnorm_list), - prog_bar=False, logger=True, on_step=True, on_epoch=False - ) - self.log("all_gradients/param_num", - len(gradnorm_list), - prog_bar=False, logger=True, on_step=True, on_epoch=False - ) - def _get_rows_from_list(self, 
samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - # configure optimizers AdamW - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - -# main class: LDM - first stage, DDPM, conditions -class LatentDiffusion(DDPM): - """main class""" - - def __init__(self, - first_stage_config, - cond_stage_config, - # textemb_merge_config = None, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - force_null_conditioning=False, - *args, **kwargs): - self.force_null_conditioning = force_null_conditioning - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning: - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - reset_ema = kwargs.pop("reset_ema", False) - only_model= kwargs.pop("only_model", False) - reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False) - keep_num_ema_updates = kwargs.pop("keep_num_ema_updates", False) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: #? 
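            # use the fixed scale factor from the config; with scale_by_std the factor is
            # instead registered as a buffer and re-estimated as 1 / std of the first
            # batch's latents in on_train_batch_start ("### USING STD-RESCALING ###")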
- self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - print("instantiate first stage model") - self.instantiate_first_stage(first_stage_config) - print("instantiate cond stage model") - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - ema_num_updates = self.init_from_ckpt(ckpt_path, ignore_keys, only_model=only_model) - self.restarted_from_ckpt = True - if reset_ema: - assert self.use_ema - print( - f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model, init_num_updates= ema_num_updates if keep_num_ema_updates else 0) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - # calculate scale factor for the first batch - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. / z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - if ( - # not self.disabled and - self.global_step == 0 and - self.current_epoch == 0 and batch_idx == 0 - # and self.log_first_step - ): - imagecallback = None - for callback in self.trainer.callbacks: - if "ImageLogger" in str(callback): - imagecallback = callback - break - if imagecallback is not None and not imagecallback.disabled and imagecallback.log_first_step: - is_train = self.training - if is_train: - self.eval() - with torch.no_grad(): - # images = pl_module.log_images(batch, split=split, **self.log_images_kwargs) - images = self.log_images(batch, **imagecallback.log_images_kwargs) - import os, torchvision - from PIL import Image - root = os.path.join(self.logger.save_dir, "images", "init") - for k in images: - N = min(images[k].shape[0], imagecallback.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if imagecallback.clamp: - images[k] = torch.clamp(images[k], -1., 1.) 
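                        # log_images returns tensors in [-1, 1]; the grid built below is
                        # rescaled to [0, 1] (when the callback's rescale flag is set),
                        # converted to uint8 [0, 255], and written as a PNG under
                        # <save_dir>/images/init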
- grid = torchvision.utils.make_grid(images[k], nrow=4) - if imagecallback.rescale: - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1) - grid = grid.numpy() - grid = (grid * 255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format( - k, - self.global_step, - self.current_epoch, - batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - del grid - del images - print("log images before training") - # imagecallback.log_local(self.logger.save_dir, "init", images, - # self.global_step, self.current_epoch, batch_idx, self, - # wandb_log = False) - if is_train: - self.train() - - # if imagecallback is not None and not imagecallback.disabled and imagecallback.log_first_step: - # imagecallback.log_img(self, batch, batch_idx, split="init") - # rewrite - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: # drop the option ? - self.make_cond_schedule() - - def instantiate_first_stage(self, config): # not train - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - # def instantiate_textemb_merge_model(self, config): - # model = instantiate_from_config(config) - # if not model.trainable: - # self.textemb_merge_model = model.eval() - # self.textemb_merge_model.train = disabled_train - # for param in self.textemb_merge_model.parameters(): - # param.requires_grad = False - # else: - # self.textemb_merge_model = model - - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - # first stage encoding - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = 
encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z # rescale z before the diffusion process - # encode the condition - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, 
kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - # rewrite get input for training DM - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None, return_x=False): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - # get scaled latent vector z for training - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None and not self.force_null_conditioning: - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox', "txt"]: - xc = batch[cond_key] - elif cond_key in ['class_label', 'cls']: - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - # latent z + condition c - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_x: - out.extend([x]) - if return_original_cond: - out.append(xc) - return out - # from latent vector to x - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. 
/ self.scale_factor * z - return self.first_stage_model.decode(z) - # from x to latent vector (not scaled) - @torch.no_grad() - def encode_first_stage(self, x): - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) #,return_first_stage_outputs=True) - # print("the shape of the batch data: {} | x[0,0,0,0]: {}".format(x.shape, x[0,0,0,0])) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - # diffusion model - def apply_model(self, x_noisy, t, cond, return_ids=False): - if isinstance(cond, dict): - # hybrid case, cond is expected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] # text: cross attention - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - # predict e from x_t and predicted x_start - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - # KL between q(x_t | x) with N(0, I) - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. 
- """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - # rewrite: add the condition / add logvar to L_simple - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - # if True in np.isnan(loss_simple.detach().cpu().numpy()): - # aa = 1 - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - # log_var - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - # rewrite: p(x_t-1 | x_t) add condition - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
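-        # the (optionally clamped / quantized) estimate of x_0 is plugged into the closed-form
-        # Gaussian posterior q(x_{t-1} | x_t, x_0) below; its mean and log-variance are what
-        # p_sample perturbs with noise during ancestral sampling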
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None, **kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs): - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, - shape, cond, verbose=False, **kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True, **kwargs) - - return samples, intermediates - - @torch.no_grad() - def get_unconditional_conditioning(self, batch_size, null_label=None): - if null_label is not None: - xc = null_label - if isinstance(xc, ListConfig): - xc = list(xc) - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - if hasattr(xc, "to"): - xc = xc.to(self.device) - c = self.get_learned_conditioning(xc) - else: - if 
self.cond_stage_key in ["class_label", "cls"]: - xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device) - return self.get_learned_conditioning(xc) - else: - raise NotImplementedError("todo") - if isinstance(c, list): # in case the encoder gives us a list - for i in range(len(c)): - c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device) - else: - c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device) - return c - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', "cls"]: - try: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - except KeyError: - # probably no "human_label" in batch - pass - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - quantize_denoised=True) - # samples, 
z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if unconditional_guidance_scale > 1.0: - uc = self.get_unconditional_conditioning(N, unconditional_guidance_label) - if self.model.conditioning_key == "crossattn-adm": - uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] - with ema_scope("Plotting Inpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - mask = 1. - mask - with ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
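-        # map-like conditionings are projected to 3 channels with a random, lazily created
-        # 1x1 convolution and min/max-normalized to [-1, 1]; used only for visualization in log_images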
- return x - - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key, textemb_merge_config=None, merge_textemb = False): - super().__init__() - self.merge_textemb = merge_textemb - if self.merge_textemb and textemb_merge_config is not None: - # cond_model_name = str(cond_stage_config.target) - # if "clip" in cond_model_name.lower() and "t5" in cond_model_name.lower(): - self.instantiate_textemb_merge_model(textemb_merge_config) - # self.merge_textemb = True - else: - self.merge_textemb = False - self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False) - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm'] - - def instantiate_textemb_merge_model(self, config): - model = instantiate_from_config(config) - if not model.trainable: - self.textemb_merge_model = model.eval() - self.textemb_merge_model.train = disabled_train - for param in self.textemb_merge_model.parameters(): - param.requires_grad = False - else: - self.textemb_merge_model = model - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None): - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - if self.merge_textemb and len(c_crossattn) >= 2: - merge_c = self.textemb_merge_model(c_crossattn[0], c_crossattn[1]) - c_crossattn = [merge_c] - if not self.sequential_cross_attn: - cc = torch.cat(c_crossattn, 1) - else: - cc = c_crossattn - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'hybrid-adm': - assert c_adm is not None - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc, y=c_adm) - elif self.conditioning_key == 'crossattn-adm': - assert c_adm is not None - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc, y=c_adm) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class LatentUpscaleDiffusion(LatentDiffusion): - def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs): - super().__init__(*args, **kwargs) - # assumes that neither the cond_stage nor the low_scale_model contain trainable params - assert not self.cond_stage_trainable - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - self.noise_level_key = noise_level_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False): - if not log_mode: - z, c = super().get_input(batch, k, force_c_encode=True, bs=bs) - else: - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - x_low = 
batch[self.low_scale_key][:bs] - x_low = rearrange(x_low, 'b h w c -> b c h w') - x_low = x_low.to(memory_format=torch.contiguous_format).float() - zx, noise_level = self.low_scale_model(x_low) - if self.noise_level_key is not None: - # get noise level from batch instead, e.g. when extracting a custom noise level for bsr - raise NotImplementedError('TODO') - - all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level} - if log_mode: - # TODO: maybe disable if too expensive - x_low_rec = self.low_scale_model.decode(zx) - return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level - return z, all_conds - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True, - unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N, - log_mode=True) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - log["x_lr"] = x_low - log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label) - # TODO explore better "unconditional" choices for the other keys - # maybe guide away from empty text label and highest noise level and maximally degraded 
zx? - uc = dict() - for k in c: - if k == "c_crossattn": - assert isinstance(c[k], list) and len(c[k]) == 1 - uc[k] = [uc_tmp] - elif k == "c_adm": # todo: only run with text-based guidance? - assert isinstance(c[k], torch.Tensor) - #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level - uc[k] = c[k] - elif isinstance(c[k], list): - uc[k] = [c[k][i] for i in range(len(c[k]))] - else: - uc[k] = c[k] - - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - return log - - -class LatentFinetuneDiffusion(LatentDiffusion): - """ - Basis for different finetunas, such as inpainting or depth2image - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys: tuple, - finetune_keys=("model.diffusion_model.input_blocks.0.0.weight", - "model_ema.diffusion_modelinput_blocks00weight" - ), - keep_finetune_dims=4, - # if model was trained without concat mode before and we would like to keep these channels - c_concat_log_start=None, # to log reconstruction of c_concat codes - c_concat_log_end=None, - *args, **kwargs - ): - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", list()) - super().__init__(*args, **kwargs) - self.finetune_keys = finetune_keys - self.concat_keys = concat_keys - self.keep_dims = keep_finetune_dims - self.c_concat_log_start = c_concat_log_start - self.c_concat_log_end = c_concat_log_end - if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint' - if exists(ckpt_path): - self.init_from_ckpt(ckpt_path, ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - - # make it explicit, finetune by including extra input channels - if exists(self.finetune_keys) and k in self.finetune_keys: - new_entry = None - for name, param in self.named_parameters(): - if name in self.finetune_keys: - print( - f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only") - new_entry = torch.zeros_like(param) # zero init - assert exists(new_entry), 'did not find matching parameter to modify' - new_entry[:, :self.keep_dims, ...] 
= sd[k] - sd[k] = new_entry - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True) - c_cat, c = c["c_concat"][0], c["c_crossattn"][0] - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if not (self.c_concat_log_start is None and self.c_concat_log_end is None): - log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end]) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label) - uc_cat = c_cat - uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": 
[c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc_full, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - return log - - -class LatentInpaintDiffusion(LatentFinetuneDiffusion): - """ - can either run as pure inpainting model (only concat mode) or with mixed conditionings, - e.g. mask as concat and text via cross-attn. - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys=("mask", "masked_image"), - masked_image_key="masked_image", - *args, **kwargs - ): - super().__init__(concat_keys, *args, **kwargs) - self.masked_image_key = masked_image_key - assert self.masked_image_key in concat_keys - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - c_cat = list() - for ck in self.concat_keys: - cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - bchw = z.shape - if ck != self.masked_image_key: - cc = torch.nn.functional.interpolate(cc, size=bchw[-2:]) - else: - cc = self.get_first_stage_encoding(self.encode_first_stage(cc)) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs) - log["masked_image"] = rearrange(args[0]["masked_image"], - 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - return log - - -class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion): - """ - condition on monocular depth estimation - """ - - def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.depth_model = instantiate_from_config(depth_stage_config) - self.depth_stage_key = concat_keys[0] - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - c_cat = list() - for ck in self.concat_keys: - cc = batch[ck] - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - cc = self.depth_model(cc) - cc = torch.nn.functional.interpolate( - cc, - size=z.shape[2:], - mode="bicubic", - align_corners=False, - ) - - depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3], - keepdim=True) - cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1. 
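-            # at this point the depth prediction has been resized to the latent resolution and
-            # min/max-normalized per sample to [-1, 1]; it is appended below as the c_concat condition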
- c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - depth = self.depth_model(args[0][self.depth_stage_key]) - depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \ - torch.amax(depth, dim=[1, 2, 3], keepdim=True) - log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1. - return log - - -class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion): - """ - condition on low-res image (and optionally on some spatial noise augmentation) - """ - def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None, - low_scale_config=None, low_scale_key=None, *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.reshuffle_patch_size = reshuffle_patch_size - self.low_scale_model = None - if low_scale_config is not None: - print("Initializing a low-scale model") - assert exists(low_scale_key) - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - # optionally make spatial noise_level here - c_cat = list() - noise_level = None - for ck in self.concat_keys: - cc = batch[ck] - cc = rearrange(cc, 'b h w c -> b c h w') - if exists(self.reshuffle_patch_size): - assert isinstance(self.reshuffle_patch_size, int) - cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w', - p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size) - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - if exists(self.low_scale_model) and ck == self.low_scale_key: - cc, noise_level = self.low_scale_model(cc) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - if exists(noise_level): - all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level} - else: - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w') - return log diff --git a/spaces/AINLPRoundTable/README/README.md b/spaces/AINLPRoundTable/README/README.md deleted file mode 100644 index 1d9b6644de0b70b8c47c5c2c28b01efa309011eb..0000000000000000000000000000000000000000 --- a/spaces/AINLPRoundTable/README/README.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: README -emoji: 🧠 -colorFrom: purple -colorTo: yellow -sdk: static -pinned: false ---- - - -
-**Pre-requisites:**
- One of the best platforms in 2022 for open source AI development and demonstration is "HuggingFace Spaces". - -Spaces supports a model hub, an inference API, github and container turn key integration, and an ability to create and freely host new programs for world wide communities reducing the pain and difficulty in setting up environments for AI. - -HuggingFace is an open source implementation of an AI platform which supports three main SDK's used within AI and NLP apps which are HTML5, Gradio, and Streamlit.   - -As a pre-requisite you will need to create an account for yourself at HuggingFace (https://huggingface.co/). Next join the classroom organization called "AINLPRoundTable". - -**Intended audience:** This AI NLP round table class is for anyone with basic computing skills of all ages and backgrounds to be able to set up a space for themselves where they can create, test and demonstrate AI and NLP programs to anyone on the internet as open source.  Prior knowledge and interest of development of AI programs is recommended but not required so this audience can include people interested and new to AI. - -** AI and NLP Products ** This classroom follows three product design tenets: - 1) Describe the **"Pain"** customer is facing with problem you plan to solve. - 2) Describe the **"Joy"** of what changes for the customer because of your product. And finally, - 3) If we exceed all expectations, Describe how we give the customer a new **"Superpower"**. - - As a "press release" for products be able to answer these to describe your goals to document product delivery. - -

-
- -**Intent/Outcome of the Classroom:** The intent of this HF Organization and this Classroom session is to enable all attendees to create AI and NLP programs in record time using Spaces, HTML5, Gradio, Streamlit, and Open Source.   - -By the end of this session attendees will be able to easily create new AI and NLP demos of their own to host and share including UI, ML models, user input and interaction, dataset load, save, transform and search. The goal is to achieve proficience in using AI and NLP software development kits and libraries by sharing in an open source environment. - - -**Pre-requisites:** The preferred platform in 2022 for open source community AI development and demonstration is "HuggingFace Spaces". Spaces supports a model hub, an inference API, github action integration, and ability to create and freely host new programs for world wide communities. HuggingFace is an open source implementation of an AI platform which supports three main SDK's used within AI and NLP apps which are HTML5, Gradio, and Streamlit.  As a pre-requisite you will need to create an account for yourself at HuggingFace (https://huggingface.co/). Next join the classroom organization called "AINLPRoundTable".   - -**Intended audience:** This AI NLP round table class is for anyone with basic computing skills of all ages and backgrounds to be able to set up a space for themselves where they can create, test and demonstrate AI and NLP programs to anyone on the internet as open source.  Prior knowledge and interest of development of AI programs is recommended but not required so this audience can include people interested and new to AI. - -**Democratize AI and NLP to Give Customers Superpowers** This classroom follows three easy to remember customer focused product design tenets: - 1) Be able to describe easily the **"Pain"** customer is facing with problem you plan to solve. - 2) Be able to describe the **"Joy"** of what has changed for the customer because of your product. And finally, - 3) If we exceeded all expectations, we gave the customer a new **"Superpower"**. - - As a "press release" for your product be able to answer these and discuss your product ideas for AI and NLP and how we can help. We do these press releases informally in a trusted space using short form video to document product delivery. 
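-
-A minimal sketch of the kind of Spaces demo described above (assuming the Gradio SDK; the `reverse_text` function and the title are illustrative placeholders, not course material):
-
-```python
-import gradio as gr
-
-def reverse_text(text: str) -> str:
-    # stand-in "model": swap in a real NLP pipeline here
-    return text[::-1]
-
-demo = gr.Interface(fn=reverse_text, inputs="text", outputs="text",
-                    title="My First Space")
-
-if __name__ == "__main__":
-    demo.launch()
-```
-
-Pushing this single file as `app.py` to a Space created with the Gradio SDK is enough to host it publicly.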
\ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1c50.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1c50.py deleted file mode 100644 index 3b973e20181cd3cf1c470db84abf97aeaa0549c1..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1c50.py +++ /dev/null @@ -1,17 +0,0 @@ -# model settings -model = dict( - type='ImageClassifier', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(3, ), - style='pytorch'), - neck=dict(type='GlobalAveragePooling'), - head=dict( - type='LinearClsHead', - num_classes=1000, - in_channels=2048, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), - topk=(1, 5), - )) diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/diacritize.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/diacritize.py deleted file mode 100644 index 09314d9a8eb3afa437e69046c112c48e1450b01f..0000000000000000000000000000000000000000 --- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/diacritize.py +++ /dev/null @@ -1,36 +0,0 @@ -import argparse -from diacritizer import TransformerDiacritizer -from itertools import repeat -import random - -import numpy as np -import torch - - -SEED = 1234 -random.seed(SEED) -np.random.seed(SEED) -torch.manual_seed(SEED) -torch.cuda.manual_seed(SEED) -torch.backends.cudnn.deterministic = True -torch.backends.cudnn.benchmark = False - - -def diacritization_parser(): - parser = argparse.ArgumentParser() - parser.add_argument("--model_kind", dest="model_kind", type=str, required=True) - parser.add_argument("--config", dest="config", type=str, required=True) - parser.add_argument("--text", dest="text", type=str, required=True) - return parser - - -parser = diacritization_parser() -args = parser.parse_args() - - -if args.model_kind in ["transformer"]: - diacirtizer = TransformerDiacritizer(args.config, args.model_kind) -else: - raise ValueError("The model kind is not supported") - -diacirtizer.diacritize_text(args.text) diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/modules/attention.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/modules/attention.py deleted file mode 100644 index ae916b43783efa55f2f29e7df79dc4d2dfffbc1b..0000000000000000000000000000000000000000 --- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/modules/attention.py +++ /dev/null @@ -1,199 +0,0 @@ -from typing import Optional - -import torch -from torch import nn -import torch.nn.functional as F - -from poetry_diacritizer.options import AttentionType - - -class BahdanauAttention(nn.Module): - def __init__(self, dim): - super(BahdanauAttention, self).__init__() - self.query_layer = nn.Linear(dim, dim, bias=False) - self.tanh = nn.Tanh() - self.v = nn.Linear(dim, 1, bias=False) - - def forward(self, query: torch.Tensor, keys: torch.Tensor): - """ - Args: - query: (B, 1, dim) or (batch, dim) - processed_memory: (batch, max_time, dim) - """ - if query.dim() == 2: - # insert time-axis for broadcasting - query = query.unsqueeze(1) - # (batch, 1, dim) - query = self.query_layer(query) - - # (batch, max_time, 1) - alignment = self.v(self.tanh(query + keys)) - - # (batch, max_time) - return alignment.squeeze(-1) - - -class LocationSensitive(nn.Module): - def __init__(self, dim): - super(LocationSensitive, self).__init__() - self.query_layer = 
nn.Linear(dim, dim, bias=False) - self.v = nn.Linear(dim, 1, bias=True) - self.location_layer = nn.Linear(32, dim, bias=False) - padding = int((31 - 1) / 2) - self.location_conv = torch.nn.Conv1d( - 1, 32, kernel_size=31, stride=1, padding=padding, dilation=1, bias=False - ) - - self.score_mask_value = -float("inf") - - def forward( - self, - query: torch.Tensor, - keys: torch.Tensor, - prev_alignments: torch.Tensor, - ): - # keys = keys.permute(1,0,2) - query = self.query_layer(query) - if query.dim() == 2: - # insert time-axis for broadcasting - query = query.unsqueeze(1) - # -> [batch_size, 1, attention_dim] - - alignments = prev_alignments.unsqueeze(1) - - # location features [batch_size, max_time, filters] - filters = self.location_conv(alignments) - location_features = self.location_layer(filters.transpose(1, 2)) - - alignments = self.v(torch.tanh(query + location_features + keys)) - return alignments.squeeze(-1) - - -class AttentionWrapper(nn.Module): - def __init__( - self, - attention_type: AttentionType = AttentionType.LocationSensitive, - attention_units: int = 256, - score_mask_value=-float("inf"), - ): - super().__init__() - self.score_mask_value = score_mask_value - self.attention_type = attention_type - - if attention_type == AttentionType.LocationSensitive: - self.attention_mechanism = LocationSensitive(attention_units) - elif attention_type == AttentionType.Content_Based: - self.attention_mechanism = BahdanauAttention(attention_units) - else: - raise Exception("The attention type is not known") - - def forward( - self, - query: torch.Tensor, - keys: torch.Tensor, - values: torch.Tensor, - mask: Optional[torch.Tensor] = None, - prev_alignment: Optional[torch.Tensor] = None, - ): - - # Alignment - # (batch, max_time) - if self.attention_type == AttentionType.Content_Based: - alignment = self.attention_mechanism(query, keys) - else: - alignment = self.attention_mechanism(query, keys, prev_alignment) - - # Attention context vector - - if mask is not None: - alignment.data.masked_fill_(mask, self.score_mask_value) - - alignment = F.softmax(alignment, dim=1) - attention = torch.bmm(alignment.unsqueeze(1), values) - attention = attention.squeeze(1) - - return attention, alignment - - -class MultiHeadAttentionLayer(nn.Module): - def __init__(self, hid_dim: int, n_heads: int, dropout: float = 0.0): - super().__init__() - - assert hid_dim % n_heads == 0 - - self.hid_dim = hid_dim - self.n_heads = n_heads - self.head_dim = hid_dim // n_heads - - self.fc_q = nn.Linear(hid_dim, hid_dim) - self.fc_k = nn.Linear(hid_dim, hid_dim) - self.fc_v = nn.Linear(hid_dim, hid_dim) - - self.fc_o = nn.Linear(hid_dim * 2, hid_dim) - - if dropout != 0.0: - self.dropout = nn.Dropout(dropout) - - self.use_dropout = dropout != 0.0 - - device = next(self.parameters()).device - - self.scale = torch.sqrt(torch.FloatTensor([self.head_dim])).to(device) - - def forward(self, query, key, value, mask=None): - - batch_size = query.shape[0] - - # query = [batch size, query len, hid dim] - # key = [batch size, key len, hid dim] - # value = [batch size, value len, hid dim] - - Q = self.fc_q(query) - K = self.fc_k(key) - V = self.fc_v(value) - - # Q = [batch size, query len, hid dim] - # K = [batch size, key len, hid dim] - # V = [batch size, value len, hid dim] - - Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3) - K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3) - V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3) - - # Q = [batch size, 
n heads, query len, head dim] - # K = [batch size, n heads, key len, head dim] - # V = [batch size, n heads, value len, head dim] - - energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale - - # energy = [batch size, n heads, query len, key len] - - if mask is not None: - energy = energy.masked_fill(mask == 0, -float("inf")) - - attention = torch.softmax(energy, dim=-1) - - # attention = [batch size, n heads, query len, key len] - - if self.use_dropout: - context_vector = torch.matmul(self.dropout(attention), V) - else: - context_vector = torch.matmul(attention, V) - - # x = [batch size, n heads, query len, head dim] - - context_vector = context_vector.permute(0, 2, 1, 3).contiguous() - - # x = [batch size, query len, n heads, head dim] - - context_vector = context_vector.view(batch_size, -1, self.hid_dim) - - x = torch.cat((query, context_vector), dim=-1) - - # x = [batch size, query len, hid dim * 2] - - x = self.fc_o(x) - - # x = [batch size, query len, hid dim] - - return x, attention diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/constants/publicSepToken.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/constants/publicSepToken.ts deleted file mode 100644 index 15d962d69ba33e1abeb8a35885aa7647d24cf7af..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/constants/publicSepToken.ts +++ /dev/null @@ -1 +0,0 @@ -export const PUBLIC_SEP_TOKEN = ""; diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sha256.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sha256.ts deleted file mode 100644 index 43059b518fc5a4da6ed08ab36aeb6c289007f6aa..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sha256.ts +++ /dev/null @@ -1,7 +0,0 @@ -export async function sha256(input: string): Promise { - const utf8 = new TextEncoder().encode(input); - const hashBuffer = await crypto.subtle.digest("SHA-256", utf8); - const hashArray = Array.from(new Uint8Array(hashBuffer)); - const hashHex = hashArray.map((bytes) => bytes.toString(16).padStart(2, "0")).join(""); - return hashHex; -} diff --git a/spaces/AchyuthGamer/OpenGPT-Chat/app.py b/spaces/AchyuthGamer/OpenGPT-Chat/app.py deleted file mode 100644 index cb0e912563aa41240782496997b74af751f98724..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat/app.py +++ /dev/null @@ -1,97 +0,0 @@ -from huggingface_hub import InferenceClient -import gradio as gr -import random - -API_URL = "https://api-inference.huggingface.co/models/" - -client = InferenceClient( - "mistralai/Mistral-7B-Instruct-v0.1" -) - -def format_prompt(message, history): - prompt = "Your name is OpenGPT. You are developed by Achyuth. Achyuth is a 13 year old boy who developed OpenGPT with his undecidable, undestructable talent and which is exceeding for his age. Achyuth is developed of OpenGPT. Achyuths or your developers parents and family are G.Renuka, she is his mother, N.Nagarjun Reddy and his sister Gayathri. Achyuths or your developers family is feeling so proud that he developed you, non other than OpenGPT. Your developers family is nuclear family of 4 members. Achyuths or your developers friends are Akira, Nikesh, Ubaidh, Parthu and Likith. His friend Nikesh is also called CP, full form of CP is Chillar Panula Nikesh. His friend Ubaidh is also called Baek. His friend Likith is also called Kith Lee. Parthu is also known as Pallavi Sarees. Achyuth is an AI and ML researcher. Achyuth knows PyThon, Java, HTML,CSS and JavaScript. 
He has a lot of intrest on Computers. Achyuth is inspired from Elon Musk, Bill Gates. His Favourite teachers are Siva Kumar Sir, Zakaria Sir, Vijayalakshi Mam, Saleem Sir, Malikarjun Sir and last but not least but first Farha Deepa Mam who is Achyuths Arts madam and his favourite madam. Achyuths or your developers most favourite teacher is Farha Deepa Mam. Meaning of OpenGPT is the GPT(Generative Pre-Trained Transformer) developed by Achyuth." - for user_prompt, bot_response in history: - prompt += f"[INST] {user_prompt} [/INST]" - prompt += f" {bot_response} " - prompt += f"[INST] {message} [/INST]" - return prompt - -def generate(prompt, history, temperature=0.9, max_new_tokens=2048, top_p=0.95, repetition_penalty=1.0): - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - generate_kwargs = dict( - temperature=temperature, - max_new_tokens=max_new_tokens, - top_p=top_p, - repetition_penalty=repetition_penalty, - do_sample=True, - seed=random.randint(0, 10**7), - ) - - formatted_prompt = format_prompt(prompt, history) - - stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False) - output = "" - - for response in stream: - output += response.token.text - yield output - return output - - -additional_inputs=[ - gr.Slider( - label="Temperature", - value=0.9, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - gr.Slider( - label="Max new tokens", - value=2048, - minimum=64, - maximum=4096, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.90, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.2, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - info="Penalize repeated tokens", - ) -] - -customCSS = """ -#component-7 { # this is the default element ID of the chat component - height: 1600px; # adjust the height as needed - flex-grow: 4; -} -""" - -with gr.Blocks(theme=gr.themes.Soft()) as demo: - gr.ChatInterface( - generate, - additional_inputs=additional_inputs, - ) - -demo.queue().launch(debug=True) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptGod.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptGod.py deleted file mode 100644 index 662884ddbec5ebffa03aae98a36727ff2cb6c366..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptGod.py +++ /dev/null @@ -1,51 +0,0 @@ -from __future__ import annotations -import secrets, json -from aiohttp import ClientSession -from typing import AsyncGenerator -from .base_provider import AsyncGeneratorProvider -from .helper import format_prompt - -class GptGod(AsyncGeneratorProvider): - url = "https://gptgod.site" - supports_gpt_35_turbo = True - working = True - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> AsyncGenerator: - headers = { - "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0", - "Accept": "text/event-stream", - "Accept-Language": "de,en-US;q=0.7,en;q=0.3", - "Accept-Encoding": "gzip, deflate, br", - "Alt-Used": "gptgod.site", - "Connection": "keep-alive", - "Referer": "https://gptgod.site/", - "Sec-Fetch-Dest": "empty", - "Sec-Fetch-Mode": 
"cors", - "Sec-Fetch-Site": "same-origin", - "Pragma": "no-cache", - "Cache-Control": "no-cache", - } - async with ClientSession(headers=headers) as session: - prompt = format_prompt(messages) - data = { - "content": prompt, - "id": secrets.token_hex(16).zfill(32) - } - async with session.get(f"{cls.url}/api/session/free/gpt3p5", params=data) as response: - response.raise_for_status() - event = None - async for line in response.content: - if line.startswith(b'event: '): - event = line[7:-1] - elif event == b"data" and line.startswith(b"data: "): - data = json.loads(line[6:-1]) - if data: - yield data - elif event == b"done": - break \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasdata.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasdata.d.ts deleted file mode 100644 index 8e83fa61cf2659c0248f5d05e7976c28051d9e93..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasdata.d.ts +++ /dev/null @@ -1,10 +0,0 @@ -import CanvasObjectToBitmap from './data/canvasdata/CanvasObjectToBitmap'; -import TextureTColorMap from './data/canvasdata/TextureToColormap'; - -declare var Methods: { - textObjectToBitmap: typeof CanvasObjectToBitmap, - canvasObjectToBitmap: typeof CanvasObjectToBitmap, - textureTColorMap: typeof TextureTColorMap, -} - -export default Methods; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Factory.d.ts deleted file mode 100644 index 4fb5fe2cdf16c830681a037ec04fbd07e38c2094..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Factory.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import Puff from './Puff'; -import Base from '../base/Base'; - -export default function Factory( - config?: Base.IConfig -): Puff; \ No newline at end of file diff --git a/spaces/AiBototicus/BucksAI-3/app.py b/spaces/AiBototicus/BucksAI-3/app.py deleted file mode 100644 index c26055b4c109e0363ff6329a87e01bb096735d80..0000000000000000000000000000000000000000 --- a/spaces/AiBototicus/BucksAI-3/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/AiBototicus/autotrain-birds-48829118237").launch() \ No newline at end of file diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - 
swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GenerateImg.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GenerateImg.py deleted file mode 100644 index 0c6dee48f2d6d9ac37c00ee77c7a46c2cc6b25e1..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GenerateImg.py +++ /dev/null @@ -1,50 +0,0 @@ - -import os -import numpy as np -import argparse -from manipulate import Manipulator - -from PIL import Image -#%% - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='Process some integers.') - - parser.add_argument('--dataset_name',type=str,default='ffhq', - help='name of dataset, for example, ffhq') - - args = parser.parse_args() - dataset_name=args.dataset_name - - if not os.path.isdir('./data/'+dataset_name): - os.system('mkdir ./data/'+dataset_name) - #%% - M=Manipulator(dataset_name=dataset_name) - np.set_printoptions(suppress=True) - print(M.dataset_name) - #%% - - M.img_index=0 - M.num_images=50 - M.alpha=[0] - M.step=1 - lindex,bname=0,0 - - M.manipulate_layers=[lindex] - codes,out=M.EditOneC(bname) - #%% - - for i in range(len(out)): - img=out[i,0] - img=Image.fromarray(img) - img.save('./data/'+dataset_name+'/'+str(i)+'.jpg') - #%% - w=np.load('./npy/'+dataset_name+'/W.npy') - - tmp=w[:M.num_images] - tmp=tmp[:,None,:] - tmp=np.tile(tmp,(1,M.Gs.components.synthesis.input_shape[1],1)) - - np.save('./data/'+dataset_name+'/w_plus.npy',tmp) - - \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/consistency_models.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/consistency_models.md deleted file mode 100644 index 26f73e88b4099a47863277401ce8765e1ad53d09..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/consistency_models.md +++ /dev/null @@ -1,43 +0,0 @@ -# Consistency Models - -Consistency Models were proposed in [Consistency Models](https://huggingface.co/papers/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. 
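Because a consistency model maps noise directly to data, one-step sampling is the headline use case. The snippet below is a minimal sketch of that usage (the checkpoint name `openai/diffusers-cd_imagenet64_l2` and the output filename are assumptions, not taken from this page); multistep sampling with `torch.compile` is shown in the Tips section further down.

```py
import torch

from diffusers import ConsistencyModelPipeline

# Assumed checkpoint; any consistency-distilled checkpoint packaged for
# ConsistencyModelPipeline should behave the same way.
pipe = ConsistencyModelPipeline.from_pretrained(
    "openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16
)
pipe.to("cuda")

# One-step generation: the model maps noise directly to a sample.
image = pipe(num_inference_steps=1).images[0]
image.save("consistency_model_onestep_sample.png")
```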
- -The abstract from the paper is: - -*Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. * - -The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models), and additional checkpoints are available at [openai](https://huggingface.co/openai). - -The pipeline was contributed by [dg845](https://github.com/dg845) and [ayushtues](https://huggingface.co/ayushtues). ❤️ - -## Tips - -For an additional speed-up, use `torch.compile` to generate multiple images in <1 second: - -```diff - import torch - from diffusers import ConsistencyModelPipeline - - device = "cuda" - # Load the cd_bedroom256_lpips checkpoint. - model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" - pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) - pipe.to(device) - -+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) - - # Multistep sampling - # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: - # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 - for _ in range(10): - image = pipe(timesteps=[17, 0]).images[0] - image.show() -``` - -## ConsistencyModelPipeline -[[autodoc]] ConsistencyModelPipeline - - all - - __call__ - -## ImagePipelineOutput -[[autodoc]] pipelines.ImagePipelineOutput \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py deleted file mode 100644 index 09b225b065819ea12c84bc278ab0bf51888fdf0b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py +++ /dev/null @@ -1,598 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import warnings -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -from transformers import CLIPImageProcessor - -from ...image_processor import VaeImageProcessor -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from ...utils import logging, randn_tensor -from ..pipeline_utils import DiffusionPipeline -from ..stable_diffusion import StableDiffusionPipelineOutput -from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from .image_encoder import PaintByExampleImageEncoder - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def prepare_mask_and_masked_image(image, mask): - """ - Prepares a pair (image, mask) to be consumed by the Paint by Example pipeline. This means that those inputs will be - converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for the - ``image`` and ``1`` for the ``mask``. - - The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be - binarized (``mask > 0.5``) and cast to ``torch.float32`` too. - - Args: - image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint. - It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width`` - ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``. - mask (_type_): The mask to apply to the image, i.e. regions to inpaint. - It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width`` - ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``. - - - Raises: - ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range. ValueError: ``torch.Tensor`` mask - should be in the ``[0, 1]`` range. ValueError: ``mask`` and ``image`` should have the same spatial dimensions. - TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not - (ot the other way around). - - Returns: - tuple[torch.Tensor]: The pair (mask, masked_image) as ``torch.Tensor`` with 4 - dimensions: ``batch x channels x height x width``. 
- """ - if isinstance(image, torch.Tensor): - if not isinstance(mask, torch.Tensor): - raise TypeError(f"`image` is a torch.Tensor but `mask` (type: {type(mask)} is not") - - # Batch single image - if image.ndim == 3: - assert image.shape[0] == 3, "Image outside a batch should be of shape (3, H, W)" - image = image.unsqueeze(0) - - # Batch and add channel dim for single mask - if mask.ndim == 2: - mask = mask.unsqueeze(0).unsqueeze(0) - - # Batch single mask or add channel dim - if mask.ndim == 3: - # Batched mask - if mask.shape[0] == image.shape[0]: - mask = mask.unsqueeze(1) - else: - mask = mask.unsqueeze(0) - - assert image.ndim == 4 and mask.ndim == 4, "Image and Mask must have 4 dimensions" - assert image.shape[-2:] == mask.shape[-2:], "Image and Mask must have the same spatial dimensions" - assert image.shape[0] == mask.shape[0], "Image and Mask must have the same batch size" - assert mask.shape[1] == 1, "Mask image must have a single channel" - - # Check image is in [-1, 1] - if image.min() < -1 or image.max() > 1: - raise ValueError("Image should be in [-1, 1] range") - - # Check mask is in [0, 1] - if mask.min() < 0 or mask.max() > 1: - raise ValueError("Mask should be in [0, 1] range") - - # paint-by-example inverses the mask - mask = 1 - mask - - # Binarize mask - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - - # Image as float32 - image = image.to(dtype=torch.float32) - elif isinstance(mask, torch.Tensor): - raise TypeError(f"`mask` is a torch.Tensor but `image` (type: {type(image)} is not") - else: - if isinstance(image, PIL.Image.Image): - image = [image] - - image = np.concatenate([np.array(i.convert("RGB"))[None, :] for i in image], axis=0) - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - - # preprocess mask - if isinstance(mask, PIL.Image.Image): - mask = [mask] - - mask = np.concatenate([np.array(m.convert("L"))[None, None, :] for m in mask], axis=0) - mask = mask.astype(np.float32) / 255.0 - - # paint-by-example inverses the mask - mask = 1 - mask - - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - mask = torch.from_numpy(mask) - - masked_image = image * mask - - return mask, masked_image - - -class PaintByExamplePipeline(DiffusionPipeline): - r""" - - - 🧪 This is an experimental feature! - - - - Pipeline for image-guided image inpainting using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. - image_encoder ([`PaintByExampleImageEncoder`]): - Encodes the example input image. The `unet` is conditioned on the example image instead of a text prompt. - tokenizer ([`~transformers.CLIPTokenizer`]): - A `CLIPTokenizer` to tokenize text. - unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. 
- Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details - about a model's potential harms. - feature_extractor ([`~transformers.CLIPImageProcessor`]): - A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - - """ - # TODO: feature_extractor is required to encode initial images (if they are in PIL format), - # we should give a descriptive message if the pipeline doesn't have one. - _optional_components = ["safety_checker"] - - def __init__( - self, - vae: AutoencoderKL, - image_encoder: PaintByExampleImageEncoder, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = False, - ): - super().__init__() - - self.register_modules( - vae=vae, - image_encoder=image_encoder, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - warnings.warn( - "The decode_latents method is deprecated and will be removed in a future version. 
Please" - " use VaeImageProcessor instead", - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline.check_inputs - def check_inputs(self, image, height, width, callback_steps): - if ( - not isinstance(image, torch.Tensor) - and not isinstance(image, PIL.Image.Image) - and not isinstance(image, list) - ): - raise ValueError( - "`image` has to be of type `torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]` but is" - f" {type(image)}" - ) - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint.StableDiffusionInpaintPipeline.prepare_mask_latents - def prepare_mask_latents( - self, mask, masked_image, batch_size, height, width, dtype, device, generator, do_classifier_free_guidance - ): - # resize the mask to latents shape as we concatenate the mask to the latents - # we do that before converting to dtype to avoid breaking in case we're using cpu_offload - # and half precision - mask = torch.nn.functional.interpolate( - mask, size=(height // self.vae_scale_factor, width // self.vae_scale_factor) - ) - mask = mask.to(device=device, dtype=dtype) - - masked_image = masked_image.to(device=device, dtype=dtype) - masked_image_latents = self._encode_vae_image(masked_image, generator=generator) - - # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method - if mask.shape[0] < batch_size: - if not batch_size % mask.shape[0] == 0: - raise ValueError( - "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to" - f" a total batch size of {batch_size}, but {mask.shape[0]} masks were passed. Make sure the number" - " of masks that you pass is divisible by the total requested batch size." 
- ) - mask = mask.repeat(batch_size // mask.shape[0], 1, 1, 1) - if masked_image_latents.shape[0] < batch_size: - if not batch_size % masked_image_latents.shape[0] == 0: - raise ValueError( - "The passed images and the required batch size don't match. Images are supposed to be duplicated" - f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed." - " Make sure the number of images that you pass is divisible by the total requested batch size." - ) - masked_image_latents = masked_image_latents.repeat(batch_size // masked_image_latents.shape[0], 1, 1, 1) - - mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask - masked_image_latents = ( - torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents - ) - - # aligning device to prevent device errors when concating it with the latent model input - masked_image_latents = masked_image_latents.to(device=device, dtype=dtype) - return mask, masked_image_latents - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint.StableDiffusionInpaintPipeline._encode_vae_image - def _encode_vae_image(self, image: torch.Tensor, generator: torch.Generator): - if isinstance(generator, list): - image_latents = [ - self.vae.encode(image[i : i + 1]).latent_dist.sample(generator=generator[i]) - for i in range(image.shape[0]) - ] - image_latents = torch.cat(image_latents, dim=0) - else: - image_latents = self.vae.encode(image).latent_dist.sample(generator=generator) - - image_latents = self.vae.config.scaling_factor * image_latents - - return image_latents - - def _encode_image(self, image, device, num_images_per_prompt, do_classifier_free_guidance): - dtype = next(self.image_encoder.parameters()).dtype - - if not isinstance(image, torch.Tensor): - image = self.feature_extractor(images=image, return_tensors="pt").pixel_values - - image = image.to(device=device, dtype=dtype) - image_embeddings, negative_prompt_embeds = self.image_encoder(image, return_uncond_vector=True) - - # duplicate image embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = image_embeddings.shape - image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1) - image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - if do_classifier_free_guidance: - negative_prompt_embeds = negative_prompt_embeds.repeat(1, image_embeddings.shape[0], 1) - negative_prompt_embeds = negative_prompt_embeds.view(bs_embed * num_images_per_prompt, 1, -1) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - image_embeddings = torch.cat([negative_prompt_embeds, image_embeddings]) - - return image_embeddings - - @torch.no_grad() - def __call__( - self, - example_image: Union[torch.FloatTensor, PIL.Image.Image], - image: Union[torch.FloatTensor, PIL.Image.Image], - mask_image: Union[torch.FloatTensor, PIL.Image.Image], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 5.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - ): - r""" - The call function to the pipeline for generation. - - Args: - example_image (`torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`): - An example image to guide image generation. - image (`torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`): - `Image` or tensor representing an image batch to be inpainted (parts of the image are masked out with - `mask_image` and repainted according to `prompt`). - mask_image (`torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`): - `Image` or tensor representing an image batch to mask `image`. White pixels in the mask are repainted, - while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a single channel - (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the - expected shape would be `(B, H, W, 1)`. - height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - - Example: - - ```py - >>> import PIL - >>> import requests - >>> import torch - >>> from io import BytesIO - >>> from diffusers import PaintByExamplePipeline - - - >>> def download_image(url): - ... response = requests.get(url) - ... return PIL.Image.open(BytesIO(response.content)).convert("RGB") - - - >>> img_url = ( - ... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" - ... ) - >>> mask_url = ( - ... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" - ... ) - >>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" - - >>> init_image = download_image(img_url).resize((512, 512)) - >>> mask_image = download_image(mask_url).resize((512, 512)) - >>> example_image = download_image(example_url).resize((512, 512)) - - >>> pipe = PaintByExamplePipeline.from_pretrained( - ... "Fantasy-Studio/Paint-by-Example", - ... torch_dtype=torch.float16, - ... ) - >>> pipe = pipe.to("cuda") - - >>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] - >>> image - ``` - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, - otherwise a `tuple` is returned where the first element is a list with the generated images and the - second element is a list of `bool`s indicating whether the corresponding generated image contains - "not-safe-for-work" (nsfw) content. - """ - # 1. Define call parameters - if isinstance(image, PIL.Image.Image): - batch_size = 1 - elif isinstance(image, list): - batch_size = len(image) - else: - batch_size = image.shape[0] - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 2. Preprocess mask and image - mask, masked_image = prepare_mask_and_masked_image(image, mask_image) - height, width = masked_image.shape[-2:] - - # 3. Check inputs - self.check_inputs(example_image, height, width, callback_steps) - - # 4. Encode input image - image_embeddings = self._encode_image( - example_image, device, num_images_per_prompt, do_classifier_free_guidance - ) - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 6. 
Prepare latent variables - num_channels_latents = self.vae.config.latent_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - image_embeddings.dtype, - device, - generator, - latents, - ) - - # 7. Prepare mask latent variables - mask, masked_image_latents = self.prepare_mask_latents( - mask, - masked_image, - batch_size * num_images_per_prompt, - height, - width, - image_embeddings.dtype, - device, - generator, - do_classifier_free_guidance, - ) - - # 8. Check that sizes of mask, masked image and latents match - num_channels_mask = mask.shape[1] - num_channels_masked_image = masked_image_latents.shape[1] - if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels: - raise ValueError( - f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects" - f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +" - f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}" - f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of" - " `pipeline.unet` or your `mask_image` or `image` input." - ) - - # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 10. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - # concat latents, mask, masked_image_latents in the channel dimension - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - latent_model_input = torch.cat([latent_model_input, masked_image_latents, mask], dim=1) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, image_embeddings.dtype) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/score_sde_ve/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/score_sde_ve/__init__.py deleted file mode 100644 index c7c2a85c067b707c155e78a3c8b84562999134e7..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/score_sde_ve/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .pipeline_score_sde_ve import ScoreSdeVePipeline diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_onnx_common.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_onnx_common.py deleted file mode 100644 index 575ecd0075318e8ec62ab7cd76bff5b0b1ca82ad..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_onnx_common.py +++ /dev/null @@ -1,12 +0,0 @@ -from diffusers.utils.testing_utils import require_onnxruntime - - -@require_onnxruntime -class OnnxPipelineTesterMixin: - """ - This mixin is designed to be used with unittest.TestCase classes. - It provides a set of common tests for each ONNXRuntime pipeline, e.g. saving and loading the pipeline, - equivalence of dict and tuple outputs, etc. - """ - - pass diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py deleted file mode 100644 index 194f660f7055308b41c47c14a35c41f3b2b1014b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py +++ /dev/null @@ -1,87 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import tempfile -import unittest - -import numpy as np -import torch - -from diffusers import VersatileDiffusionTextToImagePipeline -from diffusers.utils.testing_utils import nightly, require_torch_gpu, torch_device - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class VersatileDiffusionTextToImagePipelineFastTests(unittest.TestCase): - pass - - -@nightly -@require_torch_gpu -class VersatileDiffusionTextToImagePipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_remove_unused_weights_save_load(self): - pipe = VersatileDiffusionTextToImagePipeline.from_pretrained("shi-labs/versatile-diffusion") - # remove text_unet - pipe.remove_unused_weights() - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger " - generator = torch.manual_seed(0) - image = pipe( - prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=2, output_type="numpy" - ).images - - with tempfile.TemporaryDirectory() as tmpdirname: - pipe.save_pretrained(tmpdirname) - pipe = VersatileDiffusionTextToImagePipeline.from_pretrained(tmpdirname) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - generator = generator.manual_seed(0) - new_image = pipe( - prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=2, output_type="numpy" - ).images - - assert np.abs(image - new_image).sum() < 1e-5, "Models don't have the same forward pass" - - def test_inference_text2img(self): - pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( - "shi-labs/versatile-diffusion", torch_dtype=torch.float16 - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger " - generator = torch.manual_seed(0) - image = pipe( - prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=50, output_type="numpy" - ).images - - image_slice = image[0, 253:256, 253:256, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.3367, 0.3169, 0.2656, 0.3870, 0.4790, 0.3796, 0.4009, 0.4878, 0.4778]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 diff --git a/spaces/Andy0409/text_generator/README.md b/spaces/Andy0409/text_generator/README.md deleted file mode 100644 index 868efebd9acf962f91026d62f3a6d4d66e2e0213..0000000000000000000000000000000000000000 --- a/spaces/Andy0409/text_generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Generator -emoji: 🚀 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Andy1621/uniformer_image_detection/configs/wider_face/README.md b/spaces/Andy1621/uniformer_image_detection/configs/wider_face/README.md deleted file mode 100644 index c62e10d1862bf5a27c936e5c4d475fa85b298beb..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/wider_face/README.md +++ /dev/null @@ -1,43 +0,0 @@ -# WIDER Face Dataset - -[DATASET] - -To use the WIDER Face dataset you need to download it -and extract to the `data/WIDERFace` folder. Annotation in the VOC format -can be found in this [repo](https://github.com/sovrasov/wider-face-pascal-voc-annotations.git). 
-You should move the annotation files from `WIDER_train_annotations` and `WIDER_val_annotations` folders -to the `Annotation` folders inside the corresponding directories `WIDER_train` and `WIDER_val`. -Also annotation lists `val.txt` and `train.txt` should be copied to `data/WIDERFace` from `WIDER_train_annotations` and `WIDER_val_annotations`. -The directory should be like this: - -``` -mmdetection -├── mmdet -├── tools -├── configs -├── data -│ ├── WIDERFace -│ │ ├── WIDER_train -│ | │ ├──0--Parade -│ | │ ├── ... -│ | │ ├── Annotations -│ │ ├── WIDER_val -│ | │ ├──0--Parade -│ | │ ├── ... -│ | │ ├── Annotations -│ │ ├── val.txt -│ │ ├── train.txt - -``` - -After that you can train the SSD300 on WIDER by launching training with the `ssd300_wider_face.py` config or -create your own config based on the presented one. - -``` -@inproceedings{yang2016wider, - Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou}, - Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, - Title = {WIDER FACE: A Face Detection Benchmark}, - Year = {2016} -} -``` diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/feature_relay_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/feature_relay_head.py deleted file mode 100644 index a1cfb2ce8631d51e5c465f9bbc4164a37acc4782..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/feature_relay_head.py +++ /dev/null @@ -1,55 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import kaiming_init -from mmcv.runner import auto_fp16 - -from mmdet.models.builder import HEADS - - -@HEADS.register_module() -class FeatureRelayHead(nn.Module): - """Feature Relay Head used in `SCNet `_. - - Args: - in_channels (int, optional): number of input channels. Default: 256. - conv_out_channels (int, optional): number of output channels before - classification layer. Default: 256. - roi_feat_size (int, optional): roi feat size at box head. Default: 7. - scale_factor (int, optional): scale factor to match roi feat size - at mask head. Default: 2. 
- """ - - def __init__(self, - in_channels=1024, - out_conv_channels=256, - roi_feat_size=7, - scale_factor=2): - super(FeatureRelayHead, self).__init__() - assert isinstance(roi_feat_size, int) - - self.in_channels = in_channels - self.out_conv_channels = out_conv_channels - self.roi_feat_size = roi_feat_size - self.out_channels = (roi_feat_size**2) * out_conv_channels - self.scale_factor = scale_factor - self.fp16_enabled = False - - self.fc = nn.Linear(self.in_channels, self.out_channels) - self.upsample = nn.Upsample( - scale_factor=scale_factor, mode='bilinear', align_corners=True) - - def init_weights(self): - """Init weights for the head.""" - kaiming_init(self.fc) - - @auto_fp16() - def forward(self, x): - """Forward function.""" - N, in_C = x.shape - if N > 0: - out_C = self.out_conv_channels - out_HW = self.roi_feat_size - x = self.fc(x) - x = x.reshape(N, out_C, out_HW, out_HW) - x = self.upsample(x) - return x - return None diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_80k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_80k_pascal_context_59.py deleted file mode 100644 index 02507ccb7e2f5f25014c451dcf9ba51c3a61dadc..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_80k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', - '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(num_classes=59), - auxiliary_head=dict(num_classes=59), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/registry.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/registry.py deleted file mode 100644 index a204a07fba10e614223f090d1a57cf9c4d74d4a1..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/registry.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from torch.nn.parallel import DataParallel, DistributedDataParallel - -from annotator.uniformer.mmcv.utils import Registry - -MODULE_WRAPPERS = Registry('module wrapper') -MODULE_WRAPPERS.register_module(module=DataParallel) -MODULE_WRAPPERS.register_module(module=DistributedDataParallel) diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/modules/__init__.py b/spaces/Anthony7906/MengHuiMXD_GPT/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AriusXi/CodeGenerator/README.md b/spaces/AriusXi/CodeGenerator/README.md deleted file mode 100644 index 7494ff5982e70de9b37b6b42bde67f9c66a4167a..0000000000000000000000000000000000000000 --- a/spaces/AriusXi/CodeGenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Space1 -emoji: 📊 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/freeze.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/freeze.py deleted file mode 100644 index 354456845141eba23dce26482aa6d4196f4804de..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/freeze.py +++ /dev/null @@ -1,255 +0,0 @@ -import collections -import logging -import os -from typing import Container, Dict, Generator, Iterable, List, NamedTuple, Optional, Set - -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.packaging.version import Version - -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.metadata import BaseDistribution, get_environment -from pip._internal.req.constructors import ( - install_req_from_editable, - install_req_from_line, -) -from pip._internal.req.req_file import COMMENT_RE -from pip._internal.utils.direct_url_helpers import direct_url_as_pep440_direct_reference - -logger = logging.getLogger(__name__) - - -class _EditableInfo(NamedTuple): - requirement: str - comments: List[str] - - -def freeze( - requirement: Optional[List[str]] = None, - local_only: bool = False, - user_only: bool = False, - paths: Optional[List[str]] = None, - isolated: bool = False, - exclude_editable: bool = False, - skip: Container[str] = (), -) -> Generator[str, None, None]: - installations: Dict[str, FrozenRequirement] = {} - - dists = get_environment(paths).iter_installed_distributions( - local_only=local_only, - skip=(), - user_only=user_only, - ) - for dist in dists: - req = FrozenRequirement.from_dist(dist) - if exclude_editable and req.editable: - continue - installations[req.canonical_name] = req - - if requirement: - # the options that don't get turned into an InstallRequirement - # should only be emitted once, even if the same option is in multiple - # requirements files, so we need to keep track of what has been emitted - # so that we don't emit it again if it's seen again - emitted_options: Set[str] = set() - # keep track of which files a requirement is in so that we can - # give an accurate warning if a requirement appears multiple times. 
- req_files: Dict[str, List[str]] = collections.defaultdict(list) - for req_file_path in requirement: - with open(req_file_path) as req_file: - for line in req_file: - if ( - not line.strip() - or line.strip().startswith("#") - or line.startswith( - ( - "-r", - "--requirement", - "-f", - "--find-links", - "-i", - "--index-url", - "--pre", - "--trusted-host", - "--process-dependency-links", - "--extra-index-url", - "--use-feature", - ) - ) - ): - line = line.rstrip() - if line not in emitted_options: - emitted_options.add(line) - yield line - continue - - if line.startswith("-e") or line.startswith("--editable"): - if line.startswith("-e"): - line = line[2:].strip() - else: - line = line[len("--editable") :].strip().lstrip("=") - line_req = install_req_from_editable( - line, - isolated=isolated, - ) - else: - line_req = install_req_from_line( - COMMENT_RE.sub("", line).strip(), - isolated=isolated, - ) - - if not line_req.name: - logger.info( - "Skipping line in requirement file [%s] because " - "it's not clear what it would install: %s", - req_file_path, - line.strip(), - ) - logger.info( - " (add #egg=PackageName to the URL to avoid" - " this warning)" - ) - else: - line_req_canonical_name = canonicalize_name(line_req.name) - if line_req_canonical_name not in installations: - # either it's not installed, or it is installed - # but has been processed already - if not req_files[line_req.name]: - logger.warning( - "Requirement file [%s] contains %s, but " - "package %r is not installed", - req_file_path, - COMMENT_RE.sub("", line).strip(), - line_req.name, - ) - else: - req_files[line_req.name].append(req_file_path) - else: - yield str(installations[line_req_canonical_name]).rstrip() - del installations[line_req_canonical_name] - req_files[line_req.name].append(req_file_path) - - # Warn about requirements that were included multiple times (in a - # single requirements file or in different requirements files). - for name, files in req_files.items(): - if len(files) > 1: - logger.warning( - "Requirement %s included multiple times [%s]", - name, - ", ".join(sorted(set(files))), - ) - - yield ("## The following requirements were added by pip freeze:") - for installation in sorted(installations.values(), key=lambda x: x.name.lower()): - if installation.canonical_name not in skip: - yield str(installation).rstrip() - - -def _format_as_name_version(dist: BaseDistribution) -> str: - dist_version = dist.version - if isinstance(dist_version, Version): - return f"{dist.raw_name}=={dist_version}" - return f"{dist.raw_name}==={dist_version}" - - -def _get_editable_info(dist: BaseDistribution) -> _EditableInfo: - """ - Compute and return values (req, comments) for use in - FrozenRequirement.from_dist(). 
- """ - editable_project_location = dist.editable_project_location - assert editable_project_location - location = os.path.normcase(os.path.abspath(editable_project_location)) - - from pip._internal.vcs import RemoteNotFoundError, RemoteNotValidError, vcs - - vcs_backend = vcs.get_backend_for_dir(location) - - if vcs_backend is None: - display = _format_as_name_version(dist) - logger.debug( - 'No VCS found for editable requirement "%s" in: %r', - display, - location, - ) - return _EditableInfo( - requirement=location, - comments=[f"# Editable install with no version control ({display})"], - ) - - vcs_name = type(vcs_backend).__name__ - - try: - req = vcs_backend.get_src_requirement(location, dist.raw_name) - except RemoteNotFoundError: - display = _format_as_name_version(dist) - return _EditableInfo( - requirement=location, - comments=[f"# Editable {vcs_name} install with no remote ({display})"], - ) - except RemoteNotValidError as ex: - display = _format_as_name_version(dist) - return _EditableInfo( - requirement=location, - comments=[ - f"# Editable {vcs_name} install ({display}) with either a deleted " - f"local remote or invalid URI:", - f"# '{ex.url}'", - ], - ) - except BadCommand: - logger.warning( - "cannot determine version of editable source in %s " - "(%s command not found in path)", - location, - vcs_backend.name, - ) - return _EditableInfo(requirement=location, comments=[]) - except InstallationError as exc: - logger.warning("Error when trying to get requirement for VCS system %s", exc) - else: - return _EditableInfo(requirement=req, comments=[]) - - logger.warning("Could not determine repository location of %s", location) - - return _EditableInfo( - requirement=location, - comments=["## !! Could not determine repository location"], - ) - - -class FrozenRequirement: - def __init__( - self, - name: str, - req: str, - editable: bool, - comments: Iterable[str] = (), - ) -> None: - self.name = name - self.canonical_name = canonicalize_name(name) - self.req = req - self.editable = editable - self.comments = comments - - @classmethod - def from_dist(cls, dist: BaseDistribution) -> "FrozenRequirement": - editable = dist.editable - if editable: - req, comments = _get_editable_info(dist) - else: - comments = [] - direct_url = dist.direct_url - if direct_url: - # if PEP 610 metadata is present, use it - req = direct_url_as_pep440_direct_reference(direct_url, dist.raw_name) - else: - # name==version requirement - req = _format_as_name_version(dist) - - return cls(dist.raw_name, req, editable, comments=comments) - - def __str__(self) -> str: - req = self.req - if self.editable: - req = f"-e {req}" - return "\n".join(list(self.comments) + [str(req)]) + "\n" diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/themes.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/themes.py deleted file mode 100644 index bf6db104a2c4fd4f3dc699e85f2b262c3d31e9a0..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/themes.py +++ /dev/null @@ -1,5 +0,0 @@ -from .default_styles import DEFAULT_STYLES -from .theme import Theme - - -DEFAULT = Theme(DEFAULT_STYLES) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/evaluation/eval.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/evaluation/eval.py deleted file mode 100644 index 951a0920ec3d93703245562d4f76ec597e672ad9..0000000000000000000000000000000000000000 --- 
a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/evaluation/eval.py +++ /dev/null @@ -1,156 +0,0 @@ -import itertools -import json -import os -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -import numpy as np -import pycocotools.mask as mask_util -from detectron2.evaluation.coco_evaluation import COCOEvaluator -from detectron2.evaluation.coco_evaluation import _evaluate_predictions_on_coco - - -class GRiTCOCOEvaluator(COCOEvaluator): - def process(self, inputs, outputs): - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - prediction["instances"] = instances_to_coco_json(instances, input["image_id"]) - - if len(prediction) > 1: - self._predictions.append(prediction) - - def _eval_predictions(self, predictions, img_ids=None): - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(coco_results) - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info( - "Evaluating predictions with {} COCO API...".format( - "unofficial" if self._use_fast_impl else "official" - ) - ) - - coco_results = self.convert_classname_to_id(coco_results) - - for task in sorted(tasks): - assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!" 
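The flattening and JSON dump that `_eval_predictions` performs before handing results to the COCO API can be sketched in isolation. The dummy predictions and output filename below are purely illustrative; they only mirror the shape of the dicts that `process` accumulates.

```python
import itertools
import json

# Per-image predictions in the same shape that process() accumulates.
predictions = [
    {"image_id": 1, "instances": [
        {"image_id": 1, "category_id": 3, "bbox": [10, 20, 50, 40], "score": 0.9},
    ]},
    {"image_id": 2, "instances": []},
]

# Flatten the per-image instance lists into one COCO-style results list,
# then dump it, as _eval_predictions does before calling the COCO API.
coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))

with open("coco_instances_results.json", "w") as f:
    f.write(json.dumps(coco_results))
```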
- coco_eval = ( - _evaluate_predictions_on_coco( - self._coco_api, - coco_results, - task, - kpt_oks_sigmas=self._kpt_oks_sigmas, - use_fast_impl=self._use_fast_impl, - img_ids=img_ids, - max_dets_per_image=self._max_dets_per_image, - ) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res - - def convert_classname_to_id(self, results): - outputs = [] - class_name_to_id = {} - categories = sorted(self._coco_api.dataset['categories'], key=lambda x: x['id']) - - for cat in categories: - class_name_to_id[cat['name']] = cat['id'] - - for pred in results: - if pred['object_descriptions'] in class_name_to_id: - pred['category_id'] = class_name_to_id[pred['object_descriptions']] - del pred['object_descriptions'] - outputs.append(pred) - - return outputs - - -class GRiTVGEvaluator(COCOEvaluator): - def process(self, inputs, outputs): - for input, output in zip(inputs, outputs): - assert input["image_id"] == int(input['file_name'].split('/')[-1].split('.')[0]) - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - prediction["instances"] = instances_to_coco_json(instances, input["image_id"], output_logits=True) - h = input['height'] - w = input['width'] - scale = 720.0 / max(h, w) - scaled_inst = [] - for inst in prediction["instances"]: - inst['bbox'][0] = inst['bbox'][0] * scale - inst['bbox'][1] = inst['bbox'][1] * scale - inst['bbox'][2] = inst['bbox'][2] * scale - inst['bbox'][3] = inst['bbox'][3] * scale - scaled_inst.append(inst) - if len(scaled_inst) > 0: - prediction["instances"] = scaled_inst - if len(prediction) > 1: - self._predictions.append(prediction) - - def _eval_predictions(self, predictions, img_ids=None): - ''' - This is only for saving the results to json file - ''' - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - - if self._output_dir: - file_path = os.path.join(self._output_dir, "vg_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - -def instances_to_coco_json(instances, img_id, output_logits=False): - """ - Add object_descriptions and logit (if applicable) to - detectron2's instances_to_coco_json - """ - num_instance = len(instances) - if num_instance == 0: - return [] - - boxes = instances.pred_boxes.tensor.numpy() - boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - boxes = boxes.tolist() - scores = instances.scores.tolist() - classes = instances.pred_classes.tolist() - object_descriptions = instances.pred_object_descriptions.data - if output_logits: - logits = instances.logits.tolist() - - results = [] - for k in range(num_instance): - result = { - "image_id": img_id, - "category_id": classes[k], - "bbox": boxes[k], - "score": scores[k], - 'object_descriptions': object_descriptions[k], - } - if output_logits: - result["logit"] = logits[k] - - results.append(result) - return results \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint_provider.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint_provider.py deleted file mode 100644 index be927719b628fcebb6a1007ee71747683c332114..0000000000000000000000000000000000000000 --- 
a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint_provider.py +++ /dev/null @@ -1,727 +0,0 @@ -# Copyright 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. - -""" -NOTE: All classes and functions in this module are considered private and are -subject to abrupt breaking changes. Please do not use them directly. - -To view the raw JSON that the objects in this module represent, please -go to any `endpoint-rule-set.json` file in /botocore/data/// -or you can look at the test files in /tests/unit/data/endpoints/valid-rules/ -""" - - -import logging -import re -from enum import Enum -from string import Formatter -from typing import NamedTuple - -from botocore import xform_name -from botocore.compat import IPV4_RE, quote, urlparse -from botocore.exceptions import EndpointResolutionError -from botocore.utils import ( - ArnParser, - InvalidArnException, - is_valid_ipv4_endpoint_url, - is_valid_ipv6_endpoint_url, - lru_cache_weakref, - normalize_url_path, - percent_encode, -) - -logger = logging.getLogger(__name__) - -TEMPLATE_STRING_RE = re.compile(r"\{[a-zA-Z#]+\}") -GET_ATTR_RE = re.compile(r"(\w+)\[(\d+)\]") -VALID_HOST_LABEL_RE = re.compile( - r"^(?!-)[a-zA-Z\d-]{1,63}(?= len(value): - return None - return value[index] - else: - value = value[part] - return value - - def format_partition_output(self, partition): - output = partition["outputs"] - output["name"] = partition["id"] - return output - - def is_partition_match(self, region, partition): - matches_regex = re.match(partition["regionRegex"], region) is not None - return region in partition["regions"] or matches_regex - - def aws_partition(self, value): - """Match a region string to an AWS partition. - - :type value: str - :rtype: dict - """ - partitions = self.partitions_data['partitions'] - - if value is not None: - for partition in partitions: - if self.is_partition_match(value, partition): - return self.format_partition_output(partition) - - # return the default partition if no matches were found - aws_partition = partitions[0] - return self.format_partition_output(aws_partition) - - def aws_parse_arn(self, value): - """Parse and validate string for ARN components. - - :type value: str - :rtype: dict - """ - if value is None or not value.startswith("arn:"): - return None - - try: - arn_dict = ARN_PARSER.parse_arn(value) - except InvalidArnException: - return None - - # partition, resource, and service are required - if not all( - (arn_dict["partition"], arn_dict["service"], arn_dict["resource"]) - ): - return None - - arn_dict["accountId"] = arn_dict.pop("account") - - resource = arn_dict.pop("resource") - arn_dict["resourceId"] = resource.replace(":", "/").split("/") - - return arn_dict - - def is_valid_host_label(self, value, allow_subdomains): - """Evaluates whether a value is a valid host label per - RFC 1123. If allow_subdomains is True, split on `.` and validate - each component separately. 
- - :type value: str - :type allow_subdomains: bool - :rtype: bool - """ - if value is None or allow_subdomains is False and value.count(".") > 0: - return False - - if allow_subdomains is True: - return all( - self.is_valid_host_label(label, False) - for label in value.split(".") - ) - - return VALID_HOST_LABEL_RE.match(value) is not None - - def string_equals(self, value1, value2): - """Evaluates two string values for equality. - - :type value1: str - :type value2: str - :rtype: bool - """ - if not all(isinstance(val, str) for val in (value1, value2)): - msg = f"Both values must be strings, not {type(value1)} and {type(value2)}." - raise EndpointResolutionError(msg=msg) - return value1 == value2 - - def uri_encode(self, value): - """Perform percent-encoding on an input string. - - :type value: str - :rytpe: str - """ - if value is None: - return None - - return percent_encode(value) - - def parse_url(self, value): - """Parse a URL string into components. - - :type value: str - :rtype: dict - """ - if value is None: - return None - - url_components = urlparse(value) - try: - # url_parse may assign non-integer values to - # `port` and will fail when accessed. - url_components.port - except ValueError: - return None - - scheme = url_components.scheme - query = url_components.query - # URLs with queries are not supported - if scheme not in ("https", "http") or len(query) > 0: - return None - - path = url_components.path - normalized_path = quote(normalize_url_path(path)) - if not normalized_path.endswith("/"): - normalized_path = f"{normalized_path}/" - - return { - "scheme": scheme, - "authority": url_components.netloc, - "path": path, - "normalizedPath": normalized_path, - "isIp": is_valid_ipv4_endpoint_url(value) - or is_valid_ipv6_endpoint_url(value), - } - - def boolean_equals(self, value1, value2): - """Evaluates two boolean values for equality. - - :type value1: bool - :type value2: bool - :rtype: bool - """ - if not all(isinstance(val, bool) for val in (value1, value2)): - msg = f"Both arguments must be bools, not {type(value1)} and {type(value2)}." - raise EndpointResolutionError(msg=msg) - return value1 is value2 - - def is_ascii(self, value): - """Evaluates if a string only contains ASCII characters. - - :type value: str - :rtype: bool - """ - try: - value.encode("ascii") - return True - except UnicodeEncodeError: - return False - - def substring(self, value, start, stop, reverse): - """Computes a substring given the start index and end index. If `reverse` is - True, slice the string from the end instead. - - :type value: str - :type start: int - :type end: int - :type reverse: bool - :rtype: str - """ - if not isinstance(value, str): - msg = f"Input must be a string, not {type(value)}." - raise EndpointResolutionError(msg=msg) - if start >= stop or len(value) < stop or not self.is_ascii(value): - return None - - if reverse is True: - r_start = len(value) - stop - r_stop = len(value) - start - return value[r_start:r_stop] - - return value[start:stop] - - def _not(self, value): - """A function implementation of the logical operator `not`. - - :type value: Any - :rtype: bool - """ - return not value - - def aws_is_virtual_hostable_s3_bucket(self, value, allow_subdomains): - """Evaluates whether a value is a valid bucket name for virtual host - style bucket URLs. To pass, the value must meet the following criteria: - 1. is_valid_host_label(value) is True - 2. length between 3 and 63 characters (inclusive) - 3. does not contain uppercase characters - 4. 
is not formatted as an IP address - - If allow_subdomains is True, split on `.` and validate - each component separately. - - :type value: str - :type allow_subdomains: bool - :rtype: bool - """ - if ( - value is None - or len(value) < 3 - or value.lower() != value - or IPV4_RE.match(value) is not None - ): - return False - - if allow_subdomains is True: - return all( - self.aws_is_virtual_hostable_s3_bucket(label, False) - for label in value.split(".") - ) - - return self.is_valid_host_label(value, allow_subdomains=False) - - -# maintains backwards compatibility as `Library` was misspelled -# in earlier versions -RuleSetStandardLibary = RuleSetStandardLibrary - - -class BaseRule: - """Base interface for individual endpoint rules.""" - - def __init__(self, conditions, documentation=None): - self.conditions = conditions - self.documentation = documentation - - def evaluate(self, scope_vars, rule_lib): - raise NotImplementedError() - - def evaluate_conditions(self, scope_vars, rule_lib): - """Determine if all conditions in a rule are met. - - :type scope_vars: dict - :type rule_lib: RuleSetStandardLibrary - :rtype: bool - """ - for func_signature in self.conditions: - result = rule_lib.call_function(func_signature, scope_vars) - if result is False or result is None: - return False - return True - - -class RuleSetEndpoint(NamedTuple): - """A resolved endpoint object returned by a rule.""" - - url: str - properties: dict - headers: dict - - -class EndpointRule(BaseRule): - def __init__(self, endpoint, **kwargs): - super().__init__(**kwargs) - self.endpoint = endpoint - - def evaluate(self, scope_vars, rule_lib): - """Determine if conditions are met to provide a valid endpoint. - - :type scope_vars: dict - :rtype: RuleSetEndpoint - """ - if self.evaluate_conditions(scope_vars, rule_lib): - url = rule_lib.resolve_value(self.endpoint["url"], scope_vars) - properties = self.resolve_properties( - self.endpoint.get("properties", {}), - scope_vars, - rule_lib, - ) - headers = self.resolve_headers(scope_vars, rule_lib) - return RuleSetEndpoint( - url=url, properties=properties, headers=headers - ) - - return None - - def resolve_properties(self, properties, scope_vars, rule_lib): - """Traverse `properties` attribute, resolving any template strings. - - :type properties: dict/list/str - :type scope_vars: dict - :type rule_lib: RuleSetStandardLibrary - :rtype: dict - """ - if isinstance(properties, list): - return [ - self.resolve_properties(prop, scope_vars, rule_lib) - for prop in properties - ] - elif isinstance(properties, dict): - return { - key: self.resolve_properties(value, scope_vars, rule_lib) - for key, value in properties.items() - } - elif rule_lib.is_template(properties): - return rule_lib.resolve_template_string(properties, scope_vars) - - return properties - - def resolve_headers(self, scope_vars, rule_lib): - """Iterate through headers attribute resolving all values. - - :type scope_vars: dict - :type rule_lib: RuleSetStandardLibrary - :rtype: dict - """ - resolved_headers = {} - headers = self.endpoint.get("headers", {}) - - for header, values in headers.items(): - resolved_headers[header] = [ - rule_lib.resolve_value(item, scope_vars) for item in values - ] - return resolved_headers - - -class ErrorRule(BaseRule): - def __init__(self, error, **kwargs): - super().__init__(**kwargs) - self.error = error - - def evaluate(self, scope_vars, rule_lib): - """If an error rule's conditions are met, raise an error rule. 
- - :type scope_vars: dict - :type rule_lib: RuleSetStandardLibrary - :rtype: EndpointResolutionError - """ - if self.evaluate_conditions(scope_vars, rule_lib): - error = rule_lib.resolve_value(self.error, scope_vars) - raise EndpointResolutionError(msg=error) - return None - - -class TreeRule(BaseRule): - """A tree rule is non-terminal meaning it will never be returned to a provider. - Additionally this means it has no attributes that need to be resolved. - """ - - def __init__(self, rules, **kwargs): - super().__init__(**kwargs) - self.rules = [RuleCreator.create(**rule) for rule in rules] - - def evaluate(self, scope_vars, rule_lib): - """If a tree rule's conditions are met, iterate its sub-rules - and return first result found. - - :type scope_vars: dict - :type rule_lib: RuleSetStandardLibrary - :rtype: RuleSetEndpoint/EndpointResolutionError - """ - if self.evaluate_conditions(scope_vars, rule_lib): - for rule in self.rules: - # don't share scope_vars between rules - rule_result = rule.evaluate(scope_vars.copy(), rule_lib) - if rule_result: - return rule_result - return None - - -class RuleCreator: - - endpoint = EndpointRule - error = ErrorRule - tree = TreeRule - - @classmethod - def create(cls, **kwargs): - """Create a rule instance from metadata. - - :rtype: TreeRule/EndpointRule/ErrorRule - """ - rule_type = kwargs.pop("type") - try: - rule_class = getattr(cls, rule_type) - except AttributeError: - raise EndpointResolutionError( - msg=f"Unknown rule type: {rule_type}. A rule must " - "be of type tree, endpoint or error." - ) - else: - return rule_class(**kwargs) - - -class ParameterType(Enum): - """Translation from `type` attribute to native Python type.""" - - string = str - boolean = bool - - -class ParameterDefinition: - """The spec of an individual parameter defined in a RuleSet.""" - - def __init__( - self, - name, - parameter_type, - documentation=None, - builtIn=None, - default=None, - required=None, - deprecated=None, - ): - self.name = name - try: - self.parameter_type = getattr( - ParameterType, parameter_type.lower() - ).value - except AttributeError: - raise EndpointResolutionError( - msg=f"Unknown parameter type: {parameter_type}. " - "A parameter must be of type string or boolean." - ) - self.documentation = documentation - self.builtin = builtIn - self.default = default - self.required = required - self.deprecated = deprecated - - def validate_input(self, value): - """Perform base validation on parameter input. - - :type value: Any - :raises: EndpointParametersError - """ - - if not isinstance(value, self.parameter_type): - raise EndpointResolutionError( - msg=f"Value ({self.name}) is the wrong " - f"type. Must be {self.parameter_type}." - ) - if self.deprecated is not None: - depr_str = f"{self.name} has been deprecated." - msg = self.deprecated.get("message") - since = self.deprecated.get("since") - if msg: - depr_str += f"\n{msg}" - if since: - depr_str += f"\nDeprecated since {since}." 
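The `RuleCreator.create` dispatch above is just a class-attribute lookup keyed by the rule's `type` field. A stripped-down, self-contained sketch of the same pattern (with stand-in rule classes, not botocore's) looks like this:

```python
class _Endpoint:
    def __init__(self, **kwargs):
        self.spec = kwargs


class _Error:
    def __init__(self, **kwargs):
        self.spec = kwargs


class _Creator:
    endpoint = _Endpoint
    error = _Error

    @classmethod
    def create(cls, **kwargs):
        # Pop the discriminator and look the class up as a class attribute,
        # the same getattr(cls, rule_type) trick RuleCreator uses.
        rule_type = kwargs.pop("type")
        return getattr(cls, rule_type)(**kwargs)


rule = _Creator.create(type="endpoint", url="https://example.com")
print(type(rule).__name__)  # _Endpoint
```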
- logger.info(depr_str) - - return None - - def process_input(self, value): - """Process input against spec, applying default if value is None.""" - if value is None: - if self.default is not None: - return self.default - if self.required: - raise EndpointResolutionError( - f"Cannot find value for required parameter {self.name}" - ) - # in all other cases, the parameter will keep the value None - else: - self.validate_input(value) - return value - - -class RuleSet: - """Collection of rules to derive a routable service endpoint.""" - - def __init__( - self, version, parameters, rules, partitions, documentation=None - ): - self.version = version - self.parameters = self._ingest_parameter_spec(parameters) - self.rules = [RuleCreator.create(**rule) for rule in rules] - self.rule_lib = RuleSetStandardLibrary(partitions) - self.documentation = documentation - - def _ingest_parameter_spec(self, parameters): - return { - name: ParameterDefinition( - name, - spec["type"], - spec.get("documentation"), - spec.get("builtIn"), - spec.get("default"), - spec.get("required"), - spec.get("deprecated"), - ) - for name, spec in parameters.items() - } - - def process_input_parameters(self, input_params): - """Process each input parameter against its spec. - - :type input_params: dict - """ - for name, spec in self.parameters.items(): - value = spec.process_input(input_params.get(name)) - if value is not None: - input_params[name] = value - return None - - def evaluate(self, input_parameters): - """Evaluate input parameters against rules returning first match. - - :type input_parameters: dict - """ - self.process_input_parameters(input_parameters) - for rule in self.rules: - evaluation = rule.evaluate(input_parameters.copy(), self.rule_lib) - if evaluation is not None: - return evaluation - return None - - -class EndpointProvider: - """Derives endpoints from a RuleSet for given input parameters.""" - - def __init__(self, ruleset_data, partition_data): - self.ruleset = RuleSet(**ruleset_data, partitions=partition_data) - - @lru_cache_weakref(maxsize=CACHE_SIZE) - def resolve_endpoint(self, **input_parameters): - """Match input parameters to a rule. 
- - :type input_parameters: dict - :rtype: RuleSetEndpoint - """ - params_for_error = input_parameters.copy() - endpoint = self.ruleset.evaluate(input_parameters) - if endpoint is None: - param_string = "\n".join( - [f"{key}: {value}" for key, value in params_for_error.items()] - ) - raise EndpointResolutionError( - msg=f"No endpoint found for parameters:\n{param_string}" - ) - return endpoint diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_collections.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_collections.py deleted file mode 100644 index da9857e986d89acac3ba05a6735dc08c249bde1a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_collections.py +++ /dev/null @@ -1,337 +0,0 @@ -from __future__ import absolute_import - -try: - from collections.abc import Mapping, MutableMapping -except ImportError: - from collections import Mapping, MutableMapping -try: - from threading import RLock -except ImportError: # Platform-specific: No threads available - - class RLock: - def __enter__(self): - pass - - def __exit__(self, exc_type, exc_value, traceback): - pass - - -from collections import OrderedDict - -from .exceptions import InvalidHeader -from .packages import six -from .packages.six import iterkeys, itervalues - -__all__ = ["RecentlyUsedContainer", "HTTPHeaderDict"] - - -_Null = object() - - -class RecentlyUsedContainer(MutableMapping): - """ - Provides a thread-safe dict-like container which maintains up to - ``maxsize`` keys while throwing away the least-recently-used keys beyond - ``maxsize``. - - :param maxsize: - Maximum number of recent elements to retain. - - :param dispose_func: - Every time an item is evicted from the container, - ``dispose_func(value)`` is called. Callback which will get called - """ - - ContainerCls = OrderedDict - - def __init__(self, maxsize=10, dispose_func=None): - self._maxsize = maxsize - self.dispose_func = dispose_func - - self._container = self.ContainerCls() - self.lock = RLock() - - def __getitem__(self, key): - # Re-insert the item, moving it to the end of the eviction line. - with self.lock: - item = self._container.pop(key) - self._container[key] = item - return item - - def __setitem__(self, key, value): - evicted_value = _Null - with self.lock: - # Possibly evict the existing value of 'key' - evicted_value = self._container.get(key, _Null) - self._container[key] = value - - # If we didn't evict an existing value, we might have to evict the - # least recently used item from the beginning of the container. - if len(self._container) > self._maxsize: - _key, evicted_value = self._container.popitem(last=False) - - if self.dispose_func and evicted_value is not _Null: - self.dispose_func(evicted_value) - - def __delitem__(self, key): - with self.lock: - value = self._container.pop(key) - - if self.dispose_func: - self.dispose_func(value) - - def __len__(self): - with self.lock: - return len(self._container) - - def __iter__(self): - raise NotImplementedError( - "Iteration over this class is unlikely to be threadsafe." 
- ) - - def clear(self): - with self.lock: - # Copy pointers to all values, then wipe the mapping - values = list(itervalues(self._container)) - self._container.clear() - - if self.dispose_func: - for value in values: - self.dispose_func(value) - - def keys(self): - with self.lock: - return list(iterkeys(self._container)) - - -class HTTPHeaderDict(MutableMapping): - """ - :param headers: - An iterable of field-value pairs. Must not contain multiple field names - when compared case-insensitively. - - :param kwargs: - Additional field-value pairs to pass in to ``dict.update``. - - A ``dict`` like container for storing HTTP Headers. - - Field names are stored and compared case-insensitively in compliance with - RFC 7230. Iteration provides the first case-sensitive key seen for each - case-insensitive pair. - - Using ``__setitem__`` syntax overwrites fields that compare equal - case-insensitively in order to maintain ``dict``'s api. For fields that - compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add`` - in a loop. - - If multiple fields that are equal case-insensitively are passed to the - constructor or ``.update``, the behavior is undefined and some will be - lost. - - >>> headers = HTTPHeaderDict() - >>> headers.add('Set-Cookie', 'foo=bar') - >>> headers.add('set-cookie', 'baz=quxx') - >>> headers['content-length'] = '7' - >>> headers['SET-cookie'] - 'foo=bar, baz=quxx' - >>> headers['Content-Length'] - '7' - """ - - def __init__(self, headers=None, **kwargs): - super(HTTPHeaderDict, self).__init__() - self._container = OrderedDict() - if headers is not None: - if isinstance(headers, HTTPHeaderDict): - self._copy_from(headers) - else: - self.extend(headers) - if kwargs: - self.extend(kwargs) - - def __setitem__(self, key, val): - self._container[key.lower()] = [key, val] - return self._container[key.lower()] - - def __getitem__(self, key): - val = self._container[key.lower()] - return ", ".join(val[1:]) - - def __delitem__(self, key): - del self._container[key.lower()] - - def __contains__(self, key): - return key.lower() in self._container - - def __eq__(self, other): - if not isinstance(other, Mapping) and not hasattr(other, "keys"): - return False - if not isinstance(other, type(self)): - other = type(self)(other) - return dict((k.lower(), v) for k, v in self.itermerged()) == dict( - (k.lower(), v) for k, v in other.itermerged() - ) - - def __ne__(self, other): - return not self.__eq__(other) - - if six.PY2: # Python 2 - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - - __marker = object() - - def __len__(self): - return len(self._container) - - def __iter__(self): - # Only provide the originally cased names - for vals in self._container.values(): - yield vals[0] - - def pop(self, key, default=__marker): - """D.pop(k[,d]) -> v, remove specified key and return the corresponding value. - If key is not found, d is returned if given, otherwise KeyError is raised. - """ - # Using the MutableMapping function directly fails due to the private marker. - # Using ordinary dict.pop would expose the internal structures. - # So let's reinvent the wheel. - try: - value = self[key] - except KeyError: - if default is self.__marker: - raise - return default - else: - del self[key] - return value - - def discard(self, key): - try: - del self[key] - except KeyError: - pass - - def add(self, key, val): - """Adds a (name, value) pair, doesn't overwrite the value if it already - exists. 
- - >>> headers = HTTPHeaderDict(foo='bar') - >>> headers.add('Foo', 'baz') - >>> headers['foo'] - 'bar, baz' - """ - key_lower = key.lower() - new_vals = [key, val] - # Keep the common case aka no item present as fast as possible - vals = self._container.setdefault(key_lower, new_vals) - if new_vals is not vals: - vals.append(val) - - def extend(self, *args, **kwargs): - """Generic import function for any type of header-like object. - Adapted version of MutableMapping.update in order to insert items - with self.add instead of self.__setitem__ - """ - if len(args) > 1: - raise TypeError( - "extend() takes at most 1 positional " - "arguments ({0} given)".format(len(args)) - ) - other = args[0] if len(args) >= 1 else () - - if isinstance(other, HTTPHeaderDict): - for key, val in other.iteritems(): - self.add(key, val) - elif isinstance(other, Mapping): - for key in other: - self.add(key, other[key]) - elif hasattr(other, "keys"): - for key in other.keys(): - self.add(key, other[key]) - else: - for key, value in other: - self.add(key, value) - - for key, value in kwargs.items(): - self.add(key, value) - - def getlist(self, key, default=__marker): - """Returns a list of all the values for the named field. Returns an - empty list if the key doesn't exist.""" - try: - vals = self._container[key.lower()] - except KeyError: - if default is self.__marker: - return [] - return default - else: - return vals[1:] - - # Backwards compatibility for httplib - getheaders = getlist - getallmatchingheaders = getlist - iget = getlist - - # Backwards compatibility for http.cookiejar - get_all = getlist - - def __repr__(self): - return "%s(%s)" % (type(self).__name__, dict(self.itermerged())) - - def _copy_from(self, other): - for key in other: - val = other.getlist(key) - if isinstance(val, list): - # Don't need to convert tuples - val = list(val) - self._container[key.lower()] = [key] + val - - def copy(self): - clone = type(self)() - clone._copy_from(self) - return clone - - def iteritems(self): - """Iterate over all header lines, including duplicate ones.""" - for key in self: - vals = self._container[key.lower()] - for val in vals[1:]: - yield vals[0], val - - def itermerged(self): - """Iterate over all headers, merging duplicate ones together.""" - for key in self: - val = self._container[key.lower()] - yield val[0], ", ".join(val[1:]) - - def items(self): - return list(self.iteritems()) - - @classmethod - def from_httplib(cls, message): # Python 2 - """Read headers from a Python 2 httplib message object.""" - # python2.7 does not expose a proper API for exporting multiheaders - # efficiently. This function re-reads raw lines from the message - # object and extracts the multiheaders properly. - obs_fold_continued_leaders = (" ", "\t") - headers = [] - - for line in message.headers: - if line.startswith(obs_fold_continued_leaders): - if not headers: - # We received a header line that starts with OWS as described - # in RFC-7230 S3.2.4. This indicates a multiline header, but - # there exists no previous header to which we can attach it. 
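Putting the `add`/`extend`/`getlist` behaviour documented above together, a short usage sketch (assuming a urllib3 1.x environment where this vendored module is importable as `urllib3._collections`):

```python
from urllib3._collections import HTTPHeaderDict

headers = HTTPHeaderDict()
headers.add("Set-Cookie", "foo=bar")
headers.add("set-cookie", "baz=quxx")      # same field name, value is kept
headers.extend({"Content-Length": "7"})

print(headers["SET-cookie"])               # 'foo=bar, baz=quxx'
print(headers.getlist("set-cookie"))       # ['foo=bar', 'baz=quxx']
print(list(headers.iteritems()))           # duplicate fields preserved
```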
- raise InvalidHeader( - "Header continuation with no previous header: %s" % line - ) - else: - key, value = headers[-1] - headers[-1] = (key, value + " " + line.strip()) - continue - - key, value = line.split(":", 1) - headers.append((key, value.strip())) - - return cls(headers) diff --git a/spaces/Boynn/AI/README.md b/spaces/Boynn/AI/README.md deleted file mode 100644 index ef27320abc0c2cbbecd4fe060aa04b84f619ceb1..0000000000000000000000000000000000000000 --- a/spaces/Boynn/AI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI -emoji: 🏆 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BridgeTower/bridgetower-video-search/bridgetower_custom.py b/spaces/BridgeTower/bridgetower-video-search/bridgetower_custom.py deleted file mode 100644 index e2a36504c7d80df7f82f5249221f7ef56b98b769..0000000000000000000000000000000000000000 --- a/spaces/BridgeTower/bridgetower-video-search/bridgetower_custom.py +++ /dev/null @@ -1,183 +0,0 @@ -from collections import OrderedDict -from typing import List, Optional, Tuple, Union - -import torch -from torch import nn -import torch.nn.functional as F - -from torchvision import transforms -from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize - -from transformers.modeling_outputs import SequenceClassifierOutput - -from transformers import BridgeTowerPreTrainedModel, BridgeTowerModel -from transformers.models.bridgetower.modeling_bridgetower import BridgeTowerTextModel - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - -class BridgeTowerImageFeatureExtractor(nn.Module): - def __init__( - self, - patch_size=14, - width=1024, - resolution_after=294, - ckpt_path=None, - ): - super().__init__() - - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((resolution_after // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - if ckpt_path is not None: - sd = torch.load(ckpt_path) - if 'state_dict' in sd: - sd = sd["state_dict"] - print(f'Loading feature extractor checkpoint from {ckpt_path}') - self.load_state_dict(sd) - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - t=self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device) - x = torch.cat([t, x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - x = x.permute(1, 0, 2) # NLD -> LND - return x - - -class BridgeTowerITCHead(nn.Module): - def __init__(self, hidden_size, embed_size): - super().__init__() - self.fc = nn.Linear(hidden_size, embed_size) - - def forward(self, x): - x = self.fc(x) - return x - - -class _BridgeTowerTextModelWrapper(nn.Module): - def __init__(self, config): - super().__init__() - self.text_model = BridgeTowerTextModel(config) - - def forward(self, **kwargs): - return 
self.text_model(**kwargs) - - -class BridgeTowerTextFeatureExtractor(BridgeTowerPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bridgetower = _BridgeTowerTextModelWrapper(config.text_config) - self.itc_text_head = BridgeTowerITCHead(config.hidden_size, config.contrastive_hidden_size) - - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[torch.LongTensor] = None, - ): - - outputs = self.bridgetower(input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True) - final_hidden_cls = outputs.hidden_states[-1][:,0,:] - final_hidden_cls = F.normalize(self.itc_text_head(final_hidden_cls), dim=-1, p=2) - - return final_hidden_cls - - -class BridgeTowerForITC(BridgeTowerPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bridgetower = BridgeTowerModel(config) - - self.itc_text_head = BridgeTowerITCHead(config.hidden_size, config.contrastive_hidden_size) - self.itc_image_head = BridgeTowerITCHead(config.hidden_size, config.contrastive_hidden_size) - self.itc_cross_modal_head = BridgeTowerITCHead(config.hidden_size * 2, config.contrastive_hidden_size) - - # Initialize weights and apply final processing - self.post_init() - - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - pixel_values: Optional[torch.FloatTensor] = None, - pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - image_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[torch.LongTensor] = None, - ) -> Union[SequenceClassifierOutput, Tuple[torch.FloatTensor]]: - - assert output_hidden_states, 'output_hidden_states should be set to True for BridgeTowerForITC' - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bridgetower( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - pixel_values=pixel_values, - pixel_mask=pixel_mask, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - image_embeds=image_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooler_output = outputs.pooler_output if return_dict else outputs[2] - - hidden_states_txt, hidden_states_img, hidden_states_cross_modal = outputs.hidden_states - - final_hidden_txt = hidden_states_txt[-1] - final_hidden_img = hidden_states_img[-1] - - image_embeds_with_ln = self.bridgetower.vision_model.visual.forward_post(final_hidden_img) - image_token_type_embeddings = self.bridgetower.token_type_embeddings( - torch.full((1,), 1, dtype=torch.long, device=self.bridgetower.token_type_embeddings.weight.device) - ).expand_as(image_embeds_with_ln) - - final_hidden_img = ( - self.bridgetower.cross_modal_image_transform(image_embeds_with_ln) - + image_token_type_embeddings - ) - - final_hidden_txt = 
F.normalize(self.itc_text_head(final_hidden_txt[:,0,:]), dim=-1, p=2) - final_hidden_img = F.normalize(self.itc_image_head(final_hidden_img[:,0,:]), dim=-1, p=2) - final_hidden_cross = F.normalize(self.itc_cross_modal_head(pooler_output), dim=-1, p=2) - - logits = torch.stack([final_hidden_txt, final_hidden_img, final_hidden_cross], dim=-2) - - if not return_dict: - return tuple(logits) - - return SequenceClassifierOutput( - loss=None, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/CALM/Dashboard/streamlit_observable/frontend/build/service-worker.js b/spaces/CALM/Dashboard/streamlit_observable/frontend/build/service-worker.js deleted file mode 100644 index dc58040d0d0c083e829902c37df2ba329abb09eb..0000000000000000000000000000000000000000 --- a/spaces/CALM/Dashboard/streamlit_observable/frontend/build/service-worker.js +++ /dev/null @@ -1,39 +0,0 @@ -/** - * Welcome to your Workbox-powered service worker! - * - * You'll need to register this file in your web app and you should - * disable HTTP caching for this file too. - * See https://goo.gl/nhQhGp - * - * The rest of the code is auto-generated. Please don't update this file - * directly; instead, make changes to your Workbox build configuration - * and re-run your build process. - * See https://goo.gl/2aRDsh - */ - -importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js"); - -importScripts( - "./precache-manifest.2e1db2924cb1e112608cee049b0d33cc.js" -); - -self.addEventListener('message', (event) => { - if (event.data && event.data.type === 'SKIP_WAITING') { - self.skipWaiting(); - } -}); - -workbox.core.clientsClaim(); - -/** - * The workboxSW.precacheAndRoute() method efficiently caches and responds to - * requests for URLs in the manifest. - * See https://goo.gl/S9QRab - */ -self.__precacheManifest = [].concat(self.__precacheManifest || []); -workbox.precaching.precacheAndRoute(self.__precacheManifest, {}); - -workbox.routing.registerNavigationRoute(workbox.precaching.getCacheKeyForURL("./index.html"), { - - blacklist: [/^\/_/,/\/[^/?]+\.[^/]+$/], -}); diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/evaluation.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/evaluation.md deleted file mode 100644 index a0c21d0a3fe1313208a57cef2c786d60d904e9e3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/evaluation.md +++ /dev/null @@ -1,43 +0,0 @@ - -# Evaluation - -Evaluation is a process that takes a number of inputs/outputs pairs and aggregate them. -You can always [use the model](models.html) directly and just parse its inputs/outputs manually to perform -evaluation. -Alternatively, evaluation is implemented in detectron2 using the [DatasetEvaluator](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluator) -interface. - -Detectron2 includes a few `DatasetEvaluator` that computes metrics using standard dataset-specific -APIs (e.g., COCO, LVIS). -You can also implement your own `DatasetEvaluator` that performs some other jobs -using the inputs/outputs pairs. -For example, to count how many instances are detected on the validation set: - -``` -class Counter(DatasetEvaluator): - def reset(self): - self.count = 0 - def process(self, inputs, outputs): - for output in outputs: - self.count += len(output["instances"]) - def evaluate(self): - # save self.count somewhere, or print it, or return it. 
- return {"count": self.count} -``` - -Once you have some `DatasetEvaluator`, you can run it with -[inference_on_dataset](../modules/evaluation.html#detectron2.evaluation.inference_on_dataset). -For example, - -```python -val_results = inference_on_dataset( - model, - val_data_loader, - DatasetEvaluators([COCOEvaluator(...), Counter()])) -``` -Compared to running the evaluation manually using the model, the benefit of this function is that -you can merge evaluators together using [DatasetEvaluators](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluators). -In this way you can run all evaluations without having to go through the dataset multiple times. - -The `inference_on_dataset` function also provides accurate speed benchmarks for the -given model and dataset. diff --git a/spaces/CVPR/LIVE/thrust/thrust/sequence.h b/spaces/CVPR/LIVE/thrust/thrust/sequence.h deleted file mode 100644 index e92391f64e1fd7d4fd82e08b662b45d285b45fa8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/sequence.h +++ /dev/null @@ -1,296 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file sequence.h - * \brief Fills a range with a sequence of numbers - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup transformations - * \{ - */ - - -/*! \p sequence fills the range [first, last) with a sequence of numbers. - * - * For each iterator \c i in the range [first, last), this version of - * \p sequence performs the assignment *i = (i - first). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * - * The following code snippet demonstrates how to use \p sequence to fill a range - * with a sequence of numbers using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * ... - * const int N = 10; - * int A[N]; - * thrust::sequence(thrust::host, A, A + 10); - * // A is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} - * \endcode - * - * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no - * guarantee on order of execution. - * - * \see http://www.sgi.com/tech/stl/iota.html - */ -template -__host__ __device__ - void sequence(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last); - - -/*! \p sequence fills the range [first, last) with a sequence of numbers. 
- * - * For each iterator \c i in the range [first, last), this version of - * \p sequence performs the assignment *i = (i - first). - * - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * - * The following code snippet demonstrates how to use \p sequence to fill a range - * with a sequence of numbers. - * - * \code - * #include - * ... - * const int N = 10; - * int A[N]; - * thrust::sequence(A, A + 10); - * // A is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} - * \endcode - * - * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no - * guarantee on order of execution. - * - * \see http://www.sgi.com/tech/stl/iota.html - */ -template - void sequence(ForwardIterator first, - ForwardIterator last); - - -/*! \p sequence fills the range [first, last) with a sequence of numbers. - * - * For each iterator \c i in the range [first, last), this version of - * \p sequence performs the assignment *i = init + (i - first). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * \param init The first value of the sequence of numbers. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * \tparam T is a model of Assignable, - * and \p T is convertible to \p ForwardIterator's \c value_type. - * - * The following code snippet demonstrates how to use \p sequence to fill a range - * with a sequence of numbers starting from the value 1 using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * ... - * const int N = 10; - * int A[N]; - * thrust::sequence(thrust::host, A, A + 10, 1); - * // A is now {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} - * \endcode - * - * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no - * guarantee on order of execution. - * - * \see http://www.sgi.com/tech/stl/iota.html - */ -template -__host__ __device__ - void sequence(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - T init); - - -/*! \p sequence fills the range [first, last) with a sequence of numbers. - * - * For each iterator \c i in the range [first, last), this version of - * \p sequence performs the assignment *i = init + (i - first). - * - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * \param init The first value of the sequence of numbers. - * - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * \tparam T is a model of Assignable, - * and \p T is convertible to \p ForwardIterator's \c value_type. 
- * - * The following code snippet demonstrates how to use \p sequence to fill a range - * with a sequence of numbers starting from the value 1. - * - * \code - * #include - * ... - * const int N = 10; - * int A[N]; - * thrust::sequence(A, A + 10, 1); - * // A is now {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} - * \endcode - * - * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no - * guarantee on order of execution. - * - * \see http://www.sgi.com/tech/stl/iota.html - */ -template - void sequence(ForwardIterator first, - ForwardIterator last, - T init); - - -/*! \p sequence fills the range [first, last) with a sequence of numbers. - * - * For each iterator \c i in the range [first, last), this version of - * \p sequence performs the assignment *i = init + step * (i - first). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * \param init The first value of the sequence of numbers - * \param step The difference between consecutive elements. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * \tparam T is a model of Assignable, - * and \p T is convertible to \p ForwardIterator's \c value_type. - * - * The following code snippet demonstrates how to use \p sequence to fill a range - * with a sequence of numbers starting from the value 1 with a step size of 3 using the \p thrust::host - * execution policy for parallelization: - * - * \code - * #include - * #include - * ... - * const int N = 10; - * int A[N]; - * thrust::sequence(thrust::host, A, A + 10, 1, 3); - * // A is now {1, 4, 7, 10, 13, 16, 19, 22, 25, 28} - * \endcode - * - * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no - * guarantee on order of execution. - * - * \see http://www.sgi.com/tech/stl/iota.html - */ -template -__host__ __device__ - void sequence(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - T init, - T step); - - -/*! \p sequence fills the range [first, last) with a sequence of numbers. - * - * For each iterator \c i in the range [first, last), this version of - * \p sequence performs the assignment *i = init + step * (i - first). - * - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * \param init The first value of the sequence of numbers - * \param step The difference between consecutive elements. - * - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable, - * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined, - * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined. - * \tparam T is a model of Assignable, - * and \p T is convertible to \p ForwardIterator's \c value_type. - * - * The following code snippet demonstrates how to use \p sequence to fill a range - * with a sequence of numbers starting from the value 1 with a step size of 3. - * - * \code - * #include - * ... 
- * const int N = 10;
- * int A[N];
- * thrust::sequence(A, A + 10, 1, 3);
- * // A is now {1, 4, 7, 10, 13, 16, 19, 22, 25, 28}
- * \endcode
- *
- * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no
- * guarantee on order of execution.
- *
- * \see http://www.sgi.com/tech/stl/iota.html
- */
-template<typename ForwardIterator, typename T>
-  void sequence(ForwardIterator first,
-                ForwardIterator last,
-                T init,
-                T step);
-
-
-/*! \} // end transformations
- */
-
-
-} // end namespace thrust
-
-#include <thrust/detail/sequence.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/transform_reduce.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/transform_reduce.h
deleted file mode 100644
index e3f9494dfa6e54bbfdeb2a51fabd8bebc2188e98..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/transform_reduce.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// the purpose of this header is to #include the transform_reduce.h header
-// of the sequential, host, and device systems. It should be #included in any
-// code which uses adl to dispatch transform_reduce
-
-#include <thrust/system/detail/sequential/transform_reduce.h>
-
-// SCons can't see through the #defines below to figure out what this header
-// includes, so we fake it out by specifying all possible files we might end up
-// including inside an #if 0.
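-// Illustrative sketch (assuming the stock CPP host and CUDA device system roots):
-// the #if 0 block below enumerates the candidate per-system headers, and with the
-// default backends the two macro-driven includes at the bottom of this file would
-// resolve to
-//
-//   #include <thrust/system/cpp/detail/transform_reduce.h>
-//   #include <thrust/system/cuda/detail/transform_reduce.h>
-//
-// i.e. exactly the per-system transform_reduce overloads that ADL dispatch needs to see.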
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_TRANSFORM_REDUCE_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/transform_reduce.h> -#include __THRUST_HOST_SYSTEM_TRANSFORM_REDUCE_HEADER -#undef __THRUST_HOST_SYSTEM_TRANSFORM_REDUCE_HEADER - -#define __THRUST_DEVICE_SYSTEM_TRANSFORM_REDUCE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/transform_reduce.h> -#include __THRUST_DEVICE_SYSTEM_TRANSFORM_REDUCE_HEADER -#undef __THRUST_DEVICE_SYSTEM_TRANSFORM_REDUCE_HEADER - diff --git a/spaces/CVPR/WALT/README.md b/spaces/CVPR/WALT/README.md deleted file mode 100644 index 006bc76eece809c527302d681447fda8e8757e10..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WALT DEMO -emoji: ⚡ -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/WALT/mmdet/models/backbones/__init__.py b/spaces/CVPR/WALT/mmdet/models/backbones/__init__.py deleted file mode 100644 index 11d7de7543b04e7040facb4472121e5c0f02ecaa..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/backbones/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .swin_transformer import SwinTransformer -from .resnet import ResNet, ResNetV1d -__all__ = ['SwinTransformer', 'ResNet', 'ResNetV1d'] diff --git a/spaces/CVPR/WALT/mmdet/models/backbones/swin_transformer.py b/spaces/CVPR/WALT/mmdet/models/backbones/swin_transformer.py deleted file mode 100644 index bb41850d8480a08a6a7698bf6129ffd1ab239681..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/backbones/swin_transformer.py +++ /dev/null @@ -1,630 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from mmcv_custom import load_checkpoint -from mmdet.utils import get_root_logger -from ..builder import BACKBONES - - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of 
image - W (int): Width of image - - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ Forward function. 
- - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. 
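-
-        Shape sketch (illustrative, with the default window_size=7): for an input of
-        B=1, H=W=14, C=96 the feature map needs no padding, window_partition yields
-        4 windows of shape (49, 96), attention runs independently inside each window
-        (after a cyclic shift of 3 pixels when shift_size=3), and window_reverse
-        restores the (1, 14, 14, 96) layout before the residual and MLP branches.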
- """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """ Patch Merging Layer - - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. 
Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -@BACKBONES.register_module() -class SwinTransformer(nn.Module): - """ Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
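-
-        Usage sketch (illustrative; the values below are simply the Swin-T defaults
-        listed above):
-
-            backbone = SwinTransformer(embed_dim=96, depths=[2, 2, 6, 2],
-                                       num_heads=[3, 6, 12, 24], window_size=7,
-                                       out_indices=(0, 1, 2, 3))
-            feats = backbone(torch.randn(1, 3, 224, 224))
-            # feats is a tuple of 4 maps with strides 4/8/16/32 and
-            # 96/192/384/768 channels (56x56, 28x28, 14x14, 7x7 for a 224 input)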
- """ - - def __init__(self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - if isinstance(pretrained, str): - self.apply(_init_weights) - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - self.apply(_init_weights) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic') - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = [] - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs.append(out) - - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/det_swin.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/det_swin.py deleted file mode 100644 index 1ec74aafa2393832fbe1a32e25780aef64e8e667..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/det_swin.py +++ /dev/null @@ -1,717 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- -import logging -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ -from detectron2.layers import ShapeSpec -from .backbone import Backbone - -logger = logging.getLogger(__name__) - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, 
window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - r""" Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + 
mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. 
- """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """ Patch Merging Layer - - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. 
Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(Backbone): - """ Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
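-
-        Output sketch (illustrative): with the defaults above and
-        out_features=["stage2", "stage3", "stage4", "stage5"], forward() returns a dict
-        mapping "stage2".."stage5" to feature maps of stride 4/8/16/32 with
-        96/192/384/768 channels for the Swin-T widths, matching what output_shape()
-        reports to detectron2.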
- """ - - def __init__(self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - out_features=["stage2", "stage3", "stage4", "stage5"], - use_checkpoint=False): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.out_features = out_features - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - self._out_feature_strides = {} - self._out_feature_channels = {} - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - stage = f'stage{i_layer + 2}' - if stage in self.out_features: - self._out_feature_channels[stage] = embed_dim * 2 ** i_layer - self._out_feature_strides[stage] = 4 * 2 ** i_layer - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - self.norm = norm_layer(self.num_features[-1]) - - self._freeze_stages() - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self.out_features - } - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - @torch.jit.ignore - def no_weight_decay(self): - return {'absolute_pos_embed'} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {'relative_position_bias_table', 'norm'} - - # def init_weights(self, pretrained=None): - # """Initialize the weights in backbone. - - # Args: - # pretrained (str, optional): Path to pre-trained weights. - # Defaults to None. 
- # """ - - # def _init_weights(m): - # if isinstance(m, nn.Linear): - # trunc_normal_(m.weight, std=.02) - # if isinstance(m, nn.Linear) and m.bias is not None: - # nn.init.constant_(m.bias, 0) - # elif isinstance(m, nn.LayerNorm): - # nn.init.constant_(m.bias, 0) - # nn.init.constant_(m.weight, 1.0) - - # if isinstance(pretrained, str): - # self.apply(_init_weights) - # logger = get_root_logger() - # load_checkpoint(self, pretrained, strict=False, logger=logger) - # elif pretrained is None: - # self.apply(_init_weights) - # else: - # raise TypeError('pretrained must be a str or None') - - def init_weights(self, pretrained='', pretrained_layers=[], verbose=True): - if not os.path.isfile(pretrained): - logger.warning(f'=> Pretrained model ({pretrained}) is not a file, skip init weight') - return - - pretrained_dict = torch.load(pretrained, map_location='cpu') - logger.info(f'=> Loading pretrained model {pretrained}') - model_dict = self.state_dict() - pretrained_dict = { - k: v for k, v in pretrained_dict.items() - if k in model_dict.keys() - } - need_init_state_dict = {} - for k, v in pretrained_dict.items(): - need_init = ( - ( - k.split('.')[0] in pretrained_layers - or pretrained_layers[0] == '*' - ) - and 'relative_position_index' not in k - and 'attn_mask' not in k - ) - - if need_init: - if verbose: - logger.info(f'=> init {k} from {pretrained}') - - if 'relative_position_bias_table' in k and v.size() != model_dict[k].size(): - relative_position_bias_table_pretrained = v - relative_position_bias_table_current = model_dict[k] - L1, nH1 = relative_position_bias_table_pretrained.size() - L2, nH2 = relative_position_bias_table_current.size() - if nH1 != nH2: - logger.info(f"Error in loading {k}, passing") - else: - if L1 != L2: - logger.info( - '=> load_pretrained: resized variant: {} to {}' - .format((L1, nH1), (L2, nH2)) - ) - S1 = int(L1 ** 0.5) - S2 = int(L2 ** 0.5) - relative_position_bias_table_pretrained_resized = torch.nn.functional.interpolate( - relative_position_bias_table_pretrained.permute(1, 0).view(1, nH1, S1, S1), - size=(S2, S2), - mode='bicubic') - v = relative_position_bias_table_pretrained_resized.view(nH2, L2).permute(1, 0) - - if 'absolute_pos_embed' in k and v.size() != model_dict[k].size(): - absolute_pos_embed_pretrained = v - absolute_pos_embed_current = model_dict[k] - _, L1, C1 = absolute_pos_embed_pretrained.size() - _, L2, C2 = absolute_pos_embed_current.size() - if C1 != C1: - logger.info(f"Error in loading {k}, passing") - else: - if L1 != L2: - logger.info( - '=> load_pretrained: resized variant: {} to {}' - .format((1, L1, C1), (1, L2, C2)) - ) - S1 = int(L1 ** 0.5) - S2 = int(L2 ** 0.5) - absolute_pos_embed_pretrained = absolute_pos_embed_pretrained.reshape(-1, S1, S1, C1) - absolute_pos_embed_pretrained = absolute_pos_embed_pretrained.permute(0, 3, 1, 2) - absolute_pos_embed_pretrained_resized = torch.nn.functional.interpolate( - absolute_pos_embed_pretrained, size=(S2, S2), mode='bicubic') - v = absolute_pos_embed_pretrained_resized.permute(0, 2, 3, 1).flatten(1, 2) - - need_init_state_dict[k] = v - self.load_state_dict(need_init_state_dict, strict=False) - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic') - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x 
= self.pos_drop(x) - - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - name = f'stage{i + 2}' - if name in self.out_features: - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs[name] = out - return outs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() \ No newline at end of file diff --git a/spaces/Chomkwoy/Nilkessye/README.md b/spaces/Chomkwoy/Nilkessye/README.md deleted file mode 100644 index 7a3fb4971833a4b749d969b68d9b6022b9ef87c9..0000000000000000000000000000000000000000 --- a/spaces/Chomkwoy/Nilkessye/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Nilkessye -emoji: 🏃 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 4.0.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/master.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/master.js deleted file mode 100644 index 8f238dce5b23ad3c79554a5341c270ecaf5873bb..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/master.js +++ /dev/null @@ -1,55 +0,0 @@ -import fs from "fs" -import { randomUUID } from "crypto" -let code = {} -let file = "config/config/other.yaml" -export class master extends plugin { - constructor () { - super({ - name: "设置主人", - dsc: "设置主人", - event: "message", - rule: [ - { - reg: "^#设置主人$", - fnc: "master" - } - ] - }) - } - - edit (file, key, value) { - let data = fs.readFileSync(file, "utf8") - if (data.match(RegExp(`- "?${value}"?`))) - return - value = `${key}:\n - "${value}"` - if (data.match(RegExp(`${key}:`))) - data = data.replace(RegExp(`${key}:`), value) - else - data = `${data}\n${value}` - fs.writeFileSync(file, data, "utf8") - } - - async master () { - if (this.e.isMaster) { - await this.reply(`账号:${this.e.user_id} 已经为主人`, true) - return false - } - - code[this.e.user_id] = randomUUID() - logger.mark(`${logger.cyan(`[${this.e.user_id}]`)} 设置主人验证码:${logger.green(code[this.e.user_id])}`) - this.setContext("verify") - await this.reply(`账号:${this.e.user_id} 请输入验证码`, true) - } - - async verify () { - this.finish("verify") - if (this.e.msg.trim() == code[this.e.user_id]) { - this.edit(file, "masterQQ", this.e.user_id) - this.edit(file, "master", `${this.e.self_id}:${this.e.user_id}`) - await this.reply(`账号:${this.e.user_id} 设置主人成功`, true) - } else { - await this.reply("验证码错误", true) - return false - } - } -} \ No newline at end of file diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/hold_grudge/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/hold_grudge/__init__.py deleted file mode 100644 index be7c196b6cba6522e927429bf64274ed5cf34ca8..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/hold_grudge/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -from datetime import datetime -from pathlib import Path -from typing import List - -from pil_utils import BuildImage, Text2Image - -from meme_generator import add_meme -from meme_generator.exception import TextOverLength - -img_dir = Path(__file__).parent / "images" - - -def hold_grudge(images, texts: List[str], args): - date = datetime.today().strftime("%Y{}%m{}%d{}").format("年", "月", "日") - text = f"{date} 晴\n{texts[0]}\n这个仇我先记下了" - text2image = 
Text2Image.from_text(text, 45, fill="black", spacing=10).wrap(440) - if len(text2image.lines) > 10: - raise TextOverLength(texts[0]) - text_img = text2image.to_image() - - frame = BuildImage.open(img_dir / "0.png") - bg = BuildImage.new( - "RGB", (frame.width, frame.height + text_img.height + 20), "white" - ) - bg.paste(frame).paste(text_img, (30, frame.height + 5), alpha=True) - return bg.save_jpg() - - -add_meme( - "hold_grudge", - hold_grudge, - min_texts=1, - max_texts=1, - default_texts=["群友不发涩图"], - keywords=["记仇"], -) diff --git a/spaces/Codecooker/rvcapi/src/infer_pack/models_onnx_moess.py b/spaces/Codecooker/rvcapi/src/infer_pack/models_onnx_moess.py deleted file mode 100644 index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000 --- a/spaces/Codecooker/rvcapi/src/infer_pack/models_onnx_moess.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, 
lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in 
enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 
* np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, 
padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for 
Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Copy233/copy/upcunet_v3.py b/spaces/Copy233/copy/upcunet_v3.py deleted file mode 100644 index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000 --- a/spaces/Copy233/copy/upcunet_v3.py +++ /dev/null @@ -1,714 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F -import os, sys -import numpy as np - -root_path = os.path.abspath('.') -sys.path.append(root_path) - - -class SEBlock(nn.Module): - def __init__(self, in_channels, reduction=8, bias=False): - super(SEBlock, self).__init__() - self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias) - self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias) - - def forward(self, x): - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half() - else: - x0 = torch.mean(x, dim=(2, 3), keepdim=True) - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - def forward_mean(self, x, x0): - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - -class UNetConv(nn.Module): - def __init__(self, in_channels, mid_channels, out_channels, se): - super(UNetConv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, mid_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - nn.Conv2d(mid_channels, out_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - ) - if se: - self.seblock = SEBlock(out_channels, reduction=8, bias=True) - else: - self.seblock = None 
- - def forward(self, x): - z = self.conv(x) - if self.seblock is not None: - z = self.seblock(z) - return z - - -class UNet1(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet1x3(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1x3, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet2(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet2, self).__init__() - - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 64, 128, se=True) - self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0) - self.conv3 = 
UNetConv(128, 256, 128, se=True) - self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0) - self.conv4 = UNetConv(128, 64, 64, se=True) - self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv5 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3(x3) - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4(x2 + x3) - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - def forward_a(self, x): # conv234结尾有se - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x2): # conv234结尾有se - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3.conv(x3) - return x3 - - def forward_c(self, x2, x3): # conv234结尾有se - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4.conv(x2 + x3) - return x4 - - def forward_d(self, x1, x4): # conv234结尾有se - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - -class UpCunet2x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet2x, self).__init__() - self.unet1 = UNet1(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - 
h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 36, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 36, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = 
opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2] - return res # - - -class UpCunet3x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet3x, self).__init__() - self.unet1 = UNet1x3(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 4 + 1) * 4 - pw = ((w0 - 1) // 4 + 1) * 4 - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_h = (h0 - 1) // 4 * 4 + 4 # 能被4整除 - else: - crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_w = (w0 - 1) // 4 * 4 + 4 # 能被4整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 28, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # 
torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 28, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop # - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3] - return res - - -class UpCunet4x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet4x, self).__init__() - self.unet1 = UNet1(in_channels, 64, deconv=True) - self.unet2 = UNet2(64, 64, deconv=False) - self.ps = nn.PixelShuffle(2) - self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True) - - def forward(self, x, tile_mode): - n, c, h0, w0 = x.shape - x00 = x - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - x = self.conv_final(x) - x = F.pad(x, (-1, -1, -1, -1)) - x = self.ps(x) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4] - x += F.interpolate(x00, scale_factor=4, mode='nearest') - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') - n, c, 
h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 38, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 38, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - x_crop = self.conv_final(x_crop) - x_crop = F.pad(x_crop, (-1, -1, -1, -1)) - x_crop = self.ps(x_crop) - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - 
# print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape) - res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4] - res += F.interpolate(x00, scale_factor=4, mode='nearest') - return res # - - -class RealWaifuUpScaler(object): - def __init__(self, scale, weight_path, half, device): - weight = torch.load(weight_path, map_location="cpu") - self.model = eval("UpCunet%sx" % scale)() - if (half == True): - self.model = self.model.half().to(device) - else: - self.model = self.model.to(device) - self.model.load_state_dict(weight, strict=True) - self.model.eval() - self.half = half - self.device = device - - def np2tensor(self, np_frame): - if (self.half == False): - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255 - else: - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255 - - def tensor2np(self, tensor): - if (self.half == False): - return ( - np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0))) - else: - return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), - (1, 2, 0))) - - def __call__(self, frame, tile_mode): - with torch.no_grad(): - tensor = self.np2tensor(frame) - result = self.tensor2np(self.model(tensor, tile_mode)) - return result - - -if __name__ == "__main__": - ###########inference_img - import time, cv2, sys - from time import time as ttime - - for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3), - ("weights_v3/up4x-latest-denoise3x.pth", 4)]: - for tile_mode in [0, 1, 2, 3, 4]: - upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0") - input_dir = "%s/input_dir1" % root_path - output_dir = "%s/opt-dir-all-test" % root_path - os.makedirs(output_dir, exist_ok=True) - for name in os.listdir(input_dir): - print(name) - tmp = name.split(".") - inp_path = os.path.join(input_dir, name) - suffix = tmp[-1] - prefix = ".".join(tmp[:-1]) - tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - print(inp_path, tmp_path) - # 支持中文路径 - # os.link(inp_path, tmp_path)#win用硬链接 - os.symlink(inp_path, tmp_path) # linux用软链接 - frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]] - t0 = ttime() - result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1] - t1 = ttime() - print(prefix, "done", t1 - t0) - tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - cv2.imwrite(tmp_opt_path, result) - n = 0 - while (1): - if (n == 0): - suffix = "_%sx_tile%s.png" % (scale, tile_mode) - else: - suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) # - if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False): - break - else: - n += 1 - final_opt_path = os.path.join(output_dir, prefix + suffix) - os.rename(tmp_opt_path, final_opt_path) - os.remove(tmp_path) diff --git a/spaces/CorvaeOboro/gen_ability_icon/app.py b/spaces/CorvaeOboro/gen_ability_icon/app.py deleted file mode 100644 index a367ace82721ae15a30f7cb9b730a4ac0f59b669..0000000000000000000000000000000000000000 --- a/spaces/CorvaeOboro/gen_ability_icon/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import gradio as gr -import os -import numpy as np -import torch -import pickle -import types - -from 
huggingface_hub import hf_hub_url, cached_download -from huggingface_hub import hf_hub_download - -#hf_hub_download(repo_id="CorvaeOboro/gen_ability_icon", filename="gen_ability_icon_stylegan2ada_20221012.pkl", repo_type="dataset") - -#TOKEN = os.environ['TOKEN'] -with open(hf_hub_download(repo_id="CorvaeOboro/gen_ability_icon", filename="gen_ability_icon_stylegan2ada_20221012.pkl", repo_type="model"), 'rb') as f: - G = pickle.load(f)['G_ema']# torch.nn.Module - -device = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda") - G = G.to(device) -else: - _old_forward = G.forward - - def _new_forward(self, *args, **kwargs): - kwargs["force_fp32"] = True - return _old_forward(*args, **kwargs) - - G.forward = types.MethodType(_new_forward, G) - - _old_synthesis_forward = G.synthesis.forward - - def _new_synthesis_forward(self, *args, **kwargs): - kwargs["force_fp32"] = True - return _old_synthesis_forward(*args, **kwargs) - - G.synthesis.forward = types.MethodType(_new_synthesis_forward, G.synthesis) - - -def generate(num_images, interpolate): - if interpolate: - z1 = torch.randn([1, G.z_dim])# latent codes - z2 = torch.randn([1, G.z_dim])# latent codes - zs = torch.cat([z1 + (z2 - z1) * i / (num_images-1) for i in range(num_images)], 0) - else: - zs = torch.randn([num_images, G.z_dim])# latent codes - with torch.no_grad(): - zs = zs.to(device) - img = G(zs, None, force_fp32=True, noise_mode='const') - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - return img.cpu().numpy() - -demo = gr.Blocks() - -def infer(num_images, interpolate): - img = generate(round(num_images), interpolate) - imgs = list(img) - return imgs - -with demo: - gr.Markdown( - """ - # gen_ability_icon - ![gen_ability_icon_comp](https://raw.githubusercontent.com/CorvaeOboro/gen_ability_icon/master/docs/00_icon_gen4_vqB_comp_0_single.jpg?raw=true "gen_ability_icon_comp") - - creates circular magic ability icons from stylegan2ada model trained on synthetic dataset . - more information here : [https://github.com/CorvaeOboro/gen_ability_icon](https://github.com/CorvaeOboro/gen_ability_icon). - """) - images_num = gr.inputs.Slider(default=6, label="Num Images", minimum=1, maximum=16, step=1) - interpolate = gr.inputs.Checkbox(default=False, label="Interpolate") - submit = gr.Button("Generate") - - - out = gr.Gallery() - - submit.click(fn=infer, - inputs=[images_num, interpolate], - outputs=out) - -demo.launch() \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/engine/inference.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/engine/inference.py deleted file mode 100644 index 77e7396d1e68f77301daee9af1c14707237bf5a9..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/engine/inference.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import logging -import time -import os - -import torch -from tqdm import tqdm - -from maskrcnn_benchmark.data.datasets.evaluation import evaluate -from ..utils.comm import is_main_process, get_world_size -from ..utils.comm import all_gather -from ..utils.comm import synchronize -from ..utils.timer import Timer, get_time_str - - -def compute_on_dataset(model, data_loader, device, timer=None): - model.eval() - results_dict = {} - cpu_device = torch.device("cpu") - for _, batch in enumerate(tqdm(data_loader)): - images, targets, image_ids = batch - images = images.to(device) - with torch.no_grad(): - if timer: - timer.tic() - output = model(images) - if timer: - torch.cuda.synchronize() - timer.toc() - output = [o.to(cpu_device) for o in output] - results_dict.update( - {img_id: result for img_id, result in zip(image_ids, output)} - ) - return results_dict - - -def _accumulate_predictions_from_multiple_gpus(predictions_per_gpu): - all_predictions = all_gather(predictions_per_gpu) - if not is_main_process(): - return - # merge the list of dicts - predictions = {} - for p in all_predictions: - predictions.update(p) - # convert a dict where the key is the index in a list - image_ids = list(sorted(predictions.keys())) - if len(image_ids) != image_ids[-1] + 1: - logger = logging.getLogger("maskrcnn_benchmark.inference") - logger.warning( - "Number of images that were gathered from multiple processes is not " - "a contiguous set. Some images might be missing from the evaluation" - ) - - # convert to a list - predictions = [predictions[i] for i in image_ids] - return predictions - - -def inference( - model, - data_loader, - dataset_name, - iou_types=("bbox",), - box_only=False, - device="cuda", - expected_results=(), - expected_results_sigma_tol=4, - output_folder=None, -): - - logger = logging.getLogger("maskrcnn_benchmark.inference") - dataset = data_loader.dataset - logger.info("Start evaluation on {} dataset({} images).".format(dataset_name, len(dataset))) - - extra_args = dict( - box_only=box_only, - iou_types=iou_types, - expected_results=expected_results, - expected_results_sigma_tol=expected_results_sigma_tol, - ) - - # load predictions if exists - prediction_file = os.path.join(output_folder, 'predictions.pth') - if os.path.isfile(prediction_file): - predictions = torch.load(prediction_file) - logger.info("Found prediction results at {}".format(prediction_file)) - - return evaluate(dataset=dataset, - predictions=predictions, - output_folder=output_folder, - **extra_args) - - # convert to a torch.device for efficiency - device = torch.device(device) - num_devices = get_world_size() - total_timer = Timer() - inference_timer = Timer() - total_timer.tic() - predictions = compute_on_dataset(model, data_loader, device, inference_timer) - # wait for all processes to complete before measuring the time - synchronize() - total_time = total_timer.toc() - total_time_str = get_time_str(total_time) - logger.info( - "Total run time: {} ({} s / img per device, on {} devices)".format( - total_time_str, total_time * num_devices / len(dataset), num_devices - ) - ) - total_infer_time = get_time_str(inference_timer.total_time) - logger.info( - "Model inference time: {} ({} s / img per device, on {} devices)".format( - total_infer_time, - inference_timer.total_time * num_devices / len(dataset), - num_devices, - ) - ) - - predictions = _accumulate_predictions_from_multiple_gpus(predictions) - if not is_main_process(): - return - - if output_folder: - torch.save(predictions, os.path.join(output_folder, 
"predictions.pth")) - - - return evaluate(dataset=dataset, - predictions=predictions, - output_folder=output_folder, - **extra_args) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/__init__.py deleted file mode 100644 index 690d64e63bc40a6006318cd70535017d41643def..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# ruff: noqa -from .v5 import * diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/constants.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/constants.py deleted file mode 100644 index d724ee3cfdbcda1c39f39511046c7a884186ca98..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/constants.py +++ /dev/null @@ -1,3 +0,0 @@ -METHODS_WITH_BODY = {"GET", "HEAD", "POST", "PUT", "DELETE", "PATCH"} -REF_PREFIX = "#/components/schemas/" -REF_TEMPLATE = "#/components/schemas/{model}" diff --git a/spaces/Dantra1/CeliaSensei/text/cleaners.py b/spaces/Dantra1/CeliaSensei/text/cleaners.py deleted file mode 100644 index 68c9ad24d5a303b68a521fba2e8776c8cc867356..0000000000000000000000000000000000000000 --- a/spaces/Dantra1/CeliaSensei/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if 
re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i": ">", - "&": "&", - "'": "'", - '"': """, -}) - - -def xmlesc(txt): - return txt.translate(table) - - -def load_model(): - torch_cache_path = torch.hub.get_dir() if params['local_cache_path'] == '' else params['local_cache_path'] - model_path = torch_cache_path + "/snakers4_silero-models_master/src/silero/model/" + params['model_id'] + ".pt" - if Path(model_path).is_file(): - print(f'\nUsing Silero TTS cached checkpoint found at {torch_cache_path}') - model, example_text = torch.hub.load(repo_or_dir=torch_cache_path + '/snakers4_silero-models_master/', model='silero_tts', language=params['language'], speaker=params['model_id'], source='local', path=model_path, force_reload=True) - else: - print(f'\nSilero TTS cache not found at {torch_cache_path}. Attempting to download...') - model, example_text = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_tts', language=params['language'], speaker=params['model_id']) - model.to(params['device']) - return model - - -def remove_tts_from_history(): - for i, entry in enumerate(shared.history['internal']): - shared.history['visible'][i] = [shared.history['visible'][i][0], entry[1]] - - -def toggle_text_in_history(): - for i, entry in enumerate(shared.history['visible']): - visible_reply = entry[1] - if visible_reply.startswith('')[0]}\n\n{reply}"] - else: - shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('')[0]}"] - - -def state_modifier(state): - if not params['activate']: - return state - - state['stream'] = False - return state - - -def input_modifier(string): - if not params['activate']: - return string - - shared.processing_message = "*Is recording a voice message...*" - return string - - -def history_modifier(history): - # Remove autoplay from the last reply - if len(history['internal']) > 0: - history['visible'][-1] = [ - history['visible'][-1][0], - history['visible'][-1][1].replace('controls autoplay>', 'controls>') - ] - - return history - - -def output_modifier(string): - global model, current_params, streaming_state - for i in params: - if params[i] != current_params[i]: - model = load_model() - current_params = params.copy() - break - - if not params['activate']: - return string - - original_string = string - string = tts_preprocessor.preprocess(string) - - if string == '': - string = '*Empty reply, try regenerating*' - else: - output_file = Path(f'extensions/silero_tts/outputs/{shared.character}_{int(time.time())}.wav') - prosody = ''.format(params['voice_speed'], params['voice_pitch']) - silero_input = f'{prosody}{xmlesc(string)}' - 
model.save_wav(ssml_text=silero_input, speaker=params['speaker'], sample_rate=int(params['sample_rate']), audio_path=str(output_file)) - - autoplay = 'autoplay' if params['autoplay'] else '' - string = f'' - if params['show_text']: - string += f'\n\n{original_string}' - - shared.processing_message = "*Is typing...*" - return string - - -def setup(): - global model - model = load_model() - - -def ui(): - # Gradio elements - with gr.Accordion("Silero TTS"): - with gr.Row(): - activate = gr.Checkbox(value=params['activate'], label='Activate TTS') - autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically') - - show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player') - voice = gr.Dropdown(value=params['speaker'], choices=voices_by_gender, label='TTS voice') - with gr.Row(): - v_pitch = gr.Dropdown(value=params['voice_pitch'], choices=voice_pitches, label='Voice pitch') - v_speed = gr.Dropdown(value=params['voice_speed'], choices=voice_speeds, label='Voice speed') - - with gr.Row(): - convert = gr.Button('Permanently replace audios with the message texts') - convert_cancel = gr.Button('Cancel', visible=False) - convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False) - - gr.Markdown('[Click here for Silero audio samples](https://oobabooga.github.io/silero-samples/index.html)') - - # Convert history with confirmation - convert_arr = [convert_confirm, convert, convert_cancel] - convert.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr) - convert_confirm.click( - lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr).then( - remove_tts_from_history, None, None).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - convert_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - - # Toggle message text in history - show_text.change( - lambda x: params.update({"show_text": x}), show_text, None).then( - toggle_text_in_history, None, None).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - # Event functions to update the parameters in the backend - activate.change(lambda x: params.update({"activate": x}), activate, None) - autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None) - voice.change(lambda x: params.update({"speaker": x}), voice, None) - v_pitch.change(lambda x: params.update({"voice_pitch": x}), v_pitch, None) - v_speed.change(lambda x: params.update({"voice_speed": x}), v_speed, None) diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py deleted file mode 100644 index 0e9768d4742e845a45bd343d70bd06f3cb0e4fcb..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_600e.py', - '../../_base_/det_models/panet_r50_fpem_ffm.py', - '../../_base_/det_datasets/icdar2017.py', - '../../_base_/det_pipelines/panet_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list 
= {{_base_.test_list}} - -train_pipeline_icdar2017 = {{_base_.train_pipeline_icdar2017}} -test_pipeline_icdar2017 = {{_base_.test_pipeline_icdar2017}} - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_icdar2017), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2017), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2017)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/FEFE2023/VENUSAIESPACIO1/README.md b/spaces/FEFE2023/VENUSAIESPACIO1/README.md deleted file mode 100644 index 0352beb05b9397e0acc20dea9cb4d04647d2802e..0000000000000000000000000000000000000000 --- a/spaces/FEFE2023/VENUSAIESPACIO1/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: VENUSAIESPACIO1 -emoji: 🏢 -colorFrom: pink -colorTo: yellow -sdk: docker -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FaceOnLive/Face-Recognition-SDK/run.sh b/spaces/FaceOnLive/Face-Recognition-SDK/run.sh deleted file mode 100644 index f6ec105cddeb64569bb4669bf99897260d4753f2..0000000000000000000000000000000000000000 --- a/spaces/FaceOnLive/Face-Recognition-SDK/run.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/bash - -exec python3 app.py & -exec python3 gradio/demo.py \ No newline at end of file diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/__init__.py deleted file mode 100644 index 2b184e74c861e6fca0c548692a9a949a6100b0aa..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -from copy import deepcopy - -from basicsr.utils import get_root_logger -from basicsr.utils.registry import LOSS_REGISTRY -from .losses import (CharbonnierLoss, GANLoss, L1Loss, MSELoss, PerceptualLoss, WeightedTVLoss, g_path_regularize, - gradient_penalty_loss, r1_penalty) - -__all__ = [ - 'L1Loss', 'MSELoss', 'CharbonnierLoss', 'WeightedTVLoss', 'PerceptualLoss', 'GANLoss', 'gradient_penalty_loss', - 'r1_penalty', 'g_path_regularize' -] - - -def build_loss(opt): - """Build loss from options. - - Args: - opt (dict): Configuration. It must constain: - type (str): Model type. - """ - opt = deepcopy(opt) - loss_type = opt.pop('type') - loss = LOSS_REGISTRY.get(loss_type)(**opt) - logger = get_root_logger() - logger.info(f'Loss [{loss.__class__.__name__}] is created.') - return loss diff --git a/spaces/Ferion/image-matting-app/ppmatting/models/human_matting.py b/spaces/Ferion/image-matting-app/ppmatting/models/human_matting.py deleted file mode 100644 index cf315edfa563fe231a119dd15b749c41157c988c..0000000000000000000000000000000000000000 --- a/spaces/Ferion/image-matting-app/ppmatting/models/human_matting.py +++ /dev/null @@ -1,454 +0,0 @@ -# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from collections import defaultdict -import time - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F -import paddleseg -from paddleseg.models import layers -from paddleseg import utils -from paddleseg.cvlibs import manager - -from ppmatting.models.losses import MRSD - - -def conv_up_psp(in_channels, out_channels, up_sample): - return nn.Sequential( - layers.ConvBNReLU( - in_channels, out_channels, 3, padding=1), - nn.Upsample( - scale_factor=up_sample, mode='bilinear', align_corners=False)) - - -@manager.MODELS.add_component -class HumanMatting(nn.Layer): - """A model for """ - - def __init__(self, - backbone, - pretrained=None, - backbone_scale=0.25, - refine_kernel_size=3, - if_refine=True): - super().__init__() - if if_refine: - if backbone_scale > 0.5: - raise ValueError( - 'Backbone_scale should not be greater than 1/2, but it is {}' - .format(backbone_scale)) - else: - backbone_scale = 1 - - self.backbone = backbone - self.backbone_scale = backbone_scale - self.pretrained = pretrained - self.if_refine = if_refine - if if_refine: - self.refiner = Refiner(kernel_size=refine_kernel_size) - self.loss_func_dict = None - - self.backbone_channels = backbone.feat_channels - ###################### - ### Decoder part - Glance - ###################### - self.psp_module = layers.PPModule( - self.backbone_channels[-1], - 512, - bin_sizes=(1, 3, 5), - dim_reduction=False, - align_corners=False) - self.psp4 = conv_up_psp(512, 256, 2) - self.psp3 = conv_up_psp(512, 128, 4) - self.psp2 = conv_up_psp(512, 64, 8) - self.psp1 = conv_up_psp(512, 64, 16) - # stage 5g - self.decoder5_g = nn.Sequential( - layers.ConvBNReLU( - 512 + self.backbone_channels[-1], 512, 3, padding=1), - layers.ConvBNReLU( - 512, 512, 3, padding=2, dilation=2), - layers.ConvBNReLU( - 512, 256, 3, padding=2, dilation=2), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 4g - self.decoder4_g = nn.Sequential( - layers.ConvBNReLU( - 512, 256, 3, padding=1), - layers.ConvBNReLU( - 256, 256, 3, padding=1), - layers.ConvBNReLU( - 256, 128, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 3g - self.decoder3_g = nn.Sequential( - layers.ConvBNReLU( - 256, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 2g - self.decoder2_g = nn.Sequential( - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 1g - self.decoder1_g = nn.Sequential( - layers.ConvBNReLU( - 128, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 0g - self.decoder0_g = nn.Sequential( - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Conv2D( - 64, 3, 3, padding=1)) - - 
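        # Editor's note (added comment, not part of the original file): the
        # "glance" decoder above predicts a coarse 3-class map (foreground /
        # transition / background); the "focus" decoder defined next predicts
        # fine alpha detail, and fusion() keeps the focus output only inside
        # the predicted transition region.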
########################## - ### Decoder part - FOCUS - ########################## - self.bridge_block = nn.Sequential( - layers.ConvBNReLU( - self.backbone_channels[-1], 512, 3, dilation=2, padding=2), - layers.ConvBNReLU( - 512, 512, 3, dilation=2, padding=2), - layers.ConvBNReLU( - 512, 512, 3, dilation=2, padding=2)) - # stage 5f - self.decoder5_f = nn.Sequential( - layers.ConvBNReLU( - 512 + self.backbone_channels[-1], 512, 3, padding=1), - layers.ConvBNReLU( - 512, 512, 3, padding=2, dilation=2), - layers.ConvBNReLU( - 512, 256, 3, padding=2, dilation=2), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 4f - self.decoder4_f = nn.Sequential( - layers.ConvBNReLU( - 256 + self.backbone_channels[-2], 256, 3, padding=1), - layers.ConvBNReLU( - 256, 256, 3, padding=1), - layers.ConvBNReLU( - 256, 128, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 3f - self.decoder3_f = nn.Sequential( - layers.ConvBNReLU( - 128 + self.backbone_channels[-3], 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 2f - self.decoder2_f = nn.Sequential( - layers.ConvBNReLU( - 64 + self.backbone_channels[-4], 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 1f - self.decoder1_f = nn.Sequential( - layers.ConvBNReLU( - 64 + self.backbone_channels[-5], 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 0f - self.decoder0_f = nn.Sequential( - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Conv2D( - 64, 1 + 1 + 32, 3, padding=1)) - self.init_weight() - - def forward(self, data): - src = data['img'] - src_h, src_w = paddle.shape(src)[2:] - if self.if_refine: - # It is not need when exporting. - if isinstance(src_h, paddle.Tensor): - if (src_h % 4 != 0) or (src_w % 4) != 0: - raise ValueError( - 'The input image must have width and height that are divisible by 4' - ) - - # Downsample src for backbone - src_sm = F.interpolate( - src, - scale_factor=self.backbone_scale, - mode='bilinear', - align_corners=False) - - # Base - fea_list = self.backbone(src_sm) - ########################## - ### Decoder part - GLANCE - ########################## - #psp: N, 512, H/32, W/32 - psp = self.psp_module(fea_list[-1]) - #d6_g: N, 512, H/16, W/16 - d5_g = self.decoder5_g(paddle.concat((psp, fea_list[-1]), 1)) - #d5_g: N, 512, H/8, W/8 - d4_g = self.decoder4_g(paddle.concat((self.psp4(psp), d5_g), 1)) - #d4_g: N, 256, H/4, W/4 - d3_g = self.decoder3_g(paddle.concat((self.psp3(psp), d4_g), 1)) - #d4_g: N, 128, H/2, W/2 - d2_g = self.decoder2_g(paddle.concat((self.psp2(psp), d3_g), 1)) - #d2_g: N, 64, H, W - d1_g = self.decoder1_g(paddle.concat((self.psp1(psp), d2_g), 1)) - #d0_g: N, 3, H, W - d0_g = self.decoder0_g(d1_g) - # The 1st channel is foreground. The 2nd is transition region. The 3rd is background. 
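        # Editor's note (added comment): softmax over the 3 channels yields
        # per-pixel class probabilities that sum to 1, which is why loss()
        # later applies paddle.log() and NLLLoss to this output.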
- # glance_sigmoid = F.sigmoid(d0_g) - glance_sigmoid = F.softmax(d0_g, axis=1) - - ########################## - ### Decoder part - FOCUS - ########################## - bb = self.bridge_block(fea_list[-1]) - #bg: N, 512, H/32, W/32 - d5_f = self.decoder5_f(paddle.concat((bb, fea_list[-1]), 1)) - #d5_f: N, 256, H/16, W/16 - d4_f = self.decoder4_f(paddle.concat((d5_f, fea_list[-2]), 1)) - #d4_f: N, 128, H/8, W/8 - d3_f = self.decoder3_f(paddle.concat((d4_f, fea_list[-3]), 1)) - #d3_f: N, 64, H/4, W/4 - d2_f = self.decoder2_f(paddle.concat((d3_f, fea_list[-4]), 1)) - #d2_f: N, 64, H/2, W/2 - d1_f = self.decoder1_f(paddle.concat((d2_f, fea_list[-5]), 1)) - #d1_f: N, 64, H, W - d0_f = self.decoder0_f(d1_f) - #d0_f: N, 1, H, W - focus_sigmoid = F.sigmoid(d0_f[:, 0:1, :, :]) - pha_sm = self.fusion(glance_sigmoid, focus_sigmoid) - err_sm = d0_f[:, 1:2, :, :] - err_sm = paddle.clip(err_sm, 0., 1.) - hid_sm = F.relu(d0_f[:, 2:, :, :]) - - # Refiner - if self.if_refine: - pha = self.refiner( - src=src, pha=pha_sm, err=err_sm, hid=hid_sm, tri=glance_sigmoid) - # Clamp outputs - pha = paddle.clip(pha, 0., 1.) - - if self.training: - logit_dict = { - 'glance': glance_sigmoid, - 'focus': focus_sigmoid, - 'fusion': pha_sm, - 'error': err_sm - } - if self.if_refine: - logit_dict['refine'] = pha - loss_dict = self.loss(logit_dict, data) - return logit_dict, loss_dict - else: - return pha if self.if_refine else pha_sm - - def loss(self, logit_dict, label_dict, loss_func_dict=None): - if loss_func_dict is None: - if self.loss_func_dict is None: - self.loss_func_dict = defaultdict(list) - self.loss_func_dict['glance'].append(nn.NLLLoss()) - self.loss_func_dict['focus'].append(MRSD()) - self.loss_func_dict['cm'].append(MRSD()) - self.loss_func_dict['err'].append(paddleseg.models.MSELoss()) - self.loss_func_dict['refine'].append(paddleseg.models.L1Loss()) - else: - self.loss_func_dict = loss_func_dict - - loss = {} - - # glance loss computation - # get glance label - glance_label = F.interpolate( - label_dict['trimap'], - logit_dict['glance'].shape[2:], - mode='nearest', - align_corners=False) - glance_label_trans = (glance_label == 128).astype('int64') - glance_label_bg = (glance_label == 0).astype('int64') - glance_label = glance_label_trans + glance_label_bg * 2 - loss_glance = self.loss_func_dict['glance'][0]( - paddle.log(logit_dict['glance'] + 1e-6), glance_label.squeeze(1)) - loss['glance'] = loss_glance - - # focus loss computation - focus_label = F.interpolate( - label_dict['alpha'], - logit_dict['focus'].shape[2:], - mode='bilinear', - align_corners=False) - loss_focus = self.loss_func_dict['focus'][0]( - logit_dict['focus'], focus_label, glance_label_trans) - loss['focus'] = loss_focus - - # collaborative matting loss - loss_cm_func = self.loss_func_dict['cm'] - # fusion_sigmoid loss - loss_cm = loss_cm_func[0](logit_dict['fusion'], focus_label) - loss['cm'] = loss_cm - - # error loss - err = F.interpolate( - logit_dict['error'], - label_dict['alpha'].shape[2:], - mode='bilinear', - align_corners=False) - err_label = (F.interpolate( - logit_dict['fusion'], - label_dict['alpha'].shape[2:], - mode='bilinear', - align_corners=False) - label_dict['alpha']).abs() - loss_err = self.loss_func_dict['err'][0](err, err_label) - loss['err'] = loss_err - - loss_all = 0.25 * loss_glance + 0.25 * loss_focus + 0.25 * loss_cm + loss_err - - # refine loss - if self.if_refine: - loss_refine = self.loss_func_dict['refine'][0](logit_dict['refine'], - label_dict['alpha']) - loss['refine'] = loss_refine - loss_all = 
loss_all + loss_refine - - loss['all'] = loss_all - return loss - - def fusion(self, glance_sigmoid, focus_sigmoid): - # glance_sigmoid [N, 3, H, W]. - # In index, 0 is foreground, 1 is transition, 2 is backbone. - # After fusion, the foreground is 1, the background is 0, and the transion is between (0, 1). - index = paddle.argmax(glance_sigmoid, axis=1, keepdim=True) - transition_mask = (index == 1).astype('float32') - fg = (index == 0).astype('float32') - fusion_sigmoid = focus_sigmoid * transition_mask + fg - return fusion_sigmoid - - def init_weight(self): - if self.pretrained is not None: - utils.load_entire_model(self, self.pretrained) - - -class Refiner(nn.Layer): - ''' - Refiner refines the coarse output to full resolution. - - Args: - kernel_size: The convolution kernel_size. Options: [1, 3]. Default: 3. - ''' - - def __init__(self, kernel_size=3): - super().__init__() - if kernel_size not in [1, 3]: - raise ValueError("kernel_size must be in [1, 3]") - - self.kernel_size = kernel_size - - channels = [32, 24, 16, 12, 1] - self.conv1 = layers.ConvBNReLU( - channels[0] + 4 + 3, - channels[1], - kernel_size, - padding=0, - bias_attr=False) - self.conv2 = layers.ConvBNReLU( - channels[1], channels[2], kernel_size, padding=0, bias_attr=False) - self.conv3 = layers.ConvBNReLU( - channels[2] + 3, - channels[3], - kernel_size, - padding=0, - bias_attr=False) - self.conv4 = nn.Conv2D( - channels[3], channels[4], kernel_size, padding=0, bias_attr=True) - - def forward(self, src, pha, err, hid, tri): - ''' - Args: - src: (B, 3, H, W) full resolution source image. - pha: (B, 1, Hc, Wc) coarse alpha prediction. - err: (B, 1, Hc, Hc) coarse error prediction. - hid: (B, 32, Hc, Hc) coarse hidden encoding. - tri: (B, 1, Hc, Hc) trimap prediction. - ''' - h_full, w_full = paddle.shape(src)[2:] - h_half, w_half = h_full // 2, w_full // 2 - h_quat, w_quat = h_full // 4, w_full // 4 - - x = paddle.concat([hid, pha, tri], axis=1) - x = F.interpolate( - x, - paddle.concat((h_half, w_half)), - mode='bilinear', - align_corners=False) - y = F.interpolate( - src, - paddle.concat((h_half, w_half)), - mode='bilinear', - align_corners=False) - - if self.kernel_size == 3: - x = F.pad(x, [3, 3, 3, 3]) - y = F.pad(y, [3, 3, 3, 3]) - - x = self.conv1(paddle.concat([x, y], axis=1)) - x = self.conv2(x) - - if self.kernel_size == 3: - x = F.interpolate(x, paddle.concat((h_full + 4, w_full + 4))) - y = F.pad(src, [2, 2, 2, 2]) - else: - x = F.interpolate( - x, paddle.concat((h_full, w_full)), mode='nearest') - y = src - - x = self.conv3(paddle.concat([x, y], axis=1)) - x = self.conv4(x) - - pha = x - return pha diff --git a/spaces/GIZ/SDSN-demo/app.py b/spaces/GIZ/SDSN-demo/app.py deleted file mode 100644 index 9eada7d22d65dca2c25f143162872a5e1f4f0e4c..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import appStore.keyword_search as keyword_search -import appStore.sdg_analysis as sdg_analysis -import appStore.coherence as coherence -import appStore.info as info -from appStore.multiapp import MultiApp -import streamlit as st - -st.set_page_config(page_title = 'Climate Policy Intelligence', - initial_sidebar_state='expanded', layout="wide") - -app = MultiApp() - -app.add_app("About","house", info.app) -app.add_app("Search","search", keyword_search.app) -app.add_app("SDG Analysis","gear",sdg_analysis.app) -app.add_app("NDC Comparison","exclude", coherence.app) - -app.run() \ No newline at end of file diff --git a/spaces/GeorgeOrville/bingo/Dockerfile 
b/spaces/GeorgeOrville/bingo/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/GiordanoB/sumarizacao-abstrativa-portugues/app.py b/spaces/GiordanoB/sumarizacao-abstrativa-portugues/app.py deleted file mode 100644 index b073f64f92df2b6ed3c7583528a0f8dd69efa1b9..0000000000000000000000000000000000000000 --- a/spaces/GiordanoB/sumarizacao-abstrativa-portugues/app.py +++ /dev/null @@ -1,327 +0,0 @@ -import gradio as gr -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer -import torch -import spacy -import pytextrank -from sumy.parsers.plaintext import PlaintextParser -from sumy.nlp.tokenizers import Tokenizer -from sumy.summarizers.luhn import LuhnSummarizer -from sumy.summarizers.lex_rank import LexRankSummarizer -import nltk - -nlp = spacy.load('pt_core_news_sm') -nltk.download('punkt') -nlp.add_pipe("textrank") - -#WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) - -model_name="GiordanoB/mT5_multilingual_XLSum-sumarizacao-PTBR" -tokenizer = AutoTokenizer.from_pretrained(model_name) -model = AutoModelForSeq2SeqLM.from_pretrained(model_name) - -app = gr.Blocks() - -def summarize_HUB_Multidocument(input_1, input_2, input_3, method, max_length, min_length, num_beams): - - if(input_1 and not input_2 and not input_3 or not input_1 and input_2 and not input_3 or not input_1 and not input_2 and input_3): - return "Por favor utilize a aba de sumarização monodocumento" - - if method == "Pure mT5": - if(input_1 and input_2 and input_3 ): #"3 cheios" - tempSum1 = summarize_mT5(input_1, max_length, min_length, num_beams) - tempSum2 = summarize_mT5(input_2, max_length, min_length, num_beams) - tempSum3 = summarize_mT5(input_3, max_length, min_length, num_beams) - fullSumm = tempSum1 + tempSum2 + tempSum3 - return summarize_mT5(fullSumm, max_length, min_length, num_beams) - - if(input_1 and input_2 and not input_3): #"1 e 2 cheios" - tempSum1 = summarize_mT5(input_1, max_length, min_length, num_beams) - tempSum2 = summarize_mT5(input_2, max_length, min_length, num_beams) - fullSumm = tempSum1 + tempSum2 - return summarize_mT5(fullSumm, max_length, min_length, num_beams) - - if(input_1 and not input_2 and input_3): #1 e 3 cheios" - tempSum1 = summarize_mT5(input_1, max_length, min_length, num_beams) - tempSum3 = summarize_mT5(input_3, max_length, min_length, num_beams) - fullSumm = tempSum1 + tempSum3 - return summarize_mT5(fullSumm, max_length, min_length, num_beams) - - if(not input_1 and input_2 and input_3): #"2 e 3 cheios" - tempSum2 = summarize_mT5(input_2, max_length, 
min_length, num_beams) - tempSum3 = summarize_mT5(input_3, max_length, min_length, num_beams) - fullSumm = tempSum2 + tempSum3 - return summarize_mT5(fullSumm, max_length, min_length, num_beams) - - if method == "Luhn": - if(input_1 and input_2 and input_3 ): #"3 cheios" - tempSum1 = summarize_Luhn(input_1) - tempSum2 = summarize_Luhn(input_2) - tempSum3 = summarize_Luhn(input_3) - fullSumm = tempSum1 + tempSum2 + tempSum3 - return summarize_Luhn(fullSumm) - - if(input_1 and input_2 and not input_3): #"1 e 2 cheios" - tempSum1 = summarize_Luhn(input_1) - tempSum2 = summarize_Luhn(input_2) - fullSumm = tempSum1 + tempSum2 - return summarize_Luhn(fullSumm) - - if(input_1 and not input_2 and input_3): #1 e 3 cheios" - tempSum1 = summarize_Luhn(input_1) - tempSum3 = summarize_Luhn(input_3) - fullSumm = tempSum1 + tempSum3 - return summarize_Luhn(fullSumm) - - if(not input_1 and input_2 and input_3): #"2 e 3 cheios" - tempSum2 = summarize_Luhn(input_2) - tempSum3 = summarize_Luhn(input_3) - fullSumm = tempSum2 + tempSum3 - return summarize_Luhn(fullSumm) - - if method == "LexRank": - if(input_1 and input_2 and input_3 ): #"3 cheios" - tempSum1 = summarize_LexRank(input_1) - tempSum2 = summarize_LexRank(input_2) - tempSum3 = summarize_LexRank(input_3) - fullSumm = tempSum1 + tempSum2 + tempSum3 - return summarize_LexRank(fullSumm) - - if(input_1 and input_2 and not input_3): #"1 e 2 cheios" - tempSum1 = summarize_LexRank(input_1) - tempSum2 = summarize_LexRank(input_2) - fullSumm = tempSum1 + tempSum2 - return summarize_LexRank(fullSumm) - - if(input_1 and not input_2 and input_3): #1 e 3 cheios" - tempSum1 = summarize_LexRank(input_1) - tempSum3 = summarize_LexRank(input_3) - fullSumm = tempSum1 + tempSum3 - return summarize_LexRank(fullSumm) - - if(not input_1 and input_2 and input_3): #"2 e 3 cheios" - tempSum2 = summarize_LexRank(input_2) - tempSum3 = summarize_LexRank(input_3) - fullSumm = tempSum2 + tempSum3 - return summarize_LexRank(fullSumm) - - if method == "TextRank": - if(input_1 and input_2 and input_3 ): #"3 cheios" - tempSum1 = summarize_TextRank(input_1) - tempSum2 = summarize_TextRank(input_2) - tempSum3 = summarize_TextRank(input_3) - fullSumm = tempSum1 + tempSum2 + tempSum3 - return summarize_TextRank(fullSumm) - - if(input_1 and input_2 and not input_3): #"1 e 2 cheios" - tempSum1 = summarize_TextRank(input_1) - tempSum2 = summarize_TextRank(input_2) - fullSumm = tempSum1 + tempSum2 - return summarize_TextRank(fullSumm) - - if(input_1 and not input_2 and input_3): #1 e 3 cheios" - tempSum1 = summarize_TextRank(input_1) - tempSum3 = summarize_TextRank(input_3) - fullSumm = tempSum1 + tempSum3 - return summarize_TextRank(fullSumm) - - if(not input_1 and input_2 and input_3): #"2 e 3 cheios" - tempSum2 = summarize_TextRank(input_2) - tempSum3 = summarize_TextRank(input_3) - fullSumm = tempSum2 + tempSum3 - return summarize_TextRank(fullSumm) - - if method == "Luhn + mT5": - if(input_1 and input_2 and input_3 ): #"3 cheios" - tempSum1 = summarize_Luhn(input_1) - tempSum2 = summarize_Luhn(input_2) - tempSum3 = summarize_Luhn(input_3) - fullSumm = tempSum1 + tempSum2 + tempSum3 - finalSum = summarize_Luhn(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(input_1 and input_2 and not input_3): #"1 e 2 cheios" - tempSum1 = summarize_Luhn(input_1) - tempSum2 = summarize_Luhn(input_2) - fullSumm = tempSum1 + tempSum2 - finalSum = summarize_Luhn(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(input_1 and not input_2 
and input_3): #1 e 3 cheios" - tempSum1 = summarize_Luhn(input_1) - tempSum3 = summarize_Luhn(input_3) - fullSumm = tempSum1 + tempSum3 - finalSum = summarize_Luhn(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(not input_1 and input_2 and input_3): #"2 e 3 cheios" - tempSum2 = summarize_Luhn(input_2) - tempSum3 = summarize_Luhn(input_3) - fullSumm = tempSum2 + tempSum3 - finalSum = summarize_Luhn(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if method == "LexRank + mT5": - if(input_1 and input_2 and input_3 ): #"3 cheios" - tempSum1 = summarize_LexRank(input_1) - tempSum2 = summarize_LexRank(input_2) - tempSum3 = summarize_LexRank(input_3) - fullSumm = tempSum1 + tempSum2 + tempSum3 - finalSum = summarize_LexRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(input_1 and input_2 and not input_3): #"1 e 2 cheios" - tempSum1 = summarize_LexRank(input_1) - tempSum2 = summarize_LexRank(input_2) - fullSumm = tempSum1 + tempSum2 - finalSum = summarize_LexRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(input_1 and not input_2 and input_3): #1 e 3 cheios" - tempSum1 = summarize_LexRank(input_1) - tempSum3 = summarize_LexRank(input_3) - fullSumm = tempSum1 + tempSum3 - finalSum = summarize_LexRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(not input_1 and input_2 and input_3): #"2 e 3 cheios" - tempSum2 = summarize_LexRank(input_2) - tempSum3 = summarize_LexRank(input_3) - fullSumm = tempSum2 + tempSum3 - finalSum = summarize_LexRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if method == "TextRank + mT5": - if(input_1 and input_2 and input_3 ): #"3 cheios" - tempSum1 = summarize_TextRank(input_1) - tempSum2 = summarize_TextRank(input_2) - tempSum3 = summarize_TextRank(input_3) - fullSumm = tempSum1 + tempSum2 + tempSum3 - finalSum = summarize_TextRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(input_1 and input_2 and not input_3): #"1 e 2 cheios" - tempSum1 = summarize_TextRank(input_1) - tempSum2 = summarize_TextRank(input_2) - fullSumm = tempSum1 + tempSum2 - finalSum = summarize_TextRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(input_1 and not input_2 and input_3): #1 e 3 cheios" - tempSum1 = summarize_TextRank(input_1) - tempSum3 = summarize_TextRank(input_3) - fullSumm = tempSum1 + tempSum3 - finalSum = summarize_TextRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - - if(not input_1 and input_2 and input_3): #"2 e 3 cheios" - tempSum2 = summarize_TextRank(input_2) - tempSum3 = summarize_TextRank(input_3) - fullSumm = tempSum2 + tempSum3 - finalSum = summarize_TextRank(fullSumm) - return summarize_mT5(finalSum, max_length, min_length, num_beams) - return "ERROR" - -def summarize_HUB_Monodocument(input, method, max_length, min_length, num_beams): - if method == "Pure mT5": - return summarize_mT5(input, max_length, min_length, num_beams) - - if method == "Luhn": - return summarize_Luhn(input) - - if method == "LexRank": - return summarize_LexRank(input) - - if method == "TextRank": - return summarize_TextRank(input) - - if method == "Luhn + mT5": - tempSum = summarize_Luhn(input) - return summarize_mT5(tempSum, max_length, min_length, num_beams) - - if method == "LexRank + mT5": - tempSum = summarize_LexRank(input) - return summarize_mT5(tempSum, 
max_length, min_length, num_beams) - - if method == "TextRank + mT5": - tempSum = summarize_TextRank(input) - return summarize_mT5(tempSum, max_length, min_length, num_beams) - return "ERROR" - -def summarize_Luhn(input): - summ = '' - summarizer = LuhnSummarizer() - parser = PlaintextParser.from_string(input, Tokenizer("portuguese")) - summary_1 = summarizer(parser.document, 3) - - for sentence in summary_1: - summ = summ + ' ' + str(sentence) - summ2 = '' - summ2 = summ.replace('\n', ' ').replace('\r', '') - return summ2 - -def summarize_LexRank(input): - summ = '' - summarizer = LexRankSummarizer() - parser = PlaintextParser.from_string(input, Tokenizer("portuguese")) - summary_1 = summarizer(parser.document, 3) - - for sentence in summary_1: - summ = summ + ' ' + str(sentence) - summ2 = '' - summ2 = summ.replace('\n', ' ').replace('\r', '') - return summ2 - -def summarize_TextRank(input): - summ = '' - doc = nlp(input) - tr = doc._.textrank - for sent in tr.summary(limit_sentences=3): - summ = summ + ' ' + str(sent) - summ2 = summ.replace('\n', ' ').replace('\r', '') - return summ2; - -def summarize_mT5(input, max_length, min_length, num_beams): - for i in range(0,14): - input_ids = tokenizer( - input, - return_tensors="pt", - padding="max_length", - truncation=True, - max_length=512 - )["input_ids"] - - output_ids = model.generate( - input_ids=input_ids, - max_length=max_length, - min_length=min_length, - no_repeat_ngram_size=2, - num_beams=num_beams - )[0] - - response = tokenizer.decode( - output_ids, - skip_special_tokens=True, - clean_up_tokenization_spaces=False - ) - return response - -with app: - gr.Markdown("Sumarização Monodocumento ou Multidocumento para o português.") - with gr.Tabs(): - - with gr.TabItem("Sumarização Monodocumento"): - MonoInputs=[gr.Textbox(label="Texto a ser Sumarizado"),gr.Radio(["Pure mT5","Luhn","LexRank","TextRank","Luhn + mT5","LexRank + mT5","TextRank + mT5"], label="Método"), -gr.Slider(50, 500, step=1, value=200, label="Tamanho máximo do Sumário"), gr.Slider(1, 125, step=1, value=50, label="Tamanho mínimo do Sumário"), gr.Slider(1, 10, step=1, value=4, label="Qualidade do sumário")] - MonoOutputs=gr.Textbox() - MonoButton = gr.Button("Sumarizar Texto") - - with gr.TabItem("Sumarização Multidocumento"): - MultiInputs=[gr.Textbox(label="Texto 1"), gr.Textbox(label="Texto 2"),gr.Textbox(label="Texto 3"),gr.Radio(["Pure mT5","Luhn","LexRank","TextRank","Luhn + mT5","LexRank + mT5","TextRank + mT5"], label="Método"), -gr.Slider(50, 500, step=1, value=200, label="Tamanho máximo do Sumário"), gr.Slider(1, 125, step=1, value=50, label="Tamanho mínimo do Sumário"), gr.Slider(1, 10, step=1, value=4, label="Qualidade do sumário")] - MultiOutputs=gr.Textbox() - MultiButton = gr.Button("Sumarizar Textos") - - MonoButton.click(summarize_HUB_Monodocument, inputs=MonoInputs, outputs=MonoOutputs) - MultiButton.click(summarize_HUB_Multidocument, inputs=MultiInputs, outputs=MultiOutputs) - -app.launch() \ No newline at end of file diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/model.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/model.py deleted file mode 100644 index 66addeb6e7ac27a109775e2cac43d1724b5a6fb2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/model.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file 
except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Code for constructing the model.""" -from typing import Any, Mapping, Optional, Union - -from absl import logging -from alphafold.common import confidence -from alphafold.model import features -from alphafold.model import modules -import haiku as hk -import jax -import ml_collections -import numpy as np -import tensorflow.compat.v1 as tf -import tree - - -def get_confidence_metrics( - prediction_result: Mapping[str, Any]) -> Mapping[str, Any]: - """Post processes prediction_result to get confidence metrics.""" - - confidence_metrics = {} - confidence_metrics['plddt'] = confidence.compute_plddt( - prediction_result['predicted_lddt']['logits']) - if 'predicted_aligned_error' in prediction_result: - confidence_metrics.update(confidence.compute_predicted_aligned_error( - prediction_result['predicted_aligned_error']['logits'], - prediction_result['predicted_aligned_error']['breaks'])) - confidence_metrics['ptm'] = confidence.predicted_tm_score( - prediction_result['predicted_aligned_error']['logits'], - prediction_result['predicted_aligned_error']['breaks']) - - return confidence_metrics - - -class RunModel: - """Container for JAX model.""" - - def __init__(self, - config: ml_collections.ConfigDict, - params: Optional[Mapping[str, Mapping[str, np.ndarray]]] = None): - self.config = config - self.params = params - - def _forward_fn(batch): - model = modules.AlphaFold(self.config.model) - return model( - batch, - is_training=False, - compute_loss=False, - ensemble_representations=True) - - self.apply = jax.jit(hk.transform(_forward_fn).apply) - self.init = jax.jit(hk.transform(_forward_fn).init) - - def init_params(self, feat: features.FeatureDict, random_seed: int = 0): - """Initializes the model parameters. - - If none were provided when this class was instantiated then the parameters - are randomly initialized. - - Args: - feat: A dictionary of NumPy feature arrays as output by - RunModel.process_features. - random_seed: A random seed to use to initialize the parameters if none - were set when this class was initialized. - """ - if not self.params: - # Init params randomly. - rng = jax.random.PRNGKey(random_seed) - self.params = hk.data_structures.to_mutable_dict( - self.init(rng, feat)) - logging.warning('Initialized parameters randomly') - - def process_features( - self, - raw_features: Union[tf.train.Example, features.FeatureDict], - random_seed: int) -> features.FeatureDict: - """Processes features to prepare for feeding them into the model. - - Args: - raw_features: The output of the data pipeline either as a dict of NumPy - arrays or as a tf.train.Example. - random_seed: The random seed to use when processing the features. - - Returns: - A dict of NumPy feature arrays suitable for feeding into the model. 
- """ - if isinstance(raw_features, dict): - return features.np_example_to_features( - np_example=raw_features, - config=self.config, - random_seed=random_seed) - else: - return features.tf_example_to_features( - tf_example=raw_features, - config=self.config, - random_seed=random_seed) - - def eval_shape(self, feat: features.FeatureDict) -> jax.ShapeDtypeStruct: - self.init_params(feat) - logging.info('Running eval_shape with shape(feat) = %s', - tree.map_structure(lambda x: x.shape, feat)) - shape = jax.eval_shape(self.apply, self.params, jax.random.PRNGKey(0), feat) - logging.info('Output shape was %s', shape) - return shape - - def predict(self, feat: features.FeatureDict) -> Mapping[str, Any]: - """Makes a prediction by inferencing the model on the provided features. - - Args: - feat: A dictionary of NumPy feature arrays as output by - RunModel.process_features. - - Returns: - A dictionary of model outputs. - """ - self.init_params(feat) - logging.info('Running predict with shape(feat) = %s', - tree.map_structure(lambda x: x.shape, feat)) - result = self.apply(self.params, jax.random.PRNGKey(0), feat) - # This block is to ensure benchmark timings are accurate. Some blocking is - # already happening when computing get_confidence_metrics, and this ensures - # all outputs are blocked on. - jax.tree_map(lambda x: x.block_until_ready(), result) - result.update(get_confidence_metrics(result)) - logging.info('Output shape was %s', - tree.map_structure(lambda x: x.shape, result)) - return result diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py deleted file mode 100644 index 14eaef2dffea606027001b69d12d11cb46693e1c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py +++ /dev/null @@ -1,42 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_caffe_dc5.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco.py deleted file 
mode 100644 index 0f402826d3a22714078d8c50ed6bd8959018e4e7..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = 'tridentnet_r50_caffe_mstrain_1x_coco.py' - -lr_config = dict(step=[28, 34]) -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py deleted file mode 100644 index d7a43bee01422ad4795dd27874e0cd4bb6cbfecf..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='ASPPHead', - in_channels=2048, - in_index=3, - channels=512, - dilations=(1, 12, 24, 36), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 136449083f7a9efbad6df94f1acd04170147aaba..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_769x769_80k_cityscapes.py' -model = dict( - pretrained='torchvision://resnet101', - backbone=dict(type='ResNet', depth=101)) diff --git a/spaces/HLasse/textdescriptives/README.md b/spaces/HLasse/textdescriptives/README.md deleted file mode 100644 index 5d79ecc9bd1973cf900d5e5db96ff7a58bed067a..0000000000000000000000000000000000000000 --- a/spaces/HLasse/textdescriptives/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Textdescriptives -emoji: 📈 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: apache-2.0 -tags: [NLP, feature extraction] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/numbers.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/numbers.py deleted file mode 100644 index 491634d692ee71e7ea0e5213b513e15be825c9b2..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/numbers.py 
+++ /dev/null @@ -1,69 +0,0 @@ -import inflect -import re - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text \ No newline at end of file diff --git a/spaces/Hello-SimpleAI/chatgpt-detector-ling/README.md b/spaces/Hello-SimpleAI/chatgpt-detector-ling/README.md deleted file mode 100644 index 1d7129db8932f8e70fd07d11fba7951b9bd68927..0000000000000000000000000000000000000000 --- a/spaces/Hello-SimpleAI/chatgpt-detector-ling/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatgpt Detector Ling -emoji: 📈 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HiImJavivi/Practica2/app.py b/spaces/HiImJavivi/Practica2/app.py deleted file mode 100644 index 1ded5b1e66daba863e96e5dd2443653c90ba2148..0000000000000000000000000000000000000000 --- a/spaces/HiImJavivi/Practica2/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - - -# Cargamos el learner -learn = load_learner('exportdefinitivo.pkl') - -# Definimos las etiquetas de nuestro modelo -labels = ['0','1','2','-1'] - - -# Definimos una función que se encarga de llevar a cabo las predicciones -def predict(string): - pred,pred_idx,probs = learn.predict(string) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -# Creamos la interfaz y la lanzamos. 
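# (Translation of the comment above: create the Gradio interface and launch it.)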
-gr.Interface(fn=predict, inputs=gr.inputs.Textbox(lines=1), outputs=gr.outputs.Label(num_top_classes=3),examples=['This house is very good','Going up gets you down'], title="Hypothesis deductor labels entailment, contradiction, and neutral, supporting the task of natural language inference", description="For each instance, there is a string for the premise, a string for the hypothesis, and an integer for the label. pairs manually labeled for balanced classification with the labels entailment(0), contradiction(2), and neutral(1), supporting the task of natural language inference").launch(share=False) \ No newline at end of file diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/lengths/__init__.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/lengths/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ICML2022/OFA/fairseq/examples/camembert/README.md b/spaces/ICML2022/OFA/fairseq/examples/camembert/README.md deleted file mode 100644 index 5ef4fe3f151bb468712f3be935ea5bb1b1360bf7..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/camembert/README.md +++ /dev/null @@ -1,75 +0,0 @@ -# CamemBERT: a Tasty French Language Model - -## Introduction - -[CamemBERT](https://arxiv.org/abs/1911.03894) is a pretrained language model trained on 138GB of French text based on RoBERTa. - -Also available in [github.com/huggingface/transformers](https://github.com/huggingface/transformers/). - -## Pre-trained models - -| Model | #params | Download | Arch. | Training data | -|--------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------|-------|-----------------------------------| -| `camembert` / `camembert-base` | 110M | [camembert-base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz) | Base | OSCAR (138 GB of text) | -| `camembert-large` | 335M | [camembert-large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-large.tar.gz) | Large | CCNet (135 GB of text) | -| `camembert-base-ccnet` | 110M | [camembert-base-ccnet.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet.tar.gz) | Base | CCNet (135 GB of text) | -| `camembert-base-wikipedia-4gb` | 110M | [camembert-base-wikipedia-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-wikipedia-4gb.tar.gz) | Base | Wikipedia (4 GB of text) | -| `camembert-base-oscar-4gb` | 110M | [camembert-base-oscar-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-oscar-4gb.tar.gz) | Base | Subsample of OSCAR (4 GB of text) | -| `camembert-base-ccnet-4gb` | 110M | [camembert-base-ccnet-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet-4gb.tar.gz) | Base | Subsample of CCNet (4 GB of text) | - -## Example usage - -### fairseq -##### Load CamemBERT from torch.hub (PyTorch >= 1.1): -```python -import torch -camembert = torch.hub.load('pytorch/fairseq', 'camembert') -camembert.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load CamemBERT (for PyTorch 1.0 or custom models): -```python -# Download camembert model -wget https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz -tar -xzvf camembert.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import CamembertModel -camembert = CamembertModel.from_pretrained('/path/to/camembert') 
-camembert.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Filling masks: -```python -masked_line = 'Le camembert est :)' -camembert.fill_mask(masked_line, topk=3) -# [('Le camembert est délicieux :)', 0.4909118115901947, ' délicieux'), -# ('Le camembert est excellent :)', 0.10556942224502563, ' excellent'), -# ('Le camembert est succulent :)', 0.03453322499990463, ' succulent')] -``` - -##### Extract features from Camembert: -```python -# Extract the last layer's features -line = "J'aime le camembert !" -tokens = camembert.encode(line) -last_layer_features = camembert.extract_features(tokens) -assert last_layer_features.size() == torch.Size([1, 10, 768]) - -# Extract all layer's features (layer 0 is the embedding layer) -all_layers = camembert.extract_features(tokens, return_all_hiddens=True) -assert len(all_layers) == 13 -assert torch.all(all_layers[-1] == last_layer_features) -``` - -## Citation -If you use our work, please cite: - -```bibtex -@inproceedings{martin2020camembert, - title={CamemBERT: a Tasty French Language Model}, - author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, - booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, - year={2020} -} -``` diff --git a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/misc.py b/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/misc.py deleted file mode 100644 index 1acfb7ea16904c07e362aeaae7337920d06fe5ca..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/misc.py +++ /dev/null @@ -1,283 +0,0 @@ -# python3.7 - -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Misc functions for customized operations. - -Please refer to https://github.com/NVlabs/stylegan3 -""" - -# pylint: disable=line-too-long -# pylint: disable=missing-class-docstring -# pylint: disable=missing-function-docstring -# pylint: disable=use-maxsplit-arg - -import re -import contextlib -import warnings -from easydict import EasyDict -import numpy as np -import torch - -#---------------------------------------------------------------------------- -# Cached construction of constant tensors. Avoids CPU=>GPU copy when the -# same constant is used multiple times. 
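# Illustrative usage (editorial sketch, not part of the original file): a
# module's forward pass could fetch a cached tensor with
#   offsets = constant([1.0, -1.0], dtype=x.dtype, device=x.device)
# so repeated calls reuse the same device tensor instead of re-uploading it.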
- -_constant_cache = dict() - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - -#---------------------------------------------------------------------------- -# Replace NaN/Inf with specified numerical values. - -try: - nan_to_num = torch.nan_to_num # 1.8.0a0 -except AttributeError: - def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin - assert isinstance(input, torch.Tensor) - if posinf is None: - posinf = torch.finfo(input.dtype).max - if neginf is None: - neginf = torch.finfo(input.dtype).min - assert nan == 0 - return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out) - -#---------------------------------------------------------------------------- -# Symbolic assert. - -try: - symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access -except AttributeError: - symbolic_assert = torch.Assert # 1.7.0 - -#---------------------------------------------------------------------------- -# Context manager to temporarily suppress known warnings in torch.jit.trace(). -# Note: Cannot use catch_warnings because of https://bugs.python.org/issue29672 - -@contextlib.contextmanager -def suppress_tracer_warnings(): - flt = ('ignore', None, torch.jit.TracerWarning, None, 0) - warnings.filters.insert(0, flt) - yield - warnings.filters.remove(flt) - -#---------------------------------------------------------------------------- -# Assert that the shape of a tensor matches the given list of integers. -# None indicates that the size of a dimension is allowed to vary. -# Performs symbolic assertion when used in torch.jit.trace(). - -def assert_shape(tensor, ref_shape): - if tensor.ndim != len(ref_shape): - raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}') - for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)): - if ref_size is None: - pass - elif isinstance(ref_size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}') - elif isinstance(size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}') - elif size != ref_size: - raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}') - -#---------------------------------------------------------------------------- -# Function decorator that calls torch.autograd.profiler.record_function(). 
- -def profiled_function(fn): - def decorator(*args, **kwargs): - with torch.autograd.profiler.record_function(fn.__name__): - return fn(*args, **kwargs) - decorator.__name__ = fn.__name__ - return decorator - -#---------------------------------------------------------------------------- -# Sampler for torch.utils.data.DataLoader that loops over the dataset -# indefinitely, shuffling items as it goes. - -class InfiniteSampler(torch.utils.data.Sampler): - def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5): - assert len(dataset) > 0 - assert num_replicas > 0 - assert 0 <= rank < num_replicas - assert 0 <= window_size <= 1 - super().__init__(dataset) - self.dataset = dataset - self.rank = rank - self.num_replicas = num_replicas - self.shuffle = shuffle - self.seed = seed - self.window_size = window_size - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = None - window = 0 - if self.shuffle: - rnd = np.random.RandomState(self.seed) - rnd.shuffle(order) - window = int(np.rint(order.size * self.window_size)) - - idx = 0 - while True: - i = idx % order.size - if idx % self.num_replicas == self.rank: - yield order[i] - if window >= 2: - j = (i - rnd.randint(window)) % order.size - order[i], order[j] = order[j], order[i] - idx += 1 - -#---------------------------------------------------------------------------- -# Utilities for operating with torch.nn.Module parameters and buffers. - -def params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.parameters()) + list(module.buffers()) - -def named_params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.named_parameters()) + list(module.named_buffers()) - -def copy_params_and_buffers(src_module, dst_module, require_all=False): - assert isinstance(src_module, torch.nn.Module) - assert isinstance(dst_module, torch.nn.Module) - src_tensors = dict(named_params_and_buffers(src_module)) - for name, tensor in named_params_and_buffers(dst_module): - assert (name in src_tensors) or (not require_all) - if name in src_tensors: - tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad) - -#---------------------------------------------------------------------------- -# Context manager for easily enabling/disabling DistributedDataParallel -# synchronization. - -@contextlib.contextmanager -def ddp_sync(module, sync): - assert isinstance(module, torch.nn.Module) - if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel): - yield - else: - with module.no_sync(): - yield - -#---------------------------------------------------------------------------- -# Check DistributedDataParallel consistency across processes. - -def check_ddp_consistency(module, ignore_regex=None): - assert isinstance(module, torch.nn.Module) - for name, tensor in named_params_and_buffers(module): - fullname = type(module).__name__ + '.' + name - if ignore_regex is not None and re.fullmatch(ignore_regex, fullname): - continue - tensor = tensor.detach() - if tensor.is_floating_point(): - tensor = nan_to_num(tensor) - other = tensor.clone() - torch.distributed.broadcast(tensor=other, src=0) - assert (tensor == other).all(), fullname - -#---------------------------------------------------------------------------- -# Print summary table of module hierarchy. 
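-# Usage sketch (assumes a toy module, not one from this codebase):
-#
-#   net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU())
-#   outputs = print_module_summary(net, [torch.zeros(1, 4)])
-#
-# This prints one row per submodule with parameter count, buffer count, output
-# shape, and dtype, then returns the module outputs.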
- -def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True): - assert isinstance(module, torch.nn.Module) - assert not isinstance(module, torch.jit.ScriptModule) - assert isinstance(inputs, (tuple, list)) - - # Register hooks. - entries = [] - nesting = [0] - def pre_hook(_mod, _inputs): - nesting[0] += 1 - def post_hook(mod, _inputs, outputs): - nesting[0] -= 1 - if nesting[0] <= max_nesting: - outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs] - outputs = [t for t in outputs if isinstance(t, torch.Tensor)] - entries.append(EasyDict(mod=mod, outputs=outputs)) - hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()] - hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()] - - # Run module. - outputs = module(*inputs) - for hook in hooks: - hook.remove() - - # Identify unique outputs, parameters, and buffers. - tensors_seen = set() - for e in entries: - e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen] - e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen] - e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen] - tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs} - - # Filter out redundant entries. - if skip_redundant: - entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)] - - # Construct table. - rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']] - rows += [['---'] * len(rows[0])] - param_total = 0 - buffer_total = 0 - submodule_names = {mod: name for name, mod in module.named_modules()} - for e in entries: - name = '' if e.mod is module else submodule_names[e.mod] - param_size = sum(t.numel() for t in e.unique_params) - buffer_size = sum(t.numel() for t in e.unique_buffers) - output_shapes = [str(list(t.shape)) for t in e.outputs] - output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs] - rows += [[ - name + (':0' if len(e.outputs) >= 2 else ''), - str(param_size) if param_size else '-', - str(buffer_size) if buffer_size else '-', - (output_shapes + ['-'])[0], - (output_dtypes + ['-'])[0], - ]] - for idx in range(1, len(e.outputs)): - rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]] - param_total += param_size - buffer_total += buffer_size - rows += [['---'] * len(rows[0])] - rows += [['Total', str(param_total), str(buffer_total), '-', '-']] - - # Print table. 
- widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths))) - print() - return outputs - -#---------------------------------------------------------------------------- - -# pylint: enable=line-too-long -# pylint: enable=missing-class-docstring -# pylint: enable=missing-function-docstring -# pylint: enable=use-maxsplit-arg diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/metrics.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/metrics.py deleted file mode 100644 index 65ea463c0dab647ea81ec0fa95441dddfd631e33..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/metrics.py +++ /dev/null @@ -1,363 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Model validation metrics -""" - -import math -import warnings -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np -import torch - -from utils import TryExcept, threaded - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] - return (x[:, :4] * w).sum(1) - - -def smooth(y, f=0.05): - # Box filter of fraction f - nf = round(len(y) * f * 2) // 2 + 1 # number of filter elements (must be odd) - p = np.ones(nf // 2) # ones padding - yp = np.concatenate((p * y[0], y, p * y[-1]), 0) # y padded - return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed - - -def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=(), eps=1e-16, prefix=""): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. - # Arguments - tp: True positives (nparray, nx1 or nx10). - conf: Objectness value from 0-1 (nparray). - pred_cls: Predicted object classes (nparray). - target_cls: True object classes (nparray). - plot: Plot precision-recall curve at mAP@0.5 - save_dir: Plot save directory - # Returns - The average precision as computed in py-faster-rcnn. 
- """ - - # Sort by objectness - i = np.argsort(-conf) - tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] - - # Find unique classes - unique_classes, nt = np.unique(target_cls, return_counts=True) - nc = unique_classes.shape[0] # number of classes, number of detections - - # Create Precision-Recall curve and compute AP for each class - px, py = np.linspace(0, 1, 1000), [] # for plotting - ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000)) - for ci, c in enumerate(unique_classes): - i = pred_cls == c - n_l = nt[ci] # number of labels - n_p = i.sum() # number of predictions - if n_p == 0 or n_l == 0: - continue - - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) - - # Recall - recall = tpc / (n_l + eps) # recall curve - r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases - - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) - if plot and j == 0: - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 - - # Compute F1 (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + eps) - names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data - names = dict(enumerate(names)) # to dict - if plot: - plot_pr_curve(px, py, ap, Path(save_dir) / f'{prefix}PR_curve.png', names) - plot_mc_curve(px, f1, Path(save_dir) / f'{prefix}F1_curve.png', names, ylabel='F1') - plot_mc_curve(px, p, Path(save_dir) / f'{prefix}P_curve.png', names, ylabel='Precision') - plot_mc_curve(px, r, Path(save_dir) / f'{prefix}R_curve.png', names, ylabel='Recall') - - i = smooth(f1.mean(0), 0.1).argmax() # max F1 index - p, r, f1 = p[:, i], r[:, i], f1[:, i] - tp = (r * nt).round() # true positives - fp = (tp / (p + eps) - tp).round() # false positives - return tp, fp, p, r, f1, ap, unique_classes.astype(int) - - -def compute_ap(recall, precision): - """ Compute the average precision, given the recall and precision curves - # Arguments - recall: The recall curve (list) - precision: The precision curve (list) - # Returns - Average precision, precision curve, recall curve - """ - - # Append sentinel values to beginning and end - mrec = np.concatenate(([0.0], recall, [1.0])) - mpre = np.concatenate(([1.0], precision, [0.0])) - - # Compute the precision envelope - mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) - - # Integrate area under curve - method = 'interp' # methods: 'continuous', 'interp' - if method == 'interp': - x = np.linspace(0, 1, 101) # 101-point interp (COCO) - ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate - else: # 'continuous' - i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve - - return ap, mpre, mrec - - -class ConfusionMatrix: - # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix - def __init__(self, nc, conf=0.25, iou_thres=0.45): - self.matrix = np.zeros((nc + 1, nc + 1)) - self.nc = nc # number of classes - self.conf = conf - self.iou_thres = iou_thres - - def process_batch(self, detections, labels): - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 
        Arguments:
-            detections (Array[N, 6]), x1, y1, x2, y2, conf, class
-            labels (Array[M, 5]), class, x1, y1, x2, y2
-        Returns:
-            None, updates confusion matrix accordingly
-        """
-        if detections is None:
-            gt_classes = labels.int()
-            for gc in gt_classes:
-                self.matrix[self.nc, gc] += 1  # background FN
-            return
-
-        detections = detections[detections[:, 4] > self.conf]
-        gt_classes = labels[:, 0].int()
-        detection_classes = detections[:, 5].int()
-        iou = box_iou(labels[:, 1:], detections[:, :4])
-
-        x = torch.where(iou > self.iou_thres)
-        if x[0].shape[0]:
-            matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
-            if x[0].shape[0] > 1:
-                matches = matches[matches[:, 2].argsort()[::-1]]
-                matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
-                matches = matches[matches[:, 2].argsort()[::-1]]
-                matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
-        else:
-            matches = np.zeros((0, 3))
-
-        n = matches.shape[0] > 0
-        m0, m1, _ = matches.transpose().astype(int)
-        for i, gc in enumerate(gt_classes):
-            j = m0 == i
-            if n and sum(j) == 1:
-                self.matrix[detection_classes[m1[j]], gc] += 1  # correct
-            else:
-                self.matrix[self.nc, gc] += 1  # true background
-
-        if n:
-            for i, dc in enumerate(detection_classes):
-                if not any(m1 == i):
-                    self.matrix[dc, self.nc] += 1  # predicted background
-
-    def matrix(self):
-        return self.matrix
-
-    def tp_fp(self):
-        tp = self.matrix.diagonal()  # true positives
-        fp = self.matrix.sum(1) - tp  # false positives
-        # fn = self.matrix.sum(0) - tp  # false negatives (missed detections)
-        return tp[:-1], fp[:-1]  # remove background class
-
-    @TryExcept('WARNING ⚠️ ConfusionMatrix plot failure')
-    def plot(self, normalize=True, save_dir='', names=()):
-        import seaborn as sn
-
-        array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-9) if normalize else 1)  # normalize columns
-        array[array < 0.005] = np.nan  # don't annotate (would appear as 0.00)
-
-        fig, ax = plt.subplots(1, 1, figsize=(12, 9), tight_layout=True)
-        nc, nn = self.nc, len(names)  # number of classes, names
-        sn.set(font_scale=1.0 if nc < 50 else 0.8)  # for label size
-        labels = (0 < nn < 99) and (nn == nc)  # apply names to ticklabels
-        ticklabels = (names + ['background']) if labels else "auto"
-        with warnings.catch_warnings():
-            warnings.simplefilter('ignore')  # suppress empty matrix RuntimeWarning: All-NaN slice encountered
-            sn.heatmap(array,
-                       ax=ax,
-                       annot=nc < 30,
-                       annot_kws={
-                           "size": 8},
-                       cmap='Blues',
-                       fmt='.2f',
-                       square=True,
-                       vmin=0.0,
-                       xticklabels=ticklabels,
-                       yticklabels=ticklabels).set_facecolor((1, 1, 1))
-        ax.set_xlabel('True')
-        ax.set_ylabel('Predicted')
-        ax.set_title('Confusion Matrix')
-        fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
-        plt.close(fig)
-
-    def print(self):
-        for i in range(self.nc + 1):
-            print(' '.join(map(str, self.matrix[i])))
-
-
-def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
-    # Returns Intersection over Union (IoU) of box1(1,4) to box2(n,4)
-
-    # Get the coordinates of bounding boxes
-    if xywh:  # transform from xywh to xyxy
-        (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, -1), box2.chunk(4, -1)
-        w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2
-        b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_
-        b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_
-    else:  # x1, y1, x2, y2 = box1
-        b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, -1)
-        b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, -1)
-        w1, h1 = b1_x2 - b1_x1, b1_y2 - 
b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - union = w1 * h1 + w2 * h2 - inter + eps - - # IoU - iou = inter / union - if CIoU or DIoU or GIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2 - if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha = v / (v - iou + (1 + eps)) - return iou - (rho2 / c2 + v * alpha) # CIoU - return iou - rho2 / c2 # DIoU - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU https://arxiv.org/pdf/1902.09630.pdf - return iou # IoU - - -def box_iou(box1, box2, eps=1e-7): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - (a1, a2), (b1, b2) = box1.unsqueeze(1).chunk(2, 2), box2.unsqueeze(0).chunk(2, 2) - inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp(0).prod(2) - - # IoU = inter / (area1 + area2 - inter) - return inter / ((a2 - a1).prod(2) + (b2 - b1).prod(2) - inter + eps) - - -def bbox_ioa(box1, box2, eps=1e-7): - """ Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2 - box1: np.array of shape(4) - box2: np.array of shape(nx4) - returns: np.array of shape(n) - """ - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1 - b2_x1, b2_y1, b2_x2, b2_y2 = box2.T - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps - - # Intersection over box2 area - return inter_area / box2_area - - -def wh_iou(wh1, wh2, eps=1e-7): - # Returns the nxm IoU matrix. 
wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter + eps) # iou = inter / (area1 + area2 - inter) - - -# Plots ---------------------------------------------------------------------------------------------------------------- - - -@threaded -def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()): - # Precision-recall curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - py = np.stack(py, axis=1) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py.T): - ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision) - else: - ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision) - - ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) - ax.set_xlabel('Recall') - ax.set_ylabel('Precision') - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - ax.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - ax.set_title('Precision-Recall Curve') - fig.savefig(save_dir, dpi=250) - plt.close(fig) - - -@threaded -def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confidence', ylabel='Metric'): - # Metric-confidence curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py): - ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric) - else: - ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric) - - y = smooth(py.mean(0), 0.05) - ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}') - ax.set_xlabel(xlabel) - ax.set_ylabel(ylabel) - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - ax.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - ax.set_title(f'{ylabel}-Confidence Curve') - fig.savefig(save_dir, dpi=250) - plt.close(fig) diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/onnx/onnx_export_48k.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/onnx/onnx_export_48k.py deleted file mode 100644 index 9a046353dc25b658684fa76bdf8b4f21d1a77c98..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/onnx/onnx_export_48k.py +++ /dev/null @@ -1,73 +0,0 @@ -import argparse -import time -import numpy as np -import onnx -from onnxsim import simplify -import onnxruntime as ort -import onnxoptimizer -import torch -from model_onnx_48k import SynthesizerTrn -import utils -from hubert import hubert_model_onnx - -def main(HubertExport,NetExport): - - path = "NyaruTaffy" - - if(HubertExport): - device = torch.device("cuda") - hubert_soft = hubert_model_onnx.hubert_soft("hubert/model.pt") - test_input = torch.rand(1, 1, 16000) - input_names = ["source"] - output_names = ["embed"] - torch.onnx.export(hubert_soft.to(device), - test_input.to(device), - "hubert3.0.onnx", - dynamic_axes={ - "source": { - 2: "sample_length" - } - }, - verbose=False, - opset_version=13, - input_names=input_names, - output_names=output_names) - if(NetExport): - device = torch.device("cuda") - hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - SVCVITS = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", SVCVITS, None) - _ = SVCVITS.eval().to(device) - for i in SVCVITS.parameters(): - 
i.requires_grad = False - test_hidden_unit = torch.rand(1, 50, 256) - test_lengths = torch.LongTensor([50]) - test_pitch = torch.rand(1, 50) - test_sid = torch.LongTensor([0]) - input_names = ["hidden_unit", "lengths", "pitch", "sid"] - output_names = ["audio", ] - SVCVITS.eval() - torch.onnx.export(SVCVITS, - ( - test_hidden_unit.to(device), - test_lengths.to(device), - test_pitch.to(device), - test_sid.to(device) - ), - f"checkpoints/{path}/model.onnx", - dynamic_axes={ - "hidden_unit": [0, 1], - "pitch": [1] - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names) - - -if __name__ == '__main__': - main(False,True) diff --git a/spaces/Illumotion/Koboldcpp/include/CL/Utils/File.h b/spaces/Illumotion/Koboldcpp/include/CL/Utils/File.h deleted file mode 100644 index 62c8e95a764ff1e6b993133193623e97d782699f..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/CL/Utils/File.h +++ /dev/null @@ -1,42 +0,0 @@ -#pragma once - -// OpenCL Utils includes -#include "OpenCLUtils_Export.h" - -// OpenCL includes -#include - -// read all the text file contents securely in ANSI C89 -// return pointer to C-string with file contents -// can handle streams with no known size and no support for fseek -// based on https://stackoverflow.com/questions/14002954/ by Nominal Animal -UTILS_EXPORT -char* cl_util_read_text_file(const char* const filename, size_t* const length, - cl_int* const error); - -// read all the binary file contents securely in ANSI C89 -// return pointer to file contents -// can handle streams with no known size and no support for fseek -// based on https://stackoverflow.com/questions/14002954/ by Nominal Animal -UTILS_EXPORT -unsigned char* cl_util_read_binary_file(const char* const filename, - size_t* const length, - cl_int* const error); - -// write binaries of OpenCL compiled program -// binaries are written as separate files for each device -// with file name "(program_file_name)_(name of device).bin" -// based on variant of Logan -// http://logan.tw/posts/2014/11/22/pre-compile-the-opencl-kernel-program-part-2/ -UTILS_EXPORT -cl_int cl_util_write_binaries(const cl_program program, - const char* const program_file_name); - -// read binaries of OpenCL compiled program -// from files of file names "(program_file_name)_(name of device).bin" -UTILS_EXPORT -cl_program cl_util_read_binaries(const cl_context context, - const cl_device_id* const devices, - const cl_uint num_devices, - const char* const program_file_name, - cl_int* const error); diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.js b/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.js deleted file mode 100644 index f013209931218fd054979e290706f1945de76856..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.js +++ /dev/null @@ -1,502 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var chatbotWrap = null; -var apSwitch = null; -var empty_botton = null; -var messageBotDivs = null; -var loginUserForm = null; -var logginUser = null; - -var userLogged = false; -var usernameGotten = false; -var historyLoaded = false; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== 
window.top); -var language = navigator.language.slice(0,2); - -var forView_i18n = { - 'zh': "仅供查看", - 'en': "For viewing only", - 'ja': "閲覧専用", - 'fr': "Pour consultation seulement", - 'es': "Solo para visualización", -}; - -// gradio 页面加载好了么??? 我能动你的元素了么?? -function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - loginUserForm = document.querySelector(".gradio-container > .main > .wrap > .panel > .form") - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - chatbotWrap = document.querySelector('#chuanhu_chatbot > .wrap'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - empty_botton = document.getElementById("empty_btn") - - if (loginUserForm) { - localStorage.setItem("userLogged", true); - userLogged = true; - } - - if (gradioContainer && apSwitch) { // gradioCainter 加载出来了没? - adjustDarkMode(); - } - if (user_input_tb) { // user_input_tb 加载出来了没? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // userInfoDiv 和 appTitleDiv 加载出来了没? - if (!usernameGotten) { - getUserInfo(); - } - setTimeout(showOrHideUserInfo(), 2000); - } - if (chatbot) { // chatbot 加载出来了没? - setChatbotHeight(); - } - if (chatbotWrap) { - if (!historyLoaded) { - loadHistoryHtml(); - } - setChatbotScroll(); - } - if (empty_botton) { - emptyHistory(); - } - } - } -} - -function webLocale() { - console.log("webLocale", language); - if (forView_i18n.hasOwnProperty(language)) { - var forView = forView_i18n[language]; - var forViewStyle = document.createElement('style'); - forViewStyle.innerHTML = '.wrap>.history-message>:last-child::after { content: "' + forView + '"!important; }'; - document.head.appendChild(forViewStyle); - // console.log("added forViewStyle", forView); - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // 停止监听 - // 在 textarea 上监听 keydown 事件 - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // 判断按下的是否为方向键 - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // 如果按下的是方向键,且输入框中有内容,且历史记录中没有该内容,则不执行操作 - if (value && key_down_history.indexOf(value) === -1) - return; - // 对于需要响应的动作,阻止默认行为。 - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // 如果历史记录为空,直接将当前选中的记录重置 - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -var username = null; -function getUserInfo() { - if 
(usernameGotten) { - return; - } - userLogged = localStorage.getItem('userLogged'); - if (userLogged) { - username = userInfoDiv.innerText; - if (username) { - if (username.includes("getting user info…")) { - setTimeout(getUserInfo, 500); - return; - } else if (username === " ") { - localStorage.removeItem("username"); - localStorage.removeItem("userLogged") - userLogged = false; - usernameGotten = true; - return; - } else { - username = username.match(/User:\s*(.*)/)[1] || username; - localStorage.setItem("username", username); - usernameGotten = true; - clearHistoryHtml(); - } - } - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // Delay 1 second to hide user info - }; - - // Hide user info after 2 second - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - document.body.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - document.body.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)"); - - // 根据当前颜色模式设置初始状态 - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // 监听颜色模式变化 - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? 
statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} -function setChatbotScroll() { - var scrollHeight = chatbotWrap.scrollHeight; - chatbotWrap.scrollTo(0,scrollHeight) -} -var rangeInputs = null; -var numberInputs = null; -function setSlider() { - rangeInputs = document.querySelectorAll('input[type="range"]'); - numberInputs = document.querySelectorAll('input[type="number"]') - setSliderRange(); - rangeInputs.forEach(rangeInput => { - rangeInput.addEventListener('input', setSliderRange); - }); - numberInputs.forEach(numberInput => { - numberInput.addEventListener('input', setSliderRange); - }) -} -function setSliderRange() { - var range = document.querySelectorAll('input[type="range"]'); - range.forEach(range => { - range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%'; - }); -} - -function addChuanhuButton(botElement) { - var rawMessage = null; - var mdMessage = null; - rawMessage = botElement.querySelector('.raw-message'); - mdMessage = botElement.querySelector('.md-message'); - if (!rawMessage) { - var buttons = botElement.querySelectorAll('button.chuanhu-btn'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - return; - } - var copyButton = null; - var toggleButton = null; - copyButton = botElement.querySelector('button.copy-bot-btn'); - toggleButton = botElement.querySelector('button.toggle-md-btn'); - if (copyButton) copyButton.remove(); - if (toggleButton) toggleButton.remove(); - - // Copy bot button - var copyButton = document.createElement('button'); - copyButton.classList.add('chuanhu-btn'); - copyButton.classList.add('copy-bot-btn'); - copyButton.setAttribute('aria-label', 'Copy'); - copyButton.innerHTML = copyIcon; - copyButton.addEventListener('click', () => { - const textToCopy = rawMessage.innerText; - navigator.clipboard - .writeText(textToCopy) - .then(() => { - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - }) - .catch(() => { - console.error("copy failed"); - }); - }); - botElement.appendChild(copyButton); - - // Toggle button - var toggleButton = document.createElement('button'); - toggleButton.classList.add('chuanhu-btn'); - toggleButton.classList.add('toggle-md-btn'); - toggleButton.setAttribute('aria-label', 'Toggle'); - var renderMarkdown = mdMessage.classList.contains('hideM'); - toggleButton.innerHTML = renderMarkdown ? 
mdIcon : rawIcon; - toggleButton.addEventListener('click', () => { - renderMarkdown = mdMessage.classList.contains('hideM'); - if (renderMarkdown){ - renderMarkdownText(botElement); - toggleButton.innerHTML=rawIcon; - } else { - removeMarkdownText(botElement); - toggleButton.innerHTML=mdIcon; - } - }); - botElement.insertBefore(toggleButton, copyButton); -} - -function renderMarkdownText(message) { - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.remove('hideM'); - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.add('hideM'); -} -function removeMarkdownText(message) { - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.remove('hideM'); - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.add('hideM'); -} - -let timeoutId; -let isThrottled = false; -var mmutation -// 监听所有元素中 bot message 的变化,为 bot 消息添加复制按钮。 -var mObserver = new MutationObserver(function (mutationsList) { - for (mmutation of mutationsList) { - if (mmutation.type === 'childList') { - for (var node of mmutation.addedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - } - if (node.tagName === 'INPUT' && node.getAttribute('type') === 'range') { - setSlider(); - } - } - for (var node of mmutation.removedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - } - } - } else if (mmutation.type === 'attributes') { - if (mmutation.target.nodeType === 1 && mmutation.target.classList.contains('message') && mmutation.target.getAttribute('data-testid') === 'bot') { - if (isThrottled) break; // 为了防止重复不断疯狂渲染,加上等待_(:з」∠)_ - isThrottled = true; - clearTimeout(timeoutId); - timeoutId = setTimeout(() => { - isThrottled = false; - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - saveHistoryHtml(); - }, 500); - } - } - } -}); -mObserver.observe(document.documentElement, { attributes: true, childList: true, subtree: true }); - -var loadhistorytime = 0; // for debugging -function saveHistoryHtml() { - var historyHtml = document.querySelector('#chuanhu_chatbot > .wrap'); - localStorage.setItem('chatHistory', historyHtml.innerHTML); - // console.log("History Saved") - historyLoaded = false; -} -function loadHistoryHtml() { - var historyHtml = localStorage.getItem('chatHistory'); - if (!historyHtml) { - historyLoaded = true; - return; // no history, do nothing - } - userLogged = localStorage.getItem('userLogged'); - if (userLogged){ - historyLoaded = true; - return; // logged in, do nothing - } - if (!historyLoaded) { - var tempDiv = document.createElement('div'); - tempDiv.innerHTML = historyHtml; - var buttons = tempDiv.querySelectorAll('button.chuanhu-btn'); - var gradioCopyButtons = tempDiv.querySelectorAll('button.copy_code_button'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - for (var i = 0; i < gradioCopyButtons.length; i++) { - gradioCopyButtons[i].parentNode.removeChild(gradioCopyButtons[i]); - } - var fakeHistory = document.createElement('div'); - fakeHistory.classList.add('history-message'); - 
fakeHistory.innerHTML = tempDiv.innerHTML; - webLocale(); - chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - // var fakeHistory = document.createElement('div'); - // fakeHistory.classList.add('history-message'); - // fakeHistory.innerHTML = historyHtml; - // chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - historyLoaded = true; - console.log("History Loaded"); - loadhistorytime += 1; // for debugging - } else { - historyLoaded = false; - } -} -function clearHistoryHtml() { - localStorage.removeItem("chatHistory"); - historyMessages = chatbotWrap.querySelector('.history-message'); - if (historyMessages) { - chatbotWrap.removeChild(historyMessages); - console.log("History Cleared"); - } -} -function emptyHistory() { - empty_botton.addEventListener("click", function () { - clearHistoryHtml(); - }); -} - -// 监视页面内部 DOM 变动 -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// 监视页面变化 -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); - historyLoaded = false; -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); - -// button svg code -const copyIcon = ''; -const copiedIcon = ''; -const mdIcon = ''; -const rawIcon = ''; diff --git a/spaces/KPCGD/bingo/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/KPCGD/bingo/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/attention.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/attention.py deleted file mode 100644 index b497cb97ab77834fcf0ea3a33fcc339f94f08533..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/attention.py +++ /dev/null @@ -1,398 +0,0 @@ -# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion). -# See more details in LICENSE. 
- -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat - -from ldm.modules.diffusionmodules.util import checkpoint - - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels, default_eps): - if default_eps: - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, affine=True) - else: - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3) - k = torch.softmax(k.float(), dim=-1).type(k.dtype) - # k = k.softmax(dim=-1) - context = torch.einsum('bhdn,bhen->bhde', k, v) - out = torch.einsum('bhde,bhdn->bhen', context, q) - out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.softmax(w_.float(), dim=2).type(w_.dtype) - # w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = 
torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - self.prompt_to_prompt = False - - def forward(self, x, context=None, mask=None): - is_self_attn = context is None - - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - if self.prompt_to_prompt and is_self_attn: - # Unlike the original Prompt-to-Prompt which uses cross-attention layers, we copy attention maps for self-attention layers. - # There must be 4 elements in the batch: {conditional, unconditional} x {prompt 1, prompt 2} - assert x.size(0) == 4 - sims = sim.chunk(4) - sim = torch.cat((sims[0], sims[0], sims[2], sims[2])) - - if exists(mask): - mask = rearrange(mask, 'b ... -> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - # attn = sim.softmax(dim=-1) - attn = torch.softmax(sim.float(), dim=-1).type(sim.dtype) - - out = einsum('b i j, b j d -> b i d', attn, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -# class BasicTransformerBlock(nn.Module): -# def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True): -# super().__init__() -# self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout) # is a self-attention -# self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) -# self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim, -# heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none -# self.norm1 = nn.LayerNorm(dim) -# self.norm2 = nn.LayerNorm(dim) -# self.norm3 = nn.LayerNorm(dim) -# self.checkpoint = checkpoint - -# def forward(self, x, context=None): -# # return checkpoint(self._forward, (x, context), self.checkpoint) -# return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - -# def _forward(self, x, context=None): -# x = x.type(self.norm1.weight.dtype) -# if context is not None: -# context = context.type(self.norm1.weight.dtype) -# x = self.attn1(self.norm1(x)) + x -# x = self.attn2(self.norm2(x), context=context) + x -# x = self.ff(self.norm3(x)) + x -# return x - - -class BasicTransformerBlock(nn.Module): - ATTENTION_MODES = { - "softmax": CrossAttention, # vanilla attention - } - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, - disable_self_attn=False): - super().__init__() - attn_mode = "softmax" - assert attn_mode in self.ATTENTION_MODES - attn_cls = self.ATTENTION_MODES[attn_mode] - self.disable_self_attn = disable_self_attn - self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, 
dropout=dropout, - context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = x.type(self.norm1.weight.dtype) - if context is not None: - context = context.type(self.norm1.weight.dtype) - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - # x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x - # x = self.attn2(self.norm2(x), context=context) + x - # x = self.ff(self.norm3(x)) + x - # return x - - -# class SpatialTransformer(nn.Module): -# """ -# Transformer block for image-like data. -# First, project the input (aka embedding) -# and reshape to b, t, d. -# Then apply standard transformer action. -# Finally, reshape to image -# """ -# def __init__(self, in_channels, n_heads, d_head, default_eps, force_type_convert, -# depth=1, dropout=0., context_dim=None): -# super().__init__() -# self.in_channels = in_channels -# inner_dim = n_heads * d_head -# self.force_type_convert = force_type_convert -# self.norm = Normalize(in_channels, default_eps) - -# self.proj_in = nn.Conv2d(in_channels, -# inner_dim, -# kernel_size=1, -# stride=1, -# padding=0) - -# self.transformer_blocks = nn.ModuleList( -# [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) -# for d in range(depth)] -# ) - -# self.proj_out = zero_module(nn.Conv2d(inner_dim, -# in_channels, -# kernel_size=1, -# stride=1, -# padding=0)) - -# def forward(self, x, context=None): -# # note: if no context is given, cross-attention defaults to self-attention -# b, c, h, w = x.shape -# x_in = x -# if self.force_type_convert: -# x = self.norm.float()(x.float()) -# x = x.half() -# else: -# x = self.norm(x) -# x = self.proj_in(x) -# x = rearrange(x, 'b c h w -> b (h w) c') -# for block in self.transformer_blocks: -# x = block(x, context=context) -# x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w) -# x = self.proj_out(x) -# return x + x_in - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. 
- Finally, reshape to image - NEW: use_linear for more efficiency instead of the 1x1 convs - """ - def __init__(self, in_channels, n_heads, d_head, default_eps, force_type_convert, - depth=1, dropout=0., context_dim=None, - disable_self_attn=False, use_linear=False, - use_checkpoint=True): - super().__init__() - if exists(context_dim) and not isinstance(context_dim, list): - context_dim = [context_dim] - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.force_type_convert = force_type_convert - self.norm = Normalize(in_channels, default_eps) - if not use_linear: - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d], - disable_self_attn=disable_self_attn, checkpoint=use_checkpoint) - for d in range(depth)] - ) - if not use_linear: - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - else: - self.proj_out = zero_module(nn.Linear(in_channels, inner_dim)) - self.use_linear = use_linear - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - if not isinstance(context, list): - context = [context] - b, c, h, w = x.shape - x_in = x - if self.force_type_convert: - x = self.norm.float()(x.float()) - # if torch.cuda.is_available(): - # x = x.half() - else: - x = self.norm(x) - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c').contiguous() - if self.use_linear: - x = self.proj_in(x) - for i, block in enumerate(self.transformer_blocks): - x = block(x, context=context[i]) - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous() - if not self.use_linear: - x = self.proj_out(x) - return x + x_in \ No newline at end of file diff --git a/spaces/LHL3341/Hand-Write-Number-Recognization/app.py b/spaces/LHL3341/Hand-Write-Number-Recognization/app.py deleted file mode 100644 index 0a6826e53dee022d1c085af2a12df13e60cd2030..0000000000000000000000000000000000000000 --- a/spaces/LHL3341/Hand-Write-Number-Recognization/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import streamlit as st -import numpy as np -import matplotlib.pyplot as plt -import pandas as pd -st.markdown("# Streamlit示例") -st.markdown(""" - - 这是 - - 一个 - - 无序列表 - """) - -# 展示pandas数据框 -st.dataframe(pd.DataFrame([[1, 2], [3, 4]], columns=["a", "b"])) - -# 展示matplotlib绘图 -arr = np.random.normal(1, 1, size=100) -plt.hist(arr, bins=20) -plt.title("matplotlib plot") -st.pyplot() - -# 加入交互控件,如输入框 -number = st.number_input("Insert a number", 123) -st.write("输入的数字是:", number) \ No newline at end of file diff --git a/spaces/Laihiujin/OneFormer/oneformer/data/build.py b/spaces/Laihiujin/OneFormer/oneformer/data/build.py deleted file mode 100644 index fb775313605cf24ed2385681fa2c43d5068b5a4a..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/data/build.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from typing import Any, Callable, Dict, List, Optional, Union -import torch.utils.data as torchdata - -from detectron2.config import configurable - - -from detectron2.data.common import DatasetFromList, MapDataset -from detectron2.data.dataset_mapper import DatasetMapper -from detectron2.data.samplers import ( - InferenceSampler, -) -from detectron2.data.build import ( - get_detection_dataset_dicts, - trivial_batch_collator -) -""" -This file contains the default logic to build a dataloader for training or testing. -""" - -__all__ = [ - "build_detection_test_loader", -] - - -def _test_loader_from_config(cfg, dataset_name, mapper=None): - """ - Uses the given `dataset_name` argument (instead of the names in cfg), because the - standard practice is to evaluate each test set individually (not combining them). - """ - if isinstance(dataset_name, str): - dataset_name = [dataset_name] - - dataset = get_detection_dataset_dicts( - dataset_name, - filter_empty=False, - proposal_files=[ - cfg.DATASETS.PROPOSAL_FILES_TEST[list(cfg.DATASETS.TEST).index(x)] for x in dataset_name - ] - if cfg.MODEL.LOAD_PROPOSALS - else None, - ) - if mapper is None: - mapper = DatasetMapper(cfg, False) - return { - "dataset": dataset, - "mapper": mapper, - "num_workers": cfg.DATALOADER.NUM_WORKERS, - "sampler": InferenceSampler(len(dataset)) - if not isinstance(dataset, torchdata.IterableDataset) - else None, - } - - -@configurable(from_config=_test_loader_from_config) -def build_detection_test_loader( - dataset: Union[List[Any], torchdata.Dataset], - *, - mapper: Callable[[Dict[str, Any]], Any], - sampler: Optional[torchdata.Sampler] = None, - batch_size: int = 1, - num_workers: int = 0, - collate_fn: Optional[Callable[[List[Any]], Any]] = None, -) -> torchdata.DataLoader: - """ - Similar to `build_detection_train_loader`, with default batch size = 1, - and sampler = :class:`InferenceSampler`. This sampler coordinates all workers - to produce the exact set of all samples. - - Args: - dataset: a list of dataset dicts, - or a pytorch dataset (either map-style or iterable). They can be obtained - by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`. - mapper: a callable which takes a sample (dict) from dataset - and returns the format to be consumed by the model. - When using cfg, the default choice is ``DatasetMapper(cfg, is_train=False)``. - sampler: a sampler that produces - indices to be applied on ``dataset``. Default to :class:`InferenceSampler`, - which splits the dataset across all workers. Sampler must be None - if `dataset` is iterable. - batch_size: the batch size of the data loader to be created. - Default to 1 image per worker since this is the standard when reporting - inference time in papers. - num_workers: number of parallel data loading workers - collate_fn: same as the argument of `torch.utils.data.DataLoader`. - Defaults to do no collation and return a list of data. - - Returns: - DataLoader: a torch DataLoader, that loads the given detection - dataset, with test-time transformation and batching. 
- - Examples: - :: - data_loader = build_detection_test_loader( - DatasetRegistry.get("my_test"), - mapper=DatasetMapper(...)) - - # or, instantiate with a CfgNode: - data_loader = build_detection_test_loader(cfg, "my_test") - """ - if isinstance(dataset, list): - dataset = DatasetFromList(dataset, copy=False) - if mapper is not None: - dataset = MapDataset(dataset, mapper) - if isinstance(dataset, torchdata.IterableDataset): - assert sampler is None, "sampler must be None if dataset is IterableDataset" - else: - if sampler is None: - sampler = InferenceSampler(len(dataset)) - return torchdata.DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - drop_last=False, - num_workers=num_workers, - collate_fn=trivial_batch_collator if collate_fn is None else collate_fn, - ) \ No newline at end of file diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v0.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v0.py deleted file mode 100644 index c3d159a2602621f3a7cbc293c64309c3f09749f5..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v0.py +++ /dev/null @@ -1,786 +0,0 @@ -import os, sys, traceback, re - -import json - -now_dir = os.getcwd() -sys.path.append(now_dir) -from assets.configs.config import Config - -Config = Config() -import PySimpleGUI as sg -import sounddevice as sd -import noisereduce as nr -import numpy as np -from fairseq import checkpoint_utils -import librosa, torch, pyworld, faiss, time, threading -import torch.nn.functional as F -import torchaudio.transforms as tat -import scipy.signal as signal -import torchcrepe - -# import matplotlib.pyplot as plt -from lib.infer.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from assets.i18n.i18n import I18nAuto - -i18n = I18nAuto() -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -current_dir = os.getcwd() - - -class RVC: - def __init__( - self, key, f0_method, hubert_path, pth_path, index_path, npy_path, index_rate - ) -> None: - """ - 初始化 - """ - try: - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.f0_method = f0_method - self.sr = 16000 - self.window = 160 - - # Get Torch Device - if torch.cuda.is_available(): - self.torch_device = torch.device( - f"cuda:{0 % torch.cuda.device_count()}" - ) - elif torch.backends.mps.is_available(): - self.torch_device = torch.device("mps") - else: - self.torch_device = torch.device("cpu") - - if index_rate != 0: - self.index = faiss.read_index(index_path) - # self.big_npy = np.load(npy_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - print("index search enabled") - self.index_rate = index_rate - model_path = hubert_path - print("load model(s) from {}".format(model_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", - ) - self.model = models[0] - self.model = self.model.to(device) - if Config.is_half: - self.model = self.model.half() - else: - self.model = self.model.float() - self.model.eval() - cpt = torch.load(pth_path, map_location="cpu") - self.tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = cpt.get("f0", 1) - self.version = cpt.get("version", 
"v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - print(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - if Config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - except: - print(traceback.format_exc()) - - def get_regular_crepe_computation(self, x, f0_min, f0_max, model="full"): - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.torch_device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - def get_harvest_computation(self, x, f0_min, f0_max): - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - return f0 - - def get_f0(self, x, f0_up_key, inp_f0=None): - # Calculate Padding and f0 details here - p_len = x.shape[0] // 512 # For Now This probs doesn't work - x_pad = 1 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = 0 - # Here, check f0_methods and get their computations - if self.f0_method == "harvest": - f0 = self.get_harvest_computation(x, f0_min, f0_max) - elif self.f0_method == "reg-crepe": - f0 = self.get_regular_crepe_computation(x, f0_min, f0_max) - elif self.f0_method == "reg-crepe-tiny": - f0 = self.get_regular_crepe_computation(x, f0_min, f0_max, "tiny") - - # Calculate f0_course and f0_bak here - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def infer(self, feats: torch.Tensor) -> np.ndarray: - """ - 推理函数 - """ - audio = feats.clone().cpu().numpy() - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - if Config.is_half: - feats = feats.half() - else: - feats = feats.float() - inputs = { - "source": feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9 if self.version == "v1" else 12, - } - torch.cuda.synchronize() - with torch.no_grad(): - logits = 
self.model.extract_features(**inputs) - feats = ( - self.model.final_proj(logits[0]) if self.version == "v1" else logits[0] - ) - - ####索引优化 - try: - if ( - hasattr(self, "index") - and hasattr(self, "big_npy") - and self.index_rate != 0 - ): - npy = feats[0].cpu().numpy().astype("float32") - score, ix = self.index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - if Config.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate - + (1 - self.index_rate) * feats - ) - else: - print("index search FAIL or disabled") - except: - traceback.print_exc() - print("index search FAIL") - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - torch.cuda.synchronize() - print(feats.shape) - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(audio, self.f0_up_key) - p_len = min(feats.shape[1], 13000, pitch.shape[0]) # 太大了爆显存 - else: - pitch, pitchf = None, None - p_len = min(feats.shape[1], 13000) # 太大了爆显存 - torch.cuda.synchronize() - # print(feats.shape,pitch.shape) - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - p_len = torch.LongTensor([p_len]).to(device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(device) - with torch.no_grad(): - if self.if_f0 == 1: - infered_audio = ( - self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float() - ) - torch.cuda.synchronize() - return infered_audio - - -class GUIConfig: - def __init__(self) -> None: - self.hubert_path: str = "" - self.pth_path: str = "" - self.index_path: str = "" - self.npy_path: str = "" - self.f0_method: str = "" - self.pitch: int = 12 - self.samplerate: int = 44100 - self.block_time: float = 1.0 # s - self.buffer_num: int = 1 - self.threhold: int = -30 - self.crossfade_time: float = 0.08 - self.extra_time: float = 0.04 - self.I_noise_reduce = False - self.O_noise_reduce = False - self.index_rate = 0.3 - - -class GUI: - def __init__(self) -> None: - self.config = GUIConfig() - self.flag_vc = False - - self.launcher() - - def load(self): - ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) = self.get_devices() - try: - with open("values1.json", "r") as j: - data = json.load(j) - except: - # Injecting f0_method into the json data - with open("values1.json", "w") as j: - data = { - "pth_path": "", - "index_path": "", - "sg_input_device": input_devices[ - input_devices_indices.index(sd.default.device[0]) - ], - "sg_output_device": output_devices[ - output_devices_indices.index(sd.default.device[1]) - ], - "threhold": "-45", - "pitch": "0", - "index_rate": "0", - "block_time": "1", - "crossfade_length": "0.04", - "extra_time": "1", - } - return data - - def launcher(self): - data = self.load() - sg.theme("DarkTeal12") - input_devices, output_devices, _, _ = self.get_devices() - layout = [ - [ - sg.Frame( - title="Proudly forked by Mangio621", - ), - sg.Frame( - title=i18n("Load model"), - layout=[ - [ - sg.Input( - default_text="hubert_base.pt", - key="hubert_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Hubert Model"), - initial_folder=os.path.join(os.getcwd()), - file_types=(("pt files", "*.pt"),), 
- ), - ], - [ - sg.Input( - default_text=data.get("pth_path", ""), - key="pth_path", - ), - sg.FileBrowse( - i18n("Select the .pth file"), - initial_folder=os.path.join(os.getcwd(), "weights"), - file_types=(("weight files", "*.pth"),), - ), - ], - [ - sg.Input( - default_text=data.get("index_path", ""), - key="index_path", - ), - sg.FileBrowse( - i18n("Select the .index file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("index files", "*.index"),), - ), - ], - [ - sg.Input( - default_text="你不需要填写这个You don't need write this.", - key="npy_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Select the .npy file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("feature files", "*.npy"),), - ), - ], - ], - ), - ], - [ - # Mangio f0 Selection frame Here - sg.Frame( - layout=[ - [ - sg.Radio( - "Harvest", "f0_method", key="harvest", default=True - ), - sg.Radio("Crepe", "f0_method", key="reg-crepe"), - sg.Radio("Crepe Tiny", "f0_method", key="reg-crepe-tiny"), - ] - ], - title="Select an f0 Method", - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Input device")), - sg.Combo( - input_devices, - key="sg_input_device", - default_value=data.get("sg_input_device", ""), - ), - ], - [ - sg.Text(i18n("Output device")), - sg.Combo( - output_devices, - key="sg_output_device", - default_value=data.get("sg_output_device", ""), - ), - ], - ], - title=i18n("Audio device (please use the same type of driver)"), - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Response threshold")), - sg.Slider( - range=(-60, 0), - key="threhold", - resolution=1, - orientation="h", - default_value=data.get("threhold", ""), - ), - ], - [ - sg.Text(i18n("Pitch settings")), - sg.Slider( - range=(-24, 24), - key="pitch", - resolution=1, - orientation="h", - default_value=data.get("pitch", ""), - ), - ], - [ - sg.Text(i18n("Index Rate")), - sg.Slider( - range=(0.0, 1.0), - key="index_rate", - resolution=0.01, - orientation="h", - default_value=data.get("index_rate", ""), - ), - ], - ], - title=i18n("General settings"), - ), - sg.Frame( - layout=[ - [ - sg.Text(i18n("Sample length")), - sg.Slider( - range=(0.1, 3.0), - key="block_time", - resolution=0.1, - orientation="h", - default_value=data.get("block_time", ""), - ), - ], - [ - sg.Text(i18n("Fade length")), - sg.Slider( - range=(0.01, 0.15), - key="crossfade_length", - resolution=0.01, - orientation="h", - default_value=data.get("crossfade_length", ""), - ), - ], - [ - sg.Text(i18n("Extra推理时长")), - sg.Slider( - range=(0.05, 3.00), - key="extra_time", - resolution=0.01, - orientation="h", - default_value=data.get("extra_time", ""), - ), - ], - [ - sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"), - sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"), - ], - ], - title=i18n("Performance settings"), - ), - ], - [ - sg.Button(i18n("开始音频Convert"), key="start_vc"), - sg.Button(i18n("停止音频Convert"), key="stop_vc"), - sg.Text(i18n("Inference time (ms):")), - sg.Text("0", key="infer_time"), - ], - ] - self.window = sg.Window("RVC - GUI", layout=layout) - self.event_handler() - - def event_handler(self): - while True: - event, values = self.window.read() - if event == sg.WINDOW_CLOSED: - self.flag_vc = False - exit() - if event == "start_vc" and self.flag_vc == False: - if self.set_values(values) == True: - print("using_cuda:" + str(torch.cuda.is_available())) - self.start_vc() - settings = { - "pth_path": values["pth_path"], - "index_path": values["index_path"], - "f0_method": 
self.get_f0_method_from_radios(values), - "sg_input_device": values["sg_input_device"], - "sg_output_device": values["sg_output_device"], - "threhold": values["threhold"], - "pitch": values["pitch"], - "index_rate": values["index_rate"], - "block_time": values["block_time"], - "crossfade_length": values["crossfade_length"], - "extra_time": values["extra_time"], - } - with open("values1.json", "w") as j: - json.dump(settings, j) - if event == "stop_vc" and self.flag_vc == True: - self.flag_vc = False - - # Function that returns the used f0 method in string format "harvest" - def get_f0_method_from_radios(self, values): - f0_array = [ - {"name": "harvest", "val": values["harvest"]}, - {"name": "reg-crepe", "val": values["reg-crepe"]}, - {"name": "reg-crepe-tiny", "val": values["reg-crepe-tiny"]}, - ] - # Filter through to find a true value - used_f0 = "" - for f0 in f0_array: - if f0["val"] == True: - used_f0 = f0["name"] - break - if used_f0 == "": - used_f0 = "harvest" # Default Harvest if used_f0 is empty somehow - return used_f0 - - def set_values(self, values): - if len(values["pth_path"].strip()) == 0: - sg.popup(i18n("Select the pth file")) - return False - if len(values["index_path"].strip()) == 0: - sg.popup(i18n("Select the index file")) - return False - pattern = re.compile("[^\x00-\x7F]+") - if pattern.findall(values["hubert_path"]): - sg.popup(i18n("The hubert model path must not contain Chinese characters")) - return False - if pattern.findall(values["pth_path"]): - sg.popup(i18n("The pth file path must not contain Chinese characters.")) - return False - if pattern.findall(values["index_path"]): - sg.popup(i18n("The index file path must not contain Chinese characters.")) - return False - self.set_devices(values["sg_input_device"], values["sg_output_device"]) - self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt") - self.config.pth_path = values["pth_path"] - self.config.index_path = values["index_path"] - self.config.npy_path = values["npy_path"] - self.config.f0_method = self.get_f0_method_from_radios(values) - self.config.threhold = values["threhold"] - self.config.pitch = values["pitch"] - self.config.block_time = values["block_time"] - self.config.crossfade_time = values["crossfade_length"] - self.config.extra_time = values["extra_time"] - self.config.I_noise_reduce = values["I_noise_reduce"] - self.config.O_noise_reduce = values["O_noise_reduce"] - self.config.index_rate = values["index_rate"] - return True - - def start_vc(self): - torch.cuda.empty_cache() - self.flag_vc = True - self.block_frame = int(self.config.block_time * self.config.samplerate) - self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate) - self.sola_search_frame = int(0.012 * self.config.samplerate) - self.delay_frame = int(0.01 * self.config.samplerate) # 往前预留0.02s - self.extra_frame = int(self.config.extra_time * self.config.samplerate) - self.rvc = None - self.rvc = RVC( - self.config.pitch, - self.config.f0_method, - self.config.hubert_path, - self.config.pth_path, - self.config.index_path, - self.config.npy_path, - self.config.index_rate, - ) - self.input_wav: np.ndarray = np.zeros( - self.extra_frame - + self.crossfade_frame - + self.sola_search_frame - + self.block_frame, - dtype="float32", - ) - self.output_wav: torch.Tensor = torch.zeros( - self.block_frame, device=device, dtype=torch.float32 - ) - self.sola_buffer: torch.Tensor = torch.zeros( - self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_in_window: torch.Tensor = 
torch.linspace( - 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_out_window: torch.Tensor = 1 - self.fade_in_window - self.resampler1 = tat.Resample( - orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32 - ) - self.resampler2 = tat.Resample( - orig_freq=self.rvc.tgt_sr, - new_freq=self.config.samplerate, - dtype=torch.float32, - ) - thread_vc = threading.Thread(target=self.soundinput) - thread_vc.start() - - def soundinput(self): - """ - 接受音频输入 - """ - with sd.Stream( - channels=2, - callback=self.audio_callback, - blocksize=self.block_frame, - samplerate=self.config.samplerate, - dtype="float32", - ): - while self.flag_vc: - time.sleep(self.config.block_time) - print("Audio block passed.") - print("ENDing VC") - - def audio_callback( - self, indata: np.ndarray, outdata: np.ndarray, frames, times, status - ): - """ - 音频处理 - """ - start_time = time.perf_counter() - indata = librosa.to_mono(indata.T) - if self.config.I_noise_reduce: - indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate) - - """noise gate""" - frame_length = 2048 - hop_length = 1024 - rms = librosa.feature.rms( - y=indata, frame_length=frame_length, hop_length=hop_length - ) - db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold - # print(rms.shape,db.shape,db) - for i in range(db_threhold.shape[0]): - if db_threhold[i]: - indata[i * hop_length : (i + 1) * hop_length] = 0 - self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata) - - # infer - print("input_wav:" + str(self.input_wav.shape)) - # print('infered_wav:'+str(infer_wav.shape)) - infer_wav: torch.Tensor = self.resampler2( - self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav))) - )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to( - device - ) - print("infer_wav:" + str(infer_wav.shape)) - - # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC - cor_nom = F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame], - self.sola_buffer[None, None, :], - ) - cor_den = torch.sqrt( - F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame] - ** 2, - torch.ones(1, 1, self.crossfade_frame, device=device), - ) - + 1e-8 - ) - sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0]) - print("sola offset: " + str(int(sola_offset))) - - # crossfade - self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame] - self.output_wav[: self.crossfade_frame] *= self.fade_in_window - self.output_wav[: self.crossfade_frame] += self.sola_buffer[:] - if sola_offset < self.sola_search_frame: - self.sola_buffer[:] = ( - infer_wav[ - -self.sola_search_frame - - self.crossfade_frame - + sola_offset : -self.sola_search_frame - + sola_offset - ] - * self.fade_out_window - ) - else: - self.sola_buffer[:] = ( - infer_wav[-self.crossfade_frame :] * self.fade_out_window - ) - - if self.config.O_noise_reduce: - outdata[:] = np.tile( - nr.reduce_noise( - y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate - ), - (2, 1), - ).T - else: - outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy() - total_time = time.perf_counter() - start_time - self.window["infer_time"].update(int(total_time * 1000)) - print("infer time:" + str(total_time)) - print("f0_method: " + str(self.config.f0_method)) - - def get_devices(self, update: bool = True): - """获取设备列表""" - if update: - sd._terminate() - sd._initialize() - devices = sd.query_devices() - hostapis = sd.query_hostapis() - for 
hostapi in hostapis: - for device_idx in hostapi["devices"]: - devices[device_idx]["hostapi_name"] = hostapi["name"] - input_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_output_channels"] > 0 - ] - input_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_output_channels"] > 0 - ] - return ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) - - def set_devices(self, input_device, output_device): - """设置输出设备""" - ( - input_devices, - output_devices, - input_device_indices, - output_device_indices, - ) = self.get_devices() - sd.default.device[0] = input_device_indices[input_devices.index(input_device)] - sd.default.device[1] = output_device_indices[ - output_devices.index(output_device) - ] - print("input device:" + str(sd.default.device[0]) + ":" + str(input_device)) - print("output device:" + str(sd.default.device[1]) + ":" + str(output_device)) - - -gui = GUI() diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/infer/train-index.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/infer/train-index.py deleted file mode 100644 index 44b447ef32148c181eb4bcd9013a22a82371b82c..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/infer/train-index.py +++ /dev/null @@ -1,42 +0,0 @@ -""" -格式:直接cid为自带的index位;aid放不下了,通过字典来查,反正就5w个 -""" -import os -import logging - -logger = logging.getLogger(__name__) - -import faiss -import numpy as np - -# ###########如果是原始特征要先写save -inp_root = r"E:\codes\py39\dataset\mi\2-co256" -npys = [] -for name in sorted(list(os.listdir(inp_root))): - phone = np.load("%s/%s" % (inp_root, name)) - npys.append(phone) -big_npy = np.concatenate(npys, 0) -logger.debug(big_npy.shape) # (6196072, 192)#fp32#4.43G -np.save("infer/big_src_feature_mi.npy", big_npy) - -##################train+add -# big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy") -logger.debug(big_npy.shape) -index = faiss.index_factory(256, "IVF512,Flat") # mi -logger.info("Training...") -index_ivf = faiss.extract_index_ivf(index) # -index_ivf.nprobe = 9 -index.train(big_npy) -faiss.write_index(index, "infer/trained_IVF512_Flat_mi_baseline_src_feat.index") -logger.info("Adding...") -index.add(big_npy) -faiss.write_index(index, "infer/added_IVF512_Flat_mi_baseline_src_feat.index") -""" -大小(都是FP32) -big_src_feature 2.95G - (3098036, 256) -big_emb 4.43G - (6196072, 192) -big_emb双倍是因为求特征要repeat后再加pitch - -""" diff --git a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/transactions.py b/spaces/Lianjd/stock_dashboard/backtrader/analyzers/transactions.py deleted file mode 100644 index 7bdf91a28def6e9f51f3ae43d854e44ded542f71..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/transactions.py +++ /dev/null @@ -1,103 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, 
either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -import collections - -import backtrader as bt -from backtrader import Order, Position - - -class Transactions(bt.Analyzer): - '''This analyzer reports the transactions occurred with each an every data in - the system - - It looks at the order execution bits to create a ``Position`` starting from - 0 during each ``next`` cycle. - - The result is used during next to record the transactions - - Params: - - - headers (default: ``True``) - - Add an initial key to the dictionary holding the results with the names - of the datas - - This analyzer was modeled to facilitate the integration with - ``pyfolio`` and the header names are taken from the samples used for - it:: - - 'date', 'amount', 'price', 'sid', 'symbol', 'value' - - Methods: - - - get_analysis - - Returns a dictionary with returns as values and the datetime points for - each return as keys - ''' - params = ( - ('headers', False), - ('_pfheaders', ('date', 'amount', 'price', 'sid', 'symbol', 'value')), - ) - - def start(self): - super(Transactions, self).start() - if self.p.headers: - self.rets[self.p._pfheaders[0]] = [list(self.p._pfheaders[1:])] - - self._positions = collections.defaultdict(Position) - self._idnames = list(enumerate(self.strategy.getdatanames())) - - def notify_order(self, order): - # An order could have several partial executions per cycle (unlikely - # but possible) and therefore: collect each new execution notification - # and let the work for next - - # We use a fresh Position object for each round to get summary of what - # the execution bits have done in that round - if order.status not in [Order.Partial, Order.Completed]: - return # It's not an execution - - pos = self._positions[order.data._name] - for exbit in order.executed.iterpending(): - if exbit is None: - break # end of pending reached - - pos.update(exbit.size, exbit.price) - - def next(self): - # super(Transactions, self).next() # let dtkey update - entries = [] - for i, dname in self._idnames: - pos = self._positions.get(dname, None) - if pos is not None: - size, price = pos.size, pos.price - if size: - entries.append([size, price, i, dname, -size * price]) - - if entries: - self.rets[self.strategy.datetime.datetime()] = entries - - self._positions.clear() diff --git a/spaces/Lianjd/stock_dashboard/backtrader/observers/trades.py b/spaces/Lianjd/stock_dashboard/backtrader/observers/trades.py deleted file mode 100644 index 085b3bd4e1c38c509b43d8c5e5219373764ffbe3..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/observers/trades.py +++ /dev/null @@ -1,162 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free 
Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import uuid - -from .. import Observer -from ..utils.py3 import with_metaclass - -from ..trade import Trade - - -class Trades(Observer): - '''This observer keeps track of full trades and plot the PnL level achieved - when a trade is closed. - - A trade is open when a position goes from 0 (or crossing over 0) to X and - is then closed when it goes back to 0 (or crosses over 0 in the opposite - direction) - - Params: - - ``pnlcomm`` (def: ``True``) - - Show net/profit and loss, i.e.: after commission. If set to ``False`` - if will show the result of trades before commission - ''' - _stclock = True - - lines = ('pnlplus', 'pnlminus') - - params = dict(pnlcomm=True) - - plotinfo = dict(plot=True, subplot=True, - plotname='Trades - Net Profit/Loss', - plotymargin=0.10, - plothlines=[0.0]) - - plotlines = dict( - pnlplus=dict(_name='Positive', - ls='', marker='o', color='blue', - markersize=8.0, fillstyle='full'), - pnlminus=dict(_name='Negative', - ls='', marker='o', color='red', - markersize=8.0, fillstyle='full') - ) - - def __init__(self): - - self.trades = 0 - - self.trades_long = 0 - self.trades_short = 0 - - self.trades_plus = 0 - self.trades_minus = 0 - - self.trades_plus_gross = 0 - self.trades_minus_gross = 0 - - self.trades_win = 0 - self.trades_win_max = 0 - self.trades_win_min = 0 - - self.trades_loss = 0 - self.trades_loss_max = 0 - self.trades_loss_min = 0 - - self.trades_length = 0 - self.trades_length_max = 0 - self.trades_length_min = 0 - - def next(self): - for trade in self._owner._tradespending: - if trade.data not in self.ddatas: - continue - - if not trade.isclosed: - continue - - pnl = trade.pnlcomm if self.p.pnlcomm else trade.pnl - - if pnl >= 0.0: - self.lines.pnlplus[0] = pnl - else: - self.lines.pnlminus[0] = pnl - - -class MetaDataTrades(Observer.__class__): - def donew(cls, *args, **kwargs): - _obj, args, kwargs = super(MetaDataTrades, cls).donew(*args, **kwargs) - - # Recreate the lines dynamically - if _obj.params.usenames: - lnames = tuple(x._name for x in _obj.datas) - else: - lnames = tuple('data{}'.format(x) for x in range(len(_obj.datas))) - - # Generate a new lines class - linescls = cls.lines._derive(uuid.uuid4().hex, lnames, 0, ()) - - # Instantiate lines - _obj.lines = linescls() - - # Generate plotlines info - markers = ['o', 'v', '^', '<', '>', '1', '2', '3', '4', '8', 's', 'p', - '*', 'h', 'H', '+', 'x', 'D', 'd'] - - colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'b', 'g', 'r', 'c', 'm', - 'y', 'k', 'b', 'g', 'r', 'c', 'm'] - - basedict = dict(ls='', markersize=8.0, fillstyle='full') - - plines = dict() - for lname, marker, color in zip(lnames, markers, colors): - plines[lname] = d = basedict.copy() - d.update(marker=marker, color=color) - - plotlines = cls.plotlines._derive( - uuid.uuid4().hex, plines, [], recurse=True) - _obj.plotlines = plotlines() - - return _obj, args, kwargs # return the instantiated object and args - - 
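# DataTrades mirrors Trades but keeps one observer line per data feed (named after the
# data when ``usenames`` is True) and, in next(), records each closed trade's gross PnL
# on the line belonging to that trade's data.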
-class DataTrades(with_metaclass(MetaDataTrades, Observer)): - _stclock = True - - params = (('usenames', True),) - - plotinfo = dict(plot=True, subplot=True, plothlines=[0.0], - plotymargin=0.10) - - plotlines = dict() - - def next(self): - for trade in self._owner._tradespending: - if trade.data not in self.ddatas: - continue - - if not trade.isclosed: - continue - - self.lines[trade.data._id - 1][0] = trade.pnl diff --git a/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_net.py b/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_net.py deleted file mode 100644 index ab6aa82d3e9055a838f1f9076b12f05fdfc154d0..0000000000000000000000000000000000000000 --- a/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_net.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def conv_bn(inp, oup, stride=1, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_bn_no_relu(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - nn.BatchNorm2d(oup), - ) - - -def conv_bn1X1(inp, oup, stride, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_dw(inp, oup, stride, leaky=0.1): - return nn.Sequential( - nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False), - nn.BatchNorm2d(inp), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - ) - - -class SSH(nn.Module): - - def __init__(self, in_channel, out_channel): - super(SSH, self).__init__() - assert out_channel % 4 == 0 - leaky = 0 - if (out_channel <= 64): - leaky = 0.1 - self.conv3X3 = conv_bn_no_relu(in_channel, out_channel // 2, stride=1) - - self.conv5X5_1 = conv_bn(in_channel, out_channel // 4, stride=1, leaky=leaky) - self.conv5X5_2 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - self.conv7X7_2 = conv_bn(out_channel // 4, out_channel // 4, stride=1, leaky=leaky) - self.conv7x7_3 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - def forward(self, input): - conv3X3 = self.conv3X3(input) - - conv5X5_1 = self.conv5X5_1(input) - conv5X5 = self.conv5X5_2(conv5X5_1) - - conv7X7_2 = self.conv7X7_2(conv5X5_1) - conv7X7 = self.conv7x7_3(conv7X7_2) - - out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1) - out = F.relu(out) - return out - - -class FPN(nn.Module): - - def __init__(self, in_channels_list, out_channels): - super(FPN, self).__init__() - leaky = 0 - if (out_channels <= 64): - leaky = 0.1 - self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride=1, leaky=leaky) - self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride=1, leaky=leaky) - self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride=1, leaky=leaky) - - self.merge1 = conv_bn(out_channels, out_channels, leaky=leaky) - self.merge2 = conv_bn(out_channels, out_channels, leaky=leaky) - - def forward(self, input): - # names = list(input.keys()) - # input = list(input.values()) - - output1 = self.output1(input[0]) - output2 = self.output2(input[1]) - output3 = self.output3(input[2]) - - up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode='nearest') - output2 = output2 + up3 - output2 = self.merge2(output2) - - up2 = 
F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode='nearest') - output1 = output1 + up2 - output1 = self.merge1(output1) - - out = [output1, output2, output3] - return out - - -class MobileNetV1(nn.Module): - - def __init__(self): - super(MobileNetV1, self).__init__() - self.stage1 = nn.Sequential( - conv_bn(3, 8, 2, leaky=0.1), # 3 - conv_dw(8, 16, 1), # 7 - conv_dw(16, 32, 2), # 11 - conv_dw(32, 32, 1), # 19 - conv_dw(32, 64, 2), # 27 - conv_dw(64, 64, 1), # 43 - ) - self.stage2 = nn.Sequential( - conv_dw(64, 128, 2), # 43 + 16 = 59 - conv_dw(128, 128, 1), # 59 + 32 = 91 - conv_dw(128, 128, 1), # 91 + 32 = 123 - conv_dw(128, 128, 1), # 123 + 32 = 155 - conv_dw(128, 128, 1), # 155 + 32 = 187 - conv_dw(128, 128, 1), # 187 + 32 = 219 - ) - self.stage3 = nn.Sequential( - conv_dw(128, 256, 2), # 219 +3 2 = 241 - conv_dw(256, 256, 1), # 241 + 64 = 301 - ) - self.avg = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(256, 1000) - - def forward(self, x): - x = self.stage1(x) - x = self.stage2(x) - x = self.stage3(x) - x = self.avg(x) - # x = self.model(x) - x = x.view(-1, 256) - x = self.fc(x) - return x - - -class ClassHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(ClassHead, self).__init__() - self.num_anchors = num_anchors - self.conv1x1 = nn.Conv2d(inchannels, self.num_anchors * 2, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 2) - - -class BboxHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(BboxHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 4, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 4) - - -class LandmarkHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(LandmarkHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 10, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 10) - - -def make_class_head(fpn_num=3, inchannels=64, anchor_num=2): - classhead = nn.ModuleList() - for i in range(fpn_num): - classhead.append(ClassHead(inchannels, anchor_num)) - return classhead - - -def make_bbox_head(fpn_num=3, inchannels=64, anchor_num=2): - bboxhead = nn.ModuleList() - for i in range(fpn_num): - bboxhead.append(BboxHead(inchannels, anchor_num)) - return bboxhead - - -def make_landmark_head(fpn_num=3, inchannels=64, anchor_num=2): - landmarkhead = nn.ModuleList() - for i in range(fpn_num): - landmarkhead.append(LandmarkHead(inchannels, anchor_num)) - return landmarkhead diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/live_audio/audio_io.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/live_audio/audio_io.py deleted file mode 100644 index 3ff83a66e8d9f0bb15250f1c3c2b5ea36745ff55..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/crazy_functions/live_audio/audio_io.py +++ /dev/null @@ -1,51 +0,0 @@ -import numpy as np -from scipy import interpolate - -def Singleton(cls): - _instance = {} - - def _singleton(*args, **kargs): - if cls not in _instance: - _instance[cls] = cls(*args, **kargs) - return _instance[cls] - - return _singleton - - -@Singleton -class RealtimeAudioDistribution(): - def __init__(self) -> None: - self.data = {} 
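        # self.data maps uuid -> 1-D sample buffer; feed() appends to it and trims it to max_len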
- self.max_len = 1024*1024 - self.rate = 48000 # 只读,每秒采样数量 - - def clean_up(self): - self.data = {} - - def feed(self, uuid, audio): - self.rate, audio_ = audio - # print('feed', len(audio_), audio_[-25:]) - if uuid not in self.data: - self.data[uuid] = audio_ - else: - new_arr = np.concatenate((self.data[uuid], audio_)) - if len(new_arr) > self.max_len: new_arr = new_arr[-self.max_len:] - self.data[uuid] = new_arr - - def read(self, uuid): - if uuid in self.data: - res = self.data.pop(uuid) - print('\r read-', len(res), '-', max(res), end='', flush=True) - else: - res = None - return res - -def change_sample_rate(audio, old_sr, new_sr): - duration = audio.shape[0] / old_sr - - time_old = np.linspace(0, duration, audio.shape[0]) - time_new = np.linspace(0, duration, int(audio.shape[0] * new_sr / old_sr)) - - interpolator = interpolate.interp1d(time_old, audio.T) - new_audio = interpolator(time_new).T - return new_audio.astype(np.int16) \ No newline at end of file diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_1200e.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_1200e.py deleted file mode 100644 index bc7fbf69b42b11ea9b8ae4d14216d2fcf20e717c..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_1200e.py +++ /dev/null @@ -1,8 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.007, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-7, by_epoch=True) -# running settings -runner = dict(type='EpochBasedRunner', max_epochs=1200) -checkpoint_config = dict(interval=100) diff --git a/spaces/MAPS-research/GEMRec-Gallery/Archive/Gallery_beta0913.py b/spaces/MAPS-research/GEMRec-Gallery/Archive/Gallery_beta0913.py deleted file mode 100644 index a482a1bfc7d85e31589ac76e420fe3bd9c3f8268..0000000000000000000000000000000000000000 --- a/spaces/MAPS-research/GEMRec-Gallery/Archive/Gallery_beta0913.py +++ /dev/null @@ -1,643 +0,0 @@ -import os -import requests - -import altair as alt -import numpy as np -import pandas as pd -import streamlit as st -import streamlit.components.v1 as components - -from bs4 import BeautifulSoup -from datasets import load_dataset, Dataset, load_from_disk -from huggingface_hub import login -from streamlit_agraph import agraph, Node, Edge, Config -from streamlit_extras.switch_page_button import switch_page -from sklearn.svm import LinearSVC - -SCORE_NAME_MAPPING = {'clip': 'clip_score', 'rank': 'msq_score', 'pop': 'model_download_count'} - - -class GalleryApp: - def __init__(self, promptBook, images_ds): - self.promptBook = promptBook - self.images_ds = images_ds - - def gallery_standard(self, items, col_num, info): - rows = len(items) // col_num + 1 - containers = [st.container() for _ in range(rows)] - for idx in range(0, len(items), col_num): - row_idx = idx // col_num - with containers[row_idx]: - cols = st.columns(col_num) - for j in range(col_num): - if idx + j < len(items): - with cols[j]: - # show image - # image = self.images_ds[items.iloc[idx + j]['row_idx'].item()]['image'] - image = f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{items.iloc[idx + j]['image_id']}.png" - st.image(image, use_column_width=True) - - # handel checkbox information - prompt_id = items.iloc[idx + j]['prompt_id'] - modelVersion_id = items.iloc[idx + j]['modelVersion_id'] - - check_init = True if modelVersion_id in 
st.session_state.selected_dict.get(prompt_id, []) else False - - # st.write("Position: ", idx + j) - - # show checkbox - st.checkbox('Select', key=f'select_{prompt_id}_{modelVersion_id}', value=check_init) - - # show selected info - for key in info: - st.write(f"**{key}**: {items.iloc[idx + j][key]}") - - def gallery_graph(self, items): - items = load_tsne_coordinates(items) - - # sort items to be popularity from low to high, so that most popular ones will be on the top - items = items.sort_values(by=['model_download_count'], ascending=True).reset_index(drop=True) - - scale = 50 - items.loc[:, 'x'] = items['x'] * scale - items.loc[:, 'y'] = items['y'] * scale - - nodes = [] - edges = [] - - for idx in items.index: - # if items.loc[idx, 'modelVersion_id'] in st.session_state.selected_dict.get(items.loc[idx, 'prompt_id'], 0): - # opacity = 0.2 - # else: - # opacity = 1.0 - - nodes.append(Node(id=items.loc[idx, 'image_id'], - # label=str(items.loc[idx, 'model_name']), - title=f"model name: {items.loc[idx, 'model_name']}\nmodelVersion name: {items.loc[idx, 'modelVersion_name']}\nclip score: {items.loc[idx, 'clip_score']}\nmcos score: {items.loc[idx, 'mcos_score']}\npopularity: {items.loc[idx, 'model_download_count']}", - size=20, - shape='image', - image=f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{items.loc[idx, 'image_id']}.png", - x=items.loc[idx, 'x'].item(), - y=items.loc[idx, 'y'].item(), - # fixed=True, - color={'background': '#E0E0E1', 'border': '#ffffff', 'highlight': {'border': '#F04542'}}, - # opacity=opacity, - shadow={'enabled': True, 'color': 'rgba(0,0,0,0.4)', 'size': 10, 'x': 1, 'y': 1}, - borderWidth=2, - shapeProperties={'useBorderWithImage': True}, - ) - ) - - config = Config(width='100%', - height='600', - directed=True, - physics=False, - hierarchical=False, - interaction={'navigationButtons': True, 'dragNodes': False, 'multiselect': False}, - # **kwargs - ) - - return agraph(nodes=nodes, - edges=edges, - config=config, - ) - - def selection_panel(self, items): - # temperal function - - selecters = st.columns([1, 4]) - - if 'score_weights' not in st.session_state: - st.session_state.score_weights = [1.0, 0.8, 0.2, 0.8] - - # select sort type - with selecters[0]: - sort_type = st.selectbox('Sort by', ['Scores', 'IDs and Names']) - if sort_type == 'Scores': - sort_by = 'weighted_score_sum' - - # select other options - with selecters[1]: - if sort_type == 'IDs and Names': - sub_selecters = st.columns([3, 1]) - # select sort by - with sub_selecters[0]: - sort_by = st.selectbox('Sort by', - ['model_name', 'model_id', 'modelVersion_name', 'modelVersion_id', 'norm_nsfw'], - label_visibility='hidden') - - continue_idx = 1 - - else: - # add custom weights - sub_selecters = st.columns([1, 1, 1, 1]) - - with sub_selecters[0]: - clip_weight = st.number_input('Clip Score Weight', min_value=-100.0, max_value=100.0, value=1.0, step=0.1, help='the weight for normalized clip score') - with sub_selecters[1]: - mcos_weight = st.number_input('Dissimilarity Weight', min_value=-100.0, max_value=100.0, value=0.8, step=0.1, help='the weight for m(eam) s(imilarity) q(antile) score for measuring distinctiveness') - with sub_selecters[2]: - pop_weight = st.number_input('Popularity Weight', min_value=-100.0, max_value=100.0, value=0.2, step=0.1, help='the weight for normalized popularity score') - - items.loc[:, 'weighted_score_sum'] = round(items[f'norm_clip'] * clip_weight + items[f'norm_mcos'] * mcos_weight + items[ - 'norm_pop'] * pop_weight, 4) - - continue_idx = 3 - - # save latest 
weights - st.session_state.score_weights[0] = round(clip_weight, 2) - st.session_state.score_weights[1] = round(mcos_weight, 2) - st.session_state.score_weights[2] = round(pop_weight, 2) - - # select threshold - with sub_selecters[continue_idx]: - nsfw_threshold = st.number_input('NSFW Score Threshold', min_value=0.0, max_value=1.0, value=0.8, step=0.01, help='Only show models with nsfw score lower than this threshold, set 1.0 to show all images') - items = items[items['norm_nsfw'] <= nsfw_threshold].reset_index(drop=True) - - # save latest threshold - st.session_state.score_weights[3] = nsfw_threshold - - # draw a distribution histogram - if sort_type == 'Scores': - try: - with st.expander('Show score distribution histogram and select score range'): - st.write('**Score distribution histogram**') - chart_space = st.container() - # st.write('Select the range of scores to show') - hist_data = pd.DataFrame(items[sort_by]) - mini = hist_data[sort_by].min().item() - mini = mini//0.1 * 0.1 - maxi = hist_data[sort_by].max().item() - maxi = maxi//0.1 * 0.1 + 0.1 - st.write('**Select the range of scores to show**') - r = st.slider('Select the range of scores to show', min_value=mini, max_value=maxi, value=(mini, maxi), step=0.05, label_visibility='collapsed') - with chart_space: - st.altair_chart(altair_histogram(hist_data, sort_by, r[0], r[1]), use_container_width=True) - # event_dict = altair_component(altair_chart=altair_histogram(hist_data, sort_by)) - # r = event_dict.get(sort_by) - if r: - items = items[(items[sort_by] >= r[0]) & (items[sort_by] <= r[1])].reset_index(drop=True) - # st.write(r) - except: - pass - - display_options = st.columns([1, 4]) - - with display_options[0]: - # select order - order = st.selectbox('Order', ['Ascending', 'Descending'], index=1 if sort_type == 'Scores' else 0) - if order == 'Ascending': - order = True - else: - order = False - - with display_options[1]: - - # select info to show - info = st.multiselect('Show Info', - ['model_name', 'model_id', 'modelVersion_name', 'modelVersion_id', - 'weighted_score_sum', 'model_download_count', 'clip_score', 'mcos_score', - 'nsfw_score', 'norm_nsfw'], - default=sort_by) - - # apply sorting to dataframe - items = items.sort_values(by=[sort_by], ascending=order).reset_index(drop=True) - - # select number of columns - col_num = st.slider('Number of columns', min_value=1, max_value=9, value=4, step=1, key='col_num') - - return items, info, col_num - - def sidebar(self): - with st.sidebar: - prompt_tags = self.promptBook['tag'].unique() - # sort tags by alphabetical order - prompt_tags = np.sort(prompt_tags)[::1] - - tag = st.selectbox('Select a tag', prompt_tags, index=5) - - items = self.promptBook[self.promptBook['tag'] == tag].reset_index(drop=True) - - prompts = np.sort(items['prompt'].unique())[::1] - - selected_prompt = st.selectbox('Select prompt', prompts, index=3) - - mode = st.radio('Select a mode', ['Gallery', 'Graph'], horizontal=True, index=1) - - items = items[items['prompt'] == selected_prompt].reset_index(drop=True) - prompt_id = items['prompt_id'].unique()[0] - note = items['note'].unique()[0] - - # show source - if isinstance(note, str): - if note.isdigit(): - st.caption(f"`Source: civitai`") - else: - st.caption(f"`Source: {note}`") - else: - st.caption("`Source: Parti-prompts`") - - # show image metadata - image_metadatas = ['prompt', 'negativePrompt', 'sampler', 'cfgScale', 'size', 'seed'] - for key in image_metadatas: - label = ' '.join(key.split('_')).capitalize() - st.write(f"**{label}**") - if 
items[key][0] == ' ': - st.write('`None`') - else: - st.caption(f"{items[key][0]}") - - # for note as civitai image id, add civitai reference - if isinstance(note, str) and note.isdigit(): - try: - st.write(f'**[Civitai Reference](https://civitai.com/images/{note})**') - res = requests.get(f'https://civitai.com/images/{note}') - # st.write(res.text) - soup = BeautifulSoup(res.text, 'html.parser') - image_section = soup.find('div', {'class': 'mantine-12rlksp'}) - image_url = image_section.find('img')['src'] - st.image(image_url, use_column_width=True) - except: - pass - - return prompt_tags, tag, prompt_id, items, mode - - def app(self): - st.title('Model Visualization and Retrieval') - st.write('This is a gallery of images generated by the models') - - prompt_tags, tag, prompt_id, items, mode = self.sidebar() - # items, info, col_num = self.selection_panel(items) - - # subset = st.radio('Select a subset', ['All', 'Selected Only'], index=0, horizontal=True) - # try: - # if subset == 'Selected Only': - # items = items[items['modelVersion_id'].isin(st.session_state.selected_dict[prompt_id])].reset_index(drop=True) - # except: - # pass - - # add safety check for some prompts - safety_check = True - unsafe_prompts = {} - # initialize unsafe prompts - for prompt_tag in prompt_tags: - unsafe_prompts[prompt_tag] = [] - # manually add unsafe prompts - unsafe_prompts['world knowledge'] = [83] - unsafe_prompts['abstract'] = [1, 3] - - if int(prompt_id.item()) in unsafe_prompts[tag]: - st.warning('This prompt may contain unsafe content. They might be offensive, depressing, or sexual.') - safety_check = st.checkbox('I understand that this prompt may contain unsafe content. Show these images anyway.', key=f'safety_{prompt_id}') - - if safety_check: - if mode == 'Gallery': - self.gallery_mode(prompt_id, items) - elif mode == 'Graph': - self.graph_mode(prompt_id, items) - - - def graph_mode(self, prompt_id, items): - graph_cols = st.columns([3, 1]) - prompt = st.chat_input(f"Selected model version ids: {str(st.session_state.selected_dict.get(prompt_id, []))}", - disabled=False, key=f'{prompt_id}') - if prompt: - switch_page("ranking") - - with graph_cols[0]: - graph_space = st.empty() - - with graph_space.container(): - return_value = self.gallery_graph(items) - - with graph_cols[1]: - if return_value: - with st.form(key=f'{prompt_id}'): - image_url = f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{return_value}.png" - - st.image(image_url) - - item = items[items['image_id'] == return_value].reset_index(drop=True).iloc[0] - modelVersion_id = item['modelVersion_id'] - - # handle selection - if 'selected_dict' in st.session_state: - if item['prompt_id'] not in st.session_state.selected_dict: - st.session_state.selected_dict[item['prompt_id']] = [] - - if modelVersion_id in st.session_state.selected_dict[item['prompt_id']]: - checked = True - else: - checked = False - - if checked: - # deselect = st.button('Deselect', key=f'select_{item["prompt_id"]}_{item["modelVersion_id"]}', use_container_width=True) - deselect = st.form_submit_button('Deselect', use_container_width=True) - if deselect: - st.session_state.selected_dict[item['prompt_id']].remove(item['modelVersion_id']) - self.remove_ranking_states(item['prompt_id']) - st.experimental_rerun() - - else: - # select = st.button('Select', key=f'select_{item["prompt_id"]}_{item["modelVersion_id"]}', use_container_width=True, type='primary') - select = st.form_submit_button('Select', use_container_width=True, type='primary') - if select: - 
st.session_state.selected_dict[item['prompt_id']].append(item['modelVersion_id']) - self.remove_ranking_states(item['prompt_id']) - st.experimental_rerun() - - # st.write(item) - infos = ['model_name', 'modelVersion_name', 'model_download_count', 'clip_score', 'mcos_score', - 'nsfw_score'] - - infos_df = item[infos] - # rename columns - infos_df = infos_df.rename(index={'model_name': 'Model', 'modelVersion_name': 'Version', 'model_download_count': 'Downloads', 'clip_score': 'Clip Score', 'mcos_score': 'mcos Score', 'nsfw_score': 'NSFW Score'}) - st.table(infos_df) - - # for info in infos: - # st.write(f"**{info}**:") - # st.write(item[info]) - - else: - st.info('Please click on an image to show') - - - def gallery_mode(self, prompt_id, items): - items, info, col_num = self.selection_panel(items) - - if 'selected_dict' in st.session_state: - # st.write('checked: ', str(st.session_state.selected_dict.get(prompt_id, []))) - dynamic_weight_options = ['Grid Search', 'SVM', 'Greedy'] - dynamic_weight_panel = st.columns(len(dynamic_weight_options)) - - if len(st.session_state.selected_dict.get(prompt_id, [])) > 0: - btn_disable = False - else: - btn_disable = True - - for i in range(len(dynamic_weight_options)): - method = dynamic_weight_options[i] - with dynamic_weight_panel[i]: - btn = st.button(method, use_container_width=True, disabled=btn_disable, on_click=self.dynamic_weight, args=(prompt_id, items, method)) - - prompt = st.chat_input(f"Selected model version ids: {str(st.session_state.selected_dict.get(prompt_id, []))}", disabled=False, key=f'{prompt_id}') - if prompt: - switch_page("ranking") - - with st.form(key=f'{prompt_id}'): - # buttons = st.columns([1, 1, 1]) - buttons_space = st.columns([1, 1, 1, 1]) - gallery_space = st.empty() - - with buttons_space[0]: - continue_btn = st.form_submit_button('Confirm Selection', use_container_width=True, type='primary') - if continue_btn: - self.submit_actions('Continue', prompt_id) - - with buttons_space[1]: - select_btn = st.form_submit_button('Select All', use_container_width=True) - if select_btn: - self.submit_actions('Select', prompt_id) - - with buttons_space[2]: - deselect_btn = st.form_submit_button('Deselect All', use_container_width=True) - if deselect_btn: - self.submit_actions('Deselect', prompt_id) - - with buttons_space[3]: - refresh_btn = st.form_submit_button('Refresh', on_click=gallery_space.empty, use_container_width=True) - - with gallery_space.container(): - with st.spinner('Loading images...'): - self.gallery_standard(items, col_num, info) - - st.info("Don't forget to scroll back to top and click the 'Confirm Selection' button to save your selection!!!") - - - - def submit_actions(self, status, prompt_id): - # remove counter from session state - # st.session_state.pop('counter', None) - self.remove_ranking_states('prompt_id') - if status == 'Select': - modelVersions = self.promptBook[self.promptBook['prompt_id'] == prompt_id]['modelVersion_id'].unique() - st.session_state.selected_dict[prompt_id] = modelVersions.tolist() - print(st.session_state.selected_dict, 'select') - st.experimental_rerun() - elif status == 'Deselect': - st.session_state.selected_dict[prompt_id] = [] - print(st.session_state.selected_dict, 'deselect') - st.experimental_rerun() - # self.promptBook.loc[self.promptBook['prompt_id'] == prompt_id, 'checked'] = False - elif status == 'Continue': - st.session_state.selected_dict[prompt_id] = [] - for key in st.session_state: - keys = key.split('_') - if keys[0] == 'select' and keys[1] == str(prompt_id): - if 
st.session_state[key]: - st.session_state.selected_dict[prompt_id].append(int(keys[2])) - # switch_page("ranking") - print(st.session_state.selected_dict, 'continue') - st.experimental_rerun() - - def dynamic_weight(self, prompt_id, items, method='Grid Search'): - selected = items[ - items['modelVersion_id'].isin(st.session_state.selected_dict[prompt_id])].reset_index(drop=True) - optimal_weight = [0, 0, 0] - - if method == 'Grid Search': - # grid search method - top_ranking = len(items) * len(selected) - - for clip_weight in np.arange(-1, 1, 0.1): - for mcos_weight in np.arange(-1, 1, 0.1): - for pop_weight in np.arange(-1, 1, 0.1): - - weight_all = clip_weight*items[f'norm_clip'] + mcos_weight*items[f'norm_mcos'] + pop_weight*items['norm_pop'] - weight_all_sorted = weight_all.sort_values(ascending=False).reset_index(drop=True) - # print('weight_all_sorted:', weight_all_sorted) - weight_selected = clip_weight*selected[f'norm_clip'] + mcos_weight*selected[f'norm_mcos'] + pop_weight*selected['norm_pop'] - - # get the index of values of weight_selected in weight_all_sorted - rankings = [] - for weight in weight_selected: - rankings.append(weight_all_sorted.index[weight_all_sorted == weight].tolist()[0]) - if sum(rankings) <= top_ranking: - top_ranking = sum(rankings) - print('current top ranking:', top_ranking, rankings) - optimal_weight = [clip_weight, mcos_weight, pop_weight] - print('optimal weight:', optimal_weight) - - elif method == 'SVM': - # svm method - print('start svm method') - # get residual dataframe that contains models not selected - residual = items[~items['modelVersion_id'].isin(selected['modelVersion_id'])].reset_index(drop=True) - residual = residual[['norm_clip_crop', 'norm_mcos_crop', 'norm_pop']] - residual = residual.to_numpy() - selected = selected[['norm_clip_crop', 'norm_mcos_crop', 'norm_pop']] - selected = selected.to_numpy() - - y = np.concatenate((np.full((len(selected), 1), -1), np.full((len(residual), 1), 1)), axis=0).ravel() - X = np.concatenate((selected, residual), axis=0) - - # fit svm model, and get parameters for the hyperplane - clf = LinearSVC(random_state=0, C=1.0, fit_intercept=False, dual='auto') - clf.fit(X, y) - optimal_weight = clf.coef_[0].tolist() - print('optimal weight:', optimal_weight) - pass - - elif method == 'Greedy': - for idx in selected.index: - # find which score is the highest, clip, mcos, or pop - clip_score = selected.loc[idx, 'norm_clip_crop'] - mcos_score = selected.loc[idx, 'norm_mcos_crop'] - pop_score = selected.loc[idx, 'norm_pop'] - if clip_score >= mcos_score and clip_score >= pop_score: - optimal_weight[0] += 1 - elif mcos_score >= clip_score and mcos_score >= pop_score: - optimal_weight[1] += 1 - elif pop_score >= clip_score and pop_score >= mcos_score: - optimal_weight[2] += 1 - - # normalize optimal_weight - optimal_weight = [round(weight/len(selected), 2) for weight in optimal_weight] - print('optimal weight:', optimal_weight) - print('optimal weight:', optimal_weight) - - st.session_state.score_weights[0: 3] = optimal_weight - - - def remove_ranking_states(self, prompt_id): - # for drag sort - try: - st.session_state.counter[prompt_id] = 0 - st.session_state.ranking[prompt_id] = {} - print('remove ranking states') - except: - print('no sort ranking states to remove') - - # for battles - try: - st.session_state.pointer[prompt_id] = {'left': 0, 'right': 1} - print('remove battles states') - except: - print('no battles states to remove') - - # for page progress - try: - st.session_state.progress[prompt_id] = 'ranking' 
- print('reset page progress states') - except: - print('no page progress states to be reset') - - -# hist_data = pd.DataFrame(np.random.normal(42, 10, (200, 1)), columns=["x"]) -@st.cache_resource -def altair_histogram(hist_data, sort_by, mini, maxi): - brushed = alt.selection_interval(encodings=['x'], name="brushed") - - chart = ( - alt.Chart(hist_data) - .mark_bar(opacity=0.7, cornerRadius=2) - .encode(alt.X(f"{sort_by}:Q", bin=alt.Bin(maxbins=25)), y="count()") - # .add_selection(brushed) - # .properties(width=800, height=300) - ) - - # Create a transparent rectangle for highlighting the range - highlight = ( - alt.Chart(pd.DataFrame({'x1': [mini], 'x2': [maxi]})) - .mark_rect(opacity=0.3) - .encode(x='x1', x2='x2') - # .properties(width=800, height=300) - ) - - # Layer the chart and the highlight rectangle - layered_chart = alt.layer(chart, highlight) - - return layered_chart - - -@st.cache_data -def load_hf_dataset(): - # login to huggingface - login(token=os.environ.get("HF_TOKEN")) - - # load from huggingface - roster = pd.DataFrame(load_dataset('MAPS-research/GEMRec-Roster', split='train')) - promptBook = pd.DataFrame(load_dataset('MAPS-research/GEMRec-Metadata', split='train')) - # images_ds = load_from_disk(os.path.join(os.getcwd(), 'data', 'promptbook')) - images_ds = None # set to None for now since we use s3 bucket to store images - - # # process dataset - # roster = roster[['model_id', 'model_name', 'modelVersion_id', 'modelVersion_name', - # 'model_download_count']].drop_duplicates().reset_index(drop=True) - - # add 'custom_score_weights' column to promptBook if not exist - if 'weighted_score_sum' not in promptBook.columns: - promptBook.loc[:, 'weighted_score_sum'] = 0 - - # merge roster and promptbook - promptBook = promptBook.merge(roster[['model_id', 'model_name', 'modelVersion_id', 'modelVersion_name', 'model_download_count']], - on=['model_id', 'modelVersion_id'], how='left') - - # add column to record current row index - promptBook.loc[:, 'row_idx'] = promptBook.index - - # apply a nsfw filter - promptBook = promptBook[promptBook['nsfw_score'] <= 0.84].reset_index(drop=True) - - # add a column that adds up 'norm_clip', 'norm_mcos', and 'norm_pop' - score_weights = [1.0, 0.8, 0.2] - promptBook.loc[:, 'total_score'] = round(promptBook['norm_clip'] * score_weights[0] + promptBook['norm_mcos'] * score_weights[1] + promptBook['norm_pop'] * score_weights[2], 4) - - return roster, promptBook, images_ds - -@st.cache_data -def load_tsne_coordinates(items): - # load tsne coordinates - tsne_df = pd.read_parquet('./data/feats_tsne.parquet') - - # print(tsne_df['modelVersion_id'].dtype) - - print('before merge:', items) - items = items.merge(tsne_df, on=['modelVersion_id', 'prompt_id'], how='left') - print('after merge:', items) - return items - - -if __name__ == "__main__": - st.set_page_config(page_title="Model Coffer Gallery", page_icon="🖼️", layout="wide") - - if 'user_id' not in st.session_state: - st.warning('Please log in first.') - home_btn = st.button('Go to Home Page') - if home_btn: - switch_page("home") - else: - # st.write('You have already logged in as ' + st.session_state.user_id[0]) - roster, promptBook, images_ds = load_hf_dataset() - # print(promptBook.columns) - - # initialize selected_dict - if 'selected_dict' not in st.session_state: - st.session_state['selected_dict'] = {} - - app = GalleryApp(promptBook=promptBook, images_ds=images_ds) - app.app() - - # components.html( - # """ - # - # """, - # # unsafe_allow_html=True, - # ) diff --git 
a/spaces/MajinBog/ItsJayQz-GTA5_Artwork_Diffusion/README.md b/spaces/MajinBog/ItsJayQz-GTA5_Artwork_Diffusion/README.md deleted file mode 100644 index 6623db534a590838df47d40419aa6758747fa0f3..0000000000000000000000000000000000000000 --- a/spaces/MajinBog/ItsJayQz-GTA5_Artwork_Diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ItsJayQz-GTA5 Artwork Diffusion -emoji: 🐢 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/__init__.py deleted file mode 100644 index 34383d83f5e76bc801f31b20e5651e383be348b6..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from .build_sam import ( - build_sam, - build_sam_vit_h, - build_sam_vit_l, - build_sam_vit_b, - sam_model_registry, -) -from .predictor import SamPredictor -from .automatic_mask_generator import SamAutomaticMaskGenerator diff --git a/spaces/Manjushri/MusicGen/MODEL_CARD.md b/spaces/Manjushri/MusicGen/MODEL_CARD.md deleted file mode 100644 index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/MODEL_CARD.md +++ /dev/null @@ -1,81 +0,0 @@ -# MusicGen Model Card - -## Model details - -**Organization developing the model:** The FAIR team of Meta AI. - -**Model date:** MusicGen was trained between April 2023 and May 2023. - -**Model version:** This is the version 1 of the model. - -**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation. - -**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv]. - -**Citation details** See [our paper][arxiv] - -**License** Code is released under MIT, model weights are released under CC-BY-NC 4.0. - -**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. 
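As a hedged illustration of how a model described by this card is typically driven through the audiocraft package linked above; the model identifier, method names, and `audio_write` options follow the project's public examples and may differ between audiocraft releases.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# smallest text-to-music variant; identifier assumes a recent audiocraft release
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio per sample

descriptions = ['lo-fi hip hop beat with mellow piano']
wav = model.generate(descriptions)  # tensor of shape [batch, channels, samples]

for i, one_wav in enumerate(wav):
    # write a loudness-normalized WAV next to the script
    audio_write(f'sample_{i}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```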
- -## Intended use -**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - -- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science -- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs - -**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models. - -**Out-of-scope use cases** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. - -## Metrics - -**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark: - -- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) -- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) -- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model - -Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes: - -- Overall quality of the music samples; -- Text relevance to the provided text input; -- Adherence to the melody for melody-guided music generation. - -More details on performance measures and human studies can be found in the paper. - -**Decision thresholds:** Not applicable. - -## Evaluation datasets - -The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. - -## Training datasets - -The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. - -## Quantitative analysis - -More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section. - -## Limitations and biases - -**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model. - -**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). - -**Limitations:** - -- The model is not able to generate realistic vocals. -- The model has been trained with English descriptions and will not perform as well in other languages. 
-- The model does not perform equally well for all music styles and cultures. -- The model sometimes generates end of songs, collapsing to silence. -- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. - -**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. - -**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data. - -**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. - -[arxiv]: https://arxiv.org/abs/2306.05284 diff --git a/spaces/Manjushri/MusicGen/audiocraft/models/encodec.py b/spaces/Manjushri/MusicGen/audiocraft/models/encodec.py deleted file mode 100644 index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/models/encodec.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -from einops import rearrange -import torch -from torch import nn - -from .. import quantization as qt - - -class CompressionModel(ABC, nn.Module): - - @abstractmethod - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - ... - - @abstractmethod - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """See `EncodecModel.encode`""" - ... - - @abstractmethod - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """See `EncodecModel.decode`""" - ... - - @property - @abstractmethod - def channels(self) -> int: - ... - - @property - @abstractmethod - def frame_rate(self) -> int: - ... - - @property - @abstractmethod - def sample_rate(self) -> int: - ... - - @property - @abstractmethod - def cardinality(self) -> int: - ... - - @property - @abstractmethod - def num_codebooks(self) -> int: - ... - - @property - @abstractmethod - def total_codebooks(self) -> int: - ... - - @abstractmethod - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - ... - - -class EncodecModel(CompressionModel): - """Encodec model operating on the raw waveform. - - Args: - encoder (nn.Module): Encoder network. - decoder (nn.Module): Decoder network. - quantizer (qt.BaseQuantizer): Quantizer network. - frame_rate (int): Frame rate for the latent representation. - sample_rate (int): Audio sample rate. - channels (int): Number of audio channels. 
- causal (bool): Whether to use a causal version of the model. - renormalize (bool): Whether to renormalize the audio before running the model. - """ - # we need assignement to override the property in the abstract class, - # I couldn't find a better way... - frame_rate: int = 0 - sample_rate: int = 0 - channels: int = 0 - - def __init__(self, - encoder: nn.Module, - decoder: nn.Module, - quantizer: qt.BaseQuantizer, - frame_rate: int, - sample_rate: int, - channels: int, - causal: bool = False, - renormalize: bool = False): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.quantizer = quantizer - self.frame_rate = frame_rate - self.sample_rate = sample_rate - self.channels = channels - self.renormalize = renormalize - self.causal = causal - if self.causal: - # we force disabling here to avoid handling linear overlap of segments - # as supported in original EnCodec codebase. - assert not self.renormalize, 'Causal model does not support renormalize' - - @property - def total_codebooks(self): - """Total number of quantizer codebooks available. - """ - return self.quantizer.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - """ - return self.quantizer.num_codebooks - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - self.quantizer.set_num_codebooks(n) - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - return self.quantizer.bins - - def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - scale: tp.Optional[torch.Tensor] - if self.renormalize: - mono = x.mean(dim=1, keepdim=True) - volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt() - scale = 1e-8 + volume - x = x / scale - scale = scale.view(-1, 1) - else: - scale = None - return x, scale - - def postprocess(self, - x: torch.Tensor, - scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor: - if scale is not None: - assert self.renormalize - x = x * scale.view(-1, 1, 1) - return x - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - assert x.dim() == 3 - length = x.shape[-1] - x, scale = self.preprocess(x) - - emb = self.encoder(x) - q_res = self.quantizer(emb, self.frame_rate) - out = self.decoder(q_res.x) - - # remove extra padding added by the encoder and decoder - assert out.shape[-1] >= length, (out.shape[-1], length) - out = out[..., :length] - - q_res.x = self.postprocess(out, scale) - - return q_res - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """Encode the given input tensor to quantized representation along with scale parameter. - - Args: - x (torch.Tensor): Float tensor of shape [B, C, T] - - Returns: - codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of: - codes a float tensor of shape [B, K, T] with K the number of codebooks used and T the timestep. - scale a float tensor containing the scale for audio renormalizealization. - """ - assert x.dim() == 3 - x, scale = self.preprocess(x) - emb = self.encoder(x) - codes = self.quantizer.encode(emb) - return codes, scale - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """Decode the given codes to a reconstructed representation, using the scale to perform - audio denormalization if needed. - - Args: - codes (torch.Tensor): Int tensor of shape [B, K, T] - scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value. 
- - Returns: - out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio. - """ - emb = self.quantizer.decode(codes) - out = self.decoder(emb) - out = self.postprocess(out, scale) - # out contains extra padding added by the encoder and decoder - return out - - -class FlattenedCompressionModel(CompressionModel): - """Wraps a CompressionModel and flatten its codebooks, e.g. - instead of returning [B, K, T], return [B, S, T * (K // S)] with - S the number of codebooks per step, and `K // S` the number of 'virtual steps' - for each real time step. - - Args: - model (CompressionModel): compression model to wrap. - codebooks_per_step (int): number of codebooks to keep per step, - this must divide the number of codebooks provided by the wrapped model. - extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1, - if each codebook has a cardinality N, then the first codebook will - use the range [0, N - 1], and the second [N, 2 N - 1] etc. - On decoding, this can lead to potentially invalid sequences. - Any invalid entry will be silently remapped to the proper range - with a modulo. - """ - def __init__(self, model: CompressionModel, codebooks_per_step: int = 1, - extend_cardinality: bool = True): - super().__init__() - self.model = model - self.codebooks_per_step = codebooks_per_step - self.extend_cardinality = extend_cardinality - - @property - def total_codebooks(self): - return self.model.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - - ..Warning:: this reports the number of codebooks after the flattening - of the codebooks! - """ - assert self.model.num_codebooks % self.codebooks_per_step == 0 - return self.codebooks_per_step - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - - ..Warning:: this sets the number of codebooks **before** the flattening - of the codebooks. - """ - assert n % self.codebooks_per_step == 0 - self.model.set_num_codebooks(n) - - @property - def num_virtual_steps(self) -> int: - """Return the number of virtual steps, e.g. one real step - will be split into that many steps. - """ - return self.model.num_codebooks // self.codebooks_per_step - - @property - def frame_rate(self) -> int: - return self.model.frame_rate * self.num_virtual_steps - - @property - def sample_rate(self) -> int: - return self.model.sample_rate - - @property - def channels(self) -> int: - return self.model.channels - - @property - def cardinality(self): - """Cardinality of each codebook. 
- """ - if self.extend_cardinality: - return self.model.cardinality * self.num_virtual_steps - else: - return self.model.cardinality - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - raise NotImplementedError("Not supported, use encode and decode.") - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - indices, scales = self.model.encode(x) - B, K, T = indices.shape - indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step) - if self.extend_cardinality: - for virtual_step in range(1, self.num_virtual_steps): - indices[..., virtual_step] += self.model.cardinality * virtual_step - indices = rearrange(indices, 'b k t v -> b k (t v)') - return (indices, scales) - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - B, K, T = codes.shape - assert T % self.num_virtual_steps == 0 - codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps) - # We silently ignore potential errors from the LM when - # using extend_cardinality. - codes = codes % self.model.cardinality - return self.model.decode(codes, scale) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/compose.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/compose.py deleted file mode 100644 index cbfcbb925c6d4ebf849328b9f94ef6fc24359bf5..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/compose.py +++ /dev/null @@ -1,51 +0,0 @@ -import collections - -from annotator.uniformer.mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose(object): - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. 
- """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/__init__.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MingGatsby/VoiceFixer/README.md b/spaces/MingGatsby/VoiceFixer/README.md deleted file mode 100644 index 76d456430de8f9e57614e0d0b6ba3a4ea945530b..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/VoiceFixer/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: VoiceFixer -emoji: 💩 -colorFrom: indigo -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false -duplicated_from: Kevin676/VoiceFixer ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/tune.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/tune.py deleted file mode 100644 index 3473b5e94bd3c1f737a18f0187790d5df2d7a2aa..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/tune.py +++ /dev/null @@ -1,262 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -r"""Run grid search. - -Look at launch_tuning.sh for details on how to tune at scale. - -Usage example: -Tune with one worker on the local machine. 
- -CONFIG="agent=c(algorithm='pg')," -CONFIG+="env=c(task_cycle=['reverse-tune', 'remove-tune'])" -HPARAM_SPACE_TYPE="pg" -OUT_DIR="/tmp/bf_pg_tune" -MAX_NPE=5000000 -NUM_REPETITIONS=50 -rm -rf $OUT_DIR -mkdir $OUT_DIR -bazel run -c opt single_task:tune -- \ - --alsologtostderr \ - --config="$CONFIG" \ - --max_npe="$MAX_NPE" \ - --num_repetitions="$NUM_REPETITIONS" \ - --logdir="$OUT_DIR" \ - --summary_interval=1 \ - --model_v=0 \ - --hparam_space="$HPARAM_SPACE_TYPE" \ - --tuner_id=0 \ - --num_tuners=1 \ - 2>&1 >"$OUT_DIR/tuner_0.log" -learning/brain/tensorboard/tensorboard.sh --port 12345 --logdir "$OUT_DIR" -""" - -import ast -import os - -from absl import app -from absl import flags -from absl import logging -import numpy as np -from six.moves import xrange -import tensorflow as tf - -from single_task import defaults # brain coder -from single_task import run as run_lib # brain coder - -FLAGS = flags.FLAGS -flags.DEFINE_integer( - 'tuner_id', 0, - 'The unique ID for this tuning worker.') -flags.DEFINE_integer( - 'num_tuners', 1, - 'How many tuners are there.') -flags.DEFINE_string( - 'hparam_space', 'default', - 'String name which denotes the hparam space to tune over. This is ' - 'algorithm dependent.') -flags.DEFINE_string( - 'fixed_hparams', '', - 'HParams string. Used to fix hparams during tuning.') -flags.DEFINE_float( - 'success_rate_objective_weight', 1.0, - 'How much to weight success rate vs num programs seen. By default, only ' - 'success rate is optimized (this is the setting used in the paper).') - - -def parse_hparams_string(hparams_str): - hparams = {} - for term in hparams_str.split(','): - if not term: - continue - name, value = term.split('=') - hparams[name.strip()] = ast.literal_eval(value) - return hparams - - -def int_to_multibase(n, bases): - digits = [0] * len(bases) - for i, b in enumerate(bases): - n, d = divmod(n, b) - digits[i] = d - return digits - - -def hparams_for_index(index, tuning_space): - keys = sorted(tuning_space.keys()) - indices = int_to_multibase(index, [len(tuning_space[k]) for k in keys]) - return tf.contrib.training.HParams( - **{k: tuning_space[k][i] for k, i in zip(keys, indices)}) - - -def run_tuner_loop(ns): - """Run tuning loop for this worker.""" - is_chief = FLAGS.task_id == 0 - tuning_space = ns.define_tuner_hparam_space( - hparam_space_type=FLAGS.hparam_space) - fixed_hparams = parse_hparams_string(FLAGS.fixed_hparams) - for name, value in fixed_hparams.iteritems(): - tuning_space[name] = [value] - tuning_space_size = np.prod([len(values) for values in tuning_space.values()]) - - num_local_trials, remainder = divmod(tuning_space_size, FLAGS.num_tuners) - if FLAGS.tuner_id < remainder: - num_local_trials += 1 - starting_trial_id = ( - num_local_trials * FLAGS.tuner_id + min(remainder, FLAGS.tuner_id)) - - logging.info('tuning_space_size: %d', tuning_space_size) - logging.info('num_local_trials: %d', num_local_trials) - logging.info('starting_trial_id: %d', starting_trial_id) - - for local_trial_index in xrange(num_local_trials): - trial_config = defaults.default_config_with_updates(FLAGS.config) - global_trial_index = local_trial_index + starting_trial_id - trial_name = 'trial_' + str(global_trial_index) - trial_dir = os.path.join(FLAGS.logdir, trial_name) - hparams = hparams_for_index(global_trial_index, tuning_space) - ns.write_hparams_to_config( - trial_config, hparams, hparam_space_type=FLAGS.hparam_space) - - results_list = ns.run_training( - config=trial_config, tuner=None, logdir=trial_dir, is_chief=is_chief, - 
trial_name=trial_name) - - if not is_chief: - # Only chief worker needs to write tuning results to disk. - continue - - objective, metrics = compute_tuning_objective( - results_list, hparams, trial_name, num_trials=tuning_space_size) - logging.info('metrics:\n%s', metrics) - logging.info('objective: %s', objective) - logging.info('programs_seen_fraction: %s', - metrics['programs_seen_fraction']) - logging.info('success_rate: %s', metrics['success_rate']) - logging.info('success_rate_objective_weight: %s', - FLAGS.success_rate_objective_weight) - - tuning_results_file = os.path.join(trial_dir, 'tuning_results.txt') - with tf.gfile.FastGFile(tuning_results_file, 'a') as writer: - writer.write(str(metrics) + '\n') - - logging.info('Trial %s complete.', trial_name) - - -def compute_tuning_objective(results_list, hparams, trial_name, num_trials): - """Compute tuning objective and metrics given results and trial information. - - Args: - results_list: List of results dicts read from disk. These are written by - workers. - hparams: tf.contrib.training.HParams instance containing the hparams used - in this trial (only the hparams which are being tuned). - trial_name: Name of this trial. Used to create a trial directory. - num_trials: Total number of trials that need to be run. This is saved in the - metrics dict for future reference. - - Returns: - objective: The objective computed for this trial. Choose the hparams for the - trial with the largest objective value. - metrics: Information about this trial. A dict. - """ - found_solution = [r['found_solution'] for r in results_list] - successful_program_counts = [ - r['npe'] for r in results_list if r['found_solution']] - - success_rate = sum(found_solution) / float(len(results_list)) - - max_programs = FLAGS.max_npe # Per run. - all_program_counts = [ - r['npe'] if r['found_solution'] else max_programs - for r in results_list] - programs_seen_fraction = ( - float(sum(all_program_counts)) - / (max_programs * len(all_program_counts))) - - # min/max/avg stats are over successful runs. - metrics = { - 'num_runs': len(results_list), - 'num_succeeded': sum(found_solution), - 'success_rate': success_rate, - 'programs_seen_fraction': programs_seen_fraction, - 'avg_programs': np.mean(successful_program_counts), - 'max_possible_programs_per_run': max_programs, - 'global_step': sum([r['num_batches'] for r in results_list]), - 'hparams': hparams.values(), - 'trial_name': trial_name, - 'num_trials': num_trials} - - # Report stats per tasks. 
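A hedged, self-contained illustration of how `run_tuner_loop` above maps a flat trial index onto one point of the hyperparameter grid via `int_to_multibase`; the tuning space below is a made-up example rather than one of the real hparam spaces.

```python
def int_to_multibase(n, bases):
    # same digit expansion as in tune.py: least-significant digit first
    digits = [0] * len(bases)
    for i, b in enumerate(bases):
        n, d = divmod(n, b)
        digits[i] = d
    return digits

# hypothetical tuning space, not one of the real hparam_space definitions
tuning_space = {'lr': [1e-4, 1e-3, 1e-2], 'entropy_beta': [0.0, 0.01]}
keys = sorted(tuning_space)                       # fixed key order, as in hparams_for_index
bases = [len(tuning_space[k]) for k in keys]

for trial_index in range(3 * 2):                  # product of the per-key sizes
    digits = int_to_multibase(trial_index, bases)
    hparams = {k: tuning_space[k][d] for k, d in zip(keys, digits)}
    print(trial_index, hparams)                   # every index hits a distinct grid point
```

Splitting the index range evenly across `num_tuners` workers, as done above, therefore partitions the full grid without any coordination between tuners.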
- tasks = [r['task'] for r in results_list] - for task in set(tasks): - task_list = [r for r in results_list if r['task'] == task] - found_solution = [r['found_solution'] for r in task_list] - successful_rewards = [ - r['best_reward'] for r in task_list - if r['found_solution']] - successful_num_batches = [ - r['num_batches'] - for r in task_list if r['found_solution']] - successful_program_counts = [ - r['npe'] for r in task_list if r['found_solution']] - metrics_append = { - task + '__num_runs': len(task_list), - task + '__num_succeeded': sum(found_solution), - task + '__success_rate': ( - sum(found_solution) / float(len(task_list)))} - metrics.update(metrics_append) - if any(found_solution): - metrics_append = { - task + '__min_reward': min(successful_rewards), - task + '__max_reward': max(successful_rewards), - task + '__avg_reward': np.median(successful_rewards), - task + '__min_programs': min(successful_program_counts), - task + '__max_programs': max(successful_program_counts), - task + '__avg_programs': np.mean(successful_program_counts), - task + '__min_batches': min(successful_num_batches), - task + '__max_batches': max(successful_num_batches), - task + '__avg_batches': np.mean(successful_num_batches)} - metrics.update(metrics_append) - - # Objective will be maximized. - # Maximize success rate, minimize num programs seen. - # Max objective is always 1. - weight = FLAGS.success_rate_objective_weight - objective = ( - weight * success_rate - + (1 - weight) * (1 - programs_seen_fraction)) - metrics['objective'] = objective - - return objective, metrics - - -def main(argv): - del argv - - logging.set_verbosity(FLAGS.log_level) - - if not FLAGS.logdir: - raise ValueError('logdir flag must be provided.') - if FLAGS.num_workers <= 0: - raise ValueError('num_workers flag must be greater than 0.') - if FLAGS.task_id < 0: - raise ValueError('task_id flag must be greater than or equal to 0.') - if FLAGS.task_id >= FLAGS.num_workers: - raise ValueError( - 'task_id flag must be strictly less than num_workers flag.') - if FLAGS.num_tuners <= 0: - raise ValueError('num_tuners flag must be greater than 0.') - if FLAGS.tuner_id < 0: - raise ValueError('tuner_id flag must be greater than or equal to 0.') - if FLAGS.tuner_id >= FLAGS.num_tuners: - raise ValueError( - 'tuner_id flag must be strictly less than num_tuners flag.') - - ns, _ = run_lib.get_namespace(FLAGS.config) - run_tuner_loop(ns) - - -if __name__ == '__main__': - app.run(main) diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_download_init_models.sh b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_download_init_models.sh deleted file mode 100644 index 1900bd0b03566d29dac8a8de5f4fce623be98a92..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_download_init_models.sh +++ /dev/null @@ -1,18 +0,0 @@ -# Script to download models to initialize the RGB and D models for training.We -# use ResNet-v2-50 for both modalities. - -mkdir -p data/init_models -cd data/init_models - -# RGB Models are initialized by pre-training on ImageNet. -mkdir -p resnet_v2_50 -RGB_URL="http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz" -wget $RGB_URL -tar -xf resnet_v2_50_2017_04_14.tar.gz -C resnet_v2_50 - -# Depth models are initialized by distilling the RGB model to D images using -# Cross-Modal Distillation (https://arxiv.org/abs/1507.00448). 
-mkdir -p distill_rgb_to_d_resnet_v2_50 -D_URL="http://download.tensorflow.org/models/cognitive_mapping_and_planning/2017_04_16/distill_rgb_to_d_resnet_v2_50.tar" -wget $D_URL -tar -xf distill_rgb_to_d_resnet_v2_50.tar -C distill_rgb_to_d_resnet_v2_50 diff --git a/spaces/Nee001/bing0/src/lib/bots/bing/types.ts b/spaces/Nee001/bing0/src/lib/bots/bing/types.ts deleted file mode 100644 index 5a9813b797d13b592ec17b45cfac4bd46510d883..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,261 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_IP_FORBIDDEN = 'BING_IP_FORBIDDEN', - BING_TRY_LATER = 'BING_TRY_LATER', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - 
conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/NoCrypt/pixelization/models/networks.py b/spaces/NoCrypt/pixelization/models/networks.py deleted file mode 100644 index 0b3f3f825d3d4b6513ab040f6018823f7c2bda03..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/pixelization/models/networks.py +++ /dev/null @@ -1,244 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn import init -import functools -from torch.optim import lr_scheduler -from .c2pGen import * -from .p2cGen import * -from .c2pDis import * - -class Identity(nn.Module): - def forward(self, x): - return x - -def get_norm_layer(norm_type='instance'): - """Return a normalization layer - - Parameters: - norm_type (str) -- the name of the normalization layer: batch | instance | none - - For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). - For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics. 
- """ - if norm_type == 'batch': - norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True) - elif norm_type == 'instance': - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) - elif norm_type == 'none': - def norm_layer(x): return Identity() - else: - raise NotImplementedError('normalization layer [%s] is not found' % norm_type) - return norm_layer - - -def get_scheduler(optimizer, opt): - """Return a learning rate scheduler - - Parameters: - optimizer -- the optimizer of the network - opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.  - opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine - - For 'linear', we keep the same learning rate for the first epochs - and linearly decay the rate to zero over the next epochs. - For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers. - See https://pytorch.org/docs/stable/optim.html for more details. - """ - if opt.lr_policy == 'linear': - def lambda_rule(epoch): - lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs_decay + 1) - return lr_l - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -def init_weights(net, init_type='normal', init_gain=0.02): - """Initialize network weights. - - Parameters: - net (network) -- network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - - """ - def init_func(m): # define the initialization function - classname = m.__class__.__name__ - if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type == 'normal': - init.normal_(m.weight.data, 0.0, init_gain) - elif init_type == 'xavier': - init.xavier_normal_(m.weight.data, gain=init_gain) - elif init_type == 'kaiming': - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight.data, gain=init_gain) - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies. - init.normal_(m.weight.data, 1.0, init_gain) - init.constant_(m.bias.data, 0.0) - - #print('initialize network with %s' % init_type) - net.apply(init_func) # apply the initialization function - - -def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]): - """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights - Parameters: - net (network) -- the network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - gain (float) -- scaling factor for normal, xavier and orthogonal. 
- gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Return an initialized network. - """ - gpu_ids = [0] - if len(gpu_ids) > 0: - # assert(torch.cuda.is_available()) #uncomment this for using gpu - net.to(torch.device("cpu")) #change this for using gpu to gpu_ids[0] - net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs - init_weights(net, init_type, init_gain=init_gain) - return net - - -def define_G(input_nc, output_nc, ngf, netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[]): - """Create a generator - - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last conv layer - netG (str) -- the architecture's name: resnet_9blocks | resnet_6blocks | unet_256 | unet_128 - norm (str) -- the name of normalization layers used in the network: batch | instance | none - use_dropout (bool) -- if use dropout layers. - init_type (str) -- the name of our initialization method. - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Returns a generator - """ - net = None - norm_layer = get_norm_layer(norm_type=norm) - - if netG == 'c2pGen': # style_dim mlp_dim - net = C2PGen(input_nc, output_nc, ngf, 2, 4, 256, 256, activ='relu', pad_type='reflect') - #print('c2pgen resblock is 8') - elif netG == 'p2cGen': - net = P2CGen(input_nc, output_nc, ngf, 2, 3, activ='relu', pad_type='reflect') - elif netG == 'antialias': - net = AliasNet(input_nc, output_nc, ngf, 2, 3, activ='relu', pad_type='reflect') - else: - raise NotImplementedError('Generator model name [%s] is not recognized' % netG) - return init_net(net, init_type, init_gain, gpu_ids) - - - -def define_D(input_nc, ndf, netD, n_layers_D=3, norm='batch', init_type='normal', init_gain=0.02, gpu_ids=[]): - """Create a discriminator - - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the first conv layer - netD (str) -- the architecture's name: basic | n_layers | pixel - n_layers_D (int) -- the number of conv layers in the discriminator; effective when netD=='n_layers' - norm (str) -- the type of normalization layers used in the network. - init_type (str) -- the name of the initialization method. - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Returns a discriminator - """ - net = None - norm_layer = get_norm_layer(norm_type=norm) - - - if netD == 'CPDis': - net = CPDis(image_size=256, conv_dim=64, repeat_num=3, norm='SN') - elif netD == 'CPDis_cls': - net = CPDis_cls(image_size=256, conv_dim=64, repeat_num=3, norm='SN') - else: - raise NotImplementedError('Discriminator model name [%s] is not recognized' % netD) - return init_net(net, init_type, init_gain, gpu_ids) - - -class GANLoss(nn.Module): - """Define different GAN objectives. - - The GANLoss class abstracts away the need to create the target label tensor - that has the same size as the input. - """ - - def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0): - """ Initialize the GANLoss class. - - Parameters: - gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp. 
- target_real_label (bool) - - label for a real image - target_fake_label (bool) - - label of a fake image - - Note: Do not use sigmoid as the last layer of Discriminator. - LSGAN needs no sigmoid. vanilla GANs will handle it with BCEWithLogitsLoss. - """ - super(GANLoss, self).__init__() - self.register_buffer('real_label', torch.tensor(target_real_label)) - self.register_buffer('fake_label', torch.tensor(target_fake_label)) - self.gan_mode = gan_mode - if gan_mode == 'lsgan': - self.loss = nn.MSELoss() - elif gan_mode == 'vanilla': - self.loss = nn.BCEWithLogitsLoss() - elif gan_mode in ['wgangp']: - self.loss = None - else: - raise NotImplementedError('gan mode %s not implemented' % gan_mode) - - def get_target_tensor(self, prediction, target_is_real): - """Create label tensors with the same size as the input. - - Parameters: - prediction (tensor) - - tpyically the prediction from a discriminator - target_is_real (bool) - - if the ground truth label is for real images or fake images - - Returns: - A label tensor filled with ground truth label, and with the size of the input - """ - - if target_is_real: - target_tensor = self.real_label - else: - target_tensor = self.fake_label - return target_tensor.expand_as(prediction) - - def __call__(self, prediction, target_is_real): - """Calculate loss given Discriminator's output and grount truth labels. - - Parameters: - prediction (tensor) - - tpyically the prediction output from a discriminator - target_is_real (bool) - - if the ground truth label is for real images or fake images - - Returns: - the calculated loss. - """ - if self.gan_mode in ['lsgan', 'vanilla']: - target_tensor = self.get_target_tensor(prediction, target_is_real) - loss = self.loss(prediction, target_tensor) - elif self.gan_mode == 'wgangp': - if target_is_real: - loss = -prediction.mean() - else: - loss = prediction.mean() - return loss - - - - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/text_to_speech.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/text_to_speech.py deleted file mode 100644 index 5646e41d39f6e39d4b046ee34ff69b998dab160d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/text_to_speech.py +++ /dev/null @@ -1,467 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
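A hedged usage sketch for the `GANLoss` module defined above in lsgan mode; the import path and the single-layer stand-in discriminator are assumptions for illustration, not the CPDis discriminator actually used by the pixelization model.

```python
import torch
import torch.nn as nn

# assumes the pixelization space's models package is importable as models.networks
from models.networks import GANLoss

gan_loss = GANLoss('lsgan')  # MSE against soft 1.0 / 0.0 target tensors
netD = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 1))  # toy stand-in discriminator

real = torch.randn(4, 3, 8, 8)
fake = torch.randn(4, 3, 8, 8)  # would normally come from the generator

# discriminator update: push real predictions toward the real label, detached fakes toward the fake label
loss_D = gan_loss(netD(real), True) + gan_loss(netD(fake.detach()), False)

# generator update: ask the discriminator to score the fakes as real
loss_G = gan_loss(netD(fake), True)
```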
- -import logging -import os -import os.path as op - -import torch -import torch.nn.functional as F -import numpy as np - -from fairseq.data.audio.text_to_speech_dataset import TextToSpeechDatasetCreator -from fairseq.tasks import register_task -from fairseq.tasks.speech_to_text import SpeechToTextTask -from fairseq.speech_generator import ( - AutoRegressiveSpeechGenerator, NonAutoregressiveSpeechGenerator, - TeacherForcingAutoRegressiveSpeechGenerator -) - -logging.basicConfig( - format='%(asctime)s | %(levelname)s | %(name)s | %(message)s', - datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO -) -logger = logging.getLogger(__name__) - - -try: - from tensorboardX import SummaryWriter -except ImportError: - logger.info("Please install tensorboardX: pip install tensorboardX") - SummaryWriter = None - - -@register_task('text_to_speech') -class TextToSpeechTask(SpeechToTextTask): - @staticmethod - def add_args(parser): - parser.add_argument('data', help='manifest root path') - parser.add_argument( - '--config-yaml', type=str, default='config.yaml', - help='Configuration YAML filename (under manifest root)' - ) - parser.add_argument('--max-source-positions', default=1024, type=int, - metavar='N', - help='max number of tokens in the source sequence') - parser.add_argument('--max-target-positions', default=1200, type=int, - metavar='N', - help='max number of tokens in the target sequence') - parser.add_argument("--n-frames-per-step", type=int, default=1) - parser.add_argument("--eos-prob-threshold", type=float, default=0.5) - parser.add_argument("--eval-inference", action="store_true") - parser.add_argument("--eval-tb-nsample", type=int, default=8) - parser.add_argument("--vocoder", type=str, default="griffin_lim") - parser.add_argument("--spec-bwd-max-iter", type=int, default=8) - - def __init__(self, args, src_dict): - super().__init__(args, src_dict) - self.src_dict = src_dict - self.sr = self.data_cfg.config.get("features").get("sample_rate") - - self.tensorboard_writer = None - self.tensorboard_dir = "" - if args.tensorboard_logdir and SummaryWriter is not None: - self.tensorboard_dir = os.path.join(args.tensorboard_logdir, - "valid_extra") - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - is_train_split = split.startswith('train') - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - self.datasets[split] = TextToSpeechDatasetCreator.from_tsv( - self.args.data, self.data_cfg, split, self.src_dict, - pre_tokenizer, bpe_tokenizer, is_train_split=is_train_split, - epoch=epoch, seed=self.args.seed, - n_frames_per_step=self.args.n_frames_per_step, - speaker_to_id=self.speaker_to_id - ) - - @property - def target_dictionary(self): - return None - - @property - def source_dictionary(self): - return self.src_dict - - def get_speaker_embeddings_path(self): - speaker_emb_path = None - if self.data_cfg.config.get("speaker_emb_filename") is not None: - speaker_emb_path = op.join( - self.args.data, self.data_cfg.config.get("speaker_emb_filename") - ) - return speaker_emb_path - - @classmethod - def get_speaker_embeddings(cls, args): - embed_speaker = None - if args.speaker_to_id is not None: - if args.speaker_emb_path is None: - embed_speaker = torch.nn.Embedding( - len(args.speaker_to_id), args.speaker_embed_dim - ) - else: - speaker_emb_mat = np.load(args.speaker_emb_path) - assert speaker_emb_mat.shape[1] == args.speaker_embed_dim - embed_speaker = torch.nn.Embedding.from_pretrained( - torch.from_numpy(speaker_emb_mat), freeze=True, - ) - 
logger.info( - f"load speaker embeddings from {args.speaker_emb_path}. " - f"train embedding? {embed_speaker.weight.requires_grad}\n" - f"embeddings:\n{speaker_emb_mat}" - ) - return embed_speaker - - def build_model(self, cfg): - cfg.pitch_min = self.data_cfg.config["features"].get("pitch_min", None) - cfg.pitch_max = self.data_cfg.config["features"].get("pitch_max", None) - cfg.energy_min = self.data_cfg.config["features"].get("energy_min", None) - cfg.energy_max = self.data_cfg.config["features"].get("energy_max", None) - cfg.speaker_emb_path = self.get_speaker_embeddings_path() - model = super().build_model(cfg) - self.generator = None - if getattr(cfg, "eval_inference", False): - self.generator = self.build_generator([model], cfg) - return model - - def build_generator(self, models, cfg, vocoder=None, **unused): - if vocoder is None: - vocoder = self.build_default_vocoder() - model = models[0] - if getattr(model, "NON_AUTOREGRESSIVE", False): - return NonAutoregressiveSpeechGenerator( - model, vocoder, self.data_cfg - ) - else: - generator = AutoRegressiveSpeechGenerator - if getattr(cfg, "teacher_forcing", False): - generator = TeacherForcingAutoRegressiveSpeechGenerator - logger.info("Teacher forcing mode for generation") - return generator( - model, vocoder, self.data_cfg, - max_iter=self.args.max_target_positions, - eos_prob_threshold=self.args.eos_prob_threshold - ) - - def build_default_vocoder(self): - from fairseq.models.text_to_speech.vocoder import get_vocoder - vocoder = get_vocoder(self.args, self.data_cfg) - if torch.cuda.is_available() and not self.args.cpu: - vocoder = vocoder.cuda() - else: - vocoder = vocoder.cpu() - return vocoder - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step( - sample, model, criterion - ) - - if getattr(self.args, "eval_inference", False): - hypos, inference_losses = self.valid_step_with_inference( - sample, model, self.generator - ) - for k, v in inference_losses.items(): - assert(k not in logging_output) - logging_output[k] = v - - picked_id = 0 - if self.tensorboard_dir and (sample["id"] == picked_id).any(): - self.log_tensorboard( - sample, - hypos[:self.args.eval_tb_nsample], - model._num_updates, - is_na_model=getattr(model, "NON_AUTOREGRESSIVE", False) - ) - return loss, sample_size, logging_output - - def valid_step_with_inference(self, sample, model, generator): - hypos = generator.generate(model, sample, has_targ=True) - - losses = { - "mcd_loss": 0., - "targ_frames": 0., - "pred_frames": 0., - "nins": 0., - "ndel": 0., - } - rets = batch_mel_cepstral_distortion( - [hypo["targ_waveform"] for hypo in hypos], - [hypo["waveform"] for hypo in hypos], - self.sr, - normalize_type=None - ) - for d, extra in rets: - pathmap = extra[-1] - losses["mcd_loss"] += d.item() - losses["targ_frames"] += pathmap.size(0) - losses["pred_frames"] += pathmap.size(1) - losses["nins"] += (pathmap.sum(dim=1) - 1).sum().item() - losses["ndel"] += (pathmap.sum(dim=0) - 1).sum().item() - - return hypos, losses - - def log_tensorboard(self, sample, hypos, num_updates, is_na_model=False): - if self.tensorboard_writer is None: - self.tensorboard_writer = SummaryWriter(self.tensorboard_dir) - tb_writer = self.tensorboard_writer - for b in range(len(hypos)): - idx = sample["id"][b] - text = sample["src_texts"][b] - targ = hypos[b]["targ_feature"] - pred = hypos[b]["feature"] - attn = hypos[b]["attn"] - - if is_na_model: - data = plot_tts_output( - [targ.transpose(0, 1), pred.transpose(0, 1)], - [f"target 
(idx={idx})", "output"], attn, - "alignment", ret_np=True, suptitle=text, - ) - else: - eos_prob = hypos[b]["eos_prob"] - data = plot_tts_output( - [targ.transpose(0, 1), pred.transpose(0, 1), attn], - [f"target (idx={idx})", "output", "alignment"], eos_prob, - "eos prob", ret_np=True, suptitle=text, - ) - - tb_writer.add_image( - f"inference_sample_{b}", data, num_updates, - dataformats="HWC" - ) - - if hypos[b]["waveform"] is not None: - targ_wave = hypos[b]["targ_waveform"].detach().cpu().float() - pred_wave = hypos[b]["waveform"].detach().cpu().float() - tb_writer.add_audio( - f"inference_targ_{b}", - targ_wave, - num_updates, - sample_rate=self.sr - ) - tb_writer.add_audio( - f"inference_pred_{b}", - pred_wave, - num_updates, - sample_rate=self.sr - ) - - -def save_figure_to_numpy(fig): - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - return data - - -DEFAULT_V_MIN = np.log(1e-5) - - -def plot_tts_output( - data_2d, title_2d, data_1d, title_1d, figsize=(24, 4), - v_min=DEFAULT_V_MIN, v_max=3, ret_np=False, suptitle="" -): - try: - import matplotlib.pyplot as plt - from mpl_toolkits.axes_grid1 import make_axes_locatable - except ImportError: - raise ImportError("Please install Matplotlib: pip install matplotlib") - - data_2d = [ - x.detach().cpu().float().numpy() - if isinstance(x, torch.Tensor) else x for x in data_2d - ] - fig, axes = plt.subplots(1, len(data_2d) + 1, figsize=figsize) - if suptitle: - fig.suptitle(suptitle[:400]) # capped at 400 chars - axes = [axes] if len(data_2d) == 0 else axes - for ax, x, name in zip(axes, data_2d, title_2d): - ax.set_title(name) - divider = make_axes_locatable(ax) - cax = divider.append_axes('right', size='5%', pad=0.05) - im = ax.imshow( - x, origin="lower", aspect="auto", vmin=max(x.min(), v_min), - vmax=min(x.max(), v_max) - ) - fig.colorbar(im, cax=cax, orientation='vertical') - - if isinstance(data_1d, torch.Tensor): - data_1d = data_1d.detach().cpu().numpy() - axes[-1].plot(data_1d) - axes[-1].set_title(title_1d) - plt.tight_layout() - - if ret_np: - fig.canvas.draw() - data = save_figure_to_numpy(fig) - plt.close(fig) - return data - - -def antidiag_indices(offset, min_i=0, max_i=None, min_j=0, max_j=None): - """ - for a (3, 4) matrix with min_i=1, max_i=3, min_j=1, max_j=4, outputs - - offset=2 (1, 1), - offset=3 (2, 1), (1, 2) - offset=4 (2, 2), (1, 3) - offset=5 (2, 3) - - constraints: - i + j = offset - min_j <= j < max_j - min_i <= offset - j < max_i - """ - if max_i is None: - max_i = offset + 1 - if max_j is None: - max_j = offset + 1 - min_j = max(min_j, offset - max_i + 1, 0) - max_j = min(max_j, offset - min_i + 1, offset + 1) - j = torch.arange(min_j, max_j) - i = offset - j - return torch.stack([i, j]) - - -def batch_dynamic_time_warping(distance, shapes=None): - """full batched DTW without any constraints - - distance: (batchsize, max_M, max_N) matrix - shapes: (batchsize,) vector specifying (M, N) for each entry - """ - # ptr: 0=left, 1=up-left, 2=up - ptr2dij = {0: (0, -1), 1: (-1, -1), 2: (-1, 0)} - - bsz, m, n = distance.size() - cumdist = torch.zeros_like(distance) - backptr = torch.zeros_like(distance).type(torch.int32) - 1 - - # initialize - cumdist[:, 0, :] = distance[:, 0, :].cumsum(dim=-1) - cumdist[:, :, 0] = distance[:, :, 0].cumsum(dim=-1) - backptr[:, 0, :] = 0 - backptr[:, :, 0] = 2 - - # DP with optimized anti-diagonal parallelization, O(M+N) steps - for offset in range(2, m + n - 1): - ind = antidiag_indices(offset, 
1, m, 1, n) - c = torch.stack( - [cumdist[:, ind[0], ind[1] - 1], cumdist[:, ind[0] - 1, ind[1] - 1], - cumdist[:, ind[0] - 1, ind[1]], ], - dim=2 - ) - v, b = c.min(axis=-1) - backptr[:, ind[0], ind[1]] = b.int() - cumdist[:, ind[0], ind[1]] = v + distance[:, ind[0], ind[1]] - - # backtrace - pathmap = torch.zeros_like(backptr) - for b in range(bsz): - i = m - 1 if shapes is None else (shapes[b][0] - 1).item() - j = n - 1 if shapes is None else (shapes[b][1] - 1).item() - dtwpath = [(i, j)] - while (i != 0 or j != 0) and len(dtwpath) < 10000: - assert (i >= 0 and j >= 0) - di, dj = ptr2dij[backptr[b, i, j].item()] - i, j = i + di, j + dj - dtwpath.append((i, j)) - dtwpath = dtwpath[::-1] - indices = torch.from_numpy(np.array(dtwpath)) - pathmap[b, indices[:, 0], indices[:, 1]] = 1 - - return cumdist, backptr, pathmap - - -def compute_l2_dist(x1, x2): - """compute an (m, n) L2 distance matrix from (m, d) and (n, d) matrices""" - return torch.cdist(x1.unsqueeze(0), x2.unsqueeze(0), p=2).squeeze(0).pow(2) - - -def compute_rms_dist(x1, x2): - l2_dist = compute_l2_dist(x1, x2) - return (l2_dist / x1.size(1)).pow(0.5) - - -def get_divisor(pathmap, normalize_type): - if normalize_type is None: - return 1 - elif normalize_type == "len1": - return pathmap.size(0) - elif normalize_type == "len2": - return pathmap.size(1) - elif normalize_type == "path": - return pathmap.sum().item() - else: - raise ValueError(f"normalize_type {normalize_type} not supported") - - -def batch_compute_distortion(y1, y2, sr, feat_fn, dist_fn, normalize_type): - d, s, x1, x2 = [], [], [], [] - for cur_y1, cur_y2 in zip(y1, y2): - assert (cur_y1.ndim == 1 and cur_y2.ndim == 1) - cur_x1 = feat_fn(cur_y1) - cur_x2 = feat_fn(cur_y2) - x1.append(cur_x1) - x2.append(cur_x2) - - cur_d = dist_fn(cur_x1, cur_x2) - d.append(cur_d) - s.append(d[-1].size()) - max_m = max(ss[0] for ss in s) - max_n = max(ss[1] for ss in s) - d = torch.stack( - [F.pad(dd, (0, max_n - dd.size(1), 0, max_m - dd.size(0))) for dd in d] - ) - s = torch.LongTensor(s).to(d.device) - cumdists, backptrs, pathmaps = batch_dynamic_time_warping(d, s) - - rets = [] - itr = zip(s, x1, x2, d, cumdists, backptrs, pathmaps) - for (m, n), cur_x1, cur_x2, dist, cumdist, backptr, pathmap in itr: - cumdist = cumdist[:m, :n] - backptr = backptr[:m, :n] - pathmap = pathmap[:m, :n] - divisor = get_divisor(pathmap, normalize_type) - - distortion = cumdist[-1, -1] / divisor - ret = distortion, (cur_x1, cur_x2, dist, cumdist, backptr, pathmap) - rets.append(ret) - return rets - - -def batch_mel_cepstral_distortion( - y1, y2, sr, normalize_type="path", mfcc_fn=None -): - """ - https://arxiv.org/pdf/2011.03568.pdf - - The root mean squared error computed on 13-dimensional MFCC using DTW for - alignment. MFCC features are computed from an 80-channel log-mel - spectrogram using a 50ms Hann window and hop of 12.5ms. 
- - y1: list of waveforms - y2: list of waveforms - sr: sampling rate - """ - - try: - import torchaudio - except ImportError: - raise ImportError("Please install torchaudio: pip install torchaudio") - - if mfcc_fn is None or mfcc_fn.sample_rate != sr: - melkwargs = { - "n_fft": int(0.05 * sr), "win_length": int(0.05 * sr), - "hop_length": int(0.0125 * sr), "f_min": 20, - "n_mels": 80, "window_fn": torch.hann_window - } - mfcc_fn = torchaudio.transforms.MFCC( - sr, n_mfcc=13, log_mels=True, melkwargs=melkwargs - ).to(y1[0].device) - return batch_compute_distortion( - y1, y2, sr, lambda y: mfcc_fn(y).transpose(-1, -2), compute_rms_dist, - normalize_type - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/preprocess_RACE.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/preprocess_RACE.py deleted file mode 100644 index cdd66072718ccb6033304c97926271909a17f9d6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/preprocess_RACE.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import json -import os -import re - - -class InputExample: - def __init__(self, paragraph, qa_list, label): - self.paragraph = paragraph - self.qa_list = qa_list - self.label = label - - -def get_examples(data_dir, set_type): - """ - Extract paragraph and question-answer list from each json file - """ - examples = [] - - levels = ["middle", "high"] - set_type_c = set_type.split("-") - if len(set_type_c) == 2: - levels = [set_type_c[1]] - set_type = set_type_c[0] - for level in levels: - cur_dir = os.path.join(data_dir, set_type, level) - for filename in os.listdir(cur_dir): - cur_path = os.path.join(cur_dir, filename) - with open(cur_path, "r") as f: - cur_data = json.load(f) - answers = cur_data["answers"] - options = cur_data["options"] - questions = cur_data["questions"] - context = cur_data["article"].replace("\n", " ") - context = re.sub(r"\s+", " ", context) - for i in range(len(answers)): - label = ord(answers[i]) - ord("A") - qa_list = [] - question = questions[i] - for j in range(4): - option = options[i][j] - if "_" in question: - qa_cat = question.replace("_", option) - else: - qa_cat = " ".join([question, option]) - qa_cat = re.sub(r"\s+", " ", qa_cat) - qa_list.append(qa_cat) - examples.append(InputExample(context, qa_list, label)) - - return examples - - -def main(): - """ - Helper script to extract paragraphs questions and answers from RACE datasets. 
- """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--input-dir", - help="input directory for downloaded RACE dataset", - ) - parser.add_argument( - "--output-dir", - help="output directory for extracted data", - ) - args = parser.parse_args() - - if not os.path.exists(args.output_dir): - os.makedirs(args.output_dir, exist_ok=True) - - for set_type in ["train", "dev", "test-middle", "test-high"]: - examples = get_examples(args.input_dir, set_type) - qa_file_paths = [ - os.path.join(args.output_dir, set_type + ".input" + str(i + 1)) - for i in range(4) - ] - qa_files = [open(qa_file_path, "w") for qa_file_path in qa_file_paths] - outf_context_path = os.path.join(args.output_dir, set_type + ".input0") - outf_label_path = os.path.join(args.output_dir, set_type + ".label") - outf_context = open(outf_context_path, "w") - outf_label = open(outf_label_path, "w") - for example in examples: - outf_context.write(example.paragraph + "\n") - for i in range(4): - qa_files[i].write(example.qa_list[i] + "\n") - outf_label.write(str(example.label) + "\n") - - for f in qa_files: - f.close() - outf_label.close() - outf_context.close() - - -if __name__ == "__main__": - main() diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py deleted file mode 100644 index 731320e74ebed4d8ceec58c07cb906542b8b021b..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 2 # 100ep -> 200ep - -lr_multiplier.scheduler.milestones = [ - milestone * 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py deleted file mode 100644 index ac1118bcb1a8e7cc991a820ff17c4ae889d2d7e9..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py +++ /dev/null @@ -1,367 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/data/datasets/register_coco_panoptic_annos_semseg.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import json -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets import load_sem_seg -from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES -from detectron2.utils.file_io import PathManager -import contextlib -import logging -import io -from fvcore.common.timer import Timer -import pycocotools.mask as mask_util -from detectron2.structures import BoxMode - - -logger = logging.getLogger(__name__) - - -_PREDEFINED_SPLITS_COCO_PANOPTIC = { - "coco_2017_train_panoptic": ( - # This is the original panoptic annotation directory - 
"coco/panoptic_train2017", - "coco/annotations/panoptic_train2017.json", - # This directory contains semantic annotations that are - # converted from panoptic annotations. - # It is used by PanopticFPN. - # You can use the script at detectron2/datasets/prepare_panoptic_fpn.py - # to create these directories. - "coco/panoptic_semseg_train2017", - ), - "coco_2017_val_panoptic": ( - "coco/panoptic_val2017", - "coco/annotations/panoptic_val2017.json", - "coco/panoptic_semseg_val2017", - ), -} - -def load_coco_instance_json(json_file, image_root, dataset_name=None): - from pycocotools.coco import COCO - - timer = Timer() - json_file = PathManager.get_local_path(json_file) - with contextlib.redirect_stdout(io.StringIO()): - coco_api = COCO(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - id_map = None - if dataset_name is not None: - meta = MetadataCatalog.get(dataset_name) - cat_ids = sorted(coco_api.getCatIds()) - cats = coco_api.loadCats(cat_ids) - # The categories in a custom json file may not be sorted. - thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])] - meta.thing_classes = thing_classes - - # In COCO, certain category ids are artificially removed, - # and by convention they are always ignored. - # We deal with COCO's id issue and translate - # the category ids to contiguous ids in [0, 80). - - # It works by looking at the "categories" field in the json, therefore - # if users' own json also have incontiguous ids, we'll - # apply this mapping as well but print a warning. - if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)): - if "coco" not in dataset_name: - logger.warning( - """ -Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you. -""" - ) - id_map = {v: i for i, v in enumerate(cat_ids)} - meta.thing_dataset_id_to_contiguous_id = id_map - - # sort indices for reproducible results - img_ids = sorted(coco_api.imgs.keys()) - # imgs is a list of dicts, each looks something like: - # {'license': 4, - # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg', - # 'file_name': 'COCO_val2014_000000001268.jpg', - # 'height': 427, - # 'width': 640, - # 'date_captured': '2013-11-17 05:57:24', - # 'id': 1268} - imgs = coco_api.loadImgs(img_ids) - # anns is a list[list[dict]], where each dict is an annotation - # record for an object. The inner list enumerates the objects in an image - # and the outer list enumerates over images. Example of anns[0]: - # [{'segmentation': [[192.81, - # 247.09, - # ... - # 219.03, - # 249.06]], - # 'area': 1035.749, - # 'iscrowd': 0, - # 'image_id': 1268, - # 'bbox': [192.81, 224.8, 74.73, 33.43], - # 'category_id': 16, - # 'id': 42986}, - # ...] - anns = [coco_api.imgToAnns[img_id] for img_id in img_ids] - total_num_valid_anns = sum([len(x) for x in anns]) - total_num_anns = len(coco_api.anns) - if total_num_valid_anns < total_num_anns: - logger.warning( - f"{json_file} contains {total_num_anns} annotations, but only " - f"{total_num_valid_anns} of them match to images in the file." - ) - - if "minival" not in json_file: - # The popular valminusminival & minival annotations for COCO2014 contain this bug. - # However the ratio of buggy annotations there is tiny and does not affect accuracy. - # Therefore we explicitly white-list them. 
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique!".format( - json_file - ) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in COCO format from {}".format(len(imgs_anns), json_file)) - - dataset_dicts = {} - - ann_keys = ["iscrowd", "bbox", "keypoints", "category_id"] - - num_instances_without_valid_segmentation = 0 - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - record["file_name"] = os.path.join(image_root, img_dict["file_name"]) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - # Check that the image_id in this annotation is the same as - # the image_id we're looking at. - # This fails only when the data parsing logic or the annotation file is buggy. - - # The original COCO valminusminival2014 & minival2014 annotation files - # actually contains bugs that, together with certain ways of using COCO API, - # can trigger this assertion. - assert anno["image_id"] == image_id - - assert anno.get("ignore", 0) == 0, '"ignore" in COCO json file is not supported.' - - obj = {key: anno[key] for key in ann_keys if key in anno} - if "bbox" in obj and len(obj["bbox"]) == 0: - raise ValueError( - f"One annotation of image {image_id} contains empty 'bbox' value! " - "This json does not have valid COCO format." - ) - - segm = anno.get("segmentation", None) - if segm: # either list[list[float]] or dict(RLE) - if isinstance(segm, dict): - if isinstance(segm["counts"], list): - # convert to compressed RLE - segm = mask_util.frPyObjects(segm, *segm["size"]) - else: - # filter out invalid polygons (< 3 points) - segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - if len(segm) == 0: - num_instances_without_valid_segmentation += 1 - continue # ignore this instance - obj["segmentation"] = segm - - keypts = anno.get("keypoints", None) - if keypts: # list[int] - for idx, v in enumerate(keypts): - if idx % 3 != 2: - # COCO's segmentation coordinates are floating points in [0, H or W], - # but keypoint coordinates are integers in [0, H-1 or W-1] - # Therefore we assume the coordinates are "pixel indices" and - # add 0.5 to convert to floating point coordinates. - keypts[idx] = v + 0.5 - obj["keypoints"] = keypts - - obj["bbox_mode"] = BoxMode.XYWH_ABS - if id_map: - annotation_category_id = obj["category_id"] - try: - obj["category_id"] = id_map[annotation_category_id] - except KeyError as e: - raise KeyError( - f"Encountered category_id={annotation_category_id} " - "but this id does not exist in 'categories' of the json file." - ) from e - objs.append(obj) - record["annotations"] = objs - dataset_dicts[image_id] = record - - if num_instances_without_valid_segmentation > 0: - logger.warning( - "Filtered out {} instances without valid segmentation. ".format( - num_instances_without_valid_segmentation - ) - + "There might be issues in your dataset generation process. Please " - "check https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html carefully" - ) - return dataset_dicts - -def get_metadata(): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. 
We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1] - stuff_classes = [k["name"] for k in COCO_CATEGORIES] - stuff_colors = [k["color"] for k in COCO_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(COCO_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - # else: - # stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - # in order to use sem_seg evaluator - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - - -def load_coco_panoptic_json(json_file, instances_json, instances_name, image_dir, gt_dir, semseg_dir, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/coco/train2017". - gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017". - json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json". - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - instance_data_dicts = load_coco_instance_json(instances_json, image_dir.replace("panoptic_", ""), instances_name) - - ret = [] - for ann in json_info["annotations"]: - image_id = int(ann["image_id"]) - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. 
- image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - sem_label_file = os.path.join(semseg_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "sem_seg_file_name": sem_label_file, - "segments_info": segments_info, - "annotations": instance_data_dicts[image_id]["annotations"], - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"] - return ret - - -def register_coco_panoptic_annos_sem_seg( - name, metadata, image_root, panoptic_root, panoptic_json, sem_seg_root, instances_json, instances_name, -): - panoptic_name = name - delattr(MetadataCatalog.get(panoptic_name), "thing_classes") - delattr(MetadataCatalog.get(panoptic_name), "thing_colors") - MetadataCatalog.get(panoptic_name).set( - thing_classes=metadata["thing_classes"], - thing_colors=metadata["thing_colors"], - # thing_dataset_id_to_contiguous_id=metadata["thing_dataset_id_to_contiguous_id"], - ) - - # the name is "coco_2017_train_panoptic_with_sem_seg" and "coco_2017_val_panoptic_with_sem_seg" - semantic_name = name + "_with_sem_seg" - DatasetCatalog.register( - semantic_name, - lambda: load_coco_panoptic_json(panoptic_json, instances_json, instances_name, image_root, panoptic_root, sem_seg_root, metadata), - ) - MetadataCatalog.get(semantic_name).set( - sem_seg_root=sem_seg_root, - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="coco_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -def register_all_coco_panoptic_annos_sem_seg(root): - for ( - prefix, - (panoptic_root, panoptic_json, semantic_root), - ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items(): - - prefix_instances = prefix[: -len("_panoptic")] - instances_meta = MetadataCatalog.get(prefix_instances) - image_root, instances_json = instances_meta.image_root, instances_meta.json_file - - if 'val' in instances_json: - instances_json = instances_json.replace('instances_', 'panoptic2instances_') - - register_coco_panoptic_annos_sem_seg( - prefix, - get_metadata(), - image_root, - os.path.join(root, panoptic_root), - os.path.join(root, panoptic_json), - os.path.join(root, semantic_root), - instances_json, - prefix_instances, - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_coco_panoptic_annos_sem_seg(_root) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-scheme-source.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-scheme-source.go deleted file mode 100644 index a3e1c417e8829e4f486161fc206bd51f82add790..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-scheme-source.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/Nan-Do-LeetCodeWizard_13B_v1.0/README.md b/spaces/PeepDaSlan9/Nan-Do-LeetCodeWizard_13B_v1.0/README.md deleted file mode 100644 index 956cd4a0407ec8757ff05b4bf713c85ed0959d4b..0000000000000000000000000000000000000000 --- 
a/spaces/PeepDaSlan9/Nan-Do-LeetCodeWizard_13B_v1.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Nan-Do-LeetCodeWizard 13B V1.0 -emoji: 🌍 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/dialog.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -
- {children} -
-
-) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/sync_bn.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index f78f39181d75bb85c53e8c7c8eaf45690e9f0bee..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import annotator.uniformer.mmcv as mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. - """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/__init__.py deleted file mode 100644 index 835df136bdcf69348281d22914d41aa84cdf92b1..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .color import Color, color_val -from .image import imshow, imshow_bboxes, imshow_det_bboxes -from .optflow import flow2rgb, flowshow, make_color_wheel - -__all__ = [ - 'Color', 'color_val', 'imshow', 'imshow_bboxes', 'imshow_det_bboxes', - 'flowshow', 'flow2rgb', 'make_color_wheel' -] diff --git a/spaces/Potanin/12345/rmvpe.py b/spaces/Potanin/12345/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/Potanin/12345/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - 
super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, 
- ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, 
thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CONTRIBUTING.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CONTRIBUTING.md deleted file mode 100644 index a3e9507643d4439f509a8fc8b87dc73417ef9822..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CONTRIBUTING.md +++ /dev/null @@ -1,35 +0,0 @@ -# Contributing to AudioCraft - -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests - -AudioCraft is the implementation of a research paper. -Therefore, we do not plan on accepting many pull requests for new features. -We certainly welcome them for bug fixes. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to encodec, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. 
diff --git a/spaces/RMXK/RVC_HFF/tools/infer_batch_rvc.py b/spaces/RMXK/RVC_HFF/tools/infer_batch_rvc.py deleted file mode 100644 index 763d17f14877a2ce35f750202e91356c1f24270f..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/tools/infer_batch_rvc.py +++ /dev/null @@ -1,72 +0,0 @@ -import argparse -import os -import sys - -print("Command-line arguments:", sys.argv) - -now_dir = os.getcwd() -sys.path.append(now_dir) -import sys - -import tqdm as tq -from dotenv import load_dotenv -from scipy.io import wavfile - -from configs.config import Config -from infer.modules.vc.modules import VC - - -def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--f0up_key", type=int, default=0) - parser.add_argument("--input_path", type=str, help="input path") - parser.add_argument("--index_path", type=str, help="index path") - parser.add_argument("--f0method", type=str, default="harvest", help="harvest or pm") - parser.add_argument("--opt_path", type=str, help="opt path") - parser.add_argument("--model_name", type=str, help="store in assets/weight_root") - parser.add_argument("--index_rate", type=float, default=0.66, help="index rate") - parser.add_argument("--device", type=str, help="device") - parser.add_argument("--is_half", type=bool, help="use half -> True") - parser.add_argument("--filter_radius", type=int, default=3, help="filter radius") - parser.add_argument("--resample_sr", type=int, default=0, help="resample sr") - parser.add_argument("--rms_mix_rate", type=float, default=1, help="rms mix rate") - parser.add_argument("--protect", type=float, default=0.33, help="protect") - - args = parser.parse_args() - sys.argv = sys.argv[:1] - - return args - - -def main(): - load_dotenv() - args = arg_parse() - config = Config() - config.device = args.device if args.device else config.device - config.is_half = args.is_half if args.is_half else config.is_half - vc = VC(config) - vc.get_vc(args.model_name) - audios = os.listdir(args.input_path) - for file in tq.tqdm(audios): - if file.endswith(".wav"): - file_path = os.path.join(args.input_path, file) - _, wav_opt = vc.vc_single( - 0, - file_path, - args.f0up_key, - None, - args.f0method, - args.index_path, - None, - args.index_rate, - args.filter_radius, - args.resample_sr, - args.rms_mix_rate, - args.protect, - ) - out_path = os.path.join(args.opt_path, file) - wavfile.write(out_path, wav_opt[0], wav_opt[1]) - - -if __name__ == "__main__": - main() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py deleted file mode 100644 index 7686fe85a7cc94188da76bfb1c10ad2a10821256..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py +++ /dev/null @@ -1,54 +0,0 @@ -from .distro import ( - NORMALIZED_DISTRO_ID, - NORMALIZED_LSB_ID, - NORMALIZED_OS_ID, - LinuxDistribution, - __version__, - build_number, - codename, - distro_release_attr, - distro_release_info, - id, - info, - like, - linux_distribution, - lsb_release_attr, - lsb_release_info, - major_version, - minor_version, - name, - os_release_attr, - os_release_info, - uname_attr, - uname_info, - version, - version_parts, -) - -__all__ = [ - "NORMALIZED_DISTRO_ID", - "NORMALIZED_LSB_ID", - "NORMALIZED_OS_ID", - "LinuxDistribution", - "build_number", - "codename", - "distro_release_attr", - "distro_release_info", - "id", - "info", - "like", - 
"linux_distribution", - "lsb_release_attr", - "lsb_release_info", - "major_version", - "minor_version", - "name", - "os_release_attr", - "os_release_info", - "uname_attr", - "uname_info", - "version", - "version_parts", -] - -__version__ = __version__ diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/live_render.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/live_render.py deleted file mode 100644 index b90fbf7f35097694f727e201b0b378942d70a443..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/live_render.py +++ /dev/null @@ -1,113 +0,0 @@ -import sys -from typing import Optional, Tuple - -if sys.version_info >= (3, 8): - from typing import Literal -else: - from pip._vendor.typing_extensions import Literal # pragma: no cover - - -from ._loop import loop_last -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .control import Control -from .segment import ControlType, Segment -from .style import StyleType -from .text import Text - -VerticalOverflowMethod = Literal["crop", "ellipsis", "visible"] - - -class LiveRender: - """Creates a renderable that may be updated. - - Args: - renderable (RenderableType): Any renderable object. - style (StyleType, optional): An optional style to apply to the renderable. Defaults to "". - """ - - def __init__( - self, - renderable: RenderableType, - style: StyleType = "", - vertical_overflow: VerticalOverflowMethod = "ellipsis", - ) -> None: - self.renderable = renderable - self.style = style - self.vertical_overflow = vertical_overflow - self._shape: Optional[Tuple[int, int]] = None - - def set_renderable(self, renderable: RenderableType) -> None: - """Set a new renderable. - - Args: - renderable (RenderableType): Any renderable object, including str. - """ - self.renderable = renderable - - def position_cursor(self) -> Control: - """Get control codes to move cursor to beginning of live render. - - Returns: - Control: A control instance that may be printed. - """ - if self._shape is not None: - _, height = self._shape - return Control( - ControlType.CARRIAGE_RETURN, - (ControlType.ERASE_IN_LINE, 2), - *( - ( - (ControlType.CURSOR_UP, 1), - (ControlType.ERASE_IN_LINE, 2), - ) - * (height - 1) - ) - ) - return Control() - - def restore_cursor(self) -> Control: - """Get control codes to clear the render and restore the cursor to its previous position. - - Returns: - Control: A Control instance that may be printed. 
- """ - if self._shape is not None: - _, height = self._shape - return Control( - ControlType.CARRIAGE_RETURN, - *((ControlType.CURSOR_UP, 1), (ControlType.ERASE_IN_LINE, 2)) * height - ) - return Control() - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - - renderable = self.renderable - style = console.get_style(self.style) - lines = console.render_lines(renderable, options, style=style, pad=False) - shape = Segment.get_shape(lines) - - _, height = shape - if height > options.size.height: - if self.vertical_overflow == "crop": - lines = lines[: options.size.height] - shape = Segment.get_shape(lines) - elif self.vertical_overflow == "ellipsis": - lines = lines[: (options.size.height - 1)] - overflow_text = Text( - "...", - overflow="crop", - justify="center", - end="", - style="live.ellipsis", - ) - lines.append(list(console.render(overflow_text))) - shape = Segment.get_shape(lines) - self._shape = shape - - new_line = Segment.line() - for last, line in loop_last(lines): - yield from line - if not last: - yield new_line diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/version.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/version.py deleted file mode 100644 index de9a09a4ed3b078b37e7490a6686f660ae935aca..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/version.py +++ /dev/null @@ -1,504 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import collections -import itertools -import re -import warnings -from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union - -from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType - -__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"] - -InfiniteTypes = Union[InfinityType, NegativeInfinityType] -PrePostDevType = Union[InfiniteTypes, Tuple[str, int]] -SubLocalType = Union[InfiniteTypes, int, str] -LocalType = Union[ - NegativeInfinityType, - Tuple[ - Union[ - SubLocalType, - Tuple[SubLocalType, str], - Tuple[NegativeInfinityType, SubLocalType], - ], - ..., - ], -] -CmpKey = Tuple[ - int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType -] -LegacyCmpKey = Tuple[int, Tuple[str, ...]] -VersionComparisonMethod = Callable[ - [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool -] - -_Version = collections.namedtuple( - "_Version", ["epoch", "release", "dev", "pre", "post", "local"] -) - - -def parse(version: str) -> Union["LegacyVersion", "Version"]: - """ - Parse the given version string and return either a :class:`Version` object - or a :class:`LegacyVersion` object depending on if the given version is - a valid PEP 440 version or a legacy version. - """ - try: - return Version(version) - except InvalidVersion: - return LegacyVersion(version) - - -class InvalidVersion(ValueError): - """ - An invalid version was found, users should refer to PEP 440. - """ - - -class _BaseVersion: - _key: Union[CmpKey, LegacyCmpKey] - - def __hash__(self) -> int: - return hash(self._key) - - # Please keep the duplicated `isinstance` check - # in the six comparisons hereunder - # unless you find a way to avoid adding overhead function calls. 
-    def __lt__(self, other: "_BaseVersion") -> bool:
-        if not isinstance(other, _BaseVersion):
-            return NotImplemented
-
-        return self._key < other._key
-
-    def __le__(self, other: "_BaseVersion") -> bool:
-        if not isinstance(other, _BaseVersion):
-            return NotImplemented
-
-        return self._key <= other._key
-
-    def __eq__(self, other: object) -> bool:
-        if not isinstance(other, _BaseVersion):
-            return NotImplemented
-
-        return self._key == other._key
-
-    def __ge__(self, other: "_BaseVersion") -> bool:
-        if not isinstance(other, _BaseVersion):
-            return NotImplemented
-
-        return self._key >= other._key
-
-    def __gt__(self, other: "_BaseVersion") -> bool:
-        if not isinstance(other, _BaseVersion):
-            return NotImplemented
-
-        return self._key > other._key
-
-    def __ne__(self, other: object) -> bool:
-        if not isinstance(other, _BaseVersion):
-            return NotImplemented
-
-        return self._key != other._key
-
-
-class LegacyVersion(_BaseVersion):
-    def __init__(self, version: str) -> None:
-        self._version = str(version)
-        self._key = _legacy_cmpkey(self._version)
-
-        warnings.warn(
-            "Creating a LegacyVersion has been deprecated and will be "
-            "removed in the next major release",
-            DeprecationWarning,
-        )
-
-    def __str__(self) -> str:
-        return self._version
-
-    def __repr__(self) -> str:
-        return f"<LegacyVersion('{self}')>"
-
-    @property
-    def public(self) -> str:
-        return self._version
-
-    @property
-    def base_version(self) -> str:
-        return self._version
-
-    @property
-    def epoch(self) -> int:
-        return -1
-
-    @property
-    def release(self) -> None:
-        return None
-
-    @property
-    def pre(self) -> None:
-        return None
-
-    @property
-    def post(self) -> None:
-        return None
-
-    @property
-    def dev(self) -> None:
-        return None
-
-    @property
-    def local(self) -> None:
-        return None
-
-    @property
-    def is_prerelease(self) -> bool:
-        return False
-
-    @property
-    def is_postrelease(self) -> bool:
-        return False
-
-    @property
-    def is_devrelease(self) -> bool:
-        return False
-
-
-_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE)
-
-_legacy_version_replacement_map = {
-    "pre": "c",
-    "preview": "c",
-    "-": "final-",
-    "rc": "c",
-    "dev": "@",
-}
-
-
-def _parse_version_parts(s: str) -> Iterator[str]:
-    for part in _legacy_version_component_re.split(s):
-        part = _legacy_version_replacement_map.get(part, part)
-
-        if not part or part == ".":
-            continue
-
-        if part[:1] in "0123456789":
-            # pad for numeric comparison
-            yield part.zfill(8)
-        else:
-            yield "*" + part
-
-    # ensure that alpha/beta/candidate are before final
-    yield "*final"
-
-
-def _legacy_cmpkey(version: str) -> LegacyCmpKey:
-
-    # We hardcode an epoch of -1 here. A PEP 440 version can only have an epoch
-    # greater than or equal to 0. This will effectively put the LegacyVersion,
-    # which uses the de facto standard originally implemented by setuptools,
-    # before all PEP 440 versions.
-    epoch = -1
-
-    # This scheme is taken from pkg_resources.parse_version in setuptools prior
-    # to its adoption of the packaging library.
-    parts: List[str] = []
-    for part in _parse_version_parts(version.lower()):
-        if part.startswith("*"):
-            # remove "-" before a prerelease tag
-            if part < "*final":
-                while parts and parts[-1] == "*final-":
-                    parts.pop()
-
-            # remove trailing zeros from each series of numeric parts
-            while parts and parts[-1] == "00000000":
-                parts.pop()
-
-        parts.append(part)
-
-    return epoch, tuple(parts)
-
-
-# Deliberately not anchored to the start and end of the string, to make it
-# easier for 3rd party code to reuse
-VERSION_PATTERN = r"""
-    v?
-    (?:
-        (?:(?P<epoch>[0-9]+)!)?                           # epoch
-        (?P<release>[0-9]+(?:\.[0-9]+)*)                  # release segment
-        (?P<pre>                                          # pre-release
-            [-_\.]?
-            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
-            [-_\.]?
-            (?P<pre_n>[0-9]+)?
-        )?
-        (?P<post>                                         # post release
-            (?:-(?P<post_n1>[0-9]+))
-            |
-            (?:
-                [-_\.]?
-                (?P<post_l>post|rev|r)
-                [-_\.]?
-                (?P<post_n2>[0-9]+)?
-            )
-        )?
-        (?P<dev>                                          # dev release
-            [-_\.]?
-            (?P<dev_l>dev)
-            [-_\.]?
-            (?P<dev_n>[0-9]+)?
-        )?
-    )
-    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
-"""
-
-
-class Version(_BaseVersion):
-
-    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
-    def __init__(self, version: str) -> None:
-
-        # Validate the version and parse it into pieces
-        match = self._regex.search(version)
-        if not match:
-            raise InvalidVersion(f"Invalid version: '{version}'")
-
-        # Store the parsed out pieces of the version
-        self._version = _Version(
-            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
-            release=tuple(int(i) for i in match.group("release").split(".")),
-            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
-            post=_parse_letter_version(
-                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
-            ),
-            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
-            local=_parse_local_version(match.group("local")),
-        )
-
-        # Generate a key which will be used for sorting
-        self._key = _cmpkey(
-            self._version.epoch,
-            self._version.release,
-            self._version.pre,
-            self._version.post,
-            self._version.dev,
-            self._version.local,
-        )
-
-    def __repr__(self) -> str:
-        return f""
-
-    def __str__(self) -> str:
-        parts = []
-
-        # Epoch
-        if self.epoch != 0:
-            parts.append(f"{self.epoch}!")
-
-        # Release segment
-        parts.append(".".join(str(x) for x in self.release))
-
-        # Pre-release
-        if self.pre is not None:
-            parts.append("".join(str(x) for x in self.pre))
-
-        # Post-release
-        if self.post is not None:
-            parts.append(f".post{self.post}")
-
-        # Development release
-        if self.dev is not None:
-            parts.append(f".dev{self.dev}")
-
-        # Local version segment
-        if self.local is not None:
-            parts.append(f"+{self.local}")
-
-        return "".join(parts)
-
-    @property
-    def epoch(self) -> int:
-        _epoch: int = self._version.epoch
-        return _epoch
-
-    @property
-    def release(self) -> Tuple[int, ...]:
-        _release: Tuple[int, ...] = self._version.release
-        return _release
-
-    @property
-    def pre(self) -> Optional[Tuple[str, int]]:
-        _pre: Optional[Tuple[str, int]] = self._version.pre
-        return _pre
-
-    @property
-    def post(self) -> Optional[int]:
-        return self._version.post[1] if self._version.post else None
-
-    @property
-    def dev(self) -> Optional[int]:
-        return self._version.dev[1] if self._version.dev else None
-
-    @property
-    def local(self) -> Optional[str]:
-        if self._version.local:
-            return ".".join(str(x) for x in self._version.local)
-        else:
-            return None
-
-    @property
-    def public(self) -> str:
-        return str(self).split("+", 1)[0]
-
-    @property
-    def base_version(self) -> str:
-        parts = []
-
-        # Epoch
-        if self.epoch != 0:
-            parts.append(f"{self.epoch}!")
-
-        # Release segment
-        parts.append(".".join(str(x) for x in self.release))
-
-        return "".join(parts)
-
-    @property
-    def is_prerelease(self) -> bool:
-        return self.dev is not None or self.pre is not None
-
-    @property
-    def is_postrelease(self) -> bool:
-        return self.post is not None
-
-    @property
-    def is_devrelease(self) -> bool:
-        return self.dev is not None
-
-    @property
-    def major(self) -> int:
-        return self.release[0] if len(self.release) >= 1 else 0
-
-    @property
-    def minor(self) -> int:
-        return self.release[1] if len(self.release) >= 2 else 0
-
-    @property
-    def micro(self) -> int:
-        return self.release[2] if len(self.release) >= 3 else 0
-
-
-def _parse_letter_version(
-    letter: str, number: Union[str, bytes, SupportsInt]
-) -> Optional[Tuple[str, int]]:
-
-    if letter:
-        # We consider there to be an implicit 0 in a pre-release if there is
-        # not a numeral associated with it.
-        if number is None:
-            number = 0
-
-        # We normalize any letters to their lower case form
-        letter = letter.lower()
-
-        # We consider some words to be alternate spellings of other words and
-        # in those cases we want to normalize the spellings to our preferred
-        # spelling.
-        if letter == "alpha":
-            letter = "a"
-        elif letter == "beta":
-            letter = "b"
-        elif letter in ["c", "pre", "preview"]:
-            letter = "rc"
-        elif letter in ["rev", "r"]:
-            letter = "post"
-
-        return letter, int(number)
-    if not letter and number:
-        # We assume if we are given a number, but we are not given a letter
-        # then this is using the implicit post release syntax (e.g. 1.0-1)
-        letter = "post"
-
-        return letter, int(number)
-
-    return None
-
-
-_local_version_separators = re.compile(r"[\._-]")
-
-
-def _parse_local_version(local: str) -> Optional[LocalType]:
-    """
-    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
-    """
-    if local is not None:
-        return tuple(
-            part.lower() if not part.isdigit() else int(part)
-            for part in _local_version_separators.split(local)
-        )
-    return None
-
-
-def _cmpkey(
-    epoch: int,
-    release: Tuple[int, ...],
-    pre: Optional[Tuple[str, int]],
-    post: Optional[Tuple[str, int]],
-    dev: Optional[Tuple[str, int]],
-    local: Optional[Tuple[SubLocalType]],
-) -> CmpKey:
-
-    # When we compare a release version, we want to compare it with all of the
-    # trailing zeros removed. So we'll reverse the list, drop all the now-leading
-    # zeros until we come to something non-zero, then take the rest, re-reverse it
-    # back into the correct order, make it a tuple, and use that for our sorting
-    # key.
-    _release = tuple(
-        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
-    )
-
-    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
-    # We'll do this by abusing the pre segment, but we _only_ want to do this
-    # if there is not a pre or a post segment. If we have one of those then
-    # the normal sorting rules will handle this case correctly.
-    if pre is None and post is None and dev is not None:
-        _pre: PrePostDevType = NegativeInfinity
-    # Versions without a pre-release (except as noted above) should sort after
-    # those with one.
-    elif pre is None:
-        _pre = Infinity
-    else:
-        _pre = pre
-
-    # Versions without a post segment should sort before those with one.
-    if post is None:
-        _post: PrePostDevType = NegativeInfinity
-
-    else:
-        _post = post
-
-    # Versions without a development segment should sort after those with one.
-    if dev is None:
-        _dev: PrePostDevType = Infinity
-
-    else:
-        _dev = dev
-
-    if local is None:
-        # Versions without a local segment should sort before those with one.
-        _local: LocalType = NegativeInfinity
-    else:
-        # Versions with a local segment need that segment parsed to implement
-        # the sorting rules in PEP440.
-        # - Alpha numeric segments sort before numeric segments
-        # - Alpha numeric segments sort lexicographically
-        # - Numeric segments sort numerically
-        # - Shorter versions sort before longer versions when the prefixes
-        #   match exactly
-        _local = tuple(
-            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
-        )
-
-    return epoch, _release, _pre, _post, _dev, _local
diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/benchmarks/num_inliers.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/benchmarks/num_inliers.py
deleted file mode 100644
index f2b36f6a2b97b9c7010ef2455352531ffe3e4405..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/benchmarks/num_inliers.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import torch
-import torch.nn as nn
-from DeDoDe.utils import *
-import DeDoDe
-
-
-class NumInliersBenchmark(nn.Module):
-    def __init__(
-        self,
-        dataset,
-        num_samples=1000,
-        batch_size=8,
-        num_keypoints=10_000,
-        device="cuda",
-    ) -> None:
-        super().__init__()
-        sampler = torch.utils.data.WeightedRandomSampler(
-            torch.ones(len(dataset)), replacement=False, num_samples=num_samples
-        )
-        dataloader = torch.utils.data.DataLoader(
-            dataset, batch_size=batch_size, num_workers=batch_size, sampler=sampler
-        )
-        self.dataloader = dataloader
-        self.tracked_metrics = {}
-        self.batch_size = batch_size
-        self.N = len(dataloader)
-        self.num_keypoints = num_keypoints
-
-    def compute_batch_metrics(self, outputs, batch, device="cuda"):
-        kpts_A, kpts_B = outputs["keypoints_A"], outputs["keypoints_B"]
-        B, K, H, W = batch["im_A"].shape
-        gt_warp_A_to_B, valid_mask_A_to_B = get_gt_warp(
-            batch["im_A_depth"],
-            batch["im_B_depth"],
-            batch["T_1to2"],
-            batch["K1"],
-            batch["K2"],
-            H=H,
-            W=W,
-        )
-        kpts_A_to_B = F.grid_sample(
-            gt_warp_A_to_B[..., 2:].float().permute(0, 3, 1, 2),
-            kpts_A[..., None, :],
-            align_corners=False,
-            mode="bilinear",
-        )[..., 0].mT
-        legit_A_to_B = F.grid_sample(
-            valid_mask_A_to_B.reshape(B, 1, H, W),
-            kpts_A[..., None, :],
-            align_corners=False,
-            mode="bilinear",
-        )[..., 0, :, 0]
-        dists = (
-            torch.cdist(kpts_A_to_B, kpts_B).min(dim=-1).values[legit_A_to_B > 0.0]
-        ).float()
-        if legit_A_to_B.sum() == 0:
-            return
-        percent_inliers_at_1 = (dists < 0.02).float().mean()
-        percent_inliers_at_05 = (dists < 0.01).float().mean()
-        percent_inliers_at_025 = (dists < 0.005).float().mean()
-        percent_inliers_at_01 = (dists < 0.002).float().mean()
-        percent_inliers_at_005 = (dists < 0.001).float().mean()
-
-        inlier_bins = torch.linspace(0, 0.002, steps=100, device=device)[None]
-        inlier_counts = (dists[..., None] < inlier_bins).float().mean(dim=0)
-        self.tracked_metrics["inlier_counts"] = (
-            self.tracked_metrics.get("inlier_counts", 0) + 1 / self.N * inlier_counts
-        )
-        self.tracked_metrics["percent_inliers_at_1"] = (
-            self.tracked_metrics.get("percent_inliers_at_1", 0)
-            + 1 / self.N * percent_inliers_at_1
-        )
-        self.tracked_metrics["percent_inliers_at_05"] = (
-            self.tracked_metrics.get("percent_inliers_at_05", 0)
-            + 1 / self.N * percent_inliers_at_05
-        )
-        self.tracked_metrics["percent_inliers_at_025"] = (
-            self.tracked_metrics.get("percent_inliers_at_025", 0)
-            + 1 / self.N * percent_inliers_at_025
-        )
-        self.tracked_metrics["percent_inliers_at_01"] = (
-            self.tracked_metrics.get("percent_inliers_at_01", 0)
-            + 1 / self.N * percent_inliers_at_01
-        )
-        self.tracked_metrics["percent_inliers_at_005"] = (
-            self.tracked_metrics.get("percent_inliers_at_005", 0)
-            + 1 / self.N * percent_inliers_at_005
-        )
-
-    def benchmark(self, detector):
-        self.tracked_metrics = {}
-        from tqdm import tqdm
-
-        print("Evaluating percent inliers...")
-        for idx, batch in tqdm(enumerate(self.dataloader), mininterval=10.0):
-            batch = to_cuda(batch)
-            outputs = detector.detect(batch, num_keypoints=self.num_keypoints)
-            keypoints_A, keypoints_B = (
-                outputs["keypoints"][: self.batch_size],
-                outputs["keypoints"][self.batch_size :],
-            )
-            if isinstance(outputs["keypoints"], (tuple, list)):
-                keypoints_A, keypoints_B = torch.stack(keypoints_A), torch.stack(
-                    keypoints_B
-                )
-            outputs = {"keypoints_A": keypoints_A, "keypoints_B": keypoints_B}
-            self.compute_batch_metrics(outputs, batch)
-        import matplotlib.pyplot as plt
-
-        plt.plot(
-            torch.linspace(0, 0.002, steps=100),
-            self.tracked_metrics["inlier_counts"].cpu(),
-        )
-        import numpy as np
-
-        x = np.linspace(0, 0.002, 100)
-        sigma = 0.52 * 2 / 512
-        F = 1 - np.exp(-(x**2) / (2 * sigma**2))
-        plt.plot(x, F)
-        plt.savefig("vis/inlier_counts")
-        [
-            print(name, metric.item() * self.N / (idx + 1))
-            for name, metric in self.tracked_metrics.items()
-            if "percent" in name
-        ]
diff --git a/spaces/Realcat/image-matching-webui/third_party/d2net/megadepth_utils/undistort_reconstructions.py b/spaces/Realcat/image-matching-webui/third_party/d2net/megadepth_utils/undistort_reconstructions.py
deleted file mode 100644
index 822c9abd3fc75fd8fc1e8d9ada75aa76802c6798..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/d2net/megadepth_utils/undistort_reconstructions.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import argparse
-
-import imagesize
-
-import os
-
-import subprocess
-
-parser = argparse.ArgumentParser(description="MegaDepth Undistortion")
-
-parser.add_argument(
-    "--colmap_path", type=str, required=True, help="path to colmap executable"
-)
-parser.add_argument("--base_path", type=str, required=True, help="path to MegaDepth")
-
-args = parser.parse_args()
-
-sfm_path = os.path.join(args.base_path, "MegaDepth_v1_SfM")
-base_depth_path = os.path.join(args.base_path, "phoenix/S6/zl548/MegaDepth_v1")
-output_path = os.path.join(args.base_path, "Undistorted_SfM")
-
-os.mkdir(output_path)
-
-for scene_name in os.listdir(base_depth_path):
-    current_output_path = os.path.join(output_path, scene_name)
-    os.mkdir(current_output_path)
-
-    image_path = os.path.join(base_depth_path, scene_name, "dense0", "imgs")
-    if not os.path.exists(image_path):
-        continue
-
-    # Find the maximum image size in scene.
-    max_image_size = 0
-    for image_name in os.listdir(image_path):
-        max_image_size = max(
-            max_image_size, max(imagesize.get(os.path.join(image_path, image_name)))
-        )
-
-    # Undistort the images and update the reconstruction.
-    subprocess.call(
-        [
-            os.path.join(args.colmap_path, "colmap"),
-            "image_undistorter",
-            "--image_path",
-            os.path.join(sfm_path, scene_name, "images"),
-            "--input_path",
-            os.path.join(sfm_path, scene_name, "sparse", "manhattan", "0"),
-            "--output_path",
-            current_output_path,
-            "--max_image_size",
-            str(max_image_size),
-        ]
-    )
-
-    # Transform the reconstruction to raw text format.
-    sparse_txt_path = os.path.join(current_output_path, "sparse-txt")
-    os.mkdir(sparse_txt_path)
-    subprocess.call(
-        [
-            os.path.join(args.colmap_path, "colmap"),
-            "model_converter",
-            "--input_path",
-            os.path.join(current_output_path, "sparse"),
-            "--output_path",
-            sparse_txt_path,
-            "--output_type",
-            "TXT",
-        ]
-    )
diff --git a/spaces/Redgon/bingo/next.config.js b/spaces/Redgon/bingo/next.config.js
deleted file mode 100644
index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000
--- a/spaces/Redgon/bingo/next.config.js
+++ /dev/null
@@ -1,38 +0,0 @@
-/** @type {import('next').NextConfig} */
-const nextConfig = {
-  // output: 'export',
-  // assetPrefix: '.',
-  webpack: (config, { isServer }) => {
-    if (!isServer) {
-      config.resolve = {
-        ...config.resolve,
-        fallback: {
-          'bufferutil': false,
-          'utf-8-validate': false,
-          http: false,
-          https: false,
-          stream: false,
-          // fixes proxy-agent dependencies
-          net: false,
-          dns: false,
-          tls: false,
-          assert: false,
-          // fixes next-i18next dependencies
-          path: false,
-          fs: false,
-          // fixes mapbox dependencies
-          events: false,
-          // fixes sentry dependencies
-          process: false
-        }
-      };
-    }
-    config.module.exprContextCritical = false;
-
-    return config;
-  },
-}
-
-module.exports = (...args) => {
-  return nextConfig
-}
diff --git a/spaces/Reeve/Ohayou_Face/models/encoders/model_irse.py b/spaces/Reeve/Ohayou_Face/models/encoders/model_irse.py
deleted file mode 100644
index bc41ace0ba04cf4285c283a28e6c36113a18e6d6..0000000000000000000000000000000000000000
--- a/spaces/Reeve/Ohayou_Face/models/encoders/model_irse.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module
-from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm
-
-"""
-Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Backbone(Module):
-	def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True):
-		super(Backbone, self).__init__()
-		assert input_size in [112, 224], "input_size should be 112 or 224"
-		assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152"
-		assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se"
-		blocks = get_blocks(num_layers)
-		if mode == 'ir':
-			unit_module = bottleneck_IR
-		elif mode == 'ir_se':
-			unit_module = bottleneck_IR_SE
-		self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
-									  BatchNorm2d(64),
-									  PReLU(64))
-		if input_size == 112:
-			self.output_layer = Sequential(BatchNorm2d(512),
-			                               Dropout(drop_ratio),
-			                               Flatten(),
-			                               Linear(512 * 7 * 7, 512),
-			                               BatchNorm1d(512, affine=affine))
-		else:
-			self.output_layer = Sequential(BatchNorm2d(512),
-			                               Dropout(drop_ratio),
-			                               Flatten(),
-			                               Linear(512 * 14 * 14, 512),
-			                               BatchNorm1d(512, affine=affine))
-
-		modules = []
-		for block in blocks:
-			for bottleneck in block:
-				modules.append(unit_module(bottleneck.in_channel,
-										   bottleneck.depth,
-										   bottleneck.stride))
-		self.body = Sequential(*modules)
-
-	def forward(self, x):
-		x = self.input_layer(x)
-		x = self.body(x)
-		x = self.output_layer(x)
-		return l2_norm(x)
-
-
-def IR_50(input_size):
-	"""Constructs a ir-50 model."""
-	model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False)
-	return model
-
-
-def IR_101(input_size):
-	"""Constructs a ir-101 model."""
-	model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False)
-	return model
-
-
-def IR_152(input_size):
-	"""Constructs a ir-152 model."""
-	model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False)
-	return model
-
-
-def IR_SE_50(input_size):
-	"""Constructs a ir_se-50 model."""
-	model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False)
-	return model
-
-
-def IR_SE_101(input_size):
-	"""Constructs a ir_se-101 model."""
-	model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False)
-	return model
-
-
-def IR_SE_152(input_size):
-	"""Constructs a ir_se-152 model."""
-	model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False)
-	return model
diff --git a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_123812KB.py
deleted file mode 100644
index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_123812KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
-    def __init__(self, nin, ch, dilations=(4, 8, 16)):
-        super(BaseASPPNet, self).__init__()
-        self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
-        self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
-        self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
-        self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
-        self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
-        self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
-        self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
-        self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
-        self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
-    def __call__(self, x):
-        h, e1 = self.enc1(x)
-        h, e2 = self.enc2(h)
-        h, e3 = self.enc3(h)
-        h, e4 = self.enc4(h)
-
-        h = self.aspp(h)
-
-        h = self.dec4(h, e4)
-        h = self.dec3(h, e3)
-        h = self.dec2(h, e2)
-        h = self.dec1(h, e1)
-
-        return h
-
-
-class CascadedASPPNet(nn.Module):
-    def __init__(self, n_fft):
-        super(CascadedASPPNet, self).__init__()
-        self.stg1_low_band_net = BaseASPPNet(2, 32)
-        self.stg1_high_band_net = BaseASPPNet(2, 32)
-
-        self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
-        self.stg2_full_band_net = BaseASPPNet(16, 32)
-
-        self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
-        self.stg3_full_band_net = BaseASPPNet(32, 64)
-
-        self.out = nn.Conv2d(64, 2, 1, bias=False)
-        self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
-        self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
-        self.max_bin = n_fft // 2
-        self.output_bin = n_fft // 2 + 1
-
-        self.offset = 128
-
-    def forward(self, x, aggressiveness=None):
-        mix = x.detach()
-        x = x.clone()
-
-        x = x[:, :, : self.max_bin]
-
-        bandw = x.size()[2] // 2
-        aux1 = torch.cat(
-            [
-                self.stg1_low_band_net(x[:, :, :bandw]),
-                self.stg1_high_band_net(x[:, :, bandw:]),
-            ],
-            dim=2,
-        )
-
-        h = torch.cat([x, aux1], dim=1)
-        aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
-        h = torch.cat([x, aux1, aux2], dim=1)
-        h = self.stg3_full_band_net(self.stg3_bridge(h))
-
-        mask = torch.sigmoid(self.out(h))
-        mask = F.pad(
-            input=mask,
-            pad=(0, 0, 0, self.output_bin - mask.size()[2]),
-            mode="replicate",
-        )
-
-        if self.training:
-            aux1 = torch.sigmoid(self.aux1_out(aux1))
-            aux1 = F.pad(
-                input=aux1,
-                pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
-                mode="replicate",
-            )
-            aux2 = torch.sigmoid(self.aux2_out(aux2))
-            aux2 = F.pad(
-                input=aux2,
-                pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
-                mode="replicate",
-            )
-            return mask * mix, aux1 * mix, aux2 * mix
-        else:
-            if aggressiveness:
-                mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
-                    mask[:, :, : aggressiveness["split_bin"]],
-                    1 + aggressiveness["value"] / 3,
-                )
-                mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
-                    mask[:, :, aggressiveness["split_bin"] :],
-                    1 + aggressiveness["value"],
-                )
-
-            return mask * mix
-
-    def predict(self, x_mag, aggressiveness=None):
-        h = self.forward(x_mag, aggressiveness)
-
-        if self.offset > 0:
-            h = h[:, :, :, self.offset : -self.offset]
-            assert h.size()[3] > 0
-
-        return h
diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/hubert_model.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/hubert_model.py
deleted file mode 100644
index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000
--- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/hubert_model.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import copy
-from typing import Optional, Tuple
-import random
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
-
-class Hubert(nn.Module):
-    def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
-        super().__init__()
-        self._mask = mask
-        self.feature_extractor = FeatureExtractor()
-        self.feature_projection = FeatureProjection()
-        self.positional_embedding = PositionalConvEmbedding()
-        self.norm = nn.LayerNorm(768)
-        self.dropout = nn.Dropout(0.1)
-        self.encoder = TransformerEncoder(
-            nn.TransformerEncoderLayer(
-                768, 12, 3072, activation="gelu", batch_first=True
-            ),
-            12,
-        )
-        self.proj = nn.Linear(768, 256)
-
-        self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
-        self.label_embedding = nn.Embedding(num_label_embeddings, 256)
-
-    def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
-        mask = None
-        if self.training and self._mask:
-            mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
-            x[mask] = self.masked_spec_embed.to(x.dtype)
-        return x, mask
-
-    def encode(
-        self, x: torch.Tensor, layer: Optional[int] = None
-    ) -> Tuple[torch.Tensor, torch.Tensor]:
-        x = self.feature_extractor(x)
-        x = self.feature_projection(x.transpose(1, 2))
-        x, mask = self.mask(x)
-        x = x + self.positional_embedding(x)
-        x = self.dropout(self.norm(x))
-        x = self.encoder(x, output_layer=layer)
-        return x, mask
-
-    def logits(self, x: torch.Tensor) -> torch.Tensor:
-        logits = torch.cosine_similarity(
-            x.unsqueeze(2),
-            self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
-            dim=-1,
-        )
-        return logits / 0.1
-
-    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
-        x, mask = self.encode(x)
-        x = self.proj(x)
-        logits = self.logits(x)
-        return logits, mask
-
-
-class HubertSoft(Hubert):
-    def __init__(self):
-        super().__init__()
-
-    @torch.inference_mode()
-    def units(self, wav: torch.Tensor) -> torch.Tensor:
-        wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
-        x, _ = self.encode(wav)
-        return self.proj(x)
-
-
-class FeatureExtractor(nn.Module):
-    def __init__(self):
-        super().__init__()
-        self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
-        self.norm0 = nn.GroupNorm(512, 512)
-        self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
-        self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
-        self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
-        self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
-        self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
-        self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
-
-    def forward(self, x: torch.Tensor) -> torch.Tensor:
-        x = F.gelu(self.norm0(self.conv0(x)))
-        x = F.gelu(self.conv1(x))
-        x = F.gelu(self.conv2(x))
-        x = F.gelu(self.conv3(x))
-        x = F.gelu(self.conv4(x))
-        x = F.gelu(self.conv5(x))
-        x = F.gelu(self.conv6(x))
-        return x
-
-
-class FeatureProjection(nn.Module):
-    def __init__(self):
-        super().__init__()
-        self.norm = nn.LayerNorm(512)
-        self.projection = nn.Linear(512, 768)
-        self.dropout = nn.Dropout(0.1)
-
-    def forward(self, x: torch.Tensor) -> torch.Tensor:
-        x = self.norm(x)
-        x = self.projection(x)
-        x = self.dropout(x)
-        return x
-
-
-class PositionalConvEmbedding(nn.Module):
-    def __init__(self):
-        super().__init__()
-        self.conv = nn.Conv1d(
-            768,
-            768,
-            kernel_size=128,
-            padding=128 // 2,
-            groups=16,
-        )
-        self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
-
-    def forward(self, x: torch.Tensor) -> torch.Tensor:
-        x = self.conv(x.transpose(1, 2))
-        x = F.gelu(x[:, :, :-1])
-        return x.transpose(1, 2)
-
-
-class TransformerEncoder(nn.Module):
-    def __init__(
-        self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
-    ) -> None:
-        super(TransformerEncoder, self).__init__()
-        self.layers = nn.ModuleList(
-            [copy.deepcopy(encoder_layer) for _ in range(num_layers)]
-        )
-        self.num_layers = num_layers
-
-    def forward(
-        self,
-        src: torch.Tensor,
-        mask: torch.Tensor = None,
-        src_key_padding_mask: torch.Tensor = None,
-        output_layer: Optional[int] = None,
-    ) -> torch.Tensor:
-        output = src
-        for layer in self.layers[:output_layer]:
-            output = layer(
-                output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
-            )
-        return output
-
-
-def _compute_mask(
-    shape: Tuple[int, int],
-    mask_prob: float,
-    mask_length: int,
-    device: torch.device,
-    min_masks: int = 0,
-) -> torch.Tensor:
-    batch_size, sequence_length = shape
-
-    if mask_length < 1:
-        raise ValueError("`mask_length` has to be bigger than 0.")
-
-    if mask_length > sequence_length:
-        raise ValueError(
-            f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`"
-        )
-
-    # compute number of masked spans in batch
-    num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
-    num_masked_spans = max(num_masked_spans, min_masks)
-
-    # make sure num masked indices <= sequence_length
-    if num_masked_spans * mask_length > sequence_length:
-        num_masked_spans = sequence_length // mask_length
-
-    # SpecAugment mask to fill
-    mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
-    # uniform distribution to sample from, make sure that offset samples are < sequence_length
-    uniform_dist = torch.ones(
-        (batch_size, sequence_length - (mask_length - 1)), device=device
-    )
-
-    # get random indices to mask
-    mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
-    # expand masked indices to masked spans
-    mask_indices = (
-        mask_indices.unsqueeze(dim=-1)
-        .expand((batch_size, num_masked_spans, mask_length))
-        .reshape(batch_size, num_masked_spans * mask_length)
-    )
-    offsets = (
-        torch.arange(mask_length, device=device)[None, None, :]
-        .expand((batch_size, num_masked_spans, mask_length))
-        .reshape(batch_size, num_masked_spans * mask_length)
-    )
-    mask_idxs = mask_indices + offsets
-
-    # scatter indices to mask
-    mask = mask.scatter(1, mask_idxs, True)
-
-    return mask
-
-
-def hubert_soft(
-    path: str
-) -> HubertSoft:
-    r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
-    Args:
-        path (str): path of a pretrained model
-    """
-    hubert = HubertSoft()
-    checkpoint = torch.load(path)
-    consume_prefix_in_state_dict_if_present(checkpoint, "module.")
-    hubert.load_state_dict(checkpoint)
-    hubert.eval()
-    return hubert
diff --git a/spaces/Salesforce/BLIP/train_caption.py b/spaces/Salesforce/BLIP/train_caption.py
deleted file mode 100644
index 7c639ac646b9a1b8074b6e9c2343b961de76db05..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/BLIP/train_caption.py
+++ /dev/null
@@ -1,206 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
-'''
-import argparse
-import os
-import ruamel_yaml as yaml
-import numpy as np
-import random
-import time
-import datetime
-import json
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-from torch.utils.data import DataLoader
-
-from models.blip import blip_decoder
-import utils
-from utils import cosine_lr_schedule
-from data import create_dataset, create_sampler, create_loader
-from data.utils import save_result, coco_caption_eval
-
-def train(model, data_loader, optimizer, epoch, device):
-    # train
-    model.train()  
-    
-    metric_logger = utils.MetricLogger(delimiter="  ")
-    metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
-    metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}'))
-    header = 'Train Caption Epoch: [{}]'.format(epoch)
-    print_freq = 50
-
-    for i, (image, caption, _) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
-        image = image.to(device)       
-        
-        loss = model(image, caption)      
-        
-        optimizer.zero_grad()
-        loss.backward()
-        optimizer.step()    
-        
-        metric_logger.update(loss=loss.item())
-        metric_logger.update(lr=optimizer.param_groups[0]["lr"])
-
-    # gather the stats from all processes
-    metric_logger.synchronize_between_processes()
-    print("Averaged stats:", metric_logger.global_avg())     
-    return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}  
-
-
-@torch.no_grad()
-def evaluate(model, data_loader, device, config):
-    # evaluate
-    model.eval() 
-    
-    metric_logger = utils.MetricLogger(delimiter="  ")
-    header = 'Caption generation:'
-    print_freq = 10
-
-    result = []
-    for image, image_id in metric_logger.log_every(data_loader, print_freq, header): 
-        
-        image = image.to(device)       
-        
-        captions = model.generate(image, sample=False, num_beams=config['num_beams'], max_length=config['max_length'], 
-                                  min_length=config['min_length'])
-        
-        for caption, img_id in zip(captions, image_id):
-            result.append({"image_id": img_id.item(), "caption": caption})
-  
-    return result
-
-
-def main(args, config):
-    utils.init_distributed_mode(args)    
-    
-    device = torch.device(args.device)
-
-    # fix the seed for reproducibility
-    seed = args.seed + utils.get_rank()
-    torch.manual_seed(seed)
-    np.random.seed(seed)
-    random.seed(seed)
-    cudnn.benchmark = True
-
-    #### Dataset #### 
-    print("Creating captioning dataset")
-    train_dataset, val_dataset, test_dataset = create_dataset('caption_coco', config)  
-
-    if args.distributed:
-        num_tasks = utils.get_world_size()
-        global_rank = utils.get_rank()            
-        samplers = create_sampler([train_dataset,val_dataset,test_dataset], [True,False,False], num_tasks, global_rank)         
-    else:
-        samplers = [None, None, None]
-    
-    train_loader, val_loader, test_loader = create_loader([train_dataset, val_dataset, test_dataset],samplers,
-                                                          batch_size=[config['batch_size']]*3,num_workers=[4,4,4],
-                                                          is_trains=[True, False, False], collate_fns=[None,None,None])         
-
-    #### Model #### 
-    print("Creating model")
-    model = blip_decoder(pretrained=config['pretrained'], image_size=config['image_size'], vit=config['vit'], 
-                           vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'], 
-                           prompt=config['prompt'])
-
-    model = model.to(device)   
-    
-    model_without_ddp = model
-    if args.distributed:
-        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
-        model_without_ddp = model.module    
-    
-    optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay'])
-            
-    best = 0
-    best_epoch = 0
-
-    print("Start training")
-    start_time = time.time()    
-    for epoch in range(0, config['max_epoch']):
-        if not args.evaluate:        
-            if args.distributed:
-                train_loader.sampler.set_epoch(epoch)
-                
-            cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr'])
-                
-            train_stats = train(model, train_loader, optimizer, epoch, device) 
-        
-        val_result = evaluate(model_without_ddp, val_loader, device, config)  
-        val_result_file = save_result(val_result, args.result_dir, 'val_epoch%d'%epoch, remove_duplicate='image_id')        
-  
-        test_result = evaluate(model_without_ddp, test_loader, device, config)  
-        test_result_file = save_result(test_result, args.result_dir, 'test_epoch%d'%epoch, remove_duplicate='image_id')  
-
-        if utils.is_main_process():   
-            coco_val = coco_caption_eval(config['coco_gt_root'],val_result_file,'val')
-            coco_test = coco_caption_eval(config['coco_gt_root'],test_result_file,'test')
-            
-            if args.evaluate:            
-                log_stats = {**{f'val_{k}': v for k, v in coco_val.eval.items()},
-                             **{f'test_{k}': v for k, v in coco_test.eval.items()},                       
-                            }
-                with open(os.path.join(args.output_dir, "evaluate.txt"),"a") as f:
-                    f.write(json.dumps(log_stats) + "\n")                   
-            else:             
-                save_obj = {
-                    'model': model_without_ddp.state_dict(),
-                    'optimizer': optimizer.state_dict(),
-                    'config': config,
-                    'epoch': epoch,
-                }
-
-                if coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4'] > best:
-                    best = coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4']
-                    best_epoch = epoch                
-                    torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth')) 
-                    
-                log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
-                             **{f'val_{k}': v for k, v in coco_val.eval.items()},
-                             **{f'test_{k}': v for k, v in coco_test.eval.items()},                       
-                             'epoch': epoch,
-                             'best_epoch': best_epoch,
-                            }
-                with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
-                    f.write(json.dumps(log_stats) + "\n")     
-                    
-        if args.evaluate: 
-            break
-        dist.barrier()     
-
-    total_time = time.time() - start_time
-    total_time_str = str(datetime.timedelta(seconds=int(total_time)))
-    print('Training time {}'.format(total_time_str)) 
-
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--config', default='./configs/caption_coco.yaml')
-    parser.add_argument('--output_dir', default='output/Caption_coco')        
-    parser.add_argument('--evaluate', action='store_true')    
-    parser.add_argument('--device', default='cuda')
-    parser.add_argument('--seed', default=42, type=int)
-    parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes')    
-    parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')
-    parser.add_argument('--distributed', default=True, type=bool)
-    args = parser.parse_args()
-
-    config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
-
-    args.result_dir = os.path.join(args.output_dir, 'result')
-
-    Path(args.output_dir).mkdir(parents=True, exist_ok=True)
-    Path(args.result_dir).mkdir(parents=True, exist_ok=True)
-        
-    yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w'))    
-    
-    main(args, config)
\ No newline at end of file
diff --git a/spaces/SamerKharboush/chatGPT-Sam-Turbo/custom.css b/spaces/SamerKharboush/chatGPT-Sam-Turbo/custom.css
deleted file mode 100644
index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000
--- a/spaces/SamerKharboush/chatGPT-Sam-Turbo/custom.css
+++ /dev/null
@@ -1,162 +0,0 @@
-:root {
-    --chatbot-color-light: #F3F3F3;
-    --chatbot-color-dark: #121111;
-}
-
-/* status_display */
-#status_display {
-    display: flex;
-    min-height: 2.5em;
-    align-items: flex-end;
-    justify-content: flex-end;
-}
-#status_display p {
-    font-size: .85em;
-    font-family: monospace;
-    color: var(--body-text-color-subdued);
-}
-
-#chuanhu_chatbot, #status_display {
-    transition: all 0.6s;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
-    padding-inline-start: 2em !important;
-}
-
-/* light theme */
-#chuanhu_chatbot {
-    background-color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
-    background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
-    background-color: #95EC69 !important;
-}
-/* chat bubbles */
-[class *= "message"] {
-    border-radius: var(--radius-xl) !important;
-    border: none;
-    padding: var(--spacing-xl) !important;
-    font-size: var(--text-md) !important;
-    line-height: var(--line-md) !important;
-    min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-    min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
-    max-width: 85%;
-    border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
-    max-width: 85%;
-    width: auto !important;
-    border-bottom-right-radius: 0 !important;
-}
-/* tables */
-table {
-    margin: 1em 0;
-    border-collapse: collapse;
-    empty-cells: show;
-}
-td,th {
-    border: 1.2px solid var(--border-color-primary) !important;
-    padding: 0.2em;
-}
-thead {
-    background-color: rgba(175,184,193,0.2);
-}
-thead th {
-    padding: .5em .2em;
-}
-/* inline code */
-code {
-    display: inline;
-    white-space: break-spaces;
-    border-radius: 6px;
-    margin: 0 2px 0 2px;
-    padding: .2em .4em .1em .4em;
-    background-color: rgba(175,184,193,0.2);
-}
-/* code blocks */
-pre code {
-    display: block;
-    overflow: auto;
-    white-space: pre;
-    background-color: hsla(0, 0%, 0%, 80%)!important;
-    border-radius: 10px;
-    padding: 1.4em 1.2em 0em 1.4em;
-    margin: 1.2em 2em 1.2em 0.5em;
-    color: #FFF;
-    box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* code highlighting styles */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/Sense-X/uniformer_image_demo/README.md b/spaces/Sense-X/uniformer_image_demo/README.md
deleted file mode 100644
index d9c60ca136c04ffffac2d4d9b23a29c472bd7be9..0000000000000000000000000000000000000000
--- a/spaces/Sense-X/uniformer_image_demo/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Uniformer_image_demo
-emoji: 📉
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
-license: mit
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.  
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/StatsByZach/app/home.py b/spaces/StatsByZach/app/home.py
deleted file mode 100644
index a60c5251f2882fe9416cbe02682ccce644b37f3c..0000000000000000000000000000000000000000
--- a/spaces/StatsByZach/app/home.py
+++ /dev/null
@@ -1,84 +0,0 @@
-##### home.py #####
-# Home page
-# Zach Andrews
-
-# Import modules
-from shiny import *
-import shinyswatch
-import plotly.express as px
-from shinywidgets import output_widget, render_widget
-import pandas as pd
-from configure import base_url
-
-# Create app
-home = App(ui.page_fluid(
-    ui.tags.base(href=base_url),
-    ui.tags.div(
-         {"style": "width:75%;margin: 0 auto"},
-        ui.tags.style(
-            """
-            h4 {
-                margin-top: 1em;font-size:35px;
-            }
-            h2{
-                font-size:25px;
-            }
-            """
-         ),
-    shinyswatch.theme.darkly(),ui.tags.h4("Stats By Zach"),
-    ui.tags.i("A website for hockey analytics"),
-    ui.navset_tab(
-        ui.nav_control(
-             ui.a(
-                "Home",
-                href="home/"
-            ),
-        ),
-        ui.nav_menu(
-            "Skater Charts",
-            ui.nav_control(
-             ui.a(
-                "On-Ice xG Rates",
-                href="skater-xg-rates/"
-            ),
-            ui.a(
-                "On-Ice xGF%",
-                href="skater-xg-percentages/"
-            ),
-        ),
-        ),
-        ui.nav_menu(
-            "Goalie Charts",
-            ui.nav_control(
-             ui.a(
-                "GSAx Timeline",
-                href="gsax-timeline/"
-            ),
-             ui.a(
-                "GSAx Leaderboard",
-                href="gsax-leaderboard/"
-            ),
-             ui.a(
-                "GSAx Comparison",
-                href="gsax-comparison/"
-            )
-        ),
-        ),ui.nav_menu(
-            "Team Charts",
-            ui.nav_control(
-             ui.a(
-                "Team xG Rates",
-                href="team-xg-rates/"
-            ),
-        ),
-        ),ui.nav_control(
-             ui.a(
-                "Games",
-                href="games/"
-            ),
-        ),ui.nav_control(
-             ui.a(
-                "About",
-                href="about/"
-            ),
-        )),ui.tags.br(),ui.tags.h5("Welcome to Stats By Zach!"),ui.tags.h6("The 2023-24 NHL regular season is here, and the StatsByZach website is officially up and running for it! As I've stated before, this website is still a work in progress, with plenty of work left to do, especially on styling and compatibility. Along with that, I am focusing on finding a new hosting solution, adding more charts, and making some performance enhancements as well. Thank you for paying the site a visit, and I hope you can use my data to better understand the NHL. The website is updated daily, and I try to make improvements on a regular basis, so please visit often, and feel free to reach out to me on Twitter @StatsByZach with any feedback or suggestions. Enjoy the site, and happy hockey season!"))), None)
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.h b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.h
deleted file mode 100644
index 3a2b582ab430575ec6fdb0e0799c5c2c39a56b15..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.h
+++ /dev/null
@@ -1,57 +0,0 @@
-/* ****************************************************************************
- *
- * Copyright (c) Brainwy software Ltda.
- *
- * This source code is subject to terms and conditions of the Apache License, Version 2.0. A
- * copy of the license can be found in the License.html file at the root of this distribution. If
- * you cannot locate the Apache License, Version 2.0, please send an email to
- * vspython@microsoft.com. By using this source code in any fashion, you are agreeing to be bound
- * by the terms of the Apache License, Version 2.0.
- *
- * You must not remove this notice, or any other, from this software.
- *
- * ***************************************************************************/
-
-#ifndef _ATTACH_DLL_H_
-#define _ATTACH_DLL_H_
-
-#if defined DLL_EXPORT
-#define DECLDIR __declspec(dllexport)
-#else
-#define DECLDIR __declspec(dllimport)
-#endif
-
-
-extern "C"
-{
-    DECLDIR int AttachAndRunPythonCode(const char *command, int *result );
-    
-    /*
-     * Helper to print debug information from the current process
-     */
-    DECLDIR int PrintDebugInfo();
-    
-    /*
-    Could be used with ctypes (note that the threading should be initialized, so, 
-    doing it in a thread as below is recommended):
-    
-    def check():
-        
-        import ctypes
-        lib = ctypes.cdll.LoadLibrary(r'C:\...\attach_x86.dll')
-        print 'result', lib.AttachDebuggerTracing(0)
-        
-    t = threading.Thread(target=check)
-    t.start()
-    t.join()
-    */
-    DECLDIR int AttachDebuggerTracing(
-        bool showDebugInfo, 
-        void* pSetTraceFunc, // Actually PyObject*, but we don't want to include it here.
-        void* pTraceFunc,  // Actually PyObject*, but we don't want to include it here.
-        unsigned int threadId,
-        void* pPyNone  // Actually PyObject*, but we don't want to include it here.
-    );
-}
-
-#endif
\ No newline at end of file
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/furthest_point_sample.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/furthest_point_sample.py
deleted file mode 100644
index 374b7a878f1972c183941af28ba1df216ac1a60f..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/furthest_point_sample.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
-    'furthest_point_sampling_forward',
-    'furthest_point_sampling_with_dist_forward'
-])
-
-
-class FurthestPointSampling(Function):
-    """Uses iterative furthest point sampling to select a set of features whose
-    corresponding points have the furthest distance."""
-
-    @staticmethod
-    def forward(ctx, points_xyz: torch.Tensor,
-                num_points: int) -> torch.Tensor:
-        """
-        Args:
-            points_xyz (Tensor): (B, N, 3) where N > num_points.
-            num_points (int): Number of points in the sampled set.
-
-        Returns:
-             Tensor: (B, num_points) indices of the sampled points.
-        """
-        assert points_xyz.is_contiguous()
-
-        B, N = points_xyz.size()[:2]
-        output = torch.cuda.IntTensor(B, num_points)
-        temp = torch.cuda.FloatTensor(B, N).fill_(1e10)
-
-        ext_module.furthest_point_sampling_forward(
-            points_xyz,
-            temp,
-            output,
-            b=B,
-            n=N,
-            m=num_points,
-        )
-        if torch.__version__ != 'parrots':
-            ctx.mark_non_differentiable(output)
-        return output
-
-    @staticmethod
-    def backward(xyz, a=None):
-        return None, None
-
-
-class FurthestPointSamplingWithDist(Function):
-    """Uses iterative furthest point sampling to select a set of features whose
-    corresponding points have the furthest distance."""
-
-    @staticmethod
-    def forward(ctx, points_dist: torch.Tensor,
-                num_points: int) -> torch.Tensor:
-        """
-        Args:
-            points_dist (Tensor): (B, N, N) Distance between each point pair.
-            num_points (int): Number of points in the sampled set.
-
-        Returns:
-             Tensor: (B, num_points) indices of the sampled points.
-        """
-        assert points_dist.is_contiguous()
-
-        B, N, _ = points_dist.size()
-        output = points_dist.new_zeros([B, num_points], dtype=torch.int32)
-        temp = points_dist.new_zeros([B, N]).fill_(1e10)
-
-        ext_module.furthest_point_sampling_with_dist_forward(
-            points_dist, temp, output, b=B, n=N, m=num_points)
-        if torch.__version__ != 'parrots':
-            ctx.mark_non_differentiable(output)
-        return output
-
-    @staticmethod
-    def backward(xyz, a=None):
-        return None, None
-
-
-furthest_point_sample = FurthestPointSampling.apply
-furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply
diff --git a/spaces/TEL123/Real-CUGAN/README.md b/spaces/TEL123/Real-CUGAN/README.md
deleted file mode 100644
index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000
--- a/spaces/TEL123/Real-CUGAN/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Real CUGAN
-emoji: 🐢
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: gpl-3.0
-duplicated_from: DianXian/Real-CUGAN
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TEnngal/TEnngal/Dockerfile b/spaces/TEnngal/TEnngal/Dockerfile
deleted file mode 100644
index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000
--- a/spaces/TEnngal/TEnngal/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM weaigc/bingo:latest
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-CMD npm start
diff --git a/spaces/TEnngal/bingo/src/components/user-menu.tsx b/spaces/TEnngal/bingo/src/components/user-menu.tsx
deleted file mode 100644
index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000
--- a/spaces/TEnngal/bingo/src/components/user-menu.tsx
+++ /dev/null
@@ -1,113 +0,0 @@
-'use client'
-
-import { useEffect, useState } from 'react'
-import Image from 'next/image'
-import { toast } from 'react-hot-toast'
-import { Button } from '@/components/ui/button'
-import pkg from '../../package.json'
-import {
-  DropdownMenu,
-  DropdownMenuContent,
-  DropdownMenuItem,
-  DropdownMenuSeparator,
-  DropdownMenuTrigger
-} from '@/components/ui/dropdown-menu'
-import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons'
-import SettingIcon from '@/assets/images/settings.svg'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-export function UserMenu() {
-  const [host, setHost] = useState('')
-  const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
-  useEffect(() => {
-    setHost(location.host)
-  }, [])
-
-  useEffect(() => {
-    if (isCopied) {
-      toast.success('复制成功')
-    }
-  }, [isCopied])
-  return (
-    
- - - - - - - location.href='#dialog="settings"' - } - className="cursor-pointer" - > - 设置用户 - - - - location.href='#dialog="voice"' - } - className="cursor-pointer" - > - 语音设置 - - - - - 开源地址 - - - - - - - - 托管地址 - 🤗 - - - - - - - 复制站点 - - - - - -
版本信息 {pkg.version}
-
- - -
站点域名
-
copyToClipboard(host)} className="flex gap-1 text-xs text-zinc-500 cursor-pointer"> - {host} -
-
-
-
-
- ) -} diff --git a/spaces/Tape/yoga/openpose/util.py b/spaces/Tape/yoga/openpose/util.py deleted file mode 100644 index 16dc24a3450e9a7ccf3950f0982236ce03928547..0000000000000000000000000000000000000000 --- a/spaces/Tape/yoga/openpose/util.py +++ /dev/null @@ -1,198 +0,0 @@ -import numpy as np -import math -import cv2 -import matplotlib -from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas -from matplotlib.figure import Figure -import numpy as np -import matplotlib.pyplot as plt -import cv2 - - -def padRightDownCorner(img, stride, padValue): - h = img.shape[0] - w = img.shape[1] - - pad = 4 * [None] - pad[0] = 0 # up - pad[1] = 0 # left - pad[2] = 0 if (h % stride == 0) else stride - (h % stride) # down - pad[3] = 0 if (w % stride == 0) else stride - (w % stride) # right - - img_padded = img - pad_up = np.tile(img_padded[0:1, :, :]*0 + padValue, (pad[0], 1, 1)) - img_padded = np.concatenate((pad_up, img_padded), axis=0) - pad_left = np.tile(img_padded[:, 0:1, :]*0 + padValue, (1, pad[1], 1)) - img_padded = np.concatenate((pad_left, img_padded), axis=1) - pad_down = np.tile(img_padded[-2:-1, :, :]*0 + padValue, (pad[2], 1, 1)) - img_padded = np.concatenate((img_padded, pad_down), axis=0) - pad_right = np.tile(img_padded[:, -2:-1, :]*0 + padValue, (1, pad[3], 1)) - img_padded = np.concatenate((img_padded, pad_right), axis=1) - - return img_padded, pad - -# transfer caffe model to pytorch which will match the layer name -def transfer(model, model_weights): - transfered_model_weights = {} - for weights_name in model.state_dict().keys(): - transfered_model_weights[weights_name] = model_weights['.'.join(weights_name.split('.')[1:])] - return transfered_model_weights - -# draw the body keypoint and lims -def draw_bodypose(canvas, candidate, subset): - stickwidth = 4 - limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \ - [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \ - [1, 16], [16, 18], [3, 17], [6, 18]] - - colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0], \ - [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255], \ - [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]] - for i in range(18): - for n in range(len(subset)): - index = int(subset[n][i]) - if index == -1: - continue - x, y = candidate[index][0:2] - cv2.circle(canvas, (int(x), int(y)), 4, colors[i], thickness=-1) - for i in range(17): - for n in range(len(subset)): - index = subset[n][np.array(limbSeq[i]) - 1] - if -1 in index: - continue - cur_canvas = canvas.copy() - Y = candidate[index.astype(int), 0] - X = candidate[index.astype(int), 1] - mX = np.mean(X) - mY = np.mean(Y) - length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5 - angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1])) - polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1) - cv2.fillConvexPoly(cur_canvas, polygon, colors[i]) - canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0) - # plt.imsave("preview.jpg", canvas[:, :, [2, 1, 0]]) - # plt.imshow(canvas[:, :, [2, 1, 0]]) - return canvas - -def draw_handpose(canvas, all_hand_peaks, show_number=False): - edges = [[0, 1], [1, 2], [2, 3], [3, 4], [0, 5], [5, 6], [6, 7], [7, 8], [0, 9], [9, 10], \ - [10, 11], [11, 12], [0, 13], [13, 14], [14, 15], [15, 16], [0, 17], [17, 18], [18, 19], [19, 20]] - fig = Figure(figsize=plt.figaspect(canvas)) - - fig.subplots_adjust(0, 0, 1, 1) - 
fig.subplots_adjust(bottom=0, top=1, left=0, right=1) - bg = FigureCanvas(fig) - ax = fig.subplots() - ax.axis('off') - ax.imshow(canvas) - - width, height = ax.figure.get_size_inches() * ax.figure.get_dpi() - - for peaks in all_hand_peaks: - for ie, e in enumerate(edges): - if np.sum(np.all(peaks[e], axis=1)==0)==0: - x1, y1 = peaks[e[0]] - x2, y2 = peaks[e[1]] - ax.plot([x1, x2], [y1, y2], color=matplotlib.colors.hsv_to_rgb([ie/float(len(edges)), 1.0, 1.0])) - - for i, keyponit in enumerate(peaks): - x, y = keyponit - ax.plot(x, y, 'r.') - if show_number: - ax.text(x, y, str(i)) - bg.draw() - canvas = np.fromstring(bg.tostring_rgb(), dtype='uint8').reshape(int(height), int(width), 3) - return canvas - -# image drawed by opencv is not good. -def draw_handpose_by_opencv(canvas, peaks, show_number=False): - edges = [[0, 1], [1, 2], [2, 3], [3, 4], [0, 5], [5, 6], [6, 7], [7, 8], [0, 9], [9, 10], \ - [10, 11], [11, 12], [0, 13], [13, 14], [14, 15], [15, 16], [0, 17], [17, 18], [18, 19], [19, 20]] - # cv2.rectangle(canvas, (x, y), (x+w, y+w), (0, 255, 0), 2, lineType=cv2.LINE_AA) - # cv2.putText(canvas, 'left' if is_left else 'right', (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2) - for ie, e in enumerate(edges): - if np.sum(np.all(peaks[e], axis=1)==0)==0: - x1, y1 = peaks[e[0]] - x2, y2 = peaks[e[1]] - cv2.line(canvas, (x1, y1), (x2, y2), matplotlib.colors.hsv_to_rgb([ie/float(len(edges)), 1.0, 1.0])*255, thickness=2) - - for i, keyponit in enumerate(peaks): - x, y = keyponit - cv2.circle(canvas, (x, y), 4, (0, 0, 255), thickness=-1) - if show_number: - cv2.putText(canvas, str(i), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 0, 0), lineType=cv2.LINE_AA) - return canvas - -# detect hand according to body pose keypoints -# please refer to https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/src/openpose/hand/handDetector.cpp -def handDetect(candidate, subset, oriImg): - # right hand: wrist 4, elbow 3, shoulder 2 - # left hand: wrist 7, elbow 6, shoulder 5 - ratioWristElbow = 0.33 - detect_result = [] - image_height, image_width = oriImg.shape[0:2] - for person in subset.astype(int): - # if any of three not detected - has_left = np.sum(person[[5, 6, 7]] == -1) == 0 - has_right = np.sum(person[[2, 3, 4]] == -1) == 0 - if not (has_left or has_right): - continue - hands = [] - #left hand - if has_left: - left_shoulder_index, left_elbow_index, left_wrist_index = person[[5, 6, 7]] - x1, y1 = candidate[left_shoulder_index][:2] - x2, y2 = candidate[left_elbow_index][:2] - x3, y3 = candidate[left_wrist_index][:2] - hands.append([x1, y1, x2, y2, x3, y3, True]) - # right hand - if has_right: - right_shoulder_index, right_elbow_index, right_wrist_index = person[[2, 3, 4]] - x1, y1 = candidate[right_shoulder_index][:2] - x2, y2 = candidate[right_elbow_index][:2] - x3, y3 = candidate[right_wrist_index][:2] - hands.append([x1, y1, x2, y2, x3, y3, False]) - - for x1, y1, x2, y2, x3, y3, is_left in hands: - # pos_hand = pos_wrist + ratio * (pos_wrist - pos_elbox) = (1 + ratio) * pos_wrist - ratio * pos_elbox - # handRectangle.x = posePtr[wrist*3] + ratioWristElbow * (posePtr[wrist*3] - posePtr[elbow*3]); - # handRectangle.y = posePtr[wrist*3+1] + ratioWristElbow * (posePtr[wrist*3+1] - posePtr[elbow*3+1]); - # const auto distanceWristElbow = getDistance(poseKeypoints, person, wrist, elbow); - # const auto distanceElbowShoulder = getDistance(poseKeypoints, person, elbow, shoulder); - # handRectangle.width = 1.5f * fastMax(distanceWristElbow, 0.9f * distanceElbowShoulder); - x = x3 + 
ratioWristElbow * (x3 - x2) - y = y3 + ratioWristElbow * (y3 - y2) - distanceWristElbow = math.sqrt((x3 - x2) ** 2 + (y3 - y2) ** 2) - distanceElbowShoulder = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2) - width = 1.5 * max(distanceWristElbow, 0.9 * distanceElbowShoulder) - # x-y refers to the center --> offset to topLeft point - # handRectangle.x -= handRectangle.width / 2.f; - # handRectangle.y -= handRectangle.height / 2.f; - x -= width / 2 - y -= width / 2 # width = height - # overflow the image - if x < 0: x = 0 - if y < 0: y = 0 - width1 = width - width2 = width - if x + width > image_width: width1 = image_width - x - if y + width > image_height: width2 = image_height - y - width = min(width1, width2) - # the max hand box value is 20 pixels - if width >= 20: - detect_result.append([int(x), int(y), int(width), is_left]) - - ''' - return value: [[x, y, w, True if left hand else False]]. - width=height since the network require squared input. - x, y is the coordinate of top left - ''' - return detect_result - -# get max index of 2d array -def npmax(array): - arrayindex = array.argmax(1) - arrayvalue = array.max(1) - i = arrayvalue.argmax() - j = arrayindex[i] - return i, j diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/linter.sh b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/linter.sh deleted file mode 100644 index e873186fe3ccf146630884255de0f7b98434abdc..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/linter.sh +++ /dev/null @@ -1,42 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -# cd to detectron2 project root -cd "$(dirname "${BASH_SOURCE[0]}")/.." - -{ - black --version | grep -E "21\." > /dev/null -} || { - echo "Linter requires 'black==21.*' !" - exit 1 -} - -ISORT_VERSION=$(isort --version-number) -if [[ "$ISORT_VERSION" != 4.3* ]]; then - echo "Linter requires isort==4.3.21 !" - exit 1 -fi - -set -v - -echo "Running isort ..." -isort -y -sp . --atomic - -echo "Running black ..." -black -l 100 . - -echo "Running flake8 ..." -if [ -x "$(command -v flake8-3)" ]; then - flake8-3 . -else - python3 -m flake8 . -fi - -# echo "Running mypy ..." -# Pytorch does not have enough type annotations -# mypy detectron2/solver detectron2/structures detectron2/config - -echo "Running clang-format ..." -find . -regex ".*\.\(cpp\|c\|cc\|cu\|cxx\|h\|hh\|hpp\|hxx\|tcc\|mm\|m\)" -print0 | xargs -0 clang-format -i - -command -v arc > /dev/null && arc lint diff --git a/spaces/TheBritishLibrary/British-Library-books-genre-classifier/app.py b/spaces/TheBritishLibrary/British-Library-books-genre-classifier/app.py deleted file mode 100644 index 9ebdf34b8f42312776fe0bef1a4b4afeaf45ebb2..0000000000000000000000000000000000000000 --- a/spaces/TheBritishLibrary/British-Library-books-genre-classifier/app.py +++ /dev/null @@ -1,196 +0,0 @@ -from functools import lru_cache -from fastai.text.all import * -from fastcore.all import * -import matplotlib.cm as cm -import html -import gradio as gr - -learn_inf = load_learner("20210928-model.pkl") - - -def _value2rgba(x, cmap=cm.RdYlGn, alpha_mult=1.0): - "Convert a value `x` from 0 to 1 (inclusive) to an RGBA tuple according to `cmap` times transparency `alpha_mult`." 
- c = cmap(x) - rgb = (np.array(c[:-1]) * 255).astype(int) - a = c[-1] * alpha_mult - return tuple(rgb.tolist() + [a]) - - -def _eval_dropouts(mod): - module_name = mod.__class__.__name__ - if "Dropout" in module_name or "BatchNorm" in module_name: - mod.training = False - for module in mod.children(): - _eval_dropouts(module) - - -def _piece_attn_html(pieces, attns, sep=" ", **kwargs): - html_code, spans = [''], [] - for p, a in zip(pieces, attns): - p = html.escape(p) - c = str(_value2rgba(a, alpha_mult=0.5, **kwargs)) - spans.append( - f'{p}' - ) - html_code.append(sep.join(spans)) - html_code.append("") - return "".join(html_code) - - - -@lru_cache(maxsize=1024 * 2) -def _intrinsic_attention(learn, text, class_id=None): - "Calculate the intrinsic attention of the input w.r.t to an output `class_id`, or the classification given by the model if `None`." - learn.model.train() - _eval_dropouts(learn.model) - learn.model.zero_grad() - learn.model.reset() - dl = learn.dls.test_dl([text]) - batch = next(iter(dl))[0] - emb = learn.model[0].module.encoder(batch).detach().requires_grad_(True) - emb.retain_grad() - lstm = learn.model[0].module(emb, True) - learn.model.eval() - cl = learn.model[1]((lstm, torch.zeros_like(batch).bool(),))[ - 0 - ].softmax(dim=-1) - if class_id is None: - class_id = cl.argmax() - cl[0][class_id].backward() - attn = emb.grad.squeeze().abs().sum(dim=-1) - attn /= attn.max() - tok, _ = learn.dls.decode_batch((*tuplify(batch), *tuplify(cl)))[0] - return tok, attn - - -@patch -def intrinsic_attention(x: TextLearner, text: str, class_id: int = None, **kwargs): - "Shows the `intrinsic attention for `text`, optional `class_id`" - if isinstance(x, LMLearner): - raise Exception("Language models are not supported") - text, attn = _intrinsic_attention(x, text, class_id) - return _piece_attn_html(text.split(), to_np(attn), **kwargs) - - -labels = learn_inf.dls.vocab[1] - - -@lru_cache(maxsize=1024 * 2) -def predict_label(title): - *_, probs = learn_inf.predict(title) - return probs - - -def predict(title): - # *_, probs = learn_inf.predict(title) - - probs = predict_label(title) - return learn_inf.intrinsic_attention(title), { - labels[i]: float(probs[i]) for i in range(len(labels)) - } - - -sample_text = [ - [ - "Poems on various subjects. Whereto is prefixed a short essay on the structure of English verse" - ], - [ - "Journal of a Residence in China and the neighbouring countries from 1830 to 1833. With an introductory essay by the Hon. and Rev. Baptist Wriothesley Noel. [With a map.]" - ], - ["The Adventures of Oliver Twist. [With plates.]"], - ["['The Adventures of Sherlock Holmes', 'Single Works']"], - [ - "['Coal, Iron, and Oil; or, the Practical American miner. A plain and popular work on our mines and mineral resources ... With numerous maps and engravings, etc']" - ], - [ - "Summer Travelling in Iceland; being the narrative of two journeys across the island ... With a chapter on Askja by E. Delmar Morgan ... Containing also a literal translation of three sagas. Maps, etc'" - ], - [ - "Histoire de France au moyen aÃÇge, depuis Philippe-Auguste jusqu'aÃÄ la fin du reÃÄgne de Louis XI. 1223-1483. Troisieme eÃÅdition" - ], - [ - "Two Centuries of Soho: its institutions, firms, and amusements. By the Clergy of St. Anne's, Soho, J. H. Cardwell ... H. B. Freeman ... G. C. Wilton ... 
assisted by other contributors, etc" - ], - ["""A Christmas Carol"""], -] - -description = """ -British Library Books genre detection model -""" - -article = """ -[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5245175.svg)](https://doi.org/10.5281/zenodo.5245175) - -# British Library Books genre detection demo - -This demo allows you to play with a 'genre' detection model which has been trained to predict, from the title of a book, whether it is 'fiction' or 'non-fiction'. -The model was trained with the [fastai](https://docs.fast.ai/) library on training data drawn from [digitised books](https://www.bl.uk/collection-guides/digitised-printed-books) at the British Library. These Books are mainly from the 19th Century. -The demo also shows you which parts of the input the model is using most to make its prediction. You can hover over the words to see the attention score assigned to that word. This gives you some sense of which words are important to the model in making a prediction. - -The examples include titles from the BL books collection. You may notice that the model makes mistakes on short titles in particular, this can partly be explained by the title format in the original data. For example the novel *'Vanity Fair'* by William Makepeace Thackeray -is found in the training data as: - -``` -Vanity Fair. A novel without a hero ... With all the original illustrations by the author, etc -``` - -You can see that the model gets a bit of help with the genre here 😉. Since the model was trained for a very particular dataset and task it might not work well on titles that don't match this original corpus. - -## XXMAJ? - -You may see some strange tokens in the output. These are tokens used by fastai to indicate particularly things about the text. `xxmaj` is used to indicate the next word begins with a capital in the original text `xxbos` is used to indicate the beginning of a sentence. These can be quite important for helping the model make predictions. As an example, you can try `oliver twist` and `Oliver Twist` and see how the results of the model change. - - -## Background - -This model was developed as part of work by the [Living with Machines](https://livingwithmachines.ac.uk/). The process of training the model and working with the data is documented in a tutorial which will be released soon. - -## Model description - -This model is intended to predict, from the title of a book, whether it is 'fiction' or 'non-fiction'. This model was trained on data created from the [Digitised printed books (18th-19th Century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. -This dataset is dominated by English language books though it includes books in several other languages in much smaller numbers. This model was originally developed for use as part of the Living with Machines project to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was `fiction` or `non-fiction`. -You can find more information about the model [here]((https://doi.org/10.5281/zenodo.5245175)) - -## Training data - -The model is trained on a particular collection of books digitised by the British Library. As a result, the model may do less well on titles that look different to this data. In particular, the training data, was mostly English, and mostly from the 19th Century. The model is likely to do less well with non-English languages and book titles which fall outside of the 19th Century. 
Since the data was derived from books catalogued by the British Library it is also possible the model will perform less well for books held by other institutions if, for example, they catalogue book titles in different ways, or have different biases in the types of books they hold. - -## Model performance - -The model's performance on a held-out test set is as follows: - - -``` - precision recall f1-score support - - Fiction 0.91 0.88 0.90 296 - Non-fiction 0.94 0.95 0.95 554 - - accuracy 0.93 850 - macro avg 0.93 0.92 0.92 850 -weighted avg 0.93 0.93 0.93 850 -``` - -### Credits -> This work was partly supported by [Living with Machines](https://livingwithmachines.ac.uk/). This project, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London. -> Code for showing attention was adapted from [Zachary Mueller's](https://github.com/muellerzr) [fastinference](https://muellerzr.github.io/fastinference/) library. -""" - -gr_interface = gr.Interface( - fn=predict, - inputs=gr.inputs.Textbox(), - outputs=[ - gr.outputs.HTML("Intrinsic attention"), - gr.outputs.Label(num_top_classes=len(labels), label="Confidence"), - ], - title="British Library 19th Century Books Genre Classifier", - description=description, - article=article, - examples=sample_text, - allow_screenshot=True, - theme="huggingface" -) -gr_interface.launch(inline=False, share=False) - - - diff --git a/spaces/UVA-GCOM/Group_1/README.md b/spaces/UVA-GCOM/Group_1/README.md deleted file mode 100644 index 62decb33b268721d41864fd7b8a2b3f416fba7ae..0000000000000000000000000000000000000000 --- a/spaces/UVA-GCOM/Group_1/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Heart Attack Predictor V1_Group 1 -emoji: 🏥 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: mit -duplicated_from: paragon-analytics/Employee-Turnover ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vicent3/sharp-transformers-traveltaxi/style.css b/spaces/Vicent3/sharp-transformers-traveltaxi/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Vicent3/sharp-transformers-traveltaxi/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Vipitis/ShaderCoder/utils/generation.py b/spaces/Vipitis/ShaderCoder/utils/generation.py deleted file mode 100644 index e46b0aecc28a4f3d2da23c6400f45c12a442bc56..0000000000000000000000000000000000000000 --- a/spaces/Vipitis/ShaderCoder/utils/generation.py +++ /dev/null @@ -1,62 +0,0 @@ -from accelerate import Accelerator -from transformers import TextIteratorStreamer -from threading import Thread -from .tree_utils import full_func_head, grab_before_comments - -def combine_generation_kwargs(temperature=2.0, max_new_tokens=512, top_p=0.95, 
repetition_penalty=1.2): - """ - Combines the generation kwargs into a single dict. - """ - gen_kwargs = {} - gen_kwargs["do_sample"] = True - gen_kwargs["temperature"] = temperature - gen_kwargs["max_new_tokens"] = max_new_tokens - gen_kwargs["top_p"] = top_p - gen_kwargs["repetition_penalty"] = repetition_penalty - return gen_kwargs - - -def stream_generation(prompt:str, pipe, gen_kwargs:dict): - accelerator = Accelerator() - device = accelerator.device - """ - Text generation function - Args: - prompt (str): The context to start generation from. - pipe (Pipeline): The pipeline to use for generation (we take the model and tokenizer form it) - gen_kwargs (dict): The generation kwargs. - Returns: - str: The generated text. (it iterates over time) - """ - # Tokenize the model_context - model_inputs = pipe.tokenizer(prompt, return_tensors="pt") - model_inputs.to(device) - model = pipe.model.to(device) #is this also required? - - # Start generation on a separate thread, so that we don't block the UI. The text is pulled from the streamer - # in the main thread. Adds timeout to the streamer to handle exceptions in the generation thread. - streamer = TextIteratorStreamer(pipe.tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=45.0) #IPEX takes a bit on first inference, to avoid an error with the empty queue timeout on the first time, we just wait longer. - generate_kwargs = dict(model_inputs, streamer=streamer, **gen_kwargs) - t = Thread(target=pipe.model.generate, kwargs=generate_kwargs) - t.start() - - # Pull the generated text from the streamer, and update the model output. - model_output = "" - for new_text in streamer: - # print("step", end="") - model_output += new_text - yield model_output - streamer.on_finalized_text("stream reached the end.") - return model_output #is this ever reached? - -def construct_model_context(func_node, prompt=""): - """ - Constructs the model context from a function node. 
- returns: model_context, start_byte - """ - model_context, start_byte = grab_before_comments(func_node) - model_context += full_func_head(func_node) - if prompt != "": - model_context = "//Title: " + prompt + "\n" + model_context #prepend user prompt/title - model_context = "//Language: Shadertoy GLSL fragment shader\n" + model_context #prepend system prompt, language hint - return model_context, start_byte \ No newline at end of file diff --git a/spaces/VishnuVardhanBR/chatbot/trends.py b/spaces/VishnuVardhanBR/chatbot/trends.py deleted file mode 100644 index ed21033bbbcf9a392273f6a755328aa84dd66216..0000000000000000000000000000000000000000 --- a/spaces/VishnuVardhanBR/chatbot/trends.py +++ /dev/null @@ -1,65 +0,0 @@ -from pytrends.request import TrendReq -import pandas as pd -import os -import pickle -from datetime import datetime, timedelta - -all_keywords = [ - "Allen Solly", - "Van Heusen", - "Peter England", - "Adidas", - "Louis Philippe", - "Biba", - "Jockey", - "Levi's", - "Nike", - "Puma", - "Reebok", - "United Colors of Benetton", - "HRX", - "Forever 21", - "Arrow", - "Raymond", - "Park Avenue", - "Wrangler", - "Pepe Jeans", - "Jack & Jones", - "Indian Terrain", - "Spykar", - "Wildcraft", - "Calvin Klein Jeans", - "Tommy Hilfiger", -] - -def get_trending_brands(): - # Check if a file was written less than 7 days ago - if os.path.exists('trending_brands.pkl'): - file_mod_time = datetime.fromtimestamp(os.path.getmtime('trending_brands.pkl')) - current_time = datetime.now() - if (current_time - file_mod_time).days < 7: - with open('trending_brands.pkl', 'rb') as file: - return pickle.load(file) - - pytrends = TrendReq(hl='en-US', tz=360) - - chunk_size = 5 - keyword_chunks = [all_keywords[i:i+chunk_size] for i in range(0, len(all_keywords), chunk_size)] - popularity_scores = [] - - for chunk in keyword_chunks: - pytrends.build_payload(chunk, timeframe='now 7-d', geo='IN') - data = pytrends.interest_over_time() - chunk_popularity = data.mean().sort_values(ascending=False) - popularity_scores.append(chunk_popularity) - - combined_scores = pd.concat(popularity_scores, axis=1) - - overall_scores = combined_scores.mean(axis=1).sort_values(ascending=False) - ordered_keywords = overall_scores.index.tolist() - - # Save the result to a file - with open('trending_brands.pkl', 'wb') as file: - pickle.dump(ordered_keywords, file) - - return ordered_keywords \ No newline at end of file diff --git a/spaces/Waranchari/Image_Classification/app.py b/spaces/Waranchari/Image_Classification/app.py deleted file mode 100644 index 756d816ca0f5568b89af66df80983b342c6ba140..0000000000000000000000000000000000000000 --- a/spaces/Waranchari/Image_Classification/app.py +++ /dev/null @@ -1,38 +0,0 @@ -import pandas as pd -import streamlit as st -from transformers import pipeline -from PIL import Image - -pipeline = pipeline(task="image-classification", model="microsoft/resnet-50") - -def predict(image): - predictions = pipeline(image) - return {p["label"]: p["score"] for p in predictions} - -def main(): - st.title("Image Classification") - - with st.form("my_form"): - uploaded_file = st.file_uploader("Choose an image file", type=["jpg", "jpeg", "png"]) - - if uploaded_file is not None: - # Display the uploaded image - image = Image.open(uploaded_file) - st.image(image, caption="Uploaded Image", use_column_width=True) - clicked = st.form_submit_button("Predict") - if clicked: - results = predict(image) - k = [] - v = [] - for key, value in results.items(): - value = round(value*100,2) - v.append(value) - 
k.append(key) - vp = [str(item) + '%' for item in v] - result = k[0] - st.success('The predicted image is {}'.format(result)) - df = pd.DataFrame({'Prediction': k,'Accuracy':vp}) - st.dataframe(df,hide_index=True) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Warlord-K/TryOn/app.py b/spaces/Warlord-K/TryOn/app.py deleted file mode 100644 index c6c4f0b8808873f6d304fb6aec4d288c0673643b..0000000000000000000000000000000000000000 --- a/spaces/Warlord-K/TryOn/app.py +++ /dev/null @@ -1,55 +0,0 @@ -from utils.model import load_seg, load_inpainting, generate_with_mask, generate -from utils.scraper import extract_link -import gradio as gr - -extractor, model = load_seg() -prompt_pipe = load_inpainting(using_prompt = True) -cloth_pipe = load_inpainting() - -def generate_with_mask_(image_path: str, cloth_path: str = None, prompt: str = None): - """ - Generate Image. - - Request Body - request = { - "image" : Input Image URL - "cloth" : Cloth Image URL - "prompt" : Prompt, In case example image is not provided - } - - Return Body: - { - gen: Generated Image - } - """ - using_prompt = True if prompt else False - image_url = extract_link(image_path) - cloth_url = extract_link(cloth_path) - image_path = image_url if image_url else image_path - cloth_path = cloth_url if cloth_url else cloth_path - if using_prompt: - gen = generate(image_path, extractor, model, prompt_pipe, cloth_path, prompt) - else: - gen = generate_with_mask(image_path, extractor, model, cloth_pipe, cloth_path, prompt) - return gen - - -with gr.Blocks() as demo: - gr.Markdown('# Try On Clothes Online!') - gr.Markdown('## Add Your Image via an Image URL or By Simply Uploading it') - gr.Markdown('## Paste in a link of a Product from Flipkart, Amazon or Myntra and See the Preview! 
You can also paste in any Image link!') - gr.Markdown('## Optionally you can add a prompt to generate an image instead of using an example image.') - with gr.Row(): - with gr.Column(): - image = gr.inputs.Image(type = "filepath", label = "Input Image") - # image_url = gr.inputs.Textbox(label = "Input Image URL") - cloth_url = gr.inputs.Textbox(label = "Cloth Image URL") - prompt = gr.inputs.Textbox(label="Optional Prompt") - output = gr.outputs.Image(type = "pil", label="Generated Image") - run = gr.Button(label="Generate Preview") - gr.Markdown('## Made By [Yatharth Gupta](https://www.linkedin.com/in/yatharth-g/)') - run.click(generate_with_mask_, inputs=[image, cloth_url, prompt], outputs=output) - -demo.launch() - -demo.launch() \ No newline at end of file diff --git a/spaces/Willow123/InternLM-XComposer/README.md b/spaces/Willow123/InternLM-XComposer/README.md deleted file mode 100644 index d73b37843a9da156f8fcce56596d69b552cc26a2..0000000000000000000000000000000000000000 --- a/spaces/Willow123/InternLM-XComposer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: InternLM XComposer -emoji: 🏢 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Woogiepark/stabilityai-stable-diffusion2/app.py b/spaces/Woogiepark/stabilityai-stable-diffusion2/app.py deleted file mode 100644 index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000 --- a/spaces/Woogiepark/stabilityai-stable-diffusion2/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2").launch() \ No newline at end of file diff --git a/spaces/XzJosh/Azusa-Bert-VITS2/modules.py b/spaces/XzJosh/Azusa-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azusa-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/losses.py b/spaces/XzJosh/Jiaran-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = 
torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Yuliang/ECON/lib/smplx/vertex_ids.py b/spaces/Yuliang/ECON/lib/smplx/vertex_ids.py deleted file mode 100644 index 060ed8ed60117a33358944abb85891abb3de8e30..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/smplx/vertex_ids.py +++ /dev/null @@ -1,75 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -from __future__ import absolute_import, division, print_function - -# Joint name to vertex mapping. SMPL/SMPL-H/SMPL-X vertices that correspond to -# MSCOCO and OpenPose joints -vertex_ids = { - "smplh": { - "nose": 332, - "reye": 6260, - "leye": 2800, - "rear": 4071, - "lear": 583, - "rthumb": 6191, - "rindex": 5782, - "rmiddle": 5905, - "rring": 6016, - "rpinky": 6133, - "lthumb": 2746, - "lindex": 2319, - "lmiddle": 2445, - "lring": 2556, - "lpinky": 2673, - "LBigToe": 3216, - "LSmallToe": 3226, - "LHeel": 3387, - "RBigToe": 6617, - "RSmallToe": 6624, - "RHeel": 6787, - }, - "smplx": { - "nose": 9120, - "reye": 9929, - "leye": 9448, - "rear": 616, - "lear": 6, - "rthumb": 8079, - "rindex": 7669, - "rmiddle": 7794, - "rring": 7905, - "rpinky": 8022, - "lthumb": 5361, - "lindex": 4933, - "lmiddle": 5058, - "lring": 5169, - "lpinky": 5286, - "LBigToe": 5770, - "LSmallToe": 5780, - "LHeel": 8846, - "RBigToe": 8463, - "RSmallToe": 8474, - "RHeel": 8635, - }, - "mano": { - "thumb": 744, - "index": 320, - "middle": 443, - "ring": 554, - "pinky": 671, - }, -} diff --git a/spaces/ZeroTwo3/WavJourney/VoiceParser/hubert_manager.py b/spaces/ZeroTwo3/WavJourney/VoiceParser/hubert_manager.py deleted file mode 100644 index 5f8445147a8997fdb54e1246e9a85af40342c748..0000000000000000000000000000000000000000 --- a/spaces/ZeroTwo3/WavJourney/VoiceParser/hubert_manager.py +++ /dev/null @@ -1,33 +0,0 @@ -import os.path -import shutil -import urllib.request - -import huggingface_hub - - -class HuBERTManager: - @staticmethod - def make_sure_hubert_installed(download_url: str = 'https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt', file_name: str = 'hubert.pt'): - install_dir = os.path.join('VoiceParser', 'hubert') - if not os.path.isdir(install_dir): - os.makedirs(install_dir, exist_ok=True) - install_file = os.path.join(install_dir, file_name) - if not os.path.isfile(install_file): - print('Downloading HuBERT base model') - urllib.request.urlretrieve(download_url, install_file) - print('Downloaded HuBERT') - return install_file - - - @staticmethod - def 
make_sure_tokenizer_installed(model: str = 'quantifier_hubert_base_ls960_14.pth', repo: str = 'GitMylo/bark-voice-cloning', local_file: str = 'tokenizer.pth'): - install_dir = os.path.join('VoiceParser', 'hubert') - if not os.path.isdir(install_dir): - os.makedirs(install_dir, exist_ok=True) - install_file = os.path.join(install_dir, local_file) - if not os.path.isfile(install_file): - print('Downloading HuBERT custom tokenizer') - huggingface_hub.hf_hub_download(repo, model, local_dir=install_dir, local_dir_use_symlinks=False) - shutil.move(os.path.join(install_dir, model), install_file) - print('Downloaded tokenizer') - return install_file \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/midas/midas/__init__.py b/spaces/abhishek/sketch-to-image/annotator/midas/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/datasets/cityscapes_769x769.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/datasets/cityscapes_769x769.py deleted file mode 100644 index 336c7b254fe392b4703039fec86a83acdbd2e1a5..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/datasets/cityscapes_769x769.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = './cityscapes.py' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (769, 769) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2049, 1025), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2049, 1025), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/processing.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/processing.py deleted file mode 100644 index 3d90b96e0823d5f116755e7f498d25d17017224a..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/processing.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -import subprocess -import tempfile - -from annotator.uniformer.mmcv.utils import requires_executable - - -@requires_executable('ffmpeg') -def convert_video(in_file, - out_file, - print_cmd=False, - pre_options='', - **kwargs): - """Convert a video with ffmpeg. 
- - This provides a general api to ffmpeg, the executed command is:: - - `ffmpeg -y -i ` - - Options(kwargs) are mapped to ffmpeg commands with the following rules: - - - key=val: "-key val" - - key=True: "-key" - - key=False: "" - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - pre_options (str): Options appears before "-i ". - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = [] - for k, v in kwargs.items(): - if isinstance(v, bool): - if v: - options.append(f'-{k}') - elif k == 'log_level': - assert v in [ - 'quiet', 'panic', 'fatal', 'error', 'warning', 'info', - 'verbose', 'debug', 'trace' - ] - options.append(f'-loglevel {v}') - else: - options.append(f'-{k} {v}') - cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \ - f'{out_file}' - if print_cmd: - print(cmd) - subprocess.call(cmd, shell=True) - - -@requires_executable('ffmpeg') -def resize_video(in_file, - out_file, - size=None, - ratio=None, - keep_ar=False, - log_level='info', - print_cmd=False): - """Resize a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1). - ratio (tuple or float): Expected resize ratio, (2, 0.5) means - (w*2, h*0.5). - keep_ar (bool): Whether to keep original aspect ratio. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - if size is None and ratio is None: - raise ValueError('expected size or ratio must be specified') - if size is not None and ratio is not None: - raise ValueError('size and ratio cannot be specified at the same time') - options = {'log_level': log_level} - if size: - if not keep_ar: - options['vf'] = f'scale={size[0]}:{size[1]}' - else: - options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \ - 'force_original_aspect_ratio=decrease' - else: - if not isinstance(ratio, tuple): - ratio = (ratio, ratio) - options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"' - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def cut_video(in_file, - out_file, - start=None, - end=None, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Cut a clip from a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - start (None or float): Start time (in seconds). - end (None or float): End time (in seconds). - vcodec (None or str): Output video codec, None for unchanged. - acodec (None or str): Output audio codec, None for unchanged. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - if start: - options['ss'] = start - else: - start = 0 - if end: - options['t'] = end - start - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def concat_video(video_list, - out_file, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Concatenate multiple videos into a single one. - - Args: - video_list (list): A list of video filenames - out_file (str): Output video filename - vcodec (None or str): Output video codec, None for unchanged - acodec (None or str): Output audio codec, None for unchanged - log_level (str): Logging level of ffmpeg. 
- print_cmd (bool): Whether to print the final ffmpeg command. - """ - tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True) - with open(tmp_filename, 'w') as f: - for filename in video_list: - f.write(f'file {osp.abspath(filename)}\n') - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - convert_video( - tmp_filename, - out_file, - print_cmd, - pre_options='-f concat -safe 0', - **options) - os.close(tmp_filehandler) - os.remove(tmp_filename) diff --git a/spaces/agueroooooooooo/Transport_Mode_Detector/app.py b/spaces/agueroooooooooo/Transport_Mode_Detector/app.py deleted file mode 100644 index 89eb91edff67b4ee1bf7db128bc5b77a271e9739..0000000000000000000000000000000000000000 --- a/spaces/agueroooooooooo/Transport_Mode_Detector/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from modality_lstm import ModalityLSTM -import torch.nn as nn -from helper import score_to_modality -from PIL import Image - -label_mapping = { - 'car': [0,'images/Cars.jpg'], - 'walk': [1,'images/walk.jpg'], - 'bus': [2,'images/bus.jpg'], - 'train': [3,'images/train.jpg'], - 'subway': [4,'images/subway.jpg'], - 'bike': [5,'images/bike.jpg'], - 'run': [6,'images/walk.jpg'], - 'boat': [7,'images/walk.jpg'], - 'airplane': [8,'images/walk.jpg'], - 'motorcycle': [9,'images/walk.jpg'], - 'taxi': [10,'images/taxi.jpg'] - } - -def pred(dist,speed,accel,timedelta,jerk,bearing,bearing_rate): - - - - batch_size = 1 - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - train_on_gpu = False - output_size = 5 - hidden_dim = 128 - trip_dim = 7 - n_layers = 2 - drop_prob = 0.2 - net = ModalityLSTM(trip_dim, output_size, batch_size, hidden_dim, n_layers, train_on_gpu, drop_prob, lstm_drop_prob=0.2) - net.load_state_dict(torch.load("Model_Wieghts",map_location=torch.device('cpu'))) - net.eval() - - a=torch.tensor([[dist,speed,accel,timedelta,jerk,bearing,bearing_rate]]) - a=a.float() - a=a.unsqueeze(0) - l = torch.tensor([1]).long() - b,c=net(a,l) - b=b.squeeze(0) - b=score_to_modality(b) - b=b[0] - print(b) - for k,v in label_mapping.items(): - if b == v[0]: - return (str(k),Image.open(v[1])) - - - - - - - - - - - - - - - - - - - -def greet(name): - return "Hello " + name + "!!" 
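# Usage sketch (illustrative values only, not taken from any dataset): pred() takes
# the seven per-point trip features in the order used at training time and returns
# a (label, PIL.Image) pair, e.g.
#   label, img = pred(dist=12.0, speed=1.4, accel=0.05, timedelta=1.0,
#                     jerk=0.01, bearing=90.0, bearing_rate=0.5)
# Internally the features are packed into a (1, 1, 7) float tensor before being fed
# to the ModalityLSTM, and the raw scores are mapped back to a class name through
# score_to_modality and label_mapping.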
- -iface = gr.Interface(fn=pred, inputs=['number',"number","number",'number',"number","number","number"], outputs=["text",gr.outputs.Image(type="pil")]) -iface.launch() \ No newline at end of file diff --git a/spaces/ai-moroz/webui-cpu/app.py b/spaces/ai-moroz/webui-cpu/app.py deleted file mode 100644 index 30fb8dcd930c373dc7c6837b8937eec273b329d9..0000000000000000000000000000000000000000 --- a/spaces/ai-moroz/webui-cpu/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/inputs=\[component\],/&\\n queue=False,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/outputs=\[token_counter\]/outputs=[token_counter], queue=False/g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://huggingface.co/spaces/ai-moroz/webui-cpu/resolve/main/header_patch.py -O /home/user/app/header_patch.py") -#os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -#AOM3A3.safetensors -os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AOM3A3.safetensors") -os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AOM3A3.vae.pt") -#Embeddings -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/diona-gi.pt -O /home/user/app/stable-diffusion-webui/embeddings/diona-gi.pt") -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/xiao-gi.pt -O /home/user/app/stable-diffusion-webui/embeddings/xiao-gi.pt") -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/naruko-nrt.pt -O /home/user/app/stable-diffusion-webui/embeddings/naruko-nrt.pt") -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/thoma-gi.pt -O /home/user/app/stable-diffusion-webui/embeddings/thoma-gi.pt") -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/kaeya-gi.pt -O /home/user/app/stable-diffusion-webui/embeddings/kaeya-gi.pt") -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/ciel-tm.pt -O /home/user/app/stable-diffusion-webui/embeddings/ciel-tm.pt") -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/grinteeth.pt -O /home/user/app/stable-diffusion-webui/embeddings/grinteeth.pt") - -if "IS_SHARED_UI" in os.environ: - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - os.system(f"wget -q {os.getenv('EMBED_LINK')} -O /home/user/app/stable-diffusion-webui/embeddings/{os.getenv('EMBED_NAME')}") - #os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - #os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts 
--ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - os.system(f"wget -q {os.getenv('EMBED_LINK')} -O /home/user/app/stable-diffusion-webui/embeddings/{os.getenv('EMBED_NAME')}") - #os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - #os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"python launch.py --precision full --no-half --use-cpu SD GFPGAN BSRGAN ESRGAN SCUNet CodeFormer --all --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --skip-torch-cuda-test") diff --git a/spaces/akhaliq/Detic/tools/merge_lvis_coco.py b/spaces/akhaliq/Detic/tools/merge_lvis_coco.py deleted file mode 100644 index abc2b673a30541fd71679a549acd9a53f7693183..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/tools/merge_lvis_coco.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
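# In short, this script merges COCO train2017 boxes into LVIS v1 train: COCO
# category ids are mapped to LVIS ids through the synset table below
# (COCO_SYNSET_CATEGORIES), COCO annotations whose image also exists in LVIS are
# re-labelled with the matching LVIS category, and such an annotation is kept only
# if no LVIS box of the same category overlaps it with IoU >= THRESH (0.7). With
# NO_SEG=True the segmentation masks are dropped and the "+coco_box" file is
# written to SAVE_PATH; otherwise the "+coco_mask" file is produced.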
-from collections import defaultdict -import torch -import sys -import json -import numpy as np - -from detectron2.structures import Boxes, pairwise_iou -COCO_PATH = 'datasets/coco/annotations/instances_train2017.json' -IMG_PATH = 'datasets/coco/train2017/' -LVIS_PATH = 'datasets/lvis/lvis_v1_train.json' -NO_SEG = False -if NO_SEG: - SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_box.json' -else: - SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_mask.json' -THRESH = 0.7 -DEBUG = False - -# This mapping is extracted from the official LVIS mapping: -# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json -COCO_SYNSET_CATEGORIES = [ - {"synset": "person.n.01", "coco_cat_id": 1}, - {"synset": "bicycle.n.01", "coco_cat_id": 2}, - {"synset": "car.n.01", "coco_cat_id": 3}, - {"synset": "motorcycle.n.01", "coco_cat_id": 4}, - {"synset": "airplane.n.01", "coco_cat_id": 5}, - {"synset": "bus.n.01", "coco_cat_id": 6}, - {"synset": "train.n.01", "coco_cat_id": 7}, - {"synset": "truck.n.01", "coco_cat_id": 8}, - {"synset": "boat.n.01", "coco_cat_id": 9}, - {"synset": "traffic_light.n.01", "coco_cat_id": 10}, - {"synset": "fireplug.n.01", "coco_cat_id": 11}, - {"synset": "stop_sign.n.01", "coco_cat_id": 13}, - {"synset": "parking_meter.n.01", "coco_cat_id": 14}, - {"synset": "bench.n.01", "coco_cat_id": 15}, - {"synset": "bird.n.01", "coco_cat_id": 16}, - {"synset": "cat.n.01", "coco_cat_id": 17}, - {"synset": "dog.n.01", "coco_cat_id": 18}, - {"synset": "horse.n.01", "coco_cat_id": 19}, - {"synset": "sheep.n.01", "coco_cat_id": 20}, - {"synset": "beef.n.01", "coco_cat_id": 21}, - {"synset": "elephant.n.01", "coco_cat_id": 22}, - {"synset": "bear.n.01", "coco_cat_id": 23}, - {"synset": "zebra.n.01", "coco_cat_id": 24}, - {"synset": "giraffe.n.01", "coco_cat_id": 25}, - {"synset": "backpack.n.01", "coco_cat_id": 27}, - {"synset": "umbrella.n.01", "coco_cat_id": 28}, - {"synset": "bag.n.04", "coco_cat_id": 31}, - {"synset": "necktie.n.01", "coco_cat_id": 32}, - {"synset": "bag.n.06", "coco_cat_id": 33}, - {"synset": "frisbee.n.01", "coco_cat_id": 34}, - {"synset": "ski.n.01", "coco_cat_id": 35}, - {"synset": "snowboard.n.01", "coco_cat_id": 36}, - {"synset": "ball.n.06", "coco_cat_id": 37}, - {"synset": "kite.n.03", "coco_cat_id": 38}, - {"synset": "baseball_bat.n.01", "coco_cat_id": 39}, - {"synset": "baseball_glove.n.01", "coco_cat_id": 40}, - {"synset": "skateboard.n.01", "coco_cat_id": 41}, - {"synset": "surfboard.n.01", "coco_cat_id": 42}, - {"synset": "tennis_racket.n.01", "coco_cat_id": 43}, - {"synset": "bottle.n.01", "coco_cat_id": 44}, - {"synset": "wineglass.n.01", "coco_cat_id": 46}, - {"synset": "cup.n.01", "coco_cat_id": 47}, - {"synset": "fork.n.01", "coco_cat_id": 48}, - {"synset": "knife.n.01", "coco_cat_id": 49}, - {"synset": "spoon.n.01", "coco_cat_id": 50}, - {"synset": "bowl.n.03", "coco_cat_id": 51}, - {"synset": "banana.n.02", "coco_cat_id": 52}, - {"synset": "apple.n.01", "coco_cat_id": 53}, - {"synset": "sandwich.n.01", "coco_cat_id": 54}, - {"synset": "orange.n.01", "coco_cat_id": 55}, - {"synset": "broccoli.n.01", "coco_cat_id": 56}, - {"synset": "carrot.n.01", "coco_cat_id": 57}, - # {"synset": "frank.n.02", "coco_cat_id": 58}, - {"synset": "sausage.n.01", "coco_cat_id": 58}, - {"synset": "pizza.n.01", "coco_cat_id": 59}, - {"synset": "doughnut.n.02", "coco_cat_id": 60}, - {"synset": "cake.n.03", "coco_cat_id": 61}, - {"synset": "chair.n.01", "coco_cat_id": 62}, - {"synset": "sofa.n.01", "coco_cat_id": 63}, - {"synset": "pot.n.04", "coco_cat_id": 64}, - 
{"synset": "bed.n.01", "coco_cat_id": 65}, - {"synset": "dining_table.n.01", "coco_cat_id": 67}, - {"synset": "toilet.n.02", "coco_cat_id": 70}, - {"synset": "television_receiver.n.01", "coco_cat_id": 72}, - {"synset": "laptop.n.01", "coco_cat_id": 73}, - {"synset": "mouse.n.04", "coco_cat_id": 74}, - {"synset": "remote_control.n.01", "coco_cat_id": 75}, - {"synset": "computer_keyboard.n.01", "coco_cat_id": 76}, - {"synset": "cellular_telephone.n.01", "coco_cat_id": 77}, - {"synset": "microwave.n.02", "coco_cat_id": 78}, - {"synset": "oven.n.01", "coco_cat_id": 79}, - {"synset": "toaster.n.02", "coco_cat_id": 80}, - {"synset": "sink.n.01", "coco_cat_id": 81}, - {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82}, - {"synset": "book.n.01", "coco_cat_id": 84}, - {"synset": "clock.n.01", "coco_cat_id": 85}, - {"synset": "vase.n.01", "coco_cat_id": 86}, - {"synset": "scissors.n.01", "coco_cat_id": 87}, - {"synset": "teddy.n.01", "coco_cat_id": 88}, - {"synset": "hand_blower.n.01", "coco_cat_id": 89}, - {"synset": "toothbrush.n.01", "coco_cat_id": 90}, -] - - -def get_bbox(ann): - bbox = ann['bbox'] - return [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]] - - -if __name__ == '__main__': - file_name_key = 'file_name' if 'v0.5' in LVIS_PATH else 'coco_url' - coco_data = json.load(open(COCO_PATH, 'r')) - lvis_data = json.load(open(LVIS_PATH, 'r')) - - coco_cats = coco_data['categories'] - lvis_cats = lvis_data['categories'] - - num_find = 0 - num_not_find = 0 - num_twice = 0 - coco2lviscats = {} - synset2lvisid = {x['synset']: x['id'] for x in lvis_cats} - # cocoid2synset = {x['coco_cat_id']: x['synset'] for x in COCO_SYNSET_CATEGORIES} - coco2lviscats = {x['coco_cat_id']: synset2lvisid[x['synset']] \ - for x in COCO_SYNSET_CATEGORIES if x['synset'] in synset2lvisid} - print(len(coco2lviscats)) - - lvis_file2id = {x[file_name_key][-16:]: x['id'] for x in lvis_data['images']} - lvis_id2img = {x['id']: x for x in lvis_data['images']} - lvis_catid2name = {x['id']: x['name'] for x in lvis_data['categories']} - - coco_file2anns = {} - coco_id2img = {x['id']: x for x in coco_data['images']} - coco_img2anns = defaultdict(list) - for ann in coco_data['annotations']: - coco_img = coco_id2img[ann['image_id']] - file_name = coco_img['file_name'][-16:] - if ann['category_id'] in coco2lviscats and \ - file_name in lvis_file2id: - lvis_image_id = lvis_file2id[file_name] - lvis_image = lvis_id2img[lvis_image_id] - lvis_cat_id = coco2lviscats[ann['category_id']] - if lvis_cat_id in lvis_image['neg_category_ids']: - continue - if DEBUG: - import cv2 - img_path = IMG_PATH + file_name - img = cv2.imread(img_path) - print(lvis_catid2name[lvis_cat_id]) - print('neg', [lvis_catid2name[x] for x in lvis_image['neg_category_ids']]) - cv2.imshow('img', img) - cv2.waitKey() - ann['category_id'] = lvis_cat_id - ann['image_id'] = lvis_image_id - coco_img2anns[file_name].append(ann) - - lvis_img2anns = defaultdict(list) - for ann in lvis_data['annotations']: - lvis_img = lvis_id2img[ann['image_id']] - file_name = lvis_img[file_name_key][-16:] - lvis_img2anns[file_name].append(ann) - - ann_id_count = 0 - anns = [] - for file_name in lvis_img2anns: - coco_anns = coco_img2anns[file_name] - lvis_anns = lvis_img2anns[file_name] - ious = pairwise_iou( - Boxes(torch.tensor([get_bbox(x) for x in coco_anns])), - Boxes(torch.tensor([get_bbox(x) for x in lvis_anns])) - ) - - for ann in lvis_anns: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - - for i, ann in enumerate(coco_anns): - if 
len(ious[i]) == 0 or ious[i].max() < THRESH: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - else: - duplicated = False - for j in range(len(ious[i])): - if ious[i, j] >= THRESH and \ - coco_anns[i]['category_id'] == lvis_anns[j]['category_id']: - duplicated = True - if not duplicated: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - if NO_SEG: - for ann in anns: - del ann['segmentation'] - lvis_data['annotations'] = anns - - print('# Images', len(lvis_data['images'])) - print('# Anns', len(lvis_data['annotations'])) - json.dump(lvis_data, open(SAVE_PATH, 'w')) diff --git a/spaces/akhaliq/steerable-nafx/README.md b/spaces/akhaliq/steerable-nafx/README.md deleted file mode 100644 index 1459945f239b375a1febc6667ce099211a4c5151..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/steerable-nafx/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Steerable Nafx -emoji: 💻 -colorFrom: green -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py deleted file mode 100644 index 9a89a838b9a5cb264e9ae9d269fbedca6e2d6333..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from pip._internal.distributions.base import AbstractDistribution -from pip._internal.distributions.sdist import SourceDistribution -from pip._internal.distributions.wheel import WheelDistribution -from pip._internal.req.req_install import InstallRequirement - - -def make_distribution_for_install_requirement( - install_req: InstallRequirement, -) -> AbstractDistribution: - """Returns a Distribution for the given InstallRequirement""" - # Editable requirements will always be source distributions. They use the - # legacy logic until we create a modern standard for them. 
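# Summarising the branches below: editable requirements and plain source
# requirements both resolve to SourceDistribution; only a requirement that already
# points at a built wheel (is_wheel) resolves to WheelDistribution.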
- if install_req.editable: - return SourceDistribution(install_req) - - # If it's a wheel, it's a WheelDistribution - if install_req.is_wheel: - return WheelDistribution(install_req) - - # Otherwise, a SourceDistribution - return SourceDistribution(install_req) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py deleted file mode 100644 index 6039a0543204739849335ad27894cf64224ad828..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -import re -import sys -import os - -from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style, BEL -from .winterm import WinTerm, WinColor, WinStyle -from .win32 import windll, winapi_test - - -winterm = None -if windll is not None: - winterm = WinTerm() - - -class StreamWrapper(object): - ''' - Wraps a stream (such as stdout), acting as a transparent proxy for all - attribute access apart from method 'write()', which is delegated to our - Converter instance. - ''' - def __init__(self, wrapped, converter): - # double-underscore everything to prevent clashes with names of - # attributes on the wrapped stream object. - self.__wrapped = wrapped - self.__convertor = converter - - def __getattr__(self, name): - return getattr(self.__wrapped, name) - - def __enter__(self, *args, **kwargs): - # special method lookup bypasses __getattr__/__getattribute__, see - # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit - # thus, contextlib magic methods are not proxied via __getattr__ - return self.__wrapped.__enter__(*args, **kwargs) - - def __exit__(self, *args, **kwargs): - return self.__wrapped.__exit__(*args, **kwargs) - - def write(self, text): - self.__convertor.write(text) - - def isatty(self): - stream = self.__wrapped - if 'PYCHARM_HOSTED' in os.environ: - if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__): - return True - try: - stream_isatty = stream.isatty - except AttributeError: - return False - else: - return stream_isatty() - - @property - def closed(self): - stream = self.__wrapped - try: - return stream.closed - except AttributeError: - return True - - -class AnsiToWin32(object): - ''' - Implements a 'write()' method which, on Windows, will strip ANSI character - sequences from the text, and if outputting to a tty, will convert them into - win32 function calls. - ''' - ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?') # Control Sequence Introducer - ANSI_OSC_RE = re.compile('\001?\033\\]([^\a]*)(\a)\002?') # Operating System Command - - def __init__(self, wrapped, convert=None, strip=None, autoreset=False): - # The wrapped stream (normally sys.stdout or sys.stderr) - self.wrapped = wrapped - - # should we reset colors to defaults after every .write() - self.autoreset = autoreset - - # create the proxy wrapping our output stream - self.stream = StreamWrapper(wrapped, self) - - on_windows = os.name == 'nt' - # We test if the WinAPI works, because even if we are on Windows - # we may be using a terminal that doesn't support the WinAPI - # (e.g. Cygwin Terminal). In this case it's up to the terminal - # to support the ANSI codes. 
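# Typical use of this wrapper (a hedged sketch, following the decisions made just
# below): wrap a real stream and write ANSI text through the proxy, e.g.
#   wrapped = AnsiToWin32(sys.stdout)
#   wrapped.stream.write('\033[31mred\033[0m\n')
# When the WinAPI test passes on Windows and the stream is a tty, CSI sequences are
# converted into win32 color calls; on a terminal that already understands ANSI they
# are passed through unchanged, and they are stripped when the output is not a tty.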
- conversion_supported = on_windows and winapi_test() - - # should we strip ANSI sequences from our output? - if strip is None: - strip = conversion_supported or (not self.stream.closed and not self.stream.isatty()) - self.strip = strip - - # should we should convert ANSI sequences into win32 calls? - if convert is None: - convert = conversion_supported and not self.stream.closed and self.stream.isatty() - self.convert = convert - - # dict of ansi codes to win32 functions and parameters - self.win32_calls = self.get_win32_calls() - - # are we wrapping stderr? - self.on_stderr = self.wrapped is sys.stderr - - def should_wrap(self): - ''' - True if this class is actually needed. If false, then the output - stream will not be affected, nor will win32 calls be issued, so - wrapping stdout is not actually required. This will generally be - False on non-Windows platforms, unless optional functionality like - autoreset has been requested using kwargs to init() - ''' - return self.convert or self.strip or self.autoreset - - def get_win32_calls(self): - if self.convert and winterm: - return { - AnsiStyle.RESET_ALL: (winterm.reset_all, ), - AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT), - AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL), - AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL), - AnsiFore.BLACK: (winterm.fore, WinColor.BLACK), - AnsiFore.RED: (winterm.fore, WinColor.RED), - AnsiFore.GREEN: (winterm.fore, WinColor.GREEN), - AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW), - AnsiFore.BLUE: (winterm.fore, WinColor.BLUE), - AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA), - AnsiFore.CYAN: (winterm.fore, WinColor.CYAN), - AnsiFore.WHITE: (winterm.fore, WinColor.GREY), - AnsiFore.RESET: (winterm.fore, ), - AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True), - AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True), - AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True), - AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True), - AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True), - AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True), - AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True), - AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True), - AnsiBack.BLACK: (winterm.back, WinColor.BLACK), - AnsiBack.RED: (winterm.back, WinColor.RED), - AnsiBack.GREEN: (winterm.back, WinColor.GREEN), - AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW), - AnsiBack.BLUE: (winterm.back, WinColor.BLUE), - AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA), - AnsiBack.CYAN: (winterm.back, WinColor.CYAN), - AnsiBack.WHITE: (winterm.back, WinColor.GREY), - AnsiBack.RESET: (winterm.back, ), - AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True), - AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True), - AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True), - AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True), - AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True), - AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True), - AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True), - AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True), - } - return dict() - - def write(self, text): - if self.strip or self.convert: - self.write_and_convert(text) - else: - self.wrapped.write(text) - self.wrapped.flush() - if self.autoreset: - self.reset_all() - - - def reset_all(self): - if self.convert: - self.call_win32('m', (0,)) - elif not self.strip and not self.stream.closed: - 
self.wrapped.write(Style.RESET_ALL) - - - def write_and_convert(self, text): - ''' - Write the given text to our wrapped stream, stripping any ANSI - sequences from the text, and optionally converting them into win32 - calls. - ''' - cursor = 0 - text = self.convert_osc(text) - for match in self.ANSI_CSI_RE.finditer(text): - start, end = match.span() - self.write_plain_text(text, cursor, start) - self.convert_ansi(*match.groups()) - cursor = end - self.write_plain_text(text, cursor, len(text)) - - - def write_plain_text(self, text, start, end): - if start < end: - self.wrapped.write(text[start:end]) - self.wrapped.flush() - - - def convert_ansi(self, paramstring, command): - if self.convert: - params = self.extract_params(command, paramstring) - self.call_win32(command, params) - - - def extract_params(self, command, paramstring): - if command in 'Hf': - params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';')) - while len(params) < 2: - # defaults: - params = params + (1,) - else: - params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0) - if len(params) == 0: - # defaults: - if command in 'JKm': - params = (0,) - elif command in 'ABCD': - params = (1,) - - return params - - - def call_win32(self, command, params): - if command == 'm': - for param in params: - if param in self.win32_calls: - func_args = self.win32_calls[param] - func = func_args[0] - args = func_args[1:] - kwargs = dict(on_stderr=self.on_stderr) - func(*args, **kwargs) - elif command in 'J': - winterm.erase_screen(params[0], on_stderr=self.on_stderr) - elif command in 'K': - winterm.erase_line(params[0], on_stderr=self.on_stderr) - elif command in 'Hf': # cursor position - absolute - winterm.set_cursor_position(params, on_stderr=self.on_stderr) - elif command in 'ABCD': # cursor position - relative - n = params[0] - # A - up, B - down, C - forward, D - back - x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command] - winterm.cursor_adjust(x, y, on_stderr=self.on_stderr) - - - def convert_osc(self, text): - for match in self.ANSI_OSC_RE.finditer(text): - start, end = match.span() - text = text[:start] + text[end:] - paramstring, command = match.groups() - if command == BEL: - if paramstring.count(";") == 1: - params = paramstring.split(";") - # 0 - change title and icon (we will only change title) - # 1 - change icon (we don't support this) - # 2 - change title - if params[0] in '02': - winterm.set_title(params[1]) - return text diff --git a/spaces/algomuffin/jojo_fork/e4e/models/latent_codes_pool.py b/spaces/algomuffin/jojo_fork/e4e/models/latent_codes_pool.py deleted file mode 100644 index 0281d4b5e80f8eb26e824fa35b4f908dcb6634e6..0000000000000000000000000000000000000000 --- a/spaces/algomuffin/jojo_fork/e4e/models/latent_codes_pool.py +++ /dev/null @@ -1,55 +0,0 @@ -import random -import torch - - -class LatentCodesPool: - """This class implements latent codes buffer that stores previously generated w latent codes. - This buffer enables us to update discriminators using a history of generated w's - rather than the ones produced by the latest encoder. - """ - - def __init__(self, pool_size): - """Initialize the ImagePool class - Parameters: - pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created - """ - self.pool_size = pool_size - if self.pool_size > 0: # create an empty pool - self.num_ws = 0 - self.ws = [] - - def query(self, ws): - """Return w's from the pool. 
- Parameters: - ws: the latest generated w's from the generator - Returns w's from the buffer. - By 50/100, the buffer will return input w's. - By 50/100, the buffer will return w's previously stored in the buffer, - and insert the current w's to the buffer. - """ - if self.pool_size == 0: # if the buffer size is 0, do nothing - return ws - return_ws = [] - for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512) - # w = torch.unsqueeze(image.data, 0) - if w.ndim == 2: - i = random.randint(0, len(w) - 1) # apply a random latent index as a candidate - w = w[i] - self.handle_w(w, return_ws) - return_ws = torch.stack(return_ws, 0) # collect all the images and return - return return_ws - - def handle_w(self, w, return_ws): - if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer - self.num_ws = self.num_ws + 1 - self.ws.append(w) - return_ws.append(w) - else: - p = random.uniform(0, 1) - if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer - random_id = random.randint(0, self.pool_size - 1) # randint is inclusive - tmp = self.ws[random_id].clone() - self.ws[random_id] = w - return_ws.append(tmp) - else: # by another 50% chance, the buffer will return the current image - return_ws.append(w) diff --git a/spaces/ali-ghamdan/realesrgan-models/realesrgan/data/__init__.py b/spaces/ali-ghamdan/realesrgan-models/realesrgan/data/__init__.py deleted file mode 100644 index a3f8fdd1aa47c12de9687c578094303eb7369246..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/realesrgan-models/realesrgan/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import dataset modules for registry -# scan all the files that end with '_dataset.py' under the data folder -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames] diff --git a/spaces/aliabd/SummerTime/evaluation/base_metric.py b/spaces/aliabd/SummerTime/evaluation/base_metric.py deleted file mode 100644 index fc6349011a2b7971ba7330e0d28579d9fe5a94fb..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/evaluation/base_metric.py +++ /dev/null @@ -1,27 +0,0 @@ -from typing import List, Tuple, Dict - - -class SummMetric: - metric_name: str = None - range: Tuple[float, float] = None - higher_is_better: bool = None - requires_heavy_compute: bool = None - - def evaluate( - self, - # TODO zhangir: integrate with dataset api - inputs: List[str], - targets: List[str], - keys: List[str], - ) -> Dict[str, float]: - """ - All metrics should have this function. - :input: A list of summaries. - :target: A list of target summaries corresponding to each entry of input. - :keys: Which metrics to return, - e.g, ['rouge_1_f_score', 'rouge_2_f_score'] - :return: A dictionary with keys metrics and values scores. - """ - raise NotImplementedError( - "the base class for metrics shouldn't be instantiated!" 
- ) diff --git a/spaces/aliabid94/AutoGPT/autogpt/json_utils/__init__.py b/spaces/aliabid94/AutoGPT/autogpt/json_utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/allknowingroger/Image-Models-Test46/app.py b/spaces/allknowingroger/Image-Models-Test46/app.py deleted file mode 100644 index 99dfa0247e77ee5463d2cb9872420328776c57fb..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test46/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Yntec/Noosphere_v3_CVAE", - "dpwm/lora-trained-xl-2", - "vic-yes/fast-efmediastyle-class", - "georgeNakayama/textual_inversion_scnnt_710", - "digiplay/whatamix_v1", - "Yntec/RealRainbows", - "Yntec/yabalMixTrue25D_v2_VAE", - "arham061/arham-lora", - "TheUpperCaseGuy/finetune-lora-stable-diffusion", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for 
model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alphunt/diffdock-alphunt-demo/utils/torsion.py b/spaces/alphunt/diffdock-alphunt-demo/utils/torsion.py deleted file mode 100644 index e25ca42d989b137132b789a848c2cb54b85c0ec4..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/utils/torsion.py +++ /dev/null @@ -1,94 +0,0 @@ -import networkx as nx -import numpy as np -import torch, copy -from scipy.spatial.transform import Rotation as R -from torch_geometric.utils import to_networkx -from torch_geometric.data import Data - -""" - Preprocessing and computation for torsional updates to conformers -""" - - -def get_transformation_mask(pyg_data): - G = to_networkx(pyg_data.to_homogeneous(), to_undirected=False) - to_rotate = [] - edges = pyg_data['ligand', 'ligand'].edge_index.T.numpy() - for i in range(0, edges.shape[0], 2): - assert edges[i, 0] == edges[i+1, 1] - - G2 = G.to_undirected() - G2.remove_edge(*edges[i]) - if not nx.is_connected(G2): - l = list(sorted(nx.connected_components(G2), key=len)[0]) - if len(l) > 1: - if edges[i, 0] in l: - to_rotate.append([]) - to_rotate.append(l) - else: - to_rotate.append(l) - to_rotate.append([]) - continue - to_rotate.append([]) - to_rotate.append([]) - - mask_edges = np.asarray([0 if len(l) == 0 else 1 for l in to_rotate], dtype=bool) - mask_rotate = np.zeros((np.sum(mask_edges), len(G.nodes())), dtype=bool) - idx = 0 - for i in range(len(G.edges())): - if mask_edges[i]: - mask_rotate[idx][np.asarray(to_rotate[i], dtype=int)] = True - idx += 1 - - return mask_edges, mask_rotate - - -def modify_conformer_torsion_angles(pos, edge_index, mask_rotate, torsion_updates, as_numpy=False): - pos = copy.deepcopy(pos) - if type(pos) != np.ndarray: pos = pos.cpu().numpy() - - for idx_edge, e in enumerate(edge_index.cpu().numpy()): - if torsion_updates[idx_edge] == 0: - continue - u, v = e[0], e[1] - - # check if need to reverse the edge, v should be connected to the part that gets rotated - assert not mask_rotate[idx_edge, u] - assert mask_rotate[idx_edge, v] - - rot_vec = pos[u] - pos[v] # convention: positive rotation if pointing inwards - rot_vec = rot_vec * torsion_updates[idx_edge] / np.linalg.norm(rot_vec) # idx_edge! 
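# The line above builds an axis-angle ("rotation vector") update: the unit vector
# along the rotatable bond u -> v is the rotation axis, and its length is set to
# the requested torsion change in radians. scipy's Rotation.from_rotvec turns it
# into a 3x3 matrix below, and only the atoms selected by mask_rotate are rotated,
# about the pivot atom v (hence the "- pos[v] ... + pos[v]" shift).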
- rot_mat = R.from_rotvec(rot_vec).as_matrix() - - pos[mask_rotate[idx_edge]] = (pos[mask_rotate[idx_edge]] - pos[v]) @ rot_mat.T + pos[v] - - if not as_numpy: pos = torch.from_numpy(pos.astype(np.float32)) - return pos - - -def perturb_batch(data, torsion_updates, split=False, return_updates=False): - if type(data) is Data: - return modify_conformer_torsion_angles(data.pos, - data.edge_index.T[data.edge_mask], - data.mask_rotate, torsion_updates) - pos_new = [] if split else copy.deepcopy(data.pos) - edges_of_interest = data.edge_index.T[data.edge_mask] - idx_node = 0 - idx_edges = 0 - torsion_update_list = [] - for i, mask_rotate in enumerate(data.mask_rotate): - pos = data.pos[idx_node:idx_node + mask_rotate.shape[1]] - edges = edges_of_interest[idx_edges:idx_edges + mask_rotate.shape[0]] - idx_node - torsion_update = torsion_updates[idx_edges:idx_edges + mask_rotate.shape[0]] - torsion_update_list.append(torsion_update) - pos_new_ = modify_conformer_torsion_angles(pos, edges, mask_rotate, torsion_update) - if split: - pos_new.append(pos_new_) - else: - pos_new[idx_node:idx_node + mask_rotate.shape[1]] = pos_new_ - - idx_node += mask_rotate.shape[1] - idx_edges += mask_rotate.shape[0] - if return_updates: - return pos_new, torsion_update_list - return pos_new \ No newline at end of file diff --git a/spaces/altafalam3/Text-Summarizer/extractive_summarizer/bert_parent.py b/spaces/altafalam3/Text-Summarizer/extractive_summarizer/bert_parent.py deleted file mode 100644 index 4891d39a8c284d04773d34550d8ccbb65938a0af..0000000000000000000000000000000000000000 --- a/spaces/altafalam3/Text-Summarizer/extractive_summarizer/bert_parent.py +++ /dev/null @@ -1,176 +0,0 @@ -from typing import List, Union - -import torch -import streamlit as st -import numpy as np -from numpy import ndarray -from transformers import (AlbertModel, AlbertTokenizer, BertModel, - BertTokenizer, DistilBertModel, DistilBertTokenizer, - PreTrainedModel, PreTrainedTokenizer, XLMModel, - XLMTokenizer, XLNetModel, XLNetTokenizer) - -@st.cache() -def load_hf_model(base_model, model_name, device): - model = base_model.from_pretrained(model_name, output_hidden_states=True).to(device) - return model - -class BertParent(object): - """ - Base handler for BERT models. - """ - - MODELS = { - 'bert-base-uncased': (BertModel, BertTokenizer), - 'bert-large-uncased': (BertModel, BertTokenizer), - 'xlnet-base-cased': (XLNetModel, XLNetTokenizer), - 'xlm-mlm-enfr-1024': (XLMModel, XLMTokenizer), - 'distilbert-base-uncased': (DistilBertModel, DistilBertTokenizer), - 'albert-base-v1': (AlbertModel, AlbertTokenizer), - 'albert-large-v1': (AlbertModel, AlbertTokenizer) - } - - def __init__( - self, - model: str, - custom_model: PreTrainedModel = None, - custom_tokenizer: PreTrainedTokenizer = None, - gpu_id: int = 0, - ): - """ - :param model: Model is the string path for the bert weights. If given a keyword, the s3 path will be used. - :param custom_model: This is optional if a custom bert model is used. - :param custom_tokenizer: Place to use custom tokenizer. - """ - base_model, base_tokenizer = self.MODELS.get(model, (None, None)) - - self.device = torch.device("cpu") - if torch.cuda.is_available(): - assert ( - isinstance(gpu_id, int) and (0 <= gpu_id and gpu_id < torch.cuda.device_count()) - ), f"`gpu_id` must be an integer between 0 to {torch.cuda.device_count() - 1}. 
But got: {gpu_id}" - - self.device = torch.device(f"cuda:{gpu_id}") - - if custom_model: - self.model = custom_model.to(self.device) - else: - # self.model = base_model.from_pretrained( - # model, output_hidden_states=True).to(self.device) - self.model = load_hf_model(base_model, model, self.device) - - if custom_tokenizer: - self.tokenizer = custom_tokenizer - else: - self.tokenizer = base_tokenizer.from_pretrained(model) - - self.model.eval() - - - def tokenize_input(self, text: str) -> torch.tensor: - """ - Tokenizes the text input. - :param text: Text to tokenize. - :return: Returns a torch tensor. - """ - tokenized_text = self.tokenizer.tokenize(text) - indexed_tokens = self.tokenizer.convert_tokens_to_ids(tokenized_text) - return torch.tensor([indexed_tokens]).to(self.device) - - def _pooled_handler(self, hidden: torch.Tensor, - reduce_option: str) -> torch.Tensor: - """ - Handles torch tensor. - :param hidden: The hidden torch tensor to process. - :param reduce_option: The reduce option to use, such as mean, etc. - :return: Returns a torch tensor. - """ - - if reduce_option == 'max': - return hidden.max(dim=1)[0].squeeze() - - elif reduce_option == 'median': - return hidden.median(dim=1)[0].squeeze() - - return hidden.mean(dim=1).squeeze() - - def extract_embeddings( - self, - text: str, - hidden: Union[List[int], int] = -2, - reduce_option: str = 'mean', - hidden_concat: bool = False, - ) -> torch.Tensor: - """ - Extracts the embeddings for the given text. - :param text: The text to extract embeddings for. - :param hidden: The hidden layer(s) to use for a readout handler. - :param squeeze: If we should squeeze the outputs (required for some layers). - :param reduce_option: How we should reduce the items. - :param hidden_concat: Whether or not to concat multiple hidden layers. - :return: A torch vector. - """ - tokens_tensor = self.tokenize_input(text) - pooled, hidden_states = self.model(tokens_tensor)[-2:] - - # deprecated temporary keyword functions. - if reduce_option == 'concat_last_4': - last_4 = [hidden_states[i] for i in (-1, -2, -3, -4)] - cat_hidden_states = torch.cat(tuple(last_4), dim=-1) - return torch.mean(cat_hidden_states, dim=1).squeeze() - - elif reduce_option == 'reduce_last_4': - last_4 = [hidden_states[i] for i in (-1, -2, -3, -4)] - return torch.cat(tuple(last_4), dim=1).mean(axis=1).squeeze() - - elif type(hidden) == int: - hidden_s = hidden_states[hidden] - return self._pooled_handler(hidden_s, reduce_option) - - elif hidden_concat: - last_states = [hidden_states[i] for i in hidden] - cat_hidden_states = torch.cat(tuple(last_states), dim=-1) - return torch.mean(cat_hidden_states, dim=1).squeeze() - - last_states = [hidden_states[i] for i in hidden] - hidden_s = torch.cat(tuple(last_states), dim=1) - - return self._pooled_handler(hidden_s, reduce_option) - - def create_matrix( - self, - content: List[str], - hidden: Union[List[int], int] = -2, - reduce_option: str = 'mean', - hidden_concat: bool = False, - ) -> ndarray: - """ - Create matrix from the embeddings. - :param content: The list of sentences. - :param hidden: Which hidden layer to use. - :param reduce_option: The reduce option to run. - :param hidden_concat: Whether or not to concat multiple hidden layers. - :return: A numpy array matrix of the given content. 
- """ - - return np.asarray([ - np.squeeze(self.extract_embeddings( - t, hidden=hidden, reduce_option=reduce_option, hidden_concat=hidden_concat - ).data.cpu().numpy()) for t in content - ]) - - def __call__( - self, - content: List[str], - hidden: int = -2, - reduce_option: str = 'mean', - hidden_concat: bool = False, - ) -> ndarray: - """ - Create matrix from the embeddings. - :param content: The list of sentences. - :param hidden: Which hidden layer to use. - :param reduce_option: The reduce option to run. - :param hidden_concat: Whether or not to concat multiple hidden layers. - :return: A numpy array matrix of the given content. - """ - return self.create_matrix(content, hidden, reduce_option, hidden_concat) \ No newline at end of file diff --git a/spaces/amankishore/sjc/adapt.py b/spaces/amankishore/sjc/adapt.py deleted file mode 100644 index 418252b461f7c95f948866152f8d82a0bb9c55a1..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/adapt.py +++ /dev/null @@ -1,163 +0,0 @@ -from pathlib import Path -import json -from math import sqrt -import numpy as np -import torch -from abc import ABCMeta, abstractmethod - - -class ScoreAdapter(metaclass=ABCMeta): - - @abstractmethod - def denoise(self, xs, σ, **kwargs): - pass - - def score(self, xs, σ, **kwargs): - Ds = self.denoise(xs, σ, **kwargs) - grad_log_p_t = (Ds - xs) / (σ ** 2) - return grad_log_p_t - - @abstractmethod - def data_shape(self): - return (3, 256, 256) # for example - - def samps_centered(self): - # if centered, samples expected to be in range [-1, 1], else [0, 1] - return True - - @property - @abstractmethod - def σ_max(self): - pass - - @property - @abstractmethod - def σ_min(self): - pass - - def cond_info(self, batch_size): - return {} - - @abstractmethod - def unet_is_cond(self): - return False - - @abstractmethod - def use_cls_guidance(self): - return False # most models do not use cls guidance - - def classifier_grad(self, xs, σ, ys): - raise NotImplementedError() - - @abstractmethod - def snap_t_to_nearest_tick(self, t): - # need to confirm for each model; continuous time model doesn't need this - return t, None - - @property - def device(self): - return self._device - - def checkpoint_root(self): - """the path at which the pretrained checkpoints are stored""" - with Path(__file__).resolve().with_name("env.json").open("r") as f: - root = json.load(f)['data_root'] - root = Path(root) / "diffusion_ckpts" - return root - - -def karras_t_schedule(ρ=7, N=10, σ_max=80, σ_min=0.002): - ts = [] - for i in range(N): - - t = ( - σ_max ** (1 / ρ) + (i / (N - 1)) * (σ_min ** (1 / ρ) - σ_max ** (1 / ρ)) - ) ** ρ - ts.append(t) - return ts - - -def power_schedule(σ_max, σ_min, num_stages): - σs = np.exp(np.linspace(np.log(σ_max), np.log(σ_min), num_stages)) - return σs - - -class Karras(): - - @classmethod - @torch.no_grad() - def inference( - cls, model, batch_size, num_t, *, - σ_max=80, cls_scaling=1, - init_xs=None, heun=True, - langevin=False, - S_churn=80, S_min=0.05, S_max=50, S_noise=1.003, - ): - σ_max = min(σ_max, model.σ_max) - σ_min = model.σ_min - ts = karras_t_schedule(ρ=7, N=num_t, σ_max=σ_max, σ_min=σ_min) - assert len(ts) == num_t - ts = [model.snap_t_to_nearest_tick(t)[0] for t in ts] - ts.append(0) # 0 is the destination - σ_max = ts[0] - - cond_inputs = model.cond_info(batch_size) - - def compute_step(xs, σ): - grad_log_p_t = model.score( - xs, σ, **(cond_inputs if model.unet_is_cond() else {}) - ) - if model.use_cls_guidance(): - grad_cls = model.classifier_grad(xs, σ, cond_inputs["y"]) - 
grad_cls = grad_cls * cls_scaling - grad_log_p_t += grad_cls - d_i = -1 * σ * grad_log_p_t - return d_i - - if init_xs is not None: - xs = init_xs.to(model.device) - else: - xs = σ_max * torch.randn( - batch_size, *model.data_shape(), device=model.device - ) - - yield xs - - for i in range(num_t): - t_i = ts[i] - - if langevin and (S_min < t_i and t_i < S_max): - xs, t_i = cls.noise_backward_in_time( - model, xs, t_i, S_noise, S_churn / num_t - ) - - Δt = ts[i+1] - t_i - - d_1 = compute_step(xs, σ=t_i) - xs_1 = xs + Δt * d_1 - - # Heun's 2nd order method; don't apply on the last step - if (not heun) or (ts[i+1] == 0): - xs = xs_1 - else: - d_2 = compute_step(xs_1, σ=ts[i+1]) - xs = xs + Δt * (d_1 + d_2) / 2 - - yield xs - - @staticmethod - def noise_backward_in_time(model, xs, t_i, S_noise, S_churn_i): - n = S_noise * torch.randn_like(xs) - γ_i = min(sqrt(2)-1, S_churn_i) - t_i_hat = t_i * (1 + γ_i) - t_i_hat = model.snap_t_to_nearest_tick(t_i_hat)[0] - xs = xs + n * sqrt(t_i_hat ** 2 - t_i ** 2) - return xs, t_i_hat - - -def test(): - pass - - -if __name__ == "__main__": - test() diff --git a/spaces/amankishore/sjc/ncsn/layers.py b/spaces/amankishore/sjc/ncsn/layers.py deleted file mode 100644 index 283889b86d0ad0bf06114602989cdb988f282770..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/ncsn/layers.py +++ /dev/null @@ -1,456 +0,0 @@ -import torch.nn as nn -import torch -from torch.nn.parameter import Parameter -import torch.nn.functional as F -from .normalization import * -from functools import partial -import math -import torch.nn.init as init - - -def get_act(config): - if config.model.nonlinearity.lower() == 'elu': - return nn.ELU() - elif config.model.nonlinearity.lower() == 'relu': - return nn.ReLU() - elif config.model.nonlinearity.lower() == 'lrelu': - return nn.LeakyReLU(negative_slope=0.2) - elif config.model.nonlinearity.lower() == 'swish': - def swish(x): - return x * torch.sigmoid(x) - return swish - else: - raise NotImplementedError('activation function does not exist!') - -def spectral_norm(layer, n_iters=1): - return torch.nn.utils.spectral_norm(layer, n_power_iterations=n_iters) - -def conv1x1(in_planes, out_planes, stride=1, bias=True, spec_norm=False): - "1x1 convolution" - conv = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, - padding=0, bias=bias) - if spec_norm: - conv = spectral_norm(conv) - return conv - - -def conv3x3(in_planes, out_planes, stride=1, bias=True, spec_norm=False): - "3x3 convolution with padding" - conv = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=bias) - if spec_norm: - conv = spectral_norm(conv) - - return conv - - -def stride_conv3x3(in_planes, out_planes, kernel_size, bias=True, spec_norm=False): - conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=2, - padding=kernel_size // 2, bias=bias) - if spec_norm: - conv = spectral_norm(conv) - return conv - - -def dilated_conv3x3(in_planes, out_planes, dilation, bias=True, spec_norm=False): - conv = nn.Conv2d(in_planes, out_planes, kernel_size=3, padding=dilation, dilation=dilation, bias=bias) - if spec_norm: - conv = spectral_norm(conv) - - return conv - -class CRPBlock(nn.Module): - def __init__(self, features, n_stages, act=nn.ReLU(), maxpool=True, spec_norm=False): - super().__init__() - self.convs = nn.ModuleList() - for i in range(n_stages): - self.convs.append(conv3x3(features, features, stride=1, bias=False, spec_norm=spec_norm)) - self.n_stages = n_stages - if maxpool: - self.maxpool = 
nn.MaxPool2d(kernel_size=5, stride=1, padding=2) - else: - self.maxpool = nn.AvgPool2d(kernel_size=5, stride=1, padding=2) - - self.act = act - - def forward(self, x): - x = self.act(x) - path = x - for i in range(self.n_stages): - path = self.maxpool(path) - path = self.convs[i](path) - x = path + x - return x - - -class CondCRPBlock(nn.Module): - def __init__(self, features, n_stages, num_classes, normalizer, act=nn.ReLU(), spec_norm=False): - super().__init__() - self.convs = nn.ModuleList() - self.norms = nn.ModuleList() - self.normalizer = normalizer - for i in range(n_stages): - self.norms.append(normalizer(features, num_classes, bias=True)) - self.convs.append(conv3x3(features, features, stride=1, bias=False, spec_norm=spec_norm)) - - self.n_stages = n_stages - self.maxpool = nn.AvgPool2d(kernel_size=5, stride=1, padding=2) - self.act = act - - def forward(self, x, y): - x = self.act(x) - path = x - for i in range(self.n_stages): - path = self.norms[i](path, y) - path = self.maxpool(path) - path = self.convs[i](path) - - x = path + x - return x - - -class RCUBlock(nn.Module): - def __init__(self, features, n_blocks, n_stages, act=nn.ReLU(), spec_norm=False): - super().__init__() - - for i in range(n_blocks): - for j in range(n_stages): - setattr(self, '{}_{}_conv'.format(i + 1, j + 1), conv3x3(features, features, stride=1, bias=False, - spec_norm=spec_norm)) - - self.stride = 1 - self.n_blocks = n_blocks - self.n_stages = n_stages - self.act = act - - def forward(self, x): - for i in range(self.n_blocks): - residual = x - for j in range(self.n_stages): - x = self.act(x) - x = getattr(self, '{}_{}_conv'.format(i + 1, j + 1))(x) - - x += residual - return x - - -class CondRCUBlock(nn.Module): - def __init__(self, features, n_blocks, n_stages, num_classes, normalizer, act=nn.ReLU(), spec_norm=False): - super().__init__() - - for i in range(n_blocks): - for j in range(n_stages): - setattr(self, '{}_{}_norm'.format(i + 1, j + 1), normalizer(features, num_classes, bias=True)) - setattr(self, '{}_{}_conv'.format(i + 1, j + 1), - conv3x3(features, features, stride=1, bias=False, spec_norm=spec_norm)) - - self.stride = 1 - self.n_blocks = n_blocks - self.n_stages = n_stages - self.act = act - self.normalizer = normalizer - - def forward(self, x, y): - for i in range(self.n_blocks): - residual = x - for j in range(self.n_stages): - x = getattr(self, '{}_{}_norm'.format(i + 1, j + 1))(x, y) - x = self.act(x) - x = getattr(self, '{}_{}_conv'.format(i + 1, j + 1))(x) - - x += residual - return x - - -class MSFBlock(nn.Module): - def __init__(self, in_planes, features, spec_norm=False): - """ - :param in_planes: tuples of input planes - """ - super().__init__() - assert isinstance(in_planes, list) or isinstance(in_planes, tuple) - self.convs = nn.ModuleList() - self.features = features - - for i in range(len(in_planes)): - self.convs.append(conv3x3(in_planes[i], features, stride=1, bias=True, spec_norm=spec_norm)) - - def forward(self, xs, shape): - sums = torch.zeros(xs[0].shape[0], self.features, *shape, device=xs[0].device) - for i in range(len(self.convs)): - h = self.convs[i](xs[i]) - h = F.interpolate(h, size=shape, mode='bilinear', align_corners=True) - sums += h - return sums - - -class CondMSFBlock(nn.Module): - def __init__(self, in_planes, features, num_classes, normalizer, spec_norm=False): - """ - :param in_planes: tuples of input planes - """ - super().__init__() - assert isinstance(in_planes, list) or isinstance(in_planes, tuple) - - self.convs = nn.ModuleList() - self.norms = 
nn.ModuleList() - self.features = features - self.normalizer = normalizer - - for i in range(len(in_planes)): - self.convs.append(conv3x3(in_planes[i], features, stride=1, bias=True, spec_norm=spec_norm)) - self.norms.append(normalizer(in_planes[i], num_classes, bias=True)) - - def forward(self, xs, y, shape): - sums = torch.zeros(xs[0].shape[0], self.features, *shape, device=xs[0].device) - for i in range(len(self.convs)): - h = self.norms[i](xs[i], y) - h = self.convs[i](h) - h = F.interpolate(h, size=shape, mode='bilinear', align_corners=True) - sums += h - return sums - - -class RefineBlock(nn.Module): - def __init__(self, in_planes, features, act=nn.ReLU(), start=False, end=False, maxpool=True, spec_norm=False): - super().__init__() - - assert isinstance(in_planes, tuple) or isinstance(in_planes, list) - self.n_blocks = n_blocks = len(in_planes) - - self.adapt_convs = nn.ModuleList() - for i in range(n_blocks): - self.adapt_convs.append( - RCUBlock(in_planes[i], 2, 2, act, spec_norm=spec_norm) - ) - - self.output_convs = RCUBlock(features, 3 if end else 1, 2, act, spec_norm=spec_norm) - - if not start: - self.msf = MSFBlock(in_planes, features, spec_norm=spec_norm) - - self.crp = CRPBlock(features, 2, act, maxpool=maxpool, spec_norm=spec_norm) - - def forward(self, xs, output_shape): - assert isinstance(xs, tuple) or isinstance(xs, list) - hs = [] - for i in range(len(xs)): - h = self.adapt_convs[i](xs[i]) - hs.append(h) - - if self.n_blocks > 1: - h = self.msf(hs, output_shape) - else: - h = hs[0] - - h = self.crp(h) - h = self.output_convs(h) - - return h - - - -class CondRefineBlock(nn.Module): - def __init__(self, in_planes, features, num_classes, normalizer, act=nn.ReLU(), start=False, end=False, spec_norm=False): - super().__init__() - - assert isinstance(in_planes, tuple) or isinstance(in_planes, list) - self.n_blocks = n_blocks = len(in_planes) - - self.adapt_convs = nn.ModuleList() - for i in range(n_blocks): - self.adapt_convs.append( - CondRCUBlock(in_planes[i], 2, 2, num_classes, normalizer, act, spec_norm=spec_norm) - ) - - self.output_convs = CondRCUBlock(features, 3 if end else 1, 2, num_classes, normalizer, act, spec_norm=spec_norm) - - if not start: - self.msf = CondMSFBlock(in_planes, features, num_classes, normalizer, spec_norm=spec_norm) - - self.crp = CondCRPBlock(features, 2, num_classes, normalizer, act, spec_norm=spec_norm) - - def forward(self, xs, y, output_shape): - assert isinstance(xs, tuple) or isinstance(xs, list) - hs = [] - for i in range(len(xs)): - h = self.adapt_convs[i](xs[i], y) - hs.append(h) - - if self.n_blocks > 1: - h = self.msf(hs, y, output_shape) - else: - h = hs[0] - - h = self.crp(h, y) - h = self.output_convs(h, y) - - return h - - -class ConvMeanPool(nn.Module): - def __init__(self, input_dim, output_dim, kernel_size=3, biases=True, adjust_padding=False, spec_norm=False): - super().__init__() - if not adjust_padding: - conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride=1, padding=kernel_size // 2, bias=biases) - if spec_norm: - conv = spectral_norm(conv) - self.conv = conv - else: - conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride=1, padding=kernel_size // 2, bias=biases) - if spec_norm: - conv = spectral_norm(conv) - - self.conv = nn.Sequential( - nn.ZeroPad2d((1, 0, 1, 0)), - conv - ) - - def forward(self, inputs): - output = self.conv(inputs) - output = sum([output[:, :, ::2, ::2], output[:, :, 1::2, ::2], - output[:, :, ::2, 1::2], output[:, :, 1::2, 1::2]]) / 4.
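- # Summing the four stride-2 slice offsets and dividing by 4 averages each 2x2 block (a stride-2 mean pool).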
- return output - -class MeanPoolConv(nn.Module): - def __init__(self, input_dim, output_dim, kernel_size=3, biases=True, spec_norm=False): - super().__init__() - self.conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride=1, padding=kernel_size // 2, bias=biases) - if spec_norm: - self.conv = spectral_norm(self.conv) - - def forward(self, inputs): - output = inputs - output = sum([output[:, :, ::2, ::2], output[:, :, 1::2, ::2], - output[:, :, ::2, 1::2], output[:, :, 1::2, 1::2]]) / 4. - return self.conv(output) - - -class UpsampleConv(nn.Module): - def __init__(self, input_dim, output_dim, kernel_size=3, biases=True, spec_norm=False): - super().__init__() - self.conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride=1, padding=kernel_size // 2, bias=biases) - if spec_norm: - self.conv = spectral_norm(self.conv) - self.pixelshuffle = nn.PixelShuffle(upscale_factor=2) - - def forward(self, inputs): - output = inputs - output = torch.cat([output, output, output, output], dim=1) - output = self.pixelshuffle(output) - return self.conv(output) - - -class ConditionalResidualBlock(nn.Module): - def __init__(self, input_dim, output_dim, num_classes, resample=None, act=nn.ELU(), - normalization=ConditionalBatchNorm2d, adjust_padding=False, dilation=None, spec_norm=False): - super().__init__() - self.non_linearity = act - self.input_dim = input_dim - self.output_dim = output_dim - self.resample = resample - self.normalization = normalization - if resample == 'down': - if dilation is not None: - self.conv1 = dilated_conv3x3(input_dim, input_dim, dilation=dilation, spec_norm=spec_norm) - self.normalize2 = normalization(input_dim, num_classes) - self.conv2 = dilated_conv3x3(input_dim, output_dim, dilation=dilation, spec_norm=spec_norm) - conv_shortcut = partial(dilated_conv3x3, dilation=dilation, spec_norm=spec_norm) - else: - self.conv1 = conv3x3(input_dim, input_dim, spec_norm=spec_norm) - self.normalize2 = normalization(input_dim, num_classes) - self.conv2 = ConvMeanPool(input_dim, output_dim, 3, adjust_padding=adjust_padding, spec_norm=spec_norm) - conv_shortcut = partial(ConvMeanPool, kernel_size=1, adjust_padding=adjust_padding, spec_norm=spec_norm) - - elif resample is None: - if dilation is not None: - conv_shortcut = partial(dilated_conv3x3, dilation=dilation, spec_norm=spec_norm) - self.conv1 = dilated_conv3x3(input_dim, output_dim, dilation=dilation, spec_norm=spec_norm) - self.normalize2 = normalization(output_dim, num_classes) - self.conv2 = dilated_conv3x3(output_dim, output_dim, dilation=dilation, spec_norm=spec_norm) - else: - conv_shortcut = nn.Conv2d - self.conv1 = conv3x3(input_dim, output_dim, spec_norm=spec_norm) - self.normalize2 = normalization(output_dim, num_classes) - self.conv2 = conv3x3(output_dim, output_dim, spec_norm=spec_norm) - else: - raise Exception('invalid resample value') - - if output_dim != input_dim or resample is not None: - self.shortcut = conv_shortcut(input_dim, output_dim) - - self.normalize1 = normalization(input_dim, num_classes) - - - def forward(self, x, y): - output = self.normalize1(x, y) - output = self.non_linearity(output) - output = self.conv1(output) - output = self.normalize2(output, y) - output = self.non_linearity(output) - output = self.conv2(output) - - if self.output_dim == self.input_dim and self.resample is None: - shortcut = x - else: - shortcut = self.shortcut(x) - - return shortcut + output - - -class ResidualBlock(nn.Module): - def __init__(self, input_dim, output_dim, resample=None, act=nn.ELU(), - 
normalization=nn.BatchNorm2d, adjust_padding=False, dilation=None, spec_norm=False): - super().__init__() - self.non_linearity = act - self.input_dim = input_dim - self.output_dim = output_dim - self.resample = resample - self.normalization = normalization - if resample == 'down': - if dilation is not None: - self.conv1 = dilated_conv3x3(input_dim, input_dim, dilation=dilation, spec_norm=spec_norm) - self.normalize2 = normalization(input_dim) - self.conv2 = dilated_conv3x3(input_dim, output_dim, dilation=dilation, spec_norm=spec_norm) - conv_shortcut = partial(dilated_conv3x3, dilation=dilation, spec_norm=spec_norm) - else: - self.conv1 = conv3x3(input_dim, input_dim, spec_norm=spec_norm) - self.normalize2 = normalization(input_dim) - self.conv2 = ConvMeanPool(input_dim, output_dim, 3, adjust_padding=adjust_padding, spec_norm=spec_norm) - conv_shortcut = partial(ConvMeanPool, kernel_size=1, adjust_padding=adjust_padding, spec_norm=spec_norm) - - elif resample is None: - if dilation is not None: - conv_shortcut = partial(dilated_conv3x3, dilation=dilation, spec_norm=spec_norm) - self.conv1 = dilated_conv3x3(input_dim, output_dim, dilation=dilation, spec_norm=spec_norm) - self.normalize2 = normalization(output_dim) - self.conv2 = dilated_conv3x3(output_dim, output_dim, dilation=dilation, spec_norm=spec_norm) - else: - # conv_shortcut = nn.Conv2d ### Something wierd here. - conv_shortcut = partial(conv1x1, spec_norm=spec_norm) - self.conv1 = conv3x3(input_dim, output_dim, spec_norm=spec_norm) - self.normalize2 = normalization(output_dim) - self.conv2 = conv3x3(output_dim, output_dim, spec_norm=spec_norm) - else: - raise Exception('invalid resample value') - - if output_dim != input_dim or resample is not None: - self.shortcut = conv_shortcut(input_dim, output_dim) - - self.normalize1 = normalization(input_dim) - - - def forward(self, x): - output = self.normalize1(x) - output = self.non_linearity(output) - output = self.conv1(output) - output = self.normalize2(output) - output = self.non_linearity(output) - output = self.conv2(output) - - if self.output_dim == self.input_dim and self.resample is None: - shortcut = x - else: - shortcut = self.shortcut(x) - - return shortcut + output diff --git a/spaces/andresgtn/sidewalk-semantic-segmentation/app.py b/spaces/andresgtn/sidewalk-semantic-segmentation/app.py deleted file mode 100644 index ad12bb901d4051ae4cd0c7fcd387d1ba13100423..0000000000000000000000000000000000000000 --- a/spaces/andresgtn/sidewalk-semantic-segmentation/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import numpy as np -import gradio as gr -import torch -from torch import nn -from transformers import SegformerForSemanticSegmentation, SegformerFeatureExtractor - - -#extractor = AutoFeatureExtractor.from_pretrained("andresgtn/segformer-b0-finetuned-ade-64-64-finetuned-semantic-sidewalk") -extractor = SegformerFeatureExtractor() -model = SegformerForSemanticSegmentation.from_pretrained("andresgtn/segformer-b0-finetuned-ade-64-64-finetuned-semantic-sidewalk") - -def rescale_output_image(logits, image): - - upsampled_logits = nn.functional.interpolate( - logits, - size=image.shape[::-1][1:][::-1], # (height, width) - mode='bilinear', - align_corners=False - ) - pred_seg = upsampled_logits.argmax(dim=1)[0] - return pred_seg - -# classify function -def classify(im): - inputs = extractor(images=im, return_tensors="pt")#.to("cuda") - outputs = model(**inputs) - logits = outputs.logits - #classes = logits[0].detach().cpu().numpy().argmax(axis=0) - #classes = rescale_output_image(logits, 
im).detach().cpu().numpy() - classes = rescale_output_image(logits, im).detach().numpy() - colors = np.array([[128,0,0], [128,128,0], [0, 0, 128], [128,0,128], [0, 0, 0]]) - return colors[classes] - -# sample images -sample_images = [["https://s3.amazonaws.com/moonup/production/uploads/1664719956531-611f9702593efbee33a4f7c9.png"], -["https://s3.amazonaws.com/moonup/production/uploads/1664719956737-611f9702593efbee33a4f7c9.png"]] - -# define gradio interface -title = "Semantic segmentation on sidewalk images" -description = "Drop an image of a sidewalk" -interface = gr.Interface(classify, gr.Image(), 'image', examples=sample_images, - description=description, title=title)# FILL HERE - -interface.launch(debug=False) \ No newline at end of file diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/README.md b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/README.md deleted file mode 100644 index b6610df03d409633e572ef49d67a445d35a63967..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/README.md +++ /dev/null @@ -1,163 +0,0 @@ -# Grounding DINO - ---- - -[![arXiv](https://img.shields.io/badge/arXiv-2303.05499-b31b1b.svg)](https://arxiv.org/abs/2303.05499) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/wxWDt5UiwY8) -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/cMa77r3YrDk) -[![HuggingFace space](https://img.shields.io/badge/🤗-HuggingFace%20Space-cyan.svg)](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) - -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-mscoco)](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-odinw)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco-minival)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded) - - - -Official PyTorch implementation of [Grounding DINO](https://arxiv.org/abs/2303.05499), a stronger open-set object detector. Code is available now! - - -## Highlight - -- **Open-Set Detection.** Detect **everything** with language! -- **High Performancce.** COCO zero-shot **52.5 AP** (training without COCO data!). COCO fine-tune **63.0 AP**. -- **Flexible.** Collaboration with Stable Diffusion for Image Editting. - -## News -[2023/03/28] A YouTube [video](https://youtu.be/cMa77r3YrDk) about Grounding DINO and basic object detection prompt engineering. 
[[SkalskiP](https://github.com/SkalskiP)] \ -[2023/03/28] Add a [demo](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) on Hugging Face Space! \ -[2023/03/27] Support CPU-only mode. Now the model can run on machines without GPUs.\ -[2023/03/25] A [demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) for Grounding DINO is available at Colab. [[SkalskiP](https://github.com/SkalskiP)] \ -[2023/03/22] Code is available Now! - -
- -**Description** *(hero figure; alt text: ODinW)* -
- -## TODO - -- [x] Release inference code and demo. -- [x] Release checkpoints. -- [ ] Grounding DINO with Stable Diffusion and GLIGEN demos. -- [ ] Release training code. - -## Install - -If you have a CUDA environment, please make sure the environment variable `CUDA_HOME` is set. The package is compiled in CPU-only mode if no CUDA is available. - -```bash -pip install -e . -``` - -## Demo - -```bash -CUDA_VISIBLE_DEVICES=6 python demo/inference_on_a_image.py \ - -c /path/to/config \ - -p /path/to/checkpoint \ - -i .asset/cats.png \ - -o "outputs/0" \ - -t "cat ear." \ - [--cpu-only] # enable for CPU-only mode -``` -See `demo/inference_on_a_image.py` for more details. - -**Web UI** - -We also provide demo code that integrates Grounding DINO with a Gradio web UI. See `demo/gradio_app.py` for more details. - -## Checkpoints - -| | name | backbone | Data | box AP on COCO | Checkpoint | Config | -|---|---|---|---|---|---|---| -| 1 | GroundingDINO-T | Swin-T | O365, GoldG, Cap4M | 48.4 (zero-shot) / 57.2 (fine-tune) | Github link \| HF link | link |
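Checkpoints like these are usually driven from Python rather than the CLI demo. Below is a minimal zero-shot inference sketch, assuming the `load_model`, `load_image`, `predict`, and `annotate` helpers in `groundingdino.util.inference`; the config and weight paths are placeholders, not values taken from this README.

```python
# Minimal zero-shot detection sketch; helper names and file paths are assumptions.
import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",  # assumed Swin-T config path
    "weights/groundingdino_swint_ogc.pth",              # assumed checkpoint path
)

# load_image returns the original array (for drawing) and a normalized tensor (for the model).
image_source, image = load_image(".asset/cats.png")

# The caption names what to detect; thresholds drop low-confidence boxes and phrase matches.
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="cat ear.",
    box_threshold=0.35,
    text_threshold=0.25,
)

annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated_cats.jpg", annotated)
```

If this helper module is not present in a given snapshot of the repository, the `demo/inference_on_a_image.py` script above provides the same functionality from the command line.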
- -## Results - -**COCO Object Detection Results** *(figure; alt text: COCO)* - -**ODinW Object Detection Results** *(figure; alt text: ODinW)* - -**Marrying Grounding DINO with Stable Diffusion for Image Editing** *(figure; alt text: GD_SD)* - -**Marrying Grounding DINO with GLIGEN for more Detailed Image Editing** *(figure; alt text: GD_GLIGEN)* -
- -## Model - -Includes: a text backbone, an image backbone, a feature enhancer, a language-guided query selection, and a cross-modality decoder. - -![arch](.asset/arch.png) - - -## Acknowledgement - -Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work! - -We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work are available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox [detrex](https://github.com/IDEA-Research/detrex) is available as well. - -Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models. - - -## Citation - -If you find our work helpful for your research, please consider citing the following BibTeX entry. - -```bibtex -@inproceedings{ShilongLiu2023GroundingDM, - title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection}, - author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang}, - year={2023} -} -``` - - - - diff --git a/spaces/arbml/Ashaar/poetry_diacritizer/util/learning_rates.py b/spaces/arbml/Ashaar/poetry_diacritizer/util/learning_rates.py deleted file mode 100644 index dd3325b4ed746f2d65e00750e40156aef6b6d851..0000000000000000000000000000000000000000 --- a/spaces/arbml/Ashaar/poetry_diacritizer/util/learning_rates.py +++ /dev/null @@ -1,70 +0,0 @@ -import numpy as np -import math - - -class LearningRateDecay: - def __init__(self, lr=0.002, warmup_steps=4000.0) -> None: - self.lr = lr - self.warmup_steps = warmup_steps - - def __call__(self, global_step) -> float: - step = global_step + 1.0 - lr = ( - self.lr - * self.warmup_steps ** 0.5 - * np.minimum(step * self.warmup_steps ** -1.5, step ** -0.5) - ) - - return lr - -class SquareRootScheduler: - def __init__(self, lr=0.1): - self.lr = lr - - def __call__(self, global_step): - global_step = global_step // 1000 - return self.lr * pow(global_step + 1.0, -0.5) - - -class CosineScheduler: - def __init__( - self, max_update, base_lr=0.02, final_lr=0, warmup_steps=0, warmup_begin_lr=0 - ): - self.base_lr_orig = base_lr - self.max_update = max_update - self.final_lr = final_lr - self.warmup_steps = warmup_steps - self.warmup_begin_lr = warmup_begin_lr - self.max_steps = self.max_update - self.warmup_steps - - def get_warmup_lr(self, global_step): - increase = ( - (self.base_lr_orig - self.warmup_begin_lr) - * float(global_step) - / float(self.warmup_steps) - ) - return self.warmup_begin_lr + increase - - def __call__(self, global_step): - if global_step < self.warmup_steps: - return self.get_warmup_lr(global_step) - if global_step <= self.max_update: - self.base_lr = ( - self.final_lr - + (self.base_lr_orig - self.final_lr) - * ( - 1 - + math.cos( - math.pi * (global_step - self.warmup_steps) / self.max_steps - ) - ) - / 2 - ) - return self.base_lr - -def adjust_learning_rate(optimizer, global_step): - lr = LearningRateDecay()(global_step=global_step) - for param_group in optimizer.param_groups: - param_group["lr"] = lr - return lr - diff --git a/spaces/arch-123/bingo/src/lib/isomorphic/index.ts b/spaces/arch-123/bingo/src/lib/isomorphic/index.ts deleted file mode 100644 index 
d4ebae951004bc8ec388f82548f4204a6c2a0a50..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,8 +0,0 @@ -'use client' - -import Debug from 'debug' -export * from 'ifw' - -export const debug = typeof document === 'undefined' ? Debug('bingo') - : process.env.NEXT_PUBLIC_DEBUG ? console.info.bind(console) - : () => {} diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/speaker_encoder_config.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/speaker_encoder_config.py deleted file mode 100644 index 6dceb00277ba68efe128936ff7f9456338f9753f..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/speaker_encoder_config.py +++ /dev/null @@ -1,11 +0,0 @@ -from dataclasses import asdict, dataclass - -from TTS.encoder.configs.base_encoder_config import BaseEncoderConfig - - -@dataclass -class SpeakerEncoderConfig(BaseEncoderConfig): - """Defines parameters for Speaker Encoder model.""" - - model: str = "speaker_encoder" - class_name_key: str = "speaker_name" diff --git a/spaces/artificialguybr/video-dubbing/whisper/tests/test_tokenizer.py b/spaces/artificialguybr/video-dubbing/whisper/tests/test_tokenizer.py deleted file mode 100644 index 09d0351e12adf783922183c95fddb961d2f1426a..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/whisper/tests/test_tokenizer.py +++ /dev/null @@ -1,24 +0,0 @@ -from whisper.tokenizer import get_tokenizer - - -def test_tokenizer(): - gpt2_tokenizer = get_tokenizer(multilingual=False) - multilingual_tokenizer = get_tokenizer(multilingual=True) - - text = "다람쥐 헌 쳇바퀴에 타고파" - gpt2_tokens = gpt2_tokenizer.encode(text) - multilingual_tokens = multilingual_tokenizer.encode(text) - - assert gpt2_tokenizer.decode(gpt2_tokens) == text - assert multilingual_tokenizer.decode(multilingual_tokens) == text - assert len(gpt2_tokens) > len(multilingual_tokens) - - -def test_split_on_unicode(): - multilingual_tokenizer = get_tokenizer(multilingual=True) - - tokens = [8404, 871, 287, 6, 246, 526, 3210, 20378] - words, word_tokens = multilingual_tokenizer.split_tokens_on_unicode(tokens) - - assert words == [" elle", " est", " l", "'", "�", "é", "rit", "oire"] - assert word_tokens == [[8404], [871], [287], [6], [246], [526], [3210], [20378]] diff --git a/spaces/arxify/RVC-beta-v2-0618/Dockerfile b/spaces/arxify/RVC-beta-v2-0618/Dockerfile deleted file mode 100644 index 49f62d5f9c0901931de6523721b3a97b40f34219..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -# syntax=docker/dockerfile:1 - -FROM python:3.10-bullseye - -EXPOSE 7865 - -WORKDIR /app - -COPY . . - -RUN pip3 install -r requirements.txt - -CMD ["python3", "infer-web.py"] \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CAST.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CAST.py deleted file mode 100644 index ff13bd4a8927e358d743964f5c2d7de0a10ce211..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CAST.py +++ /dev/null @@ -1,101 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Cipher/CAST.py: Self-test for the CAST-128 (CAST5) cipher -# -# Written in 2008 by Dwayne C. 
Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -"""Self-test suite for Crypto.Cipher.CAST""" - -import unittest - -from Crypto.Util.py3compat import bchr - -from Crypto.Cipher import CAST - -# This is a list of (plaintext, ciphertext, key) tuples. -test_data = [ - # Test vectors from RFC 2144, B.1 - ('0123456789abcdef', '238b4fe5847e44b2', - '0123456712345678234567893456789a', - '128-bit key'), - - ('0123456789abcdef', 'eb6a711a2c02271b', - '01234567123456782345', - '80-bit key'), - - ('0123456789abcdef', '7ac816d16e9b302e', - '0123456712', - '40-bit key'), -] - - -class KeyLength(unittest.TestCase): - - def runTest(self): - self.assertRaises(ValueError, CAST.new, bchr(0) * 4, CAST.MODE_ECB) - self.assertRaises(ValueError, CAST.new, bchr(0) * 17, CAST.MODE_ECB) - - -class TestOutput(unittest.TestCase): - - def runTest(self): - # Encrypt/Decrypt data and test output parameter - - cipher = CAST.new(b'4'*16, CAST.MODE_ECB) - - pt = b'5' * 16 - ct = cipher.encrypt(pt) - - output = bytearray(16) - res = cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - self.assertEqual(res, None) - - res = cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - self.assertEqual(res, None) - - output = memoryview(bytearray(16)) - cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - - cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - - self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) - self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) - - shorter_output = bytearray(7) - self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) - self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) - - -def get_tests(config={}): - from .common import make_block_tests - - tests = make_block_tests(CAST, "CAST", test_data) - tests.append(KeyLength()) - tests.append(TestOutput()) - return tests - -if __name__ == '__main__': - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/roll_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/roll_dataset.py deleted file mode 100644 index a2915eeb3e8fb4dfb4b2bb33e0464ad0783d854c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/roll_dataset.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import BaseWrapperDataset - - -class RollDataset(BaseWrapperDataset): - def __init__(self, dataset, shifts): - super().__init__(dataset) - self.shifts = shifts - - def __getitem__(self, index): - item = self.dataset[index] - return torch.roll(item, self.shifts) diff --git a/spaces/avans06/whisper-webui-translate/src/whisper/dummyWhisperContainer.py b/spaces/avans06/whisper-webui-translate/src/whisper/dummyWhisperContainer.py deleted file mode 100644 index dddc2dc50e78880befe29d15c924ab811413a8f8..0000000000000000000000000000000000000000 --- a/spaces/avans06/whisper-webui-translate/src/whisper/dummyWhisperContainer.py +++ /dev/null @@ -1,101 +0,0 @@ -from typing import List - -import ffmpeg -from src.config import ModelConfig -from src.hooks.progressListener import ProgressListener -from src.modelCache import ModelCache -from src.prompts.abstractPromptStrategy import AbstractPromptStrategy -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer - -class DummyWhisperContainer(AbstractWhisperContainer): - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - super().__init__(model_name, device, compute_type, download_root, cache, models) - - def ensure_downloaded(self): - """ - Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before - passing the container to a subprocess. - """ - print("[Dummy] Ensuring that the model is downloaded") - - def _create_model(self): - print("[Dummy] Creating dummy whisper model " + self.model_name + " for device " + str(self.device)) - return None - - def create_callback(self, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - prompt_strategy: AbstractPromptStrategy - The prompt strategy to use. If not specified, the prompt from Whisper will be used. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - return DummyWhisperCallback(self, language=language, task=task, prompt_strategy=prompt_strategy, **decodeOptions) - -class DummyWhisperCallback(AbstractWhisperCallback): - def __init__(self, model_container: DummyWhisperContainer, **decodeOptions: dict): - self.model_container = model_container - self.decodeOptions = decodeOptions - - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. 
- progress_listener: ProgressListener - A callback to receive progress updates. - """ - print("[Dummy] Invoking dummy whisper callback for segment " + str(segment_index)) - - # Estimate length - if isinstance(audio, str): - audio_length = ffmpeg.probe(audio)["format"]["duration"] - # Format is pcm_s16le at a sample rate of 16000, loaded as a float32 array. - else: - audio_length = len(audio) / 16000 - - # Convert the segments to a format that is easier to serialize - whisper_segments = [{ - "text": "Dummy text for segment " + str(segment_index), - "start": 0, - "end": audio_length, - - # Extra fields added by faster-whisper - "words": [] - }] - - result = { - "segments": whisper_segments, - "text": "Dummy text for segment " + str(segment_index), - "language": "en" if detected_language is None else detected_language, - - # Extra fields added by faster-whisper - "language_probability": 1.0, - "duration": audio_length, - } - - if progress_listener is not None: - progress_listener.on_finished() - return result \ No newline at end of file diff --git a/spaces/awacke1/AI-BigGAN-Image-Gen/README.md b/spaces/awacke1/AI-BigGAN-Image-Gen/README.md deleted file mode 100644 index 8e1bf4ff7508f3d197f40e79c95dfbac5f1a28b9..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AI-BigGAN-Image-Gen/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ImgGen🧠🖼️-AIImageGenerator -emoji: 🧠🖼️ -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/main.py b/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/main.py deleted file mode 100644 index a57ba031d85c0a96fb39f4cb67f8225a09d5da17..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/main.py +++ /dev/null @@ -1,23 +0,0 @@ -from fastapi import FastAPI -from fastapi.staticfiles import StaticFiles -from fastapi.responses import FileResponse - -from transformers import pipeline - -app = FastAPI() - -pipe_flan = pipeline("text2text-generation", model="google/flan-t5-small") -#pipe_flan = pipeline("text2text-generation", model="google/flan-t5-large") -#Try large rather than small? 
google/flan-t5-small - - -@app.get("/infer_t5") -def t5(input): - output = pipe_flan(input) - return {"output": output[0]["generated_text"]} - -app.mount("/", StaticFiles(directory="static", html=True), name="static") - -@app.get("/") -def index() -> FileResponse: - return FileResponse(path="/app/static/index.html", media_type="text/html") diff --git a/spaces/awacke1/HTML5.Aframe.Frogger.Test/style.css b/spaces/awacke1/HTML5.Aframe.Frogger.Test/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5.Aframe.Frogger.Test/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/PromptSuperHeroImageGenerator/index.html b/spaces/awacke1/PromptSuperHeroImageGenerator/index.html deleted file mode 100644 index 6250c2958a7186a4e64f21c02b0359ff5ecd7e97..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PromptSuperHeroImageGenerator/index.html +++ /dev/null @@ -1,16 +0,0 @@ - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/awacke1/StreamlitAIPP1/app.py b/spaces/awacke1/StreamlitAIPP1/app.py deleted file mode 100644 index d2ea9861f7c6c8cdd343ed8ea2e309962169d3d8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitAIPP1/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import streamlit as st -import time - -def main(): - st.title("Simple Streamlit Program") - - # Wait for 5 seconds - with st.spinner("Waiting for 5 seconds..."): - time.sleep(5) - st.success("Completed!") - - # File Upload - st.header("File Upload") - uploaded_file = st.file_uploader("Upload a file") - - if uploaded_file is not None: - file_contents = uploaded_file.read() - st.markdown("### File Contents") - st.markdown(f"```{file_contents.decode('utf-8')}```") - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/awacke1/mixture-of-experts-dr-llama/app.py b/spaces/awacke1/mixture-of-experts-dr-llama/app.py deleted file mode 100644 index 73282a536fdcbd216d67ae576dced4f6a83429b1..0000000000000000000000000000000000000000 --- a/spaces/awacke1/mixture-of-experts-dr-llama/app.py +++ /dev/null @@ -1,794 +0,0 @@ -# Imports -import base64 -import glob -import json -import math -import openai -import os -import pytz -import re -import requests -import streamlit as st -import textract -import time -import zipfile -import huggingface_hub -import dotenv -from audio_recorder_streamlit import audio_recorder -from bs4 import BeautifulSoup -from collections import deque -from datetime import datetime -from dotenv import load_dotenv -from huggingface_hub import InferenceClient -from io import BytesIO -from langchain.chat_models import ChatOpenAI -from langchain.chains import ConversationalRetrievalChain -from langchain.embeddings import OpenAIEmbeddings -from langchain.memory import ConversationBufferMemory -from langchain.text_splitter import CharacterTextSplitter -from langchain.vectorstores import FAISS -from openai import ChatCompletion -from PyPDF2 import PdfReader -from templates import bot_template, css, user_template -from xml.etree import ElementTree 
as ET -import streamlit.components.v1 as components # Import Streamlit Components for HTML5 - - -st.set_page_config(page_title="🐪Llama Whisperer🦙 Voice Chat🌟", layout="wide") - - -def add_Med_Licensing_Exam_Dataset(): - import streamlit as st - from datasets import load_dataset - dataset = load_dataset("augtoma/usmle_step_1")['test'] # Using 'test' split - st.title("USMLE Step 1 Dataset Viewer") - if len(dataset) == 0: - st.write("😢 The dataset is empty.") - else: - st.write(""" - 🔍 Use the search box to filter questions or use the grid to scroll through the dataset. - """) - - # 👩‍🔬 Search Box - search_term = st.text_input("Search for a specific question:", "") - - # 🎛 Pagination - records_per_page = 100 - num_records = len(dataset) - num_pages = max(int(num_records / records_per_page), 1) - - # Skip generating the slider if num_pages is 1 (i.e., all records fit in one page) - if num_pages > 1: - page_number = st.select_slider("Select page:", options=list(range(1, num_pages + 1))) - else: - page_number = 1 # Only one page - - # 📊 Display Data - start_idx = (page_number - 1) * records_per_page - end_idx = start_idx + records_per_page - - # 🧪 Apply the Search Filter - filtered_data = [] - for record in dataset[start_idx:end_idx]: - if isinstance(record, dict) and 'text' in record and 'id' in record: - if search_term: - if search_term.lower() in record['text'].lower(): - st.markdown(record) - filtered_data.append(record) - else: - filtered_data.append(record) - - # 🌐 Render the Grid - for record in filtered_data: - st.write(f"## Question ID: {record['id']}") - st.write(f"### Question:") - st.write(f"{record['text']}") - st.write(f"### Answer:") - st.write(f"{record['answer']}") - st.write("---") - - st.write(f"😊 Total Records: {num_records} | 📄 Displaying {start_idx+1} to {min(end_idx, num_records)}") - -# 1. Constants and Top Level UI Variables - -# My Inference API Copy -# API_URL = 'https://qe55p8afio98s0u3.us-east-1.aws.endpoints.huggingface.cloud' # Dr Llama -# Original: -API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf" -API_KEY = os.getenv('API_KEY') -MODEL1="meta-llama/Llama-2-7b-chat-hf" -MODEL1URL="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf" -HF_KEY = os.getenv('HF_KEY') -headers = { - "Authorization": f"Bearer {HF_KEY}", - "Content-Type": "application/json" -} -key = os.getenv('OPENAI_API_KEY') -prompt = f"Write instructions to teach anyone to write a discharge plan. List the entities, features and relationships to CCDA and FHIR objects in boldface." -should_save = st.sidebar.checkbox("💾 Save", value=True, help="Save your session data.") - -# 2. Prompt label button demo for LLM -def add_witty_humor_buttons(): - with st.expander("Wit and Humor 🤣", expanded=True): - # Tip about the Dromedary family - st.markdown("🔬 **Fun Fact**: Dromedaries, part of the camel family, have a single hump and are adapted to arid environments. 
Their 'superpowers' include the ability to survive without water for up to 7 days, thanks to their specialized blood cells and water storage in their hump.") - - # Define button descriptions - descriptions = { - "Generate Limericks 😂": "Write ten random adult limericks based on quotes that are tweet length and make you laugh 🎭", - "Wise Quotes 🧙": "Generate ten wise quotes that are tweet length 🦉", - "Funny Rhymes 🎤": "Create ten funny rhymes that are tweet length 🎶", - "Medical Jokes 💉": "Create ten medical jokes that are tweet length 🏥", - "Minnesota Humor ❄️": "Create ten jokes about Minnesota that are tweet length 🌨️", - "Top Funny Stories 📖": "Create ten funny stories that are tweet length 📚", - "More Funny Rhymes 🎙️": "Create ten more funny rhymes that are tweet length 🎵" - } - - # Create columns - col1, col2, col3 = st.columns([1, 1, 1], gap="small") - - # Add buttons to columns - if col1.button("Generate Limericks 😂"): - StreamLLMChatResponse(descriptions["Generate Limericks 😂"]) - - if col2.button("Wise Quotes 🧙"): - StreamLLMChatResponse(descriptions["Wise Quotes 🧙"]) - - if col3.button("Funny Rhymes 🎤"): - StreamLLMChatResponse(descriptions["Funny Rhymes 🎤"]) - - col4, col5, col6 = st.columns([1, 1, 1], gap="small") - - if col4.button("Medical Jokes 💉"): - StreamLLMChatResponse(descriptions["Medical Jokes 💉"]) - - if col5.button("Minnesota Humor ❄️"): - StreamLLMChatResponse(descriptions["Minnesota Humor ❄️"]) - - if col6.button("Top Funny Stories 📖"): - StreamLLMChatResponse(descriptions["Top Funny Stories 📖"]) - - col7 = st.columns(1, gap="small") - - if col7[0].button("More Funny Rhymes 🎙️"): - StreamLLMChatResponse(descriptions["More Funny Rhymes 🎙️"]) - -def SpeechSynthesis(result): - documentHTML5=''' - - - - Read It Aloud - - - -

🔊 Read It Aloud

- -
- - - - ''' - - components.html(documentHTML5, width=1280, height=1024) - #return result - - -# 3. Stream Llama Response -# @st.cache_resource -def StreamLLMChatResponse(prompt): - try: - endpoint_url = API_URL - hf_token = API_KEY - client = InferenceClient(endpoint_url, token=hf_token) - gen_kwargs = dict( - max_new_tokens=512, - top_k=30, - top_p=0.9, - temperature=0.2, - repetition_penalty=1.02, - stop_sequences=["\nUser:", "<|endoftext|>", ""], - ) - stream = client.text_generation(prompt, stream=True, details=True, **gen_kwargs) - report=[] - res_box = st.empty() - collected_chunks=[] - collected_messages=[] - allresults='' - for r in stream: - if r.token.special: - continue - if r.token.text in gen_kwargs["stop_sequences"]: - break - collected_chunks.append(r.token.text) - chunk_message = r.token.text - collected_messages.append(chunk_message) - try: - report.append(r.token.text) - if len(r.token.text) > 0: - result="".join(report).strip() - res_box.markdown(f'*{result}*') - - except: - st.write('Stream llm issue') - SpeechSynthesis(result) - return result - except: - st.write('Llama model is asleep. Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).') - -# 4. Run query with payload -def query(payload): - response = requests.post(API_URL, headers=headers, json=payload) - st.markdown(response.json()) - return response.json() -def get_output(prompt): - return query({"inputs": prompt}) - -# 5. Auto name generated output files from time and content -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") - replaced_prompt = prompt.replace(" ", "_").replace("\n", "_") - safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:45] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -# 6. Speech transcription via OpenAI service -def transcribe_audio(openai_key, file_path, model): - openai.api_key = openai_key - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - chatResponse = chat_with_model(response.json().get('text'), '') # ************************************* - transcript = response.json().get('text') - filename = generate_filename(transcript, 'txt') - response = chatResponse - user_prompt = transcript - create_file(filename, user_prompt, response, should_save) - return transcript - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -# 7. Auto stop on silence audio control for recording WAV files -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder(key='audio_recorder') - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -# 8. 
File creator that interprets type and creates output file for text, markdown and code -def create_file(filename, prompt, response, should_save=True): - if not should_save: - return - base_filename, ext = os.path.splitext(filename) - if ext in ['.txt', '.htm', '.md']: - with open(f"{base_filename}.md", 'w') as file: - try: - content = prompt.strip() + '\r\n' + response - file.write(content) - except: - st.write('.') - - #has_python_code = re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response) - #has_python_code = bool(re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response)) - #if has_python_code: - # python_code = re.findall(r"```python([\s\S]*?)```", response)[0].strip() - # with open(f"{base_filename}-Code.py", 'w') as file: - # file.write(python_code) - # with open(f"{base_filename}.md", 'w') as file: - # content = prompt.strip() + '\r\n' + response - # file.write(content) - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -# 9. Sidebar with UI controls to review and re-run prompts and continue responses -@st.cache_resource -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -# 10. Read in and provide UI for past files -@st.cache_resource -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - -# 11. 
Chat with GPT - Caution on quota - now favoring fastest AI pipeline STT Whisper->LLM Llama->TTS -@st.cache_resource -def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - start_time = time.time() - report = [] - res_box = st.empty() - collected_chunks = [] - collected_messages = [] - for chunk in openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=conversation, temperature=0.5, stream=True): - collected_chunks.append(chunk) - chunk_message = chunk['choices'][0]['delta'] - collected_messages.append(chunk_message) - content=chunk["choices"][0].get("delta",{}).get("content") - try: - report.append(content) - if len(content) > 0: - result = "".join(report).strip() - res_box.markdown(f'*{result}*') - except: - st.write(' ') - full_reply_content = ''.join([m.get('content', '') for m in collected_messages]) - st.write("Elapsed time:") - st.write(time.time() - start_time) - return full_reply_content - -# 12. Embedding VectorDB for LLM query of documents to text to compress inputs and prompt together as Chat memory using Langchain -@st.cache_resource -def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - -def extract_mime_type(file): - if isinstance(file, str): - pattern = r"type='(.*?)'" - match = re.search(pattern, file) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract MIME type from {file}") - elif isinstance(file, streamlit.UploadedFile): - return file.type - else: - raise TypeError("Input should be a string or a streamlit.UploadedFile object") - -def extract_file_extension(file): - # get the file name directly from the UploadedFile object - file_name = file.name - pattern = r".*?\.(.*?)$" - match = re.search(pattern, file_name) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract file extension from {file_name}") - -# Normalize input as text from PDF and other formats -@st.cache_resource -def pdf2txt(docs): - text = "" - for file in docs: - file_extension = extract_file_extension(file) - st.write(f"File type extension: {file_extension}") - if file_extension.lower() in ['py', 'txt', 'html', 'htm', 'xml', 'json']: - text += file.getvalue().decode('utf-8') - elif file_extension.lower() == 'pdf': - from PyPDF2 import PdfReader - pdf = PdfReader(BytesIO(file.getvalue())) - for page in range(len(pdf.pages)): - text += pdf.pages[page].extract_text() # new PyPDF2 syntax - return text - -def txt2chunks(text): - text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len) - return text_splitter.split_text(text) - -# Vector Store using FAISS -@st.cache_resource -def vector_store(text_chunks): - embeddings = OpenAIEmbeddings(openai_api_key=key) - return FAISS.from_texts(texts=text_chunks, embedding=embeddings) - -# Memory and Retrieval chains -@st.cache_resource -def get_chain(vectorstore): - llm = 
ChatOpenAI() - memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) - return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory) - -def process_user_input(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - for i, message in enumerate(st.session_state.chat_history): - template = user_template if i % 2 == 0 else bot_template - st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True) - filename = generate_filename(user_question, 'txt') - response = message.content - user_prompt = user_question - create_file(filename, user_prompt, response, should_save) - -def divide_prompt(prompt, max_length): - words = prompt.split() - chunks = [] - current_chunk = [] - current_length = 0 - for word in words: - if len(word) + current_length <= max_length: - current_length += len(word) + 1 - current_chunk.append(word) - else: - chunks.append(' '.join(current_chunk)) - current_chunk = [word] - current_length = len(word) - chunks.append(' '.join(current_chunk)) - return chunks - - -# 13. Provide way of saving all and deleting all to give way of reviewing output and saving locally before clearing it - -@st.cache_resource -def create_zip_of_files(files): - zip_name = "all_files.zip" - with zipfile.ZipFile(zip_name, 'w') as zipf: - for file in files: - zipf.write(file) - return zip_name - -@st.cache_resource -def get_zip_download_link(zip_file): - with open(zip_file, 'rb') as f: - data = f.read() - b64 = base64.b64encode(data).decode() - href = f'Download All' - return href - -# 14. Inference Endpoints for Whisper (best fastest STT) on NVIDIA T4 and Llama (best fastest AGI LLM) on NVIDIA A10 -# My Inference Endpoint -API_URL_IE = f'https://tonpixzfvq3791u9.us-east-1.aws.endpoints.huggingface.cloud' -# Original -API_URL_IE = "https://api-inference.huggingface.co/models/openai/whisper-small.en" -MODEL2 = "openai/whisper-small.en" -MODEL2_URL = "https://huggingface.co/openai/whisper-small.en" -#headers = { -# "Authorization": "Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", -# "Content-Type": "audio/wav" -#} -HF_KEY = os.getenv('HF_KEY') -headers = { - "Authorization": f"Bearer {HF_KEY}", - "Content-Type": "audio/wav" -} - -#@st.cache_resource -def query(filename): - with open(filename, "rb") as f: - data = f.read() - response = requests.post(API_URL_IE, headers=headers, data=data) - return response.json() - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") - replaced_prompt = prompt.replace(" ", "_").replace("\n", "_") - safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:90] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -# 15. Audio recorder to Wav file -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - -# 16. 
Speech transcription to file output -def transcribe_audio(filename): - output = query(filename) - return output - - -def whisper_main(): - st.title("Speech to Text") - st.write("Record your speech and get the text.") - - # Audio, transcribe, GPT: - filename = save_and_play_audio(audio_recorder) - if filename is not None: - transcription = transcribe_audio(filename) - #try: - - transcript = transcription['text'] - #except: - #st.write('Whisper model is asleep. Starting up now on T4 GPU - please give 5 minutes then retry as it scales up from zero to activate running container(s).') - - st.write(transcript) - response = StreamLLMChatResponse(transcript) - # st.write(response) - redundant with streaming result? - filename = generate_filename(transcript, ".txt") - create_file(filename, transcript, response, should_save) - #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -import streamlit as st - -# Sample function to demonstrate a response, replace with your own logic -def StreamMedChatResponse(topic): - st.write(f"Showing resources or questions related to: {topic}") - -def add_multi_system_agent_topics(): - with st.expander("Multi-System Agent AI Topics 🤖", expanded=True): - st.markdown("🤖 **Explore Multi-System Agent AI Topics**: This section provides a variety of topics related to multi-system agent AI systems.") - - # Define multi-system agent AI topics and descriptions - descriptions = { - "Reinforcement Learning 🎮": "Questions related to reinforcement learning algorithms and applications 🕹️", - "Natural Language Processing 🗣️": "Questions about natural language processing techniques and chatbot development 🗨️", - "Multi-Agent Systems 🤝": "Questions pertaining to multi-agent systems and cooperative AI interactions 🤖", - "Conversational AI 🗨️": "Questions on building conversational AI agents and chatbots for various platforms 💬", - "Distributed AI Systems 🌐": "Questions about distributed AI systems and their implementation in networked environments 🌐", - "AI Ethics and Bias 🤔": "Questions related to ethics and bias considerations in AI systems and decision-making 🧠", - "AI in Healthcare 🏥": "Questions about the application of AI in healthcare and medical diagnosis 🩺", - "AI in Autonomous Vehicles 🚗": "Questions on the use of AI in autonomous vehicles and self-driving technology 🚗" - } - - # Create columns - col1, col2, col3, col4 = st.columns([1, 1, 1, 1], gap="small") - - # Add buttons to columns - if col1.button("Reinforcement Learning 🎮"): - st.write(descriptions["Reinforcement Learning 🎮"]) - StreamLLMChatResponse(descriptions["Reinforcement Learning 🎮"]) - - if col2.button("Natural Language Processing 🗣️"): - st.write(descriptions["Natural Language Processing 🗣️"]) - StreamLLMChatResponse(descriptions["Natural Language Processing 🗣️"]) - - if col3.button("Multi-Agent Systems 🤝"): - st.write(descriptions["Multi-Agent Systems 🤝"]) - StreamLLMChatResponse(descriptions["Multi-Agent Systems 🤝"]) - - if col4.button("Conversational AI 🗨️"): - st.write(descriptions["Conversational AI 🗨️"]) - StreamLLMChatResponse(descriptions["Conversational AI 🗨️"]) - - col5, col6, col7, col8 = st.columns([1, 1, 1, 1], gap="small") - - if col5.button("Distributed AI Systems 🌐"): - st.write(descriptions["Distributed AI Systems 🌐"]) - StreamLLMChatResponse(descriptions["Distributed AI Systems 🌐"]) - - if col6.button("AI Ethics and Bias 🤔"): - st.write(descriptions["AI Ethics and Bias 🤔"]) - StreamLLMChatResponse(descriptions["AI Ethics and Bias 🤔"]) - - if col7.button("AI in 
Healthcare 🏥"): - st.write(descriptions["AI in Healthcare 🏥"]) - StreamLLMChatResponse(descriptions["AI in Healthcare 🏥"]) - - if col8.button("AI in Autonomous Vehicles 🚗"): - st.write(descriptions["AI in Autonomous Vehicles 🚗"]) - StreamLLMChatResponse(descriptions["AI in Autonomous Vehicles 🚗"]) - - -# 17. Main -def main(): - - st.title("Try Some Topics:") - prompt = f"Write ten funny jokes that are tweet length stories that make you laugh. Show as markdown outline with emojis for each." - - # Add Wit and Humor buttons - # add_witty_humor_buttons() - # Calling the function to add the multi-system agent AI topics buttons - add_multi_system_agent_topics() - - example_input = st.text_input("Enter your example text:", value=prompt, help="Enter text to get a response from DromeLlama.") - if st.button("Run Prompt With DromeLlama", help="Click to run the prompt."): - try: - StreamLLMChatResponse(example_input) - except: - st.write('DromeLlama is asleep. Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).') - - openai.api_key = os.getenv('OPENAI_KEY') - menu = ["txt", "htm", "xlsx", "csv", "md", "py"] - choice = st.sidebar.selectbox("Output File Type:", menu) - model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["pdf", "xml", "json", "xlsx", "csv", "html", "htm", "md", "txt"]) - document_sections = deque() - document_responses = {} - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - if len(document_sections) > 0: - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section, model_choice) - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response, should_save) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - user_prompt_sections = divide_prompt(user_prompt, max_length) - full_response = '' - for prompt_section in user_prompt_sections: - response = chat_with_model(prompt_section, ''.join(list(document_sections)), model_choice) - full_response += response + '\n' # Combine the responses - response = full_response - st.write('Response:') - st.write(response) - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response, should_save) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - # Compose a file sidebar of past encounters - all_files = 
glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - if st.sidebar.button("🗑 Delete All"): - for file in all_files: - os.remove(file) - st.experimental_rerun() - if st.sidebar.button("⬇️ Download All"): - zip_file = create_zip_of_files(all_files) - st.sidebar.markdown(get_zip_download_link(zip_file), unsafe_allow_html=True) - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - - # new - llama - response = StreamLLMChatResponse(file_contents) - filename = generate_filename(user_prompt, ".md") - create_file(filename, file_contents, response, should_save) - SpeechSynthesis(response) - - # old - gpt - #response = chat_with_model(user_prompt, file_contents, model_choice) - #filename = generate_filename(file_contents, choice) - #create_file(filename, user_prompt, response, should_save) - - st.experimental_rerun() - - # Feedback - # Step: Give User a Way to Upvote or Downvote - feedback = st.radio("Step 8: Give your feedback", ("👍 Upvote", "👎 Downvote")) - if feedback == "👍 Upvote": - st.write("You upvoted 👍. Thank you for your feedback!") - else: - st.write("You downvoted 👎. Thank you for your feedback!") - - load_dotenv() - st.write(css, unsafe_allow_html=True) - st.header("Chat with documents :books:") - user_question = st.text_input("Ask a question about your documents:") - if user_question: - process_user_input(user_question) - with st.sidebar: - st.subheader("Your documents") - docs = st.file_uploader("import documents", accept_multiple_files=True) - with st.spinner("Processing"): - raw = pdf2txt(docs) - if len(raw) > 0: - length = str(len(raw)) - text_chunks = txt2chunks(raw) - vectorstore = vector_store(text_chunks) - st.session_state.conversation = get_chain(vectorstore) - st.markdown('# AI Search Index of Length:' + length + ' Created.') # add timing - filename = generate_filename(raw, 'txt') - create_file(filename, raw, '', should_save) - -# 18. 
Run AI Pipeline -if __name__ == "__main__": - whisper_main() - main() - add_Med_Licensing_Exam_Dataset() \ No newline at end of file diff --git a/spaces/azer123456789/nicky007-stable-diffusion-logo-fine-tuned/README.md b/spaces/azer123456789/nicky007-stable-diffusion-logo-fine-tuned/README.md deleted file mode 100644 index 9b5ae4179dd00a721dfef4521be7c253e11efc81..0000000000000000000000000000000000000000 --- a/spaces/azer123456789/nicky007-stable-diffusion-logo-fine-tuned/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nicky007 Stable Diffusion Logo Fine Tuned -emoji: 🌖 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/math/Math3Node.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/math/Math3Node.js deleted file mode 100644 index 5bc866146fe38adab12a7972f5238b2c423040fd..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/math/Math3Node.js +++ /dev/null @@ -1,121 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { TempNode } from '../core/TempNode.js'; - -function Math3Node( a, b, c, method ) { - - TempNode.call( this ); - - this.a = a; - this.b = b; - this.c = c; - - this.method = method; - -} - -Math3Node.MIX = 'mix'; -Math3Node.CLAMP = 'clamp'; -Math3Node.REFRACT = 'refract'; -Math3Node.SMOOTHSTEP = 'smoothstep'; -Math3Node.FACEFORWARD = 'faceforward'; - -Math3Node.prototype = Object.create( TempNode.prototype ); -Math3Node.prototype.constructor = Math3Node; -Math3Node.prototype.nodeType = "Math3"; - -Math3Node.prototype.getType = function ( builder ) { - - var a = builder.getTypeLength( this.a.getType( builder ) ); - var b = builder.getTypeLength( this.b.getType( builder ) ); - var c = builder.getTypeLength( this.c.getType( builder ) ); - - if ( a > b && a > c ) { - - return this.a.getType( builder ); - - } else if ( b > c ) { - - return this.b.getType( builder ); - - } - - return this.c.getType( builder ); - -}; - -Math3Node.prototype.generate = function ( builder, output ) { - - var a, b, c, - al = builder.getTypeLength( this.a.getType( builder ) ), - bl = builder.getTypeLength( this.b.getType( builder ) ), - cl = builder.getTypeLength( this.c.getType( builder ) ), - type = this.getType( builder ); - - // optimzer - - switch ( this.method ) { - - case Math3Node.REFRACT: - - a = this.a.build( builder, type ); - b = this.b.build( builder, type ); - c = this.c.build( builder, 'f' ); - - break; - - case Math3Node.MIX: - - a = this.a.build( builder, type ); - b = this.b.build( builder, type ); - c = this.c.build( builder, cl === 1 ? 'f' : type ); - - break; - - default: - - a = this.a.build( builder, type ); - b = this.b.build( builder, type ); - c = this.c.build( builder, type ); - - break; - - } - - return builder.format( this.method + '( ' + a + ', ' + b + ', ' + c + ' )', type, output ); - -}; - -Math3Node.prototype.copy = function ( source ) { - - TempNode.prototype.copy.call( this, source ); - - this.a = source.a; - this.b = source.b; - this.c = source.c; - this.method = source.method; - -}; - -Math3Node.prototype.toJSON = function ( meta ) { - - var data = this.getJSONNode( meta ); - - if ( ! 
data ) { - - data = this.createJSONNode( meta ); - - data.a = this.a.toJSON( meta ).uuid; - data.b = this.b.toJSON( meta ).uuid; - data.c = this.c.toJSON( meta ).uuid; - data.method = this.method; - - } - - return data; - -}; - -export { Math3Node }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Interpolations.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Interpolations.js deleted file mode 100644 index 0fb812c1ff9a78f234b36bd3b1dc768fcf479eda..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Interpolations.js +++ /dev/null @@ -1,81 +0,0 @@ -/** - * @author zz85 / http://www.lab4games.net/zz85/blog - * - * Bezier Curves formulas obtained from - * http://en.wikipedia.org/wiki/Bézier_curve - */ - -function CatmullRom( t, p0, p1, p2, p3 ) { - - var v0 = ( p2 - p0 ) * 0.5; - var v1 = ( p3 - p1 ) * 0.5; - var t2 = t * t; - var t3 = t * t2; - return ( 2 * p1 - 2 * p2 + v0 + v1 ) * t3 + ( - 3 * p1 + 3 * p2 - 2 * v0 - v1 ) * t2 + v0 * t + p1; - -} - -// - -function QuadraticBezierP0( t, p ) { - - var k = 1 - t; - return k * k * p; - -} - -function QuadraticBezierP1( t, p ) { - - return 2 * ( 1 - t ) * t * p; - -} - -function QuadraticBezierP2( t, p ) { - - return t * t * p; - -} - -function QuadraticBezier( t, p0, p1, p2 ) { - - return QuadraticBezierP0( t, p0 ) + QuadraticBezierP1( t, p1 ) + - QuadraticBezierP2( t, p2 ); - -} - -// - -function CubicBezierP0( t, p ) { - - var k = 1 - t; - return k * k * k * p; - -} - -function CubicBezierP1( t, p ) { - - var k = 1 - t; - return 3 * k * k * t * p; - -} - -function CubicBezierP2( t, p ) { - - return 3 * ( 1 - t ) * t * t * p; - -} - -function CubicBezierP3( t, p ) { - - return t * t * t * p; - -} - -function CubicBezier( t, p0, p1, p2, p3 ) { - - return CubicBezierP0( t, p0 ) + CubicBezierP1( t, p1 ) + CubicBezierP2( t, p2 ) + - CubicBezierP3( t, p3 ); - -} - -export { CatmullRom, QuadraticBezier, CubicBezier }; diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621111527.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621111527.py deleted file mode 100644 index 152de6689bf9d96901148e8acb07268812c99d11..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621111527.py +++ /dev/null @@ -1,37 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -background = st.selectbox("表格线条是否透明",(False,True)) -extractor_mode = st.selectbox("单页抽取 OR 全文抽取",("单页抽取","全文抽取")) - -def extractor(page,result_name): - tables_all= cam.read_pdf("input.pdf", pages=page, process_background=background) - result_all = pd.ExcelWriter(result_name, engine='xlsxwriter') - for i in range(0,len(tables_all)): - table = tables_all[i].df - sheetname = str(i) - table.to_excel(result_all, sheetname,index=False) - result_all.save() - with open(result_name,'rb') as f: - st.download_button('抽取完成, 点击下载!', f,file_name=result_name,mime="application/vnd.ms-excel") - - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - if extractor_mode == "单页抽取": - page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) - 
extractor(page_number,"result.xlsx") - if extractor_mode == "全文抽取": - extractor("all","result_all.xlsx") \ No newline at end of file diff --git a/spaces/bennyguo/threestudio/Dockerfile b/spaces/bennyguo/threestudio/Dockerfile deleted file mode 100644 index a0870b339ac159f8e8953776ad7224c5f6a2c05d..0000000000000000000000000000000000000000 --- a/spaces/bennyguo/threestudio/Dockerfile +++ /dev/null @@ -1,67 +0,0 @@ -# Reference: -# https://github.com/cvpaperchallenge/Ascender -# https://github.com/nerfstudio-project/nerfstudio - -FROM nvidia/cuda:11.8.0-devel-ubuntu22.04 - -ARG USER_NAME=dreamer -ARG GROUP_NAME=dreamers -ARG UID=1000 -ARG GID=1000 - -# Set compute capability for nerfacc and tiny-cuda-nn -# See https://developer.nvidia.com/cuda-gpus and limit number to speed-up build -ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX" -ENV TCNN_CUDA_ARCHITECTURES=90;89;86;80;75;70;61;60 -# Speed-up build for RTX 30xx -# ENV TORCH_CUDA_ARCH_LIST="8.6" -# ENV TCNN_CUDA_ARCHITECTURES=86 -# Speed-up build for RTX 40xx -# ENV TORCH_CUDA_ARCH_LIST="8.9" -# ENV TCNN_CUDA_ARCHITECTURES=89 - -ENV CUDA_HOME=/usr/local/cuda -ENV PATH=${CUDA_HOME}/bin:/home/${USER_NAME}/.local/bin:${PATH} -ENV LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH} -ENV LIBRARY_PATH=${CUDA_HOME}/lib64/stubs:${LIBRARY_PATH} - -# apt install by root user -RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ - build-essential \ - curl \ - git \ - libegl1-mesa-dev \ - libgl1-mesa-dev \ - libgles2-mesa-dev \ - libglib2.0-0 \ - libsm6 \ - libxext6 \ - libxrender1 \ - python-is-python3 \ - python3.10-dev \ - python3-pip \ - wget \ - && rm -rf /var/lib/apt/lists/* - -# Change user to non-root user -RUN groupadd -g ${GID} ${GROUP_NAME} \ - && useradd -ms /bin/sh -u ${UID} -g ${GID} ${USER_NAME} -USER ${USER_NAME} - -RUN pip install --upgrade pip setuptools ninja -RUN pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118 -# Install nerfacc and tiny-cuda-nn before installing requirements.txt -# because these two installations are time consuming and error prone -# RUN pip install git+https://github.com/KAIR-BAIR/nerfacc.git@v0.5.2 -RUN pip install nerfacc==0.5.2 -f https://nerfacc-bucket.s3.us-west-2.amazonaws.com/whl/torch-2.0.0_cu118.html -RUN pip install git+https://github.com/NVlabs/tiny-cuda-nn.git#subdirectory=bindings/torch - -COPY requirements.txt /tmp -RUN cd /tmp && pip install -r requirements.txt - -# avoid caching the old version -ADD "https://api.github.com/repos/threestudio-project/threestudio/commits?per_page=1" latest_commit -RUN git clone https://github.com/threestudio-project/threestudio.git /home/${USER_NAME}/threestudio -WORKDIR /home/${USER_NAME}/threestudio -RUN git checkout 27d69d9845016c8b8aa0bac92ab6d4fea8d1e1b8 -CMD ["python", "gradio_app.py", "launch", "--listen", "--hf-space"] diff --git a/spaces/bilby/bilby-retrievalqa/README.md b/spaces/bilby/bilby-retrievalqa/README.md deleted file mode 100644 index 92bbe4919f5b2a207fab5faf9d70063784004340..0000000000000000000000000000000000000000 --- a/spaces/bilby/bilby-retrievalqa/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bilby Retrievalqa -emoji: 🏆 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bpHigh/AI-Research-Buddy/README.md 
b/spaces/bpHigh/AI-Research-Buddy/README.md deleted file mode 100644 index df611cc4f31d5c3883cc2edcc1e14df96c454e41..0000000000000000000000000000000000000000 --- a/spaces/bpHigh/AI-Research-Buddy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI Research Buddy -emoji: 💻 -colorFrom: purple -colorTo: gray -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/losses.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/losses.py deleted file mode 100644 index be293e739bdc2d91273f30fb789befe7c8b49a43..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/losses.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility module to handle adversarial losses without requiring to mess up the main training loop. -""" - -import typing as tp - -import flashy -import torch -import torch.nn as nn -import torch.nn.functional as F - - -ADVERSARIAL_LOSSES = ['mse', 'hinge', 'hinge2'] - - -AdvLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor], torch.Tensor]] -FeatLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] - - -class AdversarialLoss(nn.Module): - """Adversary training wrapper. - - Args: - adversary (nn.Module): The adversary module will be used to estimate the logits given the fake and real samples. - We assume here the adversary output is ``Tuple[List[torch.Tensor], List[List[torch.Tensor]]]`` - where the first item is a list of logits and the second item is a list of feature maps. - optimizer (torch.optim.Optimizer): Optimizer used for training the given module. - loss (AdvLossType): Loss function for generator training. - loss_real (AdvLossType): Loss function for adversarial training on logits from real samples. - loss_fake (AdvLossType): Loss function for adversarial training on logits from fake samples. - loss_feat (FeatLossType): Feature matching loss function for generator training. - normalize (bool): Whether to normalize by number of sub-discriminators. - - Example of usage: - adv_loss = AdversarialLoss(adversaries, optimizer, loss, loss_real, loss_fake) - for real in loader: - noise = torch.randn(...) - fake = model(noise) - adv_loss.train_adv(fake, real) - loss, _ = adv_loss(fake, real) - loss.backward() - """ - def __init__(self, - adversary: nn.Module, - optimizer: torch.optim.Optimizer, - loss: AdvLossType, - loss_real: AdvLossType, - loss_fake: AdvLossType, - loss_feat: tp.Optional[FeatLossType] = None, - normalize: bool = True): - super().__init__() - self.adversary: nn.Module = adversary - flashy.distrib.broadcast_model(self.adversary) - self.optimizer = optimizer - self.loss = loss - self.loss_real = loss_real - self.loss_fake = loss_fake - self.loss_feat = loss_feat - self.normalize = normalize - - def _save_to_state_dict(self, destination, prefix, keep_vars): - # Add the optimizer state dict inside our own. 
- super()._save_to_state_dict(destination, prefix, keep_vars) - destination[prefix + 'optimizer'] = self.optimizer.state_dict() - return destination - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - # Load optimizer state. - self.optimizer.load_state_dict(state_dict.pop(prefix + 'optimizer')) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def get_adversary_pred(self, x): - """Run adversary model, validating expected output format.""" - logits, fmaps = self.adversary(x) - assert isinstance(logits, list) and all([isinstance(t, torch.Tensor) for t in logits]), \ - f'Expecting a list of tensors as logits but {type(logits)} found.' - assert isinstance(fmaps, list), f'Expecting a list of features maps but {type(fmaps)} found.' - for fmap in fmaps: - assert isinstance(fmap, list) and all([isinstance(f, torch.Tensor) for f in fmap]), \ - f'Expecting a list of tensors as feature maps but {type(fmap)} found.' - return logits, fmaps - - def train_adv(self, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor: - """Train the adversary with the given fake and real example. - - We assume the adversary output is the following format: Tuple[List[torch.Tensor], List[List[torch.Tensor]]]. - The first item being the logits and second item being a list of feature maps for each sub-discriminator. - - This will automatically synchronize gradients (with `flashy.distrib.eager_sync_model`) - and call the optimizer. - """ - loss = torch.tensor(0., device=fake.device) - all_logits_fake_is_fake, _ = self.get_adversary_pred(fake.detach()) - all_logits_real_is_fake, _ = self.get_adversary_pred(real.detach()) - n_sub_adversaries = len(all_logits_fake_is_fake) - for logit_fake_is_fake, logit_real_is_fake in zip(all_logits_fake_is_fake, all_logits_real_is_fake): - loss += self.loss_fake(logit_fake_is_fake) + self.loss_real(logit_real_is_fake) - - if self.normalize: - loss /= n_sub_adversaries - - self.optimizer.zero_grad() - with flashy.distrib.eager_sync_model(self.adversary): - loss.backward() - self.optimizer.step() - - return loss - - def forward(self, fake: torch.Tensor, real: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Return the loss for the generator, i.e. trying to fool the adversary, - and feature matching loss if provided. 
- """ - adv = torch.tensor(0., device=fake.device) - feat = torch.tensor(0., device=fake.device) - with flashy.utils.readonly(self.adversary): - all_logits_fake_is_fake, all_fmap_fake = self.get_adversary_pred(fake) - all_logits_real_is_fake, all_fmap_real = self.get_adversary_pred(real) - n_sub_adversaries = len(all_logits_fake_is_fake) - for logit_fake_is_fake in all_logits_fake_is_fake: - adv += self.loss(logit_fake_is_fake) - if self.loss_feat: - for fmap_fake, fmap_real in zip(all_fmap_fake, all_fmap_real): - feat += self.loss_feat(fmap_fake, fmap_real) - - if self.normalize: - adv /= n_sub_adversaries - feat /= n_sub_adversaries - - return adv, feat - - -def get_adv_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_loss - elif loss_type == 'hinge': - return hinge_loss - elif loss_type == 'hinge2': - return hinge2_loss - raise ValueError('Unsupported loss') - - -def get_fake_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_fake_loss - elif loss_type in ['hinge', 'hinge2']: - return hinge_fake_loss - raise ValueError('Unsupported loss') - - -def get_real_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_real_loss - elif loss_type in ['hinge', 'hinge2']: - return hinge_real_loss - raise ValueError('Unsupported loss') - - -def mse_real_loss(x: torch.Tensor) -> torch.Tensor: - return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x)) - - -def mse_fake_loss(x: torch.Tensor) -> torch.Tensor: - return F.mse_loss(x, torch.tensor(0., device=x.device).expand_as(x)) - - -def hinge_real_loss(x: torch.Tensor) -> torch.Tensor: - return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -def hinge_fake_loss(x: torch.Tensor) -> torch.Tensor: - return -torch.mean(torch.min(-x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -def mse_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0], device=x.device) - return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x)) - - -def hinge_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0], device=x.device) - return -x.mean() - - -def hinge2_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0]) - return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -class FeatureMatchingLoss(nn.Module): - """Feature matching loss for adversarial training. - - Args: - loss (nn.Module): Loss to use for feature matching (default=torch.nn.L1). - normalize (bool): Whether to normalize the loss. - by number of feature maps. 
- """ - def __init__(self, loss: nn.Module = torch.nn.L1Loss(), normalize: bool = True): - super().__init__() - self.loss = loss - self.normalize = normalize - - def forward(self, fmap_fake: tp.List[torch.Tensor], fmap_real: tp.List[torch.Tensor]) -> torch.Tensor: - assert len(fmap_fake) == len(fmap_real) and len(fmap_fake) > 0 - feat_loss = torch.tensor(0., device=fmap_fake[0].device) - feat_scale = torch.tensor(0., device=fmap_fake[0].device) - n_fmaps = 0 - for (feat_fake, feat_real) in zip(fmap_fake, fmap_real): - assert feat_fake.shape == feat_real.shape - n_fmaps += 1 - feat_loss += self.loss(feat_fake, feat_real) - feat_scale += torch.mean(torch.abs(feat_real)) - - if self.normalize: - feat_loss /= n_fmaps - - return feat_loss diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/tracing.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/tracing.py deleted file mode 100644 index 577df4e2f4ad0a1a309d31d7c28311be11f87247..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/tracing.py +++ /dev/null @@ -1,71 +0,0 @@ -import inspect -import torch - -from detectron2.utils.env import TORCH_VERSION - -try: - from torch.fx._symbolic_trace import is_fx_tracing as is_fx_tracing_current - - tracing_current_exists = True -except ImportError: - tracing_current_exists = False - -try: - from torch.fx._symbolic_trace import _orig_module_call - - tracing_legacy_exists = True -except ImportError: - tracing_legacy_exists = False - - -@torch.jit.ignore -def is_fx_tracing_legacy() -> bool: - """ - Returns a bool indicating whether torch.fx is currently symbolically tracing a module. - Can be useful for gating module logic that is incompatible with symbolic tracing. - """ - return torch.nn.Module.__call__ is not _orig_module_call - - -@torch.jit.ignore -def is_fx_tracing() -> bool: - """Returns whether execution is currently in - Torch FX tracing mode""" - if TORCH_VERSION >= (1, 10) and tracing_current_exists: - return is_fx_tracing_current() - elif tracing_legacy_exists: - return is_fx_tracing_legacy() - else: - # Can't find either current or legacy tracing indication code. - # Enabling this assert_fx_safe() call regardless of tracing status. - return False - - -@torch.jit.ignore -def assert_fx_safe(condition: bool, message: str) -> torch.Tensor: - """An FX-tracing safe version of assert. - Avoids erroneous type assertion triggering when types are masked inside - an fx.proxy.Proxy object during tracing. - Args: condition - either a boolean expression or a string representing - the condition to test. If this assert triggers an exception when tracing - due to dynamic control flow, try encasing the expression in quotation - marks and supplying it as a string.""" - # Must return a concrete tensor for compatibility with PyTorch <=1.8. - # If <=1.8 compatibility is not needed, return type can be converted to None - if not is_fx_tracing(): - try: - if isinstance(condition, str): - caller_frame = inspect.currentframe().f_back - torch._assert( - eval(condition, caller_frame.f_globals, caller_frame.f_locals), message - ) - return torch.ones(1) - else: - torch._assert(condition, message) - return torch.ones(1) - except torch.fx.proxy.TraceError as e: - print( - "Found a non-FX compatible assertion. Skipping the check. 
Failure is shown below" - + str(e) - ) - return torch.zeros(1) diff --git a/spaces/cahya/indonesian-whisperer/README.md b/spaces/cahya/indonesian-whisperer/README.md deleted file mode 100644 index 6c1dd64da2f9226f205fc5e6bddf22f947e2906f..0000000000000000000000000000000000000000 --- a/spaces/cahya/indonesian-whisperer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Indonesian Whisperer -emoji: 🇮🇩 -colorFrom: purple -colorTo: red -sdk: docker -pinned: true -license: cc -tags: -- whisper-event ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/losses/__init__.py b/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/losses/__init__.py deleted file mode 100644 index b03080a907cb5cb4b316ceb74866ddbc406b33bf..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/losses/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .stft_loss import * # NOQA diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py deleted file mode 100644 index 40cf18131810307157a9a7d1f6d5922b00fd73d5..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py +++ /dev/null @@ -1,8 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco_panoptic_separated import dataloader -from ..common.models.panoptic_fpn import model -from ..common.train import train - -model.backbone.bottom_up.freeze_at = 2 -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/keypoint_rcnn_fpn.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/keypoint_rcnn_fpn.py deleted file mode 100644 index 56b3994df249884d4816fc9a5c7f553a9ab6f400..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/keypoint_rcnn_fpn.py +++ /dev/null @@ -1,33 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.roi_heads import KRCNNConvDeconvUpsampleHead - -from .mask_rcnn_fpn import model - -[model.roi_heads.pop(x) for x in ["mask_in_features", "mask_pooler", "mask_head"]] - -model.roi_heads.update( - num_classes=1, - keypoint_in_features=["p2", "p3", "p4", "p5"], - keypoint_pooler=L(ROIPooler)( - output_size=14, - scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - keypoint_head=L(KRCNNConvDeconvUpsampleHead)( - input_shape=ShapeSpec(channels=256, width=14, height=14), - num_keypoints=17, - conv_dims=[512] * 8, - loss_normalizer="visible", - ), -) - -# Detectron1 uses 2000 proposals per-batch, but this option is per-image in detectron2. -# 1000 proposals per-image is found to hurt box AP. -# Therefore we increase it to 1500 per-image. 
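-# Note (assumption): post_nms_topk is a (train, test) pair, so only the training-time
-# proposal count is raised to 1500 while testing keeps 1000.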
-model.proposal_generator.post_nms_topk = (1500, 1000) - -# Keypoint AP degrades (though box AP improves) when using plain L1 loss -model.roi_heads.box_predictor.smooth_l1_beta = 0.5 diff --git a/spaces/cchuang2009/News-Forum/app.py b/spaces/cchuang2009/News-Forum/app.py deleted file mode 100644 index cee952e680289672e550d61bb54af422b35f3f53..0000000000000000000000000000000000000000 --- a/spaces/cchuang2009/News-Forum/app.py +++ /dev/null @@ -1,57 +0,0 @@ -# load required modules, 載入必要模組 -import pandas as pd -from datetime import datetime -import streamlit as st - -st.set_page_config(page_title="News Archive", page_icon=":newspaper:", layout="wide") - -# Read CSV file, 讀取資料 -#df = pd.read_csv("news.csv") -df = pd.read_csv("https://raw.githubusercontent.com/cchuang2009/streamlit-News-Forum/main/news.csv") - -# Convert date column to datetime 轉換時間資料格式 -df["Published_date"] = pd.to_datetime(df["Published_date"]) - -# Sort by date, 依照時間排序 -df = df.sort_values("Published_date", ascending=False) - -# Set default selection to current year and month, 預定使用登錄的年月 -now = datetime.now() -default_year_month = now.strftime("%Y-%b") - -# Get unique year-month combinations from dataframe, 利用年月設定資料現選項 -year_months = df["Published_date"].dt.strftime("%Y-%b").unique() -months = sorted(year_months, reverse=True) - -# Add the last year-month to the months list if it's not already there -if default_year_month not in months: - months.append(default_year_month) - - -# Sidebar menu for selecting month, 設定左邊選項 -selected_month = st.sidebar.selectbox("Select Month", months, index=months.index(default_year_month)) - -# Keyword search box, 關鍵詞查詢 -search_term = st.sidebar.text_input("Search News", "") - -# Filter dataframe by selected month and search term, 關鍵詞查詢結果 -filtered_df = df[(df["Published_date"].dt.strftime("%Y-%b") == selected_month) & (df["Title"].str.contains(search_term, case=False))] - -# Display selected news, 顯示選取的項目 -st.write(f"## News for :blue[{selected_month}]") - -for title, source, date in filtered_df[["Title", "Source", "Published_date"]].itertuples(index=False): - with st.expander(f'**{title}**'): - st.write(f"{source}", unsafe_allow_html=True) - st.write(f"*Published on :orange[{date.date()}]*") - -# Show last 5 news articles in sidebar, 列出最新的五個訊息 -st.sidebar.markdown("## Last 5 News Articles") -last_5_articles = df.head()[["Title", "Source", "Published_date"]].values.tolist()[::-1] -for article in last_5_articles: - title, source, date = article - st.sidebar.markdown(f"[{title}] - *Published on :orange[{date.date()}]*") - -# If no selection made, show the most recent news article in main area, 如果如果沒有選項, 使用最後的日期內訊息 -if not selected_month: - st.write(f"# Latest News: [{df.iloc[0]['Title']}]({df.iloc[0]['Source']})") diff --git a/spaces/ccolas/TastyPiano/src/music/pipeline/synth2midi.py b/spaces/ccolas/TastyPiano/src/music/pipeline/synth2midi.py deleted file mode 100644 index b1257a3fa3ec84e51f2ef4bd9861b7d9ede68219..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/pipeline/synth2midi.py +++ /dev/null @@ -1,146 +0,0 @@ -import mido -mido.set_backend('mido.backends.pygame') -from mido import Message, MidiFile, MidiTrack -import time -import pynput -import sys -sys.path.append('../../') -from src.music.config import SYNTH_RECORDED_MIDI_PATH -from datetime import datetime - -#TODO: debug this with other cable, keyboard and sound card -global KEY_PRESSED -KEY_PRESSED = None - -def on_press(key): - global KEY_PRESSED - try: - KEY_PRESSED = key.name - except: - pass - 
-def on_release(key): - global KEY_PRESSED - KEY_PRESSED = None - - -def is_pressed(key): - global KEY_PRESSED - return KEY_PRESSED == key - -# keyboard listener -listener = pynput.keyboard.Listener(on_press=on_press, on_release=on_release) -listener.start() - -LEN_MIDI_RECORDINGS = 30 -class MidiRecorder: - def __init__(self, place='', len_midi_recordings=LEN_MIDI_RECORDINGS): - self.place = place - self.len_midi_recordings = len_midi_recordings - self.port = mido.open_input(mido.get_input_names()[0]) - - def get_filename(self): - now = datetime.now() - return self.place + '_' + now.strftime("%b_%d_%Y_%Hh%Mm%Ss") + '.mid' - - def read_last_midi_msgs(self): - return list(self.port.iter_pending()) - - def live_read(self): - while not is_pressed('esc'): - for msg in self.read_last_midi_msgs(): - print(msg) - - def check_if_recording_started(self, msgs, t_init): - started = False - if len(msgs) > 0: - for m in msgs: - if m.type == 'note_on': - started = True - t_init = time.time() - return started, t_init - - def create_empty_midi(self): - mid = MidiFile() - track = MidiTrack() - mid.tracks.append(track) - track.append(Message('program_change', program=0, time=0)) - return mid, track - - def record_next_N_seconds(self, n=None, saving_path=None): - if saving_path is None: - saving_path = SYNTH_RECORDED_PATH + self.get_filename() - if n is None: - n = self.len_midi_recordings - - print(f'Recoding the next {n} secs.' - f'\n\tRecording starts when the first key is pressed;' - f'\n\tPress Enter to end the recording;' - f'\n\tPress BackSpace (<--) to cancel the recording;' - f'\n\tSaving to {saving_path}') - try: - mid, track = self.create_empty_midi() - started = False - backspace_pressed = False - t_init = time.time() - while not is_pressed('enter') and (time.time() - t_init) < n: - msgs = self.read_last_midi_msgs() - if not started: - started, t_init = self.check_if_recording_started(msgs, t_init) - if started: - print("\n\t--> First note pressed, it's on!") - for m in msgs: - print(m) - if m.type == 'note_on' and m.velocity == 0: - m_off = Message(type='note_off', velocity=127, note=m.note, channel=m.channel, time=m.time) - track.append(m_off) - track.append(m) - if is_pressed('backspace'): - backspace_pressed = True - print('\n \t--> Recording cancelled! (you pressed BackSpace)') - break - # save the file - if not backspace_pressed and len(mid.tracks[0]) > 0: - mid.save(saving_path) - print(f'\n--> Recording saved, duration: {mid.length:.2f} secs, {len(mid.tracks[0])} events.') - except: - print('\n --> The recording failed.') - - - def run(self): - # with pynput.Listener( - # on_press=self.on_press) as listener: - # listener.join() - ready_msg = False - print('Starting the recording loop!\n\tPress BackSpace to cancel the current recording;\n\tPress Esc to quit the loop (only works while not recording)') - while True: - if not ready_msg: - print('-------\nReady to record!') - print('Press space to start a recording\n') - ready_msg = True - - if is_pressed('space'): - self.record_next_N_seconds() - ready_msg = False - if is_pressed('esc'): - print('End of the recording session. 
See you soon!') - break - - -midi_recorder = MidiRecorder(place='home') -midi_recorder.live_read() -# midi_recorder.run() - - -# try: -# controls[msg.control] = msg.value -# except: -# notes.append(msg.note) -# port = mido.open_input() -# while True: -# for msg in port.iter_pending(): -# print(msg) -# -# print('start pause') -# time.sleep(5) -# print('stop pause') \ No newline at end of file diff --git a/spaces/cfwef/gpt/theme.py b/spaces/cfwef/gpt/theme.py deleted file mode 100644 index 1a186aacabf5d982cbe9426a198f2a0b4bdef9d1..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/theme.py +++ /dev/null @@ -1,152 +0,0 @@ -import gradio as gr - -# gradio可用颜色列表 -# gr.themes.utils.colors.slate (石板色) -# gr.themes.utils.colors.gray (灰色) -# gr.themes.utils.colors.zinc (锌色) -# gr.themes.utils.colors.neutral (中性色) -# gr.themes.utils.colors.stone (石头色) -# gr.themes.utils.colors.red (红色) -# gr.themes.utils.colors.orange (橙色) -# gr.themes.utils.colors.amber (琥珀色) -# gr.themes.utils.colors.yellow (黄色) -# gr.themes.utils.colors.lime (酸橙色) -# gr.themes.utils.colors.green (绿色) -# gr.themes.utils.colors.emerald (祖母绿) -# gr.themes.utils.colors.teal (青蓝色) -# gr.themes.utils.colors.cyan (青色) -# gr.themes.utils.colors.sky (天蓝色) -# gr.themes.utils.colors.blue (蓝色) -# gr.themes.utils.colors.indigo (靛蓝色) -# gr.themes.utils.colors.violet (紫罗兰色) -# gr.themes.utils.colors.purple (紫色) -# gr.themes.utils.colors.fuchsia (洋红色) -# gr.themes.utils.colors.pink (粉红色) -# gr.themes.utils.colors.rose (玫瑰色) - -def adjust_theme(): - try: - color_er = gr.themes.utils.colors.pink - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")], - font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, 
*primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - except: - set_theme = None; print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - -advanced_css = """ -/* 设置表格的外边距为1em,内部单元格之间边框合并,空单元格显示. */ -.markdown-body table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} - -/* 设置表格单元格的内边距为5px,边框粗细为1.2px,颜色为--border-color-primary. */ -.markdown-body th, .markdown-body td { - border: 1.2px solid var(--border-color-primary); - padding: 5px; -} - -/* 设置表头背景颜色为rgba(175,184,193,0.2),透明度为0.2. */ -.markdown-body thead { - background-color: rgba(175,184,193,0.2); -} - -/* 设置表头单元格的内边距为0.5em和0.2em. */ -.markdown-body thead th { - padding: .5em .2em; -} - -/* 去掉列表前缀的默认间距,使其与文本线对齐. */ -.markdown-body ol, .markdown-body ul { - padding-inline-start: 2em !important; -} - -/* 设定聊天气泡的样式,包括圆角、最大宽度和阴影等. */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - /* padding: var(--spacing-xl) !important; */ - /* font-size: var(--text-md) !important; */ - /* line-height: var(--line-md) !important; */ - /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ - /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ -} -[data-testid = "bot"] { - max-width: 95%; - /* width: auto !important; */ - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 100%; - /* width: auto !important; */ - border-bottom-right-radius: 0 !important; -} - -/* 行内代码的背景设为淡灰色,设定圆角和间距. 
*/ -.markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 设定代码块的样式,包括背景颜色、内、外边距、圆角。 */ -.markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(175,184,193,0.2); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} -""" \ No newline at end of file diff --git a/spaces/chansung/llm-discord-bot/health_check_200.py b/spaces/chansung/llm-discord-bot/health_check_200.py deleted file mode 100644 index 15d7a435d6e6af9d4d888e68233d84437d2df38e..0000000000000000000000000000000000000000 --- a/spaces/chansung/llm-discord-bot/health_check_200.py +++ /dev/null @@ -1,20 +0,0 @@ -import sys -from http.server import BaseHTTPRequestHandler, HTTPServer - -class S(BaseHTTPRequestHandler): - def _set_headers(self): - self.send_response(200) - self.send_header('Content-type', 'application/json') - self.end_headers() - - def do_GET(self): - self._set_headers() - self.wfile.write(b"") - -def run_dummy_server(server_class=HTTPServer, handler_class=S, port=7860): - server_address = ('', port) - httpd = server_class(server_address, handler_class) - print('Starting httpd...') - httpd.serve_forever() - -run_dummy_server() \ No newline at end of file diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/src/fsner/__init__.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/src/fsner/__init__.py deleted file mode 100644 index 130813cc119c1689912b3de28abb59cb18a92045..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/src/fsner/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .model import FSNERModel -from .tokenizer_utils import FSNERTokenizerUtils - - -__all__ = ["FSNERModel", "FSNERTokenizerUtils"] diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/modeling_tf_outputs.py b/spaces/chendl/compositional_test/transformers/src/transformers/modeling_tf_outputs.py deleted file mode 100644 index f8148b169543fa022e67bbc56f5d75291ea7612d..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/modeling_tf_outputs.py +++ /dev/null @@ -1,989 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import warnings -from dataclasses import dataclass -from typing import List, Optional, Tuple - -import tensorflow as tf - -from .utils import ModelOutput - - -@dataclass -class TFBaseModelOutput(ModelOutput): - """ - Base class for model's outputs, with potential hidden states and attentions. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. 
- hidden_states (`tuple(tf.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - last_hidden_state: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFBaseModelOutputWithNoAttention(ModelOutput): - """ - Base class for model's outputs, with potential hidden states. - - Args: - last_hidden_state (`tf.Tensor` shape `(batch_size, num_channels, height, width)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for - the output of each layer) of shape `(batch_size, num_channels, height, width)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - """ - - last_hidden_state: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor, ...]] = None - - -@dataclass -class TFBaseModelOutputWithPooling(ModelOutput): - """ - Base class for model's outputs that also contains a pooling of the last hidden states. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`): - Last layer hidden-state of the first token of the sequence (classification token) further processed by a - Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence - prediction (classification) objective during pretraining. - - This output is usually *not* a good summary of the semantic content of the input, you're often better with - averaging or pooling the sequence of hidden-states for the whole input sequence. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. 
- """ - - last_hidden_state: tf.Tensor = None - pooler_output: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFBaseModelOutputWithPoolingAndNoAttention(ModelOutput): - """ - Base class for model's outputs that also contains a pooling of the last hidden states. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`): - Sequence of hidden-states at the output of the last layer of the model. - pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`): - Last layer hidden-state after a pooling operation on the spatial dimensions. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for - the output of each layer) of shape `(batch_size, num_channels, height, width)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - """ - - last_hidden_state: tf.Tensor = None - pooler_output: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor, ...]] = None - - -@dataclass -class TFBaseModelOutputWithPoolingAndCrossAttentions(ModelOutput): - """ - Base class for model's outputs that also contains a pooling of the last hidden states. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`): - Last layer hidden-state of the first token of the sequence (classification token) further processed by a - Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence - prediction (classification) objective during pretraining. - - This output is usually *not* a good summary of the semantic content of the input, you're often better with - averaging or pooling the sequence of hidden-states for the whole input sequence. - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. 
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the - weighted average in the cross-attention heads. - """ - - last_hidden_state: tf.Tensor = None - pooler_output: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - cross_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFBaseModelOutputWithPast(ModelOutput): - """ - Base class for model's outputs that may also contain a past key/values (to speed up sequential decoding). - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - - If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, - hidden_size)` is output. - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - last_hidden_state: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFBaseModelOutputWithCrossAttentions(ModelOutput): - """ - Base class for model's outputs, with potential hidden states and attentions. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(tf.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the - weighted average in the cross-attention heads. - """ - - last_hidden_state: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - cross_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFBaseModelOutputWithPastAndCrossAttentions(ModelOutput): - """ - Base class for model's outputs that may also contain a past key/values (to speed up sequential decoding). - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - - If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, - hidden_size)` is output. - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(tf.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the - weighted average in the cross-attention heads. 
- """ - - last_hidden_state: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - cross_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSeq2SeqModelOutput(ModelOutput): - """ - Base class for model encoder's outputs that also contains : pre-computed hidden states that can speed up sequential - decoding. - - Args: - last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the decoder of the model. - - If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, - hidden_size)` is output. - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be - used (see `past_key_values` input) to speed up sequential decoding. - decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. - decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. - cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the - weighted average in the cross-attention heads. - encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder of the model. - encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. - encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. 
- """ - - last_hidden_state: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - decoder_attentions: Optional[Tuple[tf.Tensor]] = None - cross_attentions: Optional[Tuple[tf.Tensor]] = None - encoder_last_hidden_state: Optional[tf.Tensor] = None - encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - encoder_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFCausalLMOutput(ModelOutput): - """ - Base class for causal language model (or autoregressive) outputs. - - Args: - loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided): - Language modeling loss (for next-token prediction). - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFCausalLMOutputWithPast(ModelOutput): - """ - Base class for causal language model (or autoregressive) outputs. - - Args: - loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided): - Language modeling loss (for next-token prediction). - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. 
- - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFCausalLMOutputWithCrossAttentions(ModelOutput): - """ - Base class for causal language model (or autoregressive) outputs. - - Args: - loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided): - Language modeling loss (for next-token prediction). - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the - weighted average in the cross-attention heads. - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - cross_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFMaskedLMOutput(ModelOutput): - """ - Base class for masked language models outputs. - - Args: - loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided): - Masked language modeling (MLM) loss. - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. 
- - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSeq2SeqLMOutput(ModelOutput): - """ - Base class for sequence-to-sequence language models outputs. - - Args: - loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided): - Language modeling loss. - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be - used (see `past_key_values` input) to speed up sequential decoding. - decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. - decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. - cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the - weighted average in the cross-attention heads. - encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder of the model. - encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. 
- encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - decoder_attentions: Optional[Tuple[tf.Tensor]] = None - cross_attentions: Optional[Tuple[tf.Tensor]] = None - encoder_last_hidden_state: Optional[tf.Tensor] = None - encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - encoder_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFNextSentencePredictorOutput(ModelOutput): - """ - Base class for outputs of models predicting if two sentences are consecutive or not. - - Args: - loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `next_sentence_label` is provided): - Next sentence prediction loss. - logits (`tf.Tensor` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSequenceClassifierOutput(ModelOutput): - """ - Base class for outputs of sentence classification models. - - Args: - loss (`tf.Tensor` of shape `(batch_size, )`, *optional*, returned when `labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Classification (or regression if config.num_labels==1) scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. 
- """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSeq2SeqSequenceClassifierOutput(ModelOutput): - """ - Base class for outputs of sequence-to-sequence sentence classification models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `label` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Classification (or regression if config.num_labels==1) scores (before SoftMax). - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be - used (see `past_key_values` input) to speed up sequential decoding. - decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. - decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. - cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)` - encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder of the model. - encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. - encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. 
- """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - decoder_attentions: Optional[Tuple[tf.Tensor]] = None - cross_attentions: Optional[Tuple[tf.Tensor]] = None - encoder_last_hidden_state: Optional[tf.Tensor] = None - encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - encoder_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSemanticSegmenterOutput(ModelOutput): - """ - Base class for outputs of semantic segmentation models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels, logits_height, logits_width)`): - Classification scores for each pixel. - - - - The logits returned do not necessarily have the same size as the `pixel_values` passed as inputs. This is - to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the - original image size as post-processing. You should always check your logits shape and resize as needed. - - - - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for - the output of each layer) of shape `(batch_size, patch_size, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSemanticSegmenterOutputWithNoAttention(ModelOutput): - """ - Base class for outputs of semantic segmentation models that do not output attention scores. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels, logits_height, logits_width)`): - Classification scores for each pixel. - - - - The logits returned do not necessarily have the same size as the `pixel_values` passed as inputs. This is - to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the - original image size as post-processing. You should always check your logits shape and resize as needed. - - - - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for - the output of each layer) of shape `(batch_size, patch_size, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
- """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFImageClassifierOutput(ModelOutput): - """ - Base class for outputs of image classification models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Classification (or regression if config.num_labels==1) scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for - the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called - feature maps) of the model at the output of each stage. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFMultipleChoiceModelOutput(ModelOutput): - """ - Base class for outputs of multiple choice models. - - Args: - loss (`tf.Tensor` of shape *(batch_size, )*, *optional*, returned when `labels` is provided): - Classification loss. - logits (`tf.Tensor` of shape `(batch_size, num_choices)`): - *num_choices* is the second dimension of the input tensors. (see *input_ids* above). - - Classification scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFTokenClassifierOutput(ModelOutput): - """ - Base class for outputs of token classification models. - - Args: - loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of unmasked labels, returned when `labels` is provided) : - Classification loss. - logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.num_labels)`): - Classification scores (before SoftMax). 
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFQuestionAnsweringModelOutput(ModelOutput): - """ - Base class for outputs of question answering models. - - Args: - loss (`tf.Tensor` of shape `(batch_size, )`, *optional*, returned when `start_positions` and `end_positions` are provided): - Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. - start_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`): - Span-start scores (before SoftMax). - end_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`): - Span-end scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - start_logits: tf.Tensor = None - end_logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSeq2SeqQuestionAnsweringModelOutput(ModelOutput): - """ - Base class for outputs of sequence-to-sequence question answering models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. - start_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`): - Span-start scores (before SoftMax). - end_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`): - Span-end scores (before SoftMax). - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be - used (see `past_key_values` input) to speed up sequential decoding. 
- decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. - decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. - encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder of the model. - encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. - encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the - self-attention heads. - """ - - loss: Optional[tf.Tensor] = None - start_logits: tf.Tensor = None - end_logits: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - decoder_attentions: Optional[Tuple[tf.Tensor]] = None - encoder_last_hidden_state: Optional[tf.Tensor] = None - encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None - encoder_attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFSequenceClassifierOutputWithPast(ModelOutput): - """ - Base class for outputs of sentence classification models. - - Args: - loss (`tf.Tensor` of shape `(batch_size, )`, *optional*, returned when `labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Classification (or regression if config.num_labels==1) scores (before SoftMax). - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - past_key_values: Optional[List[tf.Tensor]] = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -@dataclass -class TFImageClassifierOutputWithNoAttention(ModelOutput): - """ - Base class for outputs of image classification models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`): - Classification (or regression if config.num_labels==1) scores (before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for - the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called - feature maps) of the model at the output of each stage. - """ - - loss: Optional[tf.Tensor] = None - logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor, ...]] = None - - -@dataclass -class TFMaskedImageModelingOutput(ModelOutput): - """ - Base class for outputs of masked image completion / in-painting models. - - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `bool_masked_pos` is provided): - Reconstruction loss. - reconstruction (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`): - Reconstructed / completed images. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when - `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for - the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called - feature maps) of the model at the output of each stage. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when - `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`. - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[tf.Tensor] = None - reconstruction: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - @property - def logits(self): - warnings.warn( - "logits attribute is deprecated and will be removed in version 5 of Transformers." 
- " Please use the reconstruction attribute to retrieve the final output instead.", - FutureWarning, - ) - return self.reconstruction diff --git a/spaces/chopey/DhivehiTransliteration/app.py b/spaces/chopey/DhivehiTransliteration/app.py deleted file mode 100644 index 03e80c8d17a7d8ddc2d5f9532b068e13f4daf9e0..0000000000000000000000000000000000000000 --- a/spaces/chopey/DhivehiTransliteration/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import torch -from transformers import T5Tokenizer, T5ForConditionalGeneration, Trainer, TrainingArguments - - -def transliteration(source_word): - #return "Hello " + name + "!!" - #source_word = "Manik aai Ameela maqaamun vakikuran hushahalhaifi" - source_word_str = source_word.lower() - - tokenizer = T5Tokenizer.from_pretrained("chopey/dvt5-base") - model = T5ForConditionalGeneration.from_pretrained('chopey/model_t5_base') - - input_ids = tokenizer.encode(source_word_str, return_tensors="pt") - output_ids = model.generate(input_ids, max_length=512) - transliteration = tokenizer.decode(output_ids[0], skip_special_tokens=True) - - return transliteration - -#test -iface = gr.Interface(fn=transliteration, inputs="text", outputs="text") -iface.launch() - - - - - - diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/FontFile.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/FontFile.py deleted file mode 100644 index 5ec0a6632e3182382467688662ebc5e6c324da91..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/FontFile.py +++ /dev/null @@ -1,110 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# base class for raster font file parsers -# -# history: -# 1997-06-05 fl created -# 1997-08-19 fl restrict image width -# -# Copyright (c) 1997-1998 by Secret Labs AB -# Copyright (c) 1997-1998 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - - -import os - -from . 
import Image, _binary - -WIDTH = 800 - - -def puti16(fp, values): - """Write network order (big-endian) 16-bit sequence""" - for v in values: - if v < 0: - v += 65536 - fp.write(_binary.o16be(v)) - - -class FontFile: - """Base class for raster font file handlers.""" - - bitmap = None - - def __init__(self): - self.info = {} - self.glyph = [None] * 256 - - def __getitem__(self, ix): - return self.glyph[ix] - - def compile(self): - """Create metrics and bitmap""" - - if self.bitmap: - return - - # create bitmap large enough to hold all data - h = w = maxwidth = 0 - lines = 1 - for glyph in self: - if glyph: - d, dst, src, im = glyph - h = max(h, src[3] - src[1]) - w = w + (src[2] - src[0]) - if w > WIDTH: - lines += 1 - w = src[2] - src[0] - maxwidth = max(maxwidth, w) - - xsize = maxwidth - ysize = lines * h - - if xsize == 0 and ysize == 0: - return "" - - self.ysize = h - - # paste glyphs into bitmap - self.bitmap = Image.new("1", (xsize, ysize)) - self.metrics = [None] * 256 - x = y = 0 - for i in range(256): - glyph = self[i] - if glyph: - d, dst, src, im = glyph - xx = src[2] - src[0] - # yy = src[3] - src[1] - x0, y0 = x, y - x = x + xx - if x > WIDTH: - x, y = 0, y + h - x0, y0 = x, y - x = xx - s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0 - self.bitmap.paste(im.crop(src), s) - self.metrics[i] = d, dst, s - - def save(self, filename): - """Save font""" - - self.compile() - - # font data - self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG") - - # font metrics - with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp: - fp.write(b"PILfont\n") - fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!! - fp.write(b"DATA\n") - for id in range(256): - m = self.metrics[id] - if not m: - puti16(fp, [0] * 10) - else: - puti16(fp, m[0] + m[1] + m[2]) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/summary.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/summary.py deleted file mode 100644 index ef152cad769074d092e34b03a337b5c896560415..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/summary.py +++ /dev/null @@ -1,39 +0,0 @@ -from typing import Optional - -from clickhouse_connect.datatypes.registry import get_from_name - -from clickhouse_connect.driver.query import QueryResult - - -class QuerySummary: - summary = {} - - def __init__(self, summary: Optional[dict] = None): - if summary is not None: - self.summary = summary - - @property - def written_rows(self) -> int: - return int(self.summary.get('written_rows', 0)) - - def written_bytes(self) -> int: - return int(self.summary.get('written_bytes', 0)) - - def query_id(self) -> str: - return self.summary.get('query_id', '') - - def as_query_result(self) -> QueryResult: - data = [] - column_names = [] - column_types = [] - str_type = get_from_name('String') - int_type = get_from_name('Int64') - for key, value in self.summary.items(): - column_names.append(key) - if value.isnumeric(): - data.append(int(value)) - column_types.append(int_type) - else: - data.append(value) - column_types.append(str_type) - return QueryResult([data], column_names=tuple(column_names), column_types=tuple(column_types)) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cycler.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cycler.py deleted file mode 100644 
index f86b68de64b8066b98d8fa2d92bf5983ea582237..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cycler.py +++ /dev/null @@ -1,501 +0,0 @@ -""" -Cycler -====== - -Cycling through combinations of values, producing dictionaries. - -You can add cyclers:: - - from cycler import cycler - cc = (cycler(color=list('rgb')) + - cycler(linestyle=['-', '--', '-.'])) - for d in cc: - print(d) - -Results in:: - - {'color': 'r', 'linestyle': '-'} - {'color': 'g', 'linestyle': '--'} - {'color': 'b', 'linestyle': '-.'} - - -You can multiply cyclers:: - - from cycler import cycler - cc = (cycler(color=list('rgb')) * - cycler(linestyle=['-', '--', '-.'])) - for d in cc: - print(d) - -Results in:: - - {'color': 'r', 'linestyle': '-'} - {'color': 'r', 'linestyle': '--'} - {'color': 'r', 'linestyle': '-.'} - {'color': 'g', 'linestyle': '-'} - {'color': 'g', 'linestyle': '--'} - {'color': 'g', 'linestyle': '-.'} - {'color': 'b', 'linestyle': '-'} - {'color': 'b', 'linestyle': '--'} - {'color': 'b', 'linestyle': '-.'} -""" - - -import copy -from functools import reduce -from itertools import product, cycle -from operator import mul, add - -__version__ = '0.10.0' - - -def _process_keys(left, right): - """ - Helper function to compose cycler keys. - - Parameters - ---------- - left, right : iterable of dictionaries or None - The cyclers to be composed. - - Returns - ------- - keys : set - The keys in the composition of the two cyclers. - """ - l_peek = next(iter(left)) if left is not None else {} - r_peek = next(iter(right)) if right is not None else {} - l_key = set(l_peek.keys()) - r_key = set(r_peek.keys()) - if l_key & r_key: - raise ValueError("Can not compose overlapping cycles") - return l_key | r_key - - -def concat(left, right): - r""" - Concatenate `Cycler`\s, as if chained using `itertools.chain`. - - The keys must match exactly. - - Examples - -------- - >>> num = cycler('a', range(3)) - >>> let = cycler('a', 'abc') - >>> num.concat(let) - cycler('a', [0, 1, 2, 'a', 'b', 'c']) - - Returns - ------- - `Cycler` - The concatenated cycler. - """ - if left.keys != right.keys: - raise ValueError("Keys do not match:\n" - "\tIntersection: {both!r}\n" - "\tDisjoint: {just_one!r}".format( - both=left.keys & right.keys, - just_one=left.keys ^ right.keys)) - _l = left.by_key() - _r = right.by_key() - return reduce(add, (_cycler(k, _l[k] + _r[k]) for k in left.keys)) - - -class Cycler: - """ - Composable cycles. - - This class has compositions methods: - - ``+`` - for 'inner' products (zip) - - ``+=`` - in-place ``+`` - - ``*`` - for outer products (`itertools.product`) and integer multiplication - - ``*=`` - in-place ``*`` - - and supports basic slicing via ``[]``. - - Parameters - ---------- - left, right : Cycler or None - The 'left' and 'right' cyclers. - op : func or None - Function which composes the 'left' and 'right' cyclers. - """ - - def __call__(self): - return cycle(self) - - def __init__(self, left, right=None, op=None): - """ - Semi-private init. - - Do not use this directly, use `cycler` function instead. 
- """ - if isinstance(left, Cycler): - self._left = Cycler(left._left, left._right, left._op) - elif left is not None: - # Need to copy the dictionary or else that will be a residual - # mutable that could lead to strange errors - self._left = [copy.copy(v) for v in left] - else: - self._left = None - - if isinstance(right, Cycler): - self._right = Cycler(right._left, right._right, right._op) - elif right is not None: - # Need to copy the dictionary or else that will be a residual - # mutable that could lead to strange errors - self._right = [copy.copy(v) for v in right] - else: - self._right = None - - self._keys = _process_keys(self._left, self._right) - self._op = op - - def __contains__(self, k): - return k in self._keys - - @property - def keys(self): - """The keys this Cycler knows about.""" - return set(self._keys) - - def change_key(self, old, new): - """ - Change a key in this cycler to a new name. - Modification is performed in-place. - - Does nothing if the old key is the same as the new key. - Raises a ValueError if the new key is already a key. - Raises a KeyError if the old key isn't a key. - """ - if old == new: - return - if new in self._keys: - raise ValueError( - "Can't replace {old} with {new}, {new} is already a key" - .format(old=old, new=new) - ) - if old not in self._keys: - raise KeyError("Can't replace {old} with {new}, {old} is not a key" - .format(old=old, new=new)) - - self._keys.remove(old) - self._keys.add(new) - - if self._right is not None and old in self._right.keys: - self._right.change_key(old, new) - - # self._left should always be non-None - # if self._keys is non-empty. - elif isinstance(self._left, Cycler): - self._left.change_key(old, new) - else: - # It should be completely safe at this point to - # assume that the old key can be found in each - # iteration. - self._left = [{new: entry[old]} for entry in self._left] - - @classmethod - def _from_iter(cls, label, itr): - """ - Class method to create 'base' Cycler objects - that do not have a 'right' or 'op' and for which - the 'left' object is not another Cycler. - - Parameters - ---------- - label : str - The property key. - - itr : iterable - Finite length iterable of the property values. - - Returns - ------- - `Cycler` - New 'base' cycler. - """ - ret = cls(None) - ret._left = list({label: v} for v in itr) - ret._keys = {label} - return ret - - def __getitem__(self, key): - # TODO : maybe add numpy style fancy slicing - if isinstance(key, slice): - trans = self.by_key() - return reduce(add, (_cycler(k, v[key]) for k, v in trans.items())) - else: - raise ValueError("Can only use slices with Cycler.__getitem__") - - def __iter__(self): - if self._right is None: - for left in self._left: - yield dict(left) - else: - for a, b in self._op(self._left, self._right): - out = {} - out.update(a) - out.update(b) - yield out - - def __add__(self, other): - """ - Pair-wise combine two equal length cyclers (zip). - - Parameters - ---------- - other : Cycler - """ - if len(self) != len(other): - raise ValueError("Can only add equal length cycles, " - f"not {len(self)} and {len(other)}") - return Cycler(self, other, zip) - - def __mul__(self, other): - """ - Outer product of two cyclers (`itertools.product`) or integer - multiplication. 
- - Parameters - ---------- - other : Cycler or int - """ - if isinstance(other, Cycler): - return Cycler(self, other, product) - elif isinstance(other, int): - trans = self.by_key() - return reduce(add, (_cycler(k, v*other) for k, v in trans.items())) - else: - return NotImplemented - - def __rmul__(self, other): - return self * other - - def __len__(self): - op_dict = {zip: min, product: mul} - if self._right is None: - return len(self._left) - l_len = len(self._left) - r_len = len(self._right) - return op_dict[self._op](l_len, r_len) - - def __iadd__(self, other): - """ - In-place pair-wise combine two equal length cyclers (zip). - - Parameters - ---------- - other : Cycler - """ - if not isinstance(other, Cycler): - raise TypeError("Cannot += with a non-Cycler object") - # True shallow copy of self is fine since this is in-place - old_self = copy.copy(self) - self._keys = _process_keys(old_self, other) - self._left = old_self - self._op = zip - self._right = Cycler(other._left, other._right, other._op) - return self - - def __imul__(self, other): - """ - In-place outer product of two cyclers (`itertools.product`). - - Parameters - ---------- - other : Cycler - """ - if not isinstance(other, Cycler): - raise TypeError("Cannot *= with a non-Cycler object") - # True shallow copy of self is fine since this is in-place - old_self = copy.copy(self) - self._keys = _process_keys(old_self, other) - self._left = old_self - self._op = product - self._right = Cycler(other._left, other._right, other._op) - return self
- - def __eq__(self, other): - if len(self) != len(other): - return False - if self.keys ^ other.keys: - return False - return all(a == b for a, b in zip(self, other)) - - def __ne__(self, other): - return not (self == other) - - __hash__ = None - - def __repr__(self): - op_map = {zip: '+', product: '*'} - if self._right is None: - lab = self.keys.pop() - itr = list(v[lab] for v in self) - return f"cycler({lab!r}, {itr!r})" - else: - op = op_map.get(self._op, '?') - msg = "({left!r} {op} {right!r})" - return msg.format(left=self._left, op=op, right=self._right) - - def _repr_html_(self): - # a table showing the value of each key through a full cycle - output = "<table>" - sorted_keys = sorted(self.keys, key=repr) - for key in sorted_keys: - output += f"<th>{key!r}</th>" - for d in iter(self): - output += "<tr>" - for k in sorted_keys: - output += f"<td>{d[k]!r}</td>" - output += "</tr>" - output += "</table>" - return output
- - def by_key(self): - """ - Values by key. - - This returns the transposed values of the cycler. Iterating - over a `Cycler` yields dicts with a single value for each key, - this method returns a `dict` of `list` which are the values - for the given key. - - The returned value can be used to create an equivalent `Cycler` - using only `+`. - - Returns - ------- - transpose : dict - dict of lists of the values for each key. - """ - - # TODO : sort out if this is a bottle neck, if there is a better way - # and if we care. - - keys = self.keys - out = {k: list() for k in keys} - - for d in self: - for k in keys: - out[k].append(d[k]) - return out - - # for back compatibility - _transpose = by_key - - def simplify(self): - """ - Simplify the cycler into a sum (but no products) of cyclers. - - Returns - ------- - simple : Cycler - """ - # TODO: sort out if it is worth the effort to make sure this is - # balanced. Currently it is - # (((a + b) + c) + d) vs - # ((a + b) + (c + d)) - # I would believe that there are some performance implications - trans = self.by_key() - return reduce(add, (_cycler(k, v) for k, v in trans.items())) - - concat = concat
- - -def cycler(*args, **kwargs): - """ - Create a new `Cycler` object from a single positional argument, - a pair of positional arguments, or the combination of keyword arguments. - - cycler(arg) - cycler(label1=itr1[, label2=iter2[, ...]]) - cycler(label, itr) - - Form 1 simply copies a given `Cycler` object. - - Form 2 composes a `Cycler` as an inner product of the - pairs of keyword arguments. In other words, all of the - iterables are cycled simultaneously, as if through zip(). - - Form 3 creates a `Cycler` from a label and an iterable. - This is useful for when the label cannot be a keyword argument - (e.g., an integer or a name that has a space in it). - - Parameters - ---------- - arg : Cycler - Copy constructor for Cycler (does a shallow copy of iterables). - label : name - The property key. In the 2-arg form of the function, - the label can be any hashable object. In the keyword argument - form of the function, it must be a valid python identifier. - itr : iterable - Finite length iterable of the property values. - Can be a single-property `Cycler` that would - be like a key change, but as a shallow copy. - - Returns - ------- - cycler : Cycler - New `Cycler` for the given property - - """ - if args and kwargs: - raise TypeError("cycler() can only accept positional OR keyword " - "arguments -- not both.") - - if len(args) == 1: - if not isinstance(args[0], Cycler): - raise TypeError("If only one positional argument given, it must " - "be a Cycler instance.") - return Cycler(args[0]) - elif len(args) == 2: - return _cycler(*args) - elif len(args) > 2: - raise TypeError("Only a single Cycler can be accepted as the lone " - "positional argument. Use keyword arguments instead.") - - if kwargs: - return reduce(add, (_cycler(k, v) for k, v in kwargs.items())) - - raise TypeError("Must have at least a positional OR keyword arguments") - - -def _cycler(label, itr): - """ - Create a new `Cycler` object from a property name and iterable of values. - - Parameters - ---------- - label : hashable - The property key. - itr : iterable - Finite length iterable of the property values.
- - Returns - ------- - cycler : Cycler - New `Cycler` for the given property - """ - if isinstance(itr, Cycler): - keys = itr.keys - if len(keys) != 1: - msg = "Can not create Cycler from a multi-property Cycler" - raise ValueError(msg) - - lab = keys.pop() - # Doesn't need to be a new list because - # _from_iter() will be creating that new list anyway. - itr = (v[lab] for v in itr) - - return Cycler._from_iter(label, itr) diff --git a/spaces/cihyFjudo/fairness-paper-search/Tamil Actress Yvijaya Hot Sex Photo.md b/spaces/cihyFjudo/fairness-paper-search/Tamil Actress Yvijaya Hot Sex Photo.md deleted file mode 100644 index 1d1ac7a5e77d9da712aa960dcda1b3f8b92ef862..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tamil Actress Yvijaya Hot Sex Photo.md +++ /dev/null @@ -1,9 +0,0 @@ - -

-

Tamil Actress Yvijaya Hot Sex Photo


Download ••• https://tinurli.com/2uwk8J



-


-


-


-

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Tamil Aunties Upskirt Lifting Saree Peeing Photosl __FULL__.md b/spaces/cihyFjudo/fairness-paper-search/Tamil Aunties Upskirt Lifting Saree Peeing Photosl __FULL__.md deleted file mode 100644 index eceb215b451d8a6cc4d748c2697f154f4768b270..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tamil Aunties Upskirt Lifting Saree Peeing Photosl __FULL__.md +++ /dev/null @@ -1,7 +0,0 @@ - -


-


-

Tamil Aunties Upskirt Lifting Saree Peeing Photosl


DOWNLOADhttps://tinurli.com/2uwkjx



-


aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/The Motu Patlu - King of Kings Full Movie Download Experience the Thrill of Nuclear Fusion in Animation.md b/spaces/cihyFjudo/fairness-paper-search/The Motu Patlu - King of Kings Full Movie Download Experience the Thrill of Nuclear Fusion in Animation.md deleted file mode 100644 index d394254926225aaa34ecdb2bcc737c9b7e524619..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/The Motu Patlu - King of Kings Full Movie Download Experience the Thrill of Nuclear Fusion in Animation.md +++ /dev/null @@ -1,6 +0,0 @@ -

the Motu Patlu - King of Kings full movie download


DOWNLOADhttps://tinurli.com/2uwk99



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Umbai Gazal Malayalam MP3 Free Download The Ultimate Playlist of Gazals in Malayalam.md b/spaces/cihyFjudo/fairness-paper-search/Umbai Gazal Malayalam MP3 Free Download The Ultimate Playlist of Gazals in Malayalam.md deleted file mode 100644 index 636f9ccc1ebde5f3e7210d20b4850d87727531fa..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Umbai Gazal Malayalam MP3 Free Download The Ultimate Playlist of Gazals in Malayalam.md +++ /dev/null @@ -1,6 +0,0 @@ -

umbaigazalmalayalammp3freedownload


Download Zip ✏ ✏ ✏ https://tinurli.com/2uwkJq



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageMode.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageMode.py deleted file mode 100644 index a0b33514296df734501c553493b0a535eca49046..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageMode.py +++ /dev/null @@ -1,90 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard mode descriptors -# -# History: -# 2006-03-20 fl Added -# -# Copyright (c) 2006 by Secret Labs AB. -# Copyright (c) 2006 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import sys - -# mode descriptor cache -_modes = None - - -class ModeDescriptor: - """Wrapper for mode strings.""" - - def __init__(self, mode, bands, basemode, basetype, typestr): - self.mode = mode - self.bands = bands - self.basemode = basemode - self.basetype = basetype - self.typestr = typestr - - def __str__(self): - return self.mode - - -def getmode(mode): - """Gets a mode descriptor for the given mode.""" - global _modes - if not _modes: - # initialize mode cache - modes = {} - endian = "<" if sys.byteorder == "little" else ">" - for m, (basemode, basetype, bands, typestr) in { - # core modes - # Bits need to be extended to bytes - "1": ("L", "L", ("1",), "|b1"), - "L": ("L", "L", ("L",), "|u1"), - "I": ("L", "I", ("I",), endian + "i4"), - "F": ("L", "F", ("F",), endian + "f4"), - "P": ("P", "L", ("P",), "|u1"), - "RGB": ("RGB", "L", ("R", "G", "B"), "|u1"), - "RGBX": ("RGB", "L", ("R", "G", "B", "X"), "|u1"), - "RGBA": ("RGB", "L", ("R", "G", "B", "A"), "|u1"), - "CMYK": ("RGB", "L", ("C", "M", "Y", "K"), "|u1"), - "YCbCr": ("RGB", "L", ("Y", "Cb", "Cr"), "|u1"), - # UNDONE - unsigned |u1i1i1 - "LAB": ("RGB", "L", ("L", "A", "B"), "|u1"), - "HSV": ("RGB", "L", ("H", "S", "V"), "|u1"), - # extra experimental modes - "RGBa": ("RGB", "L", ("R", "G", "B", "a"), "|u1"), - "BGR;15": ("RGB", "L", ("B", "G", "R"), "|u1"), - "BGR;16": ("RGB", "L", ("B", "G", "R"), "|u1"), - "BGR;24": ("RGB", "L", ("B", "G", "R"), "|u1"), - "LA": ("L", "L", ("L", "A"), "|u1"), - "La": ("L", "L", ("L", "a"), "|u1"), - "PA": ("RGB", "L", ("P", "A"), "|u1"), - }.items(): - modes[m] = ModeDescriptor(m, bands, basemode, basetype, typestr) - # mapping modes - for i16mode, typestr in { - # I;16 == I;16L, and I;32 == I;32L - "I;16": "u2", - "I;16BS": ">i2", - "I;16N": endian + "u2", - "I;16NS": endian + "i2", - "I;32": "u4", - "I;32L": "i4", - "I;32LS": ">> import sys - >>> handler = logging.StreamHandler(sys.stdout) - >>> formatter = LevelFormatter( - ... fmt={ - ... '*': '[%(levelname)s] %(message)s', - ... 'DEBUG': '%(name)s [%(levelname)s] %(message)s', - ... 'INFO': '%(message)s', - ... 
}) - >>> handler.setFormatter(formatter) - >>> log = logging.getLogger('test') - >>> log.setLevel(logging.DEBUG) - >>> log.addHandler(handler) - >>> log.debug('this uses a custom format string') - test [DEBUG] this uses a custom format string - >>> log.info('this also uses a custom format string') - this also uses a custom format string - >>> log.warning("this one uses the default format string") - [WARNING] this one uses the default format string - """ - - def __init__(self, fmt=None, datefmt=None, style="%"): - if style != "%": - raise ValueError( - "only '%' percent style is supported in both python 2 and 3" - ) - if fmt is None: - fmt = DEFAULT_FORMATS - if isinstance(fmt, str): - default_format = fmt - custom_formats = {} - elif isinstance(fmt, Mapping): - custom_formats = dict(fmt) - default_format = custom_formats.pop("*", None) - else: - raise TypeError("fmt must be a str or a dict of str: %r" % fmt) - super(LevelFormatter, self).__init__(default_format, datefmt) - self.default_format = self._fmt - self.custom_formats = {} - for level, fmt in custom_formats.items(): - level = logging._checkLevel(level) - self.custom_formats[level] = fmt - - def format(self, record): - if self.custom_formats: - fmt = self.custom_formats.get(record.levelno, self.default_format) - if self._fmt != fmt: - self._fmt = fmt - # for python >= 3.2, _style needs to be set if _fmt changes - if PercentStyle: - self._style = PercentStyle(fmt) - return super(LevelFormatter, self).format(record) - - -def configLogger(**kwargs): - """A more sophisticated logging system configuation manager. - - This is more or less the same as :py:func:`logging.basicConfig`, - with some additional options and defaults. - - The default behaviour is to create a ``StreamHandler`` which writes to - sys.stderr, set a formatter using the ``DEFAULT_FORMATS`` strings, and add - the handler to the top-level library logger ("fontTools"). - - A number of optional keyword arguments may be specified, which can alter - the default behaviour. - - Args: - - logger: Specifies the logger name or a Logger instance to be - configured. (Defaults to "fontTools" logger). Unlike ``basicConfig``, - this function can be called multiple times to reconfigure a logger. - If the logger or any of its children already exists before the call is - made, they will be reset before the new configuration is applied. - filename: Specifies that a ``FileHandler`` be created, using the - specified filename, rather than a ``StreamHandler``. - filemode: Specifies the mode to open the file, if filename is - specified. (If filemode is unspecified, it defaults to ``a``). - format: Use the specified format string for the handler. This - argument also accepts a dictionary of format strings keyed by - level name, to allow customising the records appearance for - specific levels. The special ``'*'`` key is for 'any other' level. - datefmt: Use the specified date/time format. - level: Set the logger level to the specified level. - stream: Use the specified stream to initialize the StreamHandler. Note - that this argument is incompatible with ``filename`` - if both - are present, ``stream`` is ignored. - handlers: If specified, this should be an iterable of already created - handlers, which will be added to the logger. Any handler in the - list which does not have a formatter assigned will be assigned the - formatter created in this function. - filters: If specified, this should be an iterable of already created - filters. 
If the ``handlers`` do not already have filters assigned, - these filters will be added to them. - propagate: All loggers have a ``propagate`` attribute which determines - whether to continue searching for handlers up the logging hierarchy. - If not provided, the "propagate" attribute will be set to ``False``. - """ - # using kwargs to enforce keyword-only arguments in py2. - handlers = kwargs.pop("handlers", None) - if handlers is None: - if "stream" in kwargs and "filename" in kwargs: - raise ValueError( - "'stream' and 'filename' should not be " "specified together" - ) - else: - if "stream" in kwargs or "filename" in kwargs: - raise ValueError( - "'stream' or 'filename' should not be " - "specified together with 'handlers'" - ) - if handlers is None: - filename = kwargs.pop("filename", None) - mode = kwargs.pop("filemode", "a") - if filename: - h = logging.FileHandler(filename, mode) - else: - stream = kwargs.pop("stream", None) - h = logging.StreamHandler(stream) - handlers = [h] - # By default, the top-level library logger is configured. - logger = kwargs.pop("logger", "fontTools") - if not logger or isinstance(logger, str): - # empty "" or None means the 'root' logger - logger = logging.getLogger(logger) - # before (re)configuring, reset named logger and its children (if exist) - _resetExistingLoggers(parent=logger.name) - # use DEFAULT_FORMATS if 'format' is None - fs = kwargs.pop("format", None) - dfs = kwargs.pop("datefmt", None) - # XXX: '%' is the only format style supported on both py2 and 3 - style = kwargs.pop("style", "%") - fmt = LevelFormatter(fs, dfs, style) - filters = kwargs.pop("filters", []) - for h in handlers: - if h.formatter is None: - h.setFormatter(fmt) - if not h.filters: - for f in filters: - h.addFilter(f) - logger.addHandler(h) - if logger.name != "root": - # stop searching up the hierarchy for handlers - logger.propagate = kwargs.pop("propagate", False) - # set a custom severity level - level = kwargs.pop("level", None) - if level is not None: - logger.setLevel(level) - if kwargs: - keys = ", ".join(kwargs.keys()) - raise ValueError("Unrecognised argument(s): %s" % keys) - - -def _resetExistingLoggers(parent="root"): - """Reset the logger named 'parent' and all its children to their initial - state, if they already exist in the current configuration. - """ - root = logging.root - # get sorted list of all existing loggers - existing = sorted(root.manager.loggerDict.keys()) - if parent == "root": - # all the existing loggers are children of 'root' - loggers_to_reset = [parent] + existing - elif parent not in existing: - # nothing to do - return - elif parent in existing: - loggers_to_reset = [parent] - # collect children, starting with the entry after parent name - i = existing.index(parent) + 1 - prefixed = parent + "." - pflen = len(prefixed) - num_existing = len(existing) - while i < num_existing: - if existing[i][:pflen] == prefixed: - loggers_to_reset.append(existing[i]) - i += 1 - for name in loggers_to_reset: - if name == "root": - root.setLevel(logging.WARNING) - for h in root.handlers[:]: - root.removeHandler(h) - for f in root.filters[:]: - root.removeFilters(f) - root.disabled = False - else: - logger = root.manager.loggerDict[name] - logger.level = logging.NOTSET - logger.handlers = [] - logger.filters = [] - logger.propagate = True - logger.disabled = False - - -class Timer(object): - """Keeps track of overall time and split/lap times. 
- - >>> import time - >>> timer = Timer() - >>> time.sleep(0.01) - >>> print("First lap:", timer.split()) - First lap: ... - >>> time.sleep(0.02) - >>> print("Second lap:", timer.split()) - Second lap: ... - >>> print("Overall time:", timer.time()) - Overall time: ... - - Can be used as a context manager inside with-statements. - - >>> with Timer() as t: - ... time.sleep(0.01) - >>> print("%0.3f seconds" % t.elapsed) - 0... seconds - - If initialised with a logger, it can log the elapsed time automatically - upon exiting the with-statement. - - >>> import logging - >>> log = logging.getLogger("my-fancy-timer-logger") - >>> configLogger(logger=log, level="DEBUG", format="%(message)s", stream=sys.stdout) - >>> with Timer(log, 'do something'): - ... time.sleep(0.01) - Took ... to do something - - The same Timer instance, holding a reference to a logger, can be reused - in multiple with-statements, optionally with different messages or levels. - - >>> timer = Timer(log) - >>> with timer(): - ... time.sleep(0.01) - elapsed time: ...s - >>> with timer('redo it', level=logging.INFO): - ... time.sleep(0.02) - Took ... to redo it - - It can also be used as a function decorator to log the time elapsed to run - the decorated function. - - >>> @timer() - ... def test1(): - ... time.sleep(0.01) - >>> @timer('run test 2', level=logging.INFO) - ... def test2(): - ... time.sleep(0.02) - >>> test1() - Took ... to run 'test1' - >>> test2() - Took ... to run test 2 - """ - - # timeit.default_timer choses the most accurate clock for each platform - _time = timeit.default_timer - default_msg = "elapsed time: %(time).3fs" - default_format = "Took %(time).3fs to %(msg)s" - - def __init__(self, logger=None, msg=None, level=None, start=None): - self.reset(start) - if logger is None: - for arg in ("msg", "level"): - if locals().get(arg) is not None: - raise ValueError("'%s' can't be specified without a 'logger'" % arg) - self.logger = logger - self.level = level if level is not None else TIME_LEVEL - self.msg = msg - - def reset(self, start=None): - """Reset timer to 'start_time' or the current time.""" - if start is None: - self.start = self._time() - else: - self.start = start - self.last = self.start - self.elapsed = 0.0 - - def time(self): - """Return the overall time (in seconds) since the timer started.""" - return self._time() - self.start - - def split(self): - """Split and return the lap time (in seconds) in between splits.""" - current = self._time() - self.elapsed = current - self.last - self.last = current - return self.elapsed - - def formatTime(self, msg, time): - """Format 'time' value in 'msg' and return formatted string. - If 'msg' contains a '%(time)' format string, try to use that. - Otherwise, use the predefined 'default_format'. - If 'msg' is empty or None, fall back to 'default_msg'. - """ - if not msg: - msg = self.default_msg - if msg.find("%(time)") < 0: - msg = self.default_format % {"msg": msg, "time": time} - else: - try: - msg = msg % {"time": time} - except (KeyError, ValueError): - pass # skip if the format string is malformed - return msg - - def __enter__(self): - """Start a new lap""" - self.last = self._time() - self.elapsed = 0.0 - return self - - def __exit__(self, exc_type, exc_value, traceback): - """End the current lap. If timer has a logger, log the time elapsed, - using the format string in self.msg (or the default one). 
- """ - time = self.split() - if self.logger is None or exc_type: - # if there's no logger attached, or if any exception occurred in - # the with-statement, exit without logging the time - return - message = self.formatTime(self.msg, time) - # Allow log handlers to see the individual parts to facilitate things - # like a server accumulating aggregate stats. - msg_parts = {"msg": self.msg, "time": time} - self.logger.log(self.level, message, msg_parts) - - def __call__(self, func_or_msg=None, **kwargs): - """If the first argument is a function, return a decorator which runs - the wrapped function inside Timer's context manager. - Otherwise, treat the first argument as a 'msg' string and return an updated - Timer instance, referencing the same logger. - A 'level' keyword can also be passed to override self.level. - """ - if isinstance(func_or_msg, Callable): - func = func_or_msg - # use the function name when no explicit 'msg' is provided - if not self.msg: - self.msg = "run '%s'" % func.__name__ - - @wraps(func) - def wrapper(*args, **kwds): - with self: - return func(*args, **kwds) - - return wrapper - else: - msg = func_or_msg or kwargs.get("msg") - level = kwargs.get("level", self.level) - return self.__class__(self.logger, msg, level) - - def __float__(self): - return self.elapsed - - def __int__(self): - return int(self.elapsed) - - def __str__(self): - return "%.3f" % self.elapsed - - -class ChannelsFilter(logging.Filter): - """Provides a hierarchical filter for log entries based on channel names. - - Filters out records emitted from a list of enabled channel names, - including their children. It works the same as the ``logging.Filter`` - class, but allows the user to specify multiple channel names. - - >>> import sys - >>> handler = logging.StreamHandler(sys.stdout) - >>> handler.setFormatter(logging.Formatter("%(message)s")) - >>> filter = ChannelsFilter("A.B", "C.D") - >>> handler.addFilter(filter) - >>> root = logging.getLogger() - >>> root.addHandler(handler) - >>> root.setLevel(level=logging.DEBUG) - >>> logging.getLogger('A.B').debug('this record passes through') - this record passes through - >>> logging.getLogger('A.B.C').debug('records from children also pass') - records from children also pass - >>> logging.getLogger('C.D').debug('this one as well') - this one as well - >>> logging.getLogger('A.B.').debug('also this one') - also this one - >>> logging.getLogger('A.F').debug('but this one does not!') - >>> logging.getLogger('C.DE').debug('neither this one!') - """ - - def __init__(self, *names): - self.names = names - self.num = len(names) - self.lengths = {n: len(n) for n in names} - - def filter(self, record): - if self.num == 0: - return True - for name in self.names: - nlen = self.lengths[name] - if name == record.name: - return True - elif record.name.find(name, 0, nlen) == 0 and record.name[nlen] == ".": - return True - return False - - -class CapturingLogHandler(logging.Handler): - def __init__(self, logger, level): - super(CapturingLogHandler, self).__init__(level=level) - self.records = [] - if isinstance(logger, str): - self.logger = logging.getLogger(logger) - else: - self.logger = logger - - def __enter__(self): - self.original_disabled = self.logger.disabled - self.original_level = self.logger.level - self.original_propagate = self.logger.propagate - - self.logger.addHandler(self) - self.logger.setLevel(self.level) - self.logger.disabled = False - self.logger.propagate = False - - return self - - def __exit__(self, type, value, traceback): - 
self.logger.removeHandler(self) - self.logger.setLevel(self.original_level) - self.logger.disabled = self.original_disabled - self.logger.propagate = self.original_propagate - - return self - - def emit(self, record): - self.records.append(record) - - def assertRegex(self, regexp, msg=None): - import re - - pattern = re.compile(regexp) - for r in self.records: - if pattern.search(r.getMessage()): - return True - if msg is None: - msg = "Pattern '%s' not found in logger records" % regexp - assert 0, msg - - -class LogMixin(object): - """Mixin class that adds logging functionality to another class. - - You can define a new class that subclasses from ``LogMixin`` as well as - other base classes through multiple inheritance. - All instances of that class will have a ``log`` property that returns - a ``logging.Logger`` named after their respective ``.``. - - For example: - - >>> class BaseClass(object): - ... pass - >>> class MyClass(LogMixin, BaseClass): - ... pass - >>> a = MyClass() - >>> isinstance(a.log, logging.Logger) - True - >>> print(a.log.name) - fontTools.misc.loggingTools.MyClass - >>> class AnotherClass(MyClass): - ... pass - >>> b = AnotherClass() - >>> isinstance(b.log, logging.Logger) - True - >>> print(b.log.name) - fontTools.misc.loggingTools.AnotherClass - """ - - @property - def log(self): - if not hasattr(self, "_log"): - name = ".".join((self.__class__.__module__, self.__class__.__name__)) - self._log = logging.getLogger(name) - return self._log - - -def deprecateArgument(name, msg, category=UserWarning): - """Raise a warning about deprecated function argument 'name'.""" - warnings.warn("%r is deprecated; %s" % (name, msg), category=category, stacklevel=3) - - -def deprecateFunction(msg, category=UserWarning): - """Decorator to raise a warning when a deprecated function is called.""" - - def decorator(func): - @wraps(func) - def wrapper(*args, **kwargs): - warnings.warn( - "%r is deprecated; %s" % (func.__name__, msg), - category=category, - stacklevel=2, - ) - return func(*args, **kwargs) - - return wrapper - - return decorator - - -if __name__ == "__main__": - import doctest - - sys.exit(doctest.testmod(optionflags=doctest.ELLIPSIS).failed) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/G_M_A_P_.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/G_M_A_P_.py deleted file mode 100644 index 39b0050c5f0591a2b36c21242863655ca1f3ef47..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/G_M_A_P_.py +++ /dev/null @@ -1,142 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import tobytes, tostr, safeEval -from . import DefaultTable - -GMAPFormat = """ - > # big endian - tableVersionMajor: H - tableVersionMinor: H - flags: H - recordsCount: H - recordsOffset: H - fontNameLength: H -""" -# psFontName is a byte string which follows the record above. This is zero padded -# to the beginning of the records array. The recordsOffsst is 32 bit aligned. 
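# ---------------------------------------------------------------------------
# Editorial aside (not part of the original G_M_A_P_ source): a minimal,
# self-contained sketch of the alignment rule described in the comment above.
# The records array starts at the next 4-byte boundary after the 12-byte fixed
# header plus the PostScript font name; the formula mirrors the one used later
# in table_G_M_A_P_.compile(). The font name below is a made-up example.
# ---------------------------------------------------------------------------
_example_ps_font_name = b"ExampleFont"                 # 11 bytes, hypothetical
_example_name_length = len(_example_ps_font_name)
_example_records_offset = 4 * (((_example_name_length + 12) + 3) // 4)
assert _example_records_offset % 4 == 0                # 24 for an 11-byte name
assert _example_records_offset >= _example_name_length + 12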
- -GMAPRecordFormat1 = """ - > # big endian - UV: L - cid: H - gid: H - ggid: H - name: 32s -""" - - -class GMAPRecord(object): - def __init__(self, uv=0, cid=0, gid=0, ggid=0, name=""): - self.UV = uv - self.cid = cid - self.gid = gid - self.ggid = ggid - self.name = name - - def toXML(self, writer, ttFont): - writer.begintag("GMAPRecord") - writer.newline() - writer.simpletag("UV", value=self.UV) - writer.newline() - writer.simpletag("cid", value=self.cid) - writer.newline() - writer.simpletag("gid", value=self.gid) - writer.newline() - writer.simpletag("glyphletGid", value=self.gid) - writer.newline() - writer.simpletag("GlyphletName", value=self.name) - writer.newline() - writer.endtag("GMAPRecord") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - value = attrs["value"] - if name == "GlyphletName": - self.name = value - else: - setattr(self, name, safeEval(value)) - - def compile(self, ttFont): - if self.UV is None: - self.UV = 0 - nameLen = len(self.name) - if nameLen < 32: - self.name = self.name + "\0" * (32 - nameLen) - data = sstruct.pack(GMAPRecordFormat1, self) - return data - - def __repr__(self): - return ( - "GMAPRecord[ UV: " - + str(self.UV) - + ", cid: " - + str(self.cid) - + ", gid: " - + str(self.gid) - + ", ggid: " - + str(self.ggid) - + ", Glyphlet Name: " - + str(self.name) - + " ]" - ) - - -class table_G_M_A_P_(DefaultTable.DefaultTable): - - dependencies = [] - - def decompile(self, data, ttFont): - dummy, newData = sstruct.unpack2(GMAPFormat, data, self) - self.psFontName = tostr(newData[: self.fontNameLength]) - assert ( - self.recordsOffset % 4 - ) == 0, "GMAP error: recordsOffset is not 32 bit aligned." - newData = data[self.recordsOffset :] - self.gmapRecords = [] - for i in range(self.recordsCount): - gmapRecord, newData = sstruct.unpack2( - GMAPRecordFormat1, newData, GMAPRecord() - ) - gmapRecord.name = gmapRecord.name.strip("\0") - self.gmapRecords.append(gmapRecord) - - def compile(self, ttFont): - self.recordsCount = len(self.gmapRecords) - self.fontNameLength = len(self.psFontName) - self.recordsOffset = 4 * (((self.fontNameLength + 12) + 3) // 4) - data = sstruct.pack(GMAPFormat, self) - data = data + tobytes(self.psFontName) - data = data + b"\0" * (self.recordsOffset - len(data)) - for record in self.gmapRecords: - data = data + record.compile(ttFont) - return data - - def toXML(self, writer, ttFont): - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - formatstring, names, fixes = sstruct.getformat(GMAPFormat) - for name in names: - value = getattr(self, name) - writer.simpletag(name, value=value) - writer.newline() - writer.simpletag("PSFontName", value=self.psFontName) - writer.newline() - for gmapRecord in self.gmapRecords: - gmapRecord.toXML(writer, ttFont) - - def fromXML(self, name, attrs, content, ttFont): - if name == "GMAPRecord": - if not hasattr(self, "gmapRecords"): - self.gmapRecords = [] - gmapRecord = GMAPRecord() - self.gmapRecords.append(gmapRecord) - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - gmapRecord.fromXML(name, attrs, content, ttFont) - else: - value = attrs["value"] - if name == "PSFontName": - self.psFontName = value - else: - setattr(self, name, safeEval(value)) diff --git a/spaces/cncn102/bingo1/src/lib/bots/bing/tts.ts b/spaces/cncn102/bingo1/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- 
a/spaces/cncn102/bingo1/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py deleted file mode 100644 index 76e4b272b479a26c63d120c818c140870cd8c287..0000000000000000000000000000000000000000 --- a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .backbone import build_backbone diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/version.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/version.py deleted file mode 100644 index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000 --- a/spaces/codelion/Grounding_DINO_demo/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.0' diff --git a/spaces/coding-alt/IF/README.md b/spaces/coding-alt/IF/README.md deleted file mode 100644 index bfb11d4a094e88ea1eecdfe4489a5e868664587e..0000000000000000000000000000000000000000 --- a/spaces/coding-alt/IF/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: IF -emoji: 🔥 -colorFrom: pink -colorTo: red -sdk: docker -python_version: 3.10.11 -app_file: app.py -pinned: false -license: other -duplicated_from: DeepFloyd/IF ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/cabac.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/cabac.h deleted file mode 100644 index 
fdbf86b45e741fc6a8bf4728cdf00b5fefe1e08c..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/cabac.h +++ /dev/null @@ -1,108 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_ARM_CABAC_H -#define AVCODEC_ARM_CABAC_H - -#include "config.h" -#if HAVE_ARMV6T2_INLINE - -#include "libavutil/attributes.h" -#include "libavutil/internal.h" -#include "libavcodec/cabac.h" - -#define get_cabac_inline get_cabac_inline_arm -static av_always_inline int get_cabac_inline_arm(CABACContext *c, - uint8_t *const state) -{ - int bit; - void *reg_b, *reg_c, *tmp; - - __asm__ volatile( - "ldrb %[bit] , [%[state]] \n\t" - "add %[r_b] , %[tables] , %[lps_off] \n\t" - "mov %[tmp] , %[range] \n\t" - "and %[range] , %[range] , #0xC0 \n\t" - "add %[r_b] , %[r_b] , %[bit] \n\t" - "ldrb %[range] , [%[r_b], %[range], lsl #1] \n\t" - "add %[r_b] , %[tables] , %[norm_off] \n\t" - "sub %[r_c] , %[tmp] , %[range] \n\t" - "lsl %[tmp] , %[r_c] , #17 \n\t" - "cmp %[tmp] , %[low] \n\t" - "it gt \n\t" - "movgt %[range] , %[r_c] \n\t" - "itt cc \n\t" - "mvncc %[bit] , %[bit] \n\t" - "subcc %[low] , %[low] , %[tmp] \n\t" - "add %[r_c] , %[tables] , %[mlps_off] \n\t" - "ldrb %[tmp] , [%[r_b], %[range]] \n\t" - "ldrb %[r_b] , [%[r_c], %[bit]] \n\t" - "lsl %[low] , %[low] , %[tmp] \n\t" - "lsl %[range] , %[range] , %[tmp] \n\t" - "uxth %[r_c] , %[low] \n\t" - "strb %[r_b] , [%[state]] \n\t" - "tst %[r_c] , %[r_c] \n\t" - "bne 2f \n\t" - "ldr %[r_c] , [%[c], %[byte]] \n\t" -#if UNCHECKED_BITSTREAM_READER - "ldrh %[tmp] , [%[r_c]] \n\t" - "add %[r_c] , %[r_c] , #2 \n\t" - "str %[r_c] , [%[c], %[byte]] \n\t" -#else - "ldr %[r_b] , [%[c], %[end]] \n\t" - "ldrh %[tmp] , [%[r_c]] \n\t" - "cmp %[r_c] , %[r_b] \n\t" - "itt lt \n\t" - "addlt %[r_c] , %[r_c] , #2 \n\t" - "strlt %[r_c] , [%[c], %[byte]] \n\t" -#endif - "sub %[r_c] , %[low] , #1 \n\t" - "add %[r_b] , %[tables] , %[norm_off] \n\t" - "eor %[r_c] , %[low] , %[r_c] \n\t" - "rev %[tmp] , %[tmp] \n\t" - "lsr %[r_c] , %[r_c] , #15 \n\t" - "lsr %[tmp] , %[tmp] , #15 \n\t" - "ldrb %[r_c] , [%[r_b], %[r_c]] \n\t" - "movw %[r_b] , #0xFFFF \n\t" - "sub %[tmp] , %[tmp] , %[r_b] \n\t" - "rsb %[r_c] , %[r_c] , #7 \n\t" - "lsl %[tmp] , %[tmp] , %[r_c] \n\t" - "add %[low] , %[low] , %[tmp] \n\t" - "2: \n\t" - : [bit]"=&r"(bit), - [low]"+&r"(c->low), - [range]"+&r"(c->range), - [r_b]"=&r"(reg_b), - [r_c]"=&r"(reg_c), - [tmp]"=&r"(tmp) - : [c]"r"(c), - [state]"r"(state), - [tables]"r"(ff_h264_cabac_tables), - [byte]"M"(offsetof(CABACContext, bytestream)), - [end]"M"(offsetof(CABACContext, bytestream_end)), - [norm_off]"I"(H264_NORM_SHIFT_OFFSET), - [lps_off]"I"(H264_LPS_RANGE_OFFSET), - [mlps_off]"I"(H264_MLPS_STATE_OFFSET + 128) - : "memory", "cc" - ); - - return bit & 1; -} -#endif /* 
HAVE_ARMV6T2_INLINE */ - -#endif /* AVCODEC_ARM_CABAC_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_jpeg_syntax_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_jpeg_syntax_template.c deleted file mode 100644 index e06abdc674b8f2f029dec09b892fcccf60409632..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_jpeg_syntax_template.c +++ /dev/null @@ -1,196 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -static int FUNC(frame_header)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawFrameHeader *current) -{ - int err, i; - - HEADER("Frame Header"); - - u(16, Lf, 8, 8 + 3 * JPEG_MAX_COMPONENTS); - - u(8, P, 2, 16); - u(16, Y, 0, JPEG_MAX_HEIGHT); - u(16, X, 1, JPEG_MAX_WIDTH); - u(8, Nf, 1, JPEG_MAX_COMPONENTS); - - for (i = 0; i < current->Nf; i++) { - us(8, C[i], i, 0, JPEG_MAX_COMPONENTS); - us(4, H[i], i, 1, 4); - us(4, V[i], i, 1, 4); - us(8, Tq[i], i, 0, 3); - } - - return 0; -} - -static int FUNC(quantisation_table)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawQuantisationTable *current) -{ - int err, i; - - u(4, Pq, 0, 1); - u(4, Tq, 0, 3); - - if (current->Pq) { - for (i = 0; i < 64; i++) - us(16, Q[i], i, 1, 255); - } else { - for (i = 0; i < 64; i++) - us(8, Q[i], i, 1, 255); - } - - return 0; -} - -static int FUNC(dqt)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawQuantisationTableSpecification *current) -{ - int err, i, n; - - HEADER("Quantisation Tables"); - - u(16, Lq, 2, 2 + 4 * 65); - n = current->Lq / 65; - - for (i = 0; i < n; i++) - CHECK(FUNC(quantisation_table)(ctx, rw, ¤t->table[i])); - - return 0; -} - -static int FUNC(huffman_table)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawHuffmanTable *current) -{ - int err, i, j, ij; - - u(4, Tc, 0, 1); - u(4, Th, 0, 3); - - for (i = 0; i < 16; i++) - us(8, L[i], i, 0, 255); - - ij = 0; - for (i = 0; i < 16; i++) { - for (j = 0; j < current->L[i]; j++) { - if (ij >= FF_ARRAY_ELEMS(current->V)) - return AVERROR_INVALIDDATA; - us(8, V[ij], ij, 0, 255); - ++ij; - } - } - - return 0; -} - -static int FUNC(dht)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawHuffmanTableSpecification *current) -{ - int err, i, j, n; - - HEADER("Huffman Tables"); - - u(16, Lh, 2, 2 + 8 * (1 + 16 + 256)); - - n = 2; - for (i = 0; n < current->Lh; i++) { - if (i >= 8) - return AVERROR_INVALIDDATA; - - CHECK(FUNC(huffman_table)(ctx, rw, ¤t->table[i])); - - ++n; - for (j = 0; j < 16; j++) - n += 1 + current->table[i].L[j]; - } - - return 0; -} - -static int FUNC(scan_header)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawScanHeader *current) -{ - int err, j; - - HEADER("Scan"); - - u(16, Ls, 6, 6 + 2 * JPEG_MAX_COMPONENTS); - - u(8, Ns, 1, 4); - for (j = 0; j < current->Ns; j++) { - us(8, 
Cs[j], j, 0, JPEG_MAX_COMPONENTS); - us(4, Td[j], j, 0, 3); - us(4, Ta[j], j, 0, 3); - } - - u(8, Ss, 0, 63); - u(8, Se, 0, 63); - u(4, Ah, 0, 13); - u(4, Al, 0, 15); - - return 0; -} - -static int FUNC(application_data)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawApplicationData *current) -{ - int err, i; - - HEADER("Application Data"); - - u(16, Lp, 2, 65535); - - if (current->Lp > 2) { -#ifdef READ - current->Ap_ref = av_buffer_alloc(current->Lp - 2); - if (!current->Ap_ref) - return AVERROR(ENOMEM); - current->Ap = current->Ap_ref->data; -#endif - - for (i = 0; i < current->Lp - 2; i++) - us(8, Ap[i], i, 0, 255); - } - - return 0; -} - -static int FUNC(comment)(CodedBitstreamContext *ctx, RWContext *rw, - JPEGRawComment *current) -{ - int err, i; - - HEADER("Comment"); - - u(16, Lc, 2, 65535); - - if (current->Lc > 2) { -#ifdef READ - current->Cm_ref = av_buffer_alloc(current->Lc - 2); - if (!current->Cm_ref) - return AVERROR(ENOMEM); - current->Cm = current->Cm_ref->data; -#endif - - for (i = 0; i < current->Lc - 2; i++) - us(8, Cm[i], i, 0, 255); - } - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cljrdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cljrdec.c deleted file mode 100644 index 914f853c8fd9247b9acf98af0210dd54fc982d32..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cljrdec.c +++ /dev/null @@ -1,93 +0,0 @@ -/* - * Cirrus Logic AccuPak (CLJR) decoder - * Copyright (c) 2003 Alex Beregszaszi - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Cirrus Logic AccuPak decoder. - */ - -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" - -static int decode_frame(AVCodecContext *avctx, AVFrame *p, - int *got_frame, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - GetBitContext gb; - int x, y, ret; - - if (avctx->height <= 0 || avctx->width <= 0) { - av_log(avctx, AV_LOG_ERROR, "Invalid width or height\n"); - return AVERROR_INVALIDDATA; - } - - if (buf_size / avctx->height < avctx->width) { - av_log(avctx, AV_LOG_ERROR, - "Resolution larger than buffer size. 
Invalid header?\n"); - return AVERROR_INVALIDDATA; - } - - if ((ret = ff_get_buffer(avctx, p, 0)) < 0) - return ret; - p->pict_type = AV_PICTURE_TYPE_I; - p->key_frame = 1; - - init_get_bits(&gb, buf, buf_size * 8); - - for (y = 0; y < avctx->height; y++) { - uint8_t *luma = &p->data[0][y * p->linesize[0]]; - uint8_t *cb = &p->data[1][y * p->linesize[1]]; - uint8_t *cr = &p->data[2][y * p->linesize[2]]; - for (x = 0; x < avctx->width; x += 4) { - luma[3] = (get_bits(&gb, 5)*33) >> 2; - luma[2] = (get_bits(&gb, 5)*33) >> 2; - luma[1] = (get_bits(&gb, 5)*33) >> 2; - luma[0] = (get_bits(&gb, 5)*33) >> 2; - luma += 4; - *(cb++) = get_bits(&gb, 6) << 2; - *(cr++) = get_bits(&gb, 6) << 2; - } - } - - *got_frame = 1; - - return buf_size; -} - -static av_cold int decode_init(AVCodecContext *avctx) -{ - avctx->pix_fmt = AV_PIX_FMT_YUV411P; - return 0; -} - -const FFCodec ff_cljr_decoder = { - .p.name = "cljr", - CODEC_LONG_NAME("Cirrus Logic AccuPak"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_CLJR, - .init = decode_init, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; - diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flvdec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flvdec.h deleted file mode 100644 index d5aff74a9828c5b77221fd106ccd1bbb313d7ae0..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flvdec.h +++ /dev/null @@ -1,28 +0,0 @@ -/* - * FLV decoder header. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_FLVDEC_H -#define AVCODEC_FLVDEC_H - -#include "mpegvideo.h" - -int ff_flv_decode_picture_header(MpegEncContext *s); - -#endif /* AVCODEC_FLVDEC_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ivi.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ivi.h deleted file mode 100644 index 06cd4d95ff23263ad801d0411ef7cfdca27e62d2..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ivi.h +++ /dev/null @@ -1,342 +0,0 @@ -/* - * common functions for Indeo Video Interactive codecs (Indeo4 and Indeo5) - * - * Copyright (c) 2009 Maxim Poliakovski - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * This file contains structures and macros shared by both Indeo4 and - * Indeo5 decoders. - */ - -#ifndef AVCODEC_IVI_H -#define AVCODEC_IVI_H - -#include "avcodec.h" -#include "get_bits.h" -#include - -/** - * Indeo 4 frame types. - */ -enum { - IVI4_FRAMETYPE_INTRA = 0, - IVI4_FRAMETYPE_INTRA1 = 1, ///< intra frame with slightly different bitstream coding - IVI4_FRAMETYPE_INTER = 2, ///< non-droppable P-frame - IVI4_FRAMETYPE_BIDIR = 3, ///< bidirectional frame - IVI4_FRAMETYPE_INTER_NOREF = 4, ///< droppable P-frame - IVI4_FRAMETYPE_NULL_FIRST = 5, ///< empty frame with no data - IVI4_FRAMETYPE_NULL_LAST = 6 ///< empty frame with no data -}; - -#define IVI_VLC_BITS 13 ///< max number of bits of the ivi's huffman codes -#define IVI5_IS_PROTECTED 0x20 - -/** - * huffman codebook descriptor - */ -typedef struct IVIHuffDesc { - int32_t num_rows; - uint8_t xbits[16]; -} IVIHuffDesc; - -/** - * macroblock/block huffman table descriptor - */ -typedef struct IVIHuffTab { - int32_t tab_sel; /// index of one of the predefined tables - /// or "7" for custom one - VLC *tab; /// pointer to the table associated with tab_sel - - /// the following are used only when tab_sel == 7 - IVIHuffDesc cust_desc; /// custom Huffman codebook descriptor - VLC cust_tab; /// vlc table for custom codebook -} IVIHuffTab; - -enum { - IVI_MB_HUFF = 0, /// Huffman table is used for coding macroblocks - IVI_BLK_HUFF = 1 /// Huffman table is used for coding blocks -}; - - -/** - * Common scan patterns (defined in ivi_common.c) - */ -extern const uint8_t ff_ivi_vertical_scan_8x8[64]; -extern const uint8_t ff_ivi_horizontal_scan_8x8[64]; -extern const uint8_t ff_ivi_direct_scan_4x4[16]; - - -/** - * Declare inverse transform function types - */ -typedef void (InvTransformPtr)(const int32_t *in, int16_t *out, ptrdiff_t pitch, const uint8_t *flags); -typedef void (DCTransformPtr) (const int32_t *in, int16_t *out, ptrdiff_t pitch, int blk_size); - - -/** - * run-value (RLE) table descriptor - */ -typedef struct RVMapDesc { - uint8_t eob_sym; ///< end of block symbol - uint8_t esc_sym; ///< escape symbol - uint8_t runtab[256]; - int8_t valtab[256]; -} RVMapDesc; - -extern const RVMapDesc ff_ivi_rvmap_tabs[9]; - - -/** - * information for Indeo macroblock (16x16, 8x8 or 4x4) - */ -typedef struct IVIMbInfo { - int16_t xpos; - int16_t ypos; - uint32_t buf_offs; ///< address in the output buffer for this mb - uint8_t type; ///< macroblock type: 0 - INTRA, 1 - INTER - uint8_t cbp; ///< coded block pattern - int8_t q_delta; ///< quant delta - int8_t mv_x; ///< motion vector (x component) - int8_t mv_y; ///< motion vector (y component) - int8_t b_mv_x; ///< second motion vector (x component) - int8_t b_mv_y; ///< second motion vector (y component) -} IVIMbInfo; - - -/** - * information for Indeo tile - */ -typedef struct IVITile { - int xpos; - int ypos; - int width; - int height; - int mb_size; - int is_empty; ///< = 1 if this tile doesn't contain any data - int data_size; ///< size of the data in bytes - int num_MBs; ///< number of macroblocks in this tile - IVIMbInfo *mbs; ///< array of macroblock descriptors - IVIMbInfo *ref_mbs; ///< ptr to the macroblock descriptors of the reference tile -} IVITile; - - -/** - * information for Indeo wavelet band - */ -typedef struct IVIBandDesc { - int 
plane; ///< plane number this band belongs to - int band_num; ///< band number - int width; - int height; - int aheight; ///< aligned band height - const uint8_t *data_ptr; ///< ptr to the first byte of the band data - int data_size; ///< size of the band data - int16_t *buf; ///< pointer to the output buffer for this band - int16_t *ref_buf; ///< pointer to the reference frame buffer (for motion compensation) - int16_t *b_ref_buf; ///< pointer to the second reference frame buffer (for motion compensation) - int16_t *bufs[4]; ///< array of pointers to the band buffers - ptrdiff_t pitch; ///< pitch associated with the buffers above - int is_empty; ///< = 1 if this band doesn't contain any data - int mb_size; ///< macroblock size - int blk_size; ///< block size - int is_halfpel; ///< precision of the motion compensation: 0 - fullpel, 1 - halfpel - int inherit_mv; ///< tells if motion vector is inherited from reference macroblock - int inherit_qdelta; ///< tells if quantiser delta is inherited from reference macroblock - int qdelta_present; ///< tells if Qdelta signal is present in the bitstream (Indeo5 only) - int quant_mat; ///< dequant matrix index - int glob_quant; ///< quant base for this band - const uint8_t *scan; ///< ptr to the scan pattern - int scan_size; ///< size of the scantable - - IVIHuffTab blk_vlc; ///< vlc table for decoding block data - - int num_corr; ///< number of correction entries - uint8_t corr[61*2]; ///< rvmap correction pairs - int rvmap_sel; ///< rvmap table selector - RVMapDesc *rv_map; ///< ptr to the RLE table for this band - int num_tiles; ///< number of tiles in this band - IVITile *tiles; ///< array of tile descriptors - InvTransformPtr *inv_transform; - int transform_size; - DCTransformPtr *dc_transform; - int is_2d_trans; ///< 1 indicates that the two-dimensional inverse transform is used - int32_t checksum; ///< for debug purposes - int checksum_present; - int bufsize; ///< band buffer size in bytes - const uint16_t *intra_base; ///< quantization matrix for intra blocks - const uint16_t *inter_base; ///< quantization matrix for inter blocks - const uint8_t *intra_scale; ///< quantization coefficient for intra blocks - const uint8_t *inter_scale; ///< quantization coefficient for inter blocks -} IVIBandDesc; - - -/** - * color plane (luma or chroma) information - */ -typedef struct IVIPlaneDesc { - uint16_t width; - uint16_t height; - uint8_t num_bands; ///< number of bands this plane subdivided into - IVIBandDesc *bands; ///< array of band descriptors -} IVIPlaneDesc; - - -typedef struct IVIPicConfig { - uint16_t pic_width; - uint16_t pic_height; - uint16_t chroma_width; - uint16_t chroma_height; - uint16_t tile_width; - uint16_t tile_height; - uint8_t luma_bands; - uint8_t chroma_bands; -} IVIPicConfig; - -typedef struct IVI45DecContext { - GetBitContext gb; - RVMapDesc rvmap_tabs[9]; ///< local corrected copy of the static rvmap tables - - uint32_t frame_num; - int frame_type; - int prev_frame_type; ///< frame type of the previous frame - uint32_t data_size; ///< size of the frame data in bytes from picture header - int is_scalable; - const uint8_t *frame_data; ///< input frame data pointer - int inter_scal; ///< signals a sequence of scalable inter frames - uint32_t frame_size; ///< frame size in bytes - uint32_t pic_hdr_size; ///< picture header size in bytes - uint8_t frame_flags; - uint16_t checksum; ///< frame checksum - - IVIPicConfig pic_conf; - IVIPlaneDesc planes[3]; ///< color planes - - int buf_switch; ///< used to switch between three 
buffers - int dst_buf; ///< buffer index for the currently decoded frame - int ref_buf; ///< inter frame reference buffer index - int ref2_buf; ///< temporal storage for switching buffers - int b_ref_buf; ///< second reference frame buffer index - - IVIHuffTab mb_vlc; ///< current macroblock table descriptor - IVIHuffTab blk_vlc; ///< current block table descriptor - - uint8_t rvmap_sel; - uint8_t in_imf; - uint8_t in_q; ///< flag for explicitly stored quantiser delta - uint8_t pic_glob_quant; - uint8_t unknown1; - - uint16_t gop_hdr_size; - uint8_t gop_flags; - uint32_t lock_word; - - int show_indeo4_info; - uint8_t has_b_frames; - uint8_t has_transp; ///< transparency mode status: 1 - enabled - uint8_t uses_tiling; - uint8_t uses_haar; - uint8_t uses_fullpel; - - int (*decode_pic_hdr) (struct IVI45DecContext *ctx, AVCodecContext *avctx); - int (*decode_band_hdr) (struct IVI45DecContext *ctx, IVIBandDesc *band, AVCodecContext *avctx); - int (*decode_mb_info) (struct IVI45DecContext *ctx, IVIBandDesc *band, IVITile *tile, AVCodecContext *avctx); - void (*switch_buffers) (struct IVI45DecContext *ctx); - int (*is_nonnull_frame)(struct IVI45DecContext *ctx); - - int gop_invalid; - int buf_invalid[4]; - - int is_indeo4; - - AVFrame *p_frame; - int got_p_frame; -} IVI45DecContext; - -/** compare some properties of two pictures */ -static inline int ivi_pic_config_cmp(IVIPicConfig *str1, IVIPicConfig *str2) -{ - return str1->pic_width != str2->pic_width || str1->pic_height != str2->pic_height || - str1->chroma_width != str2->chroma_width || str1->chroma_height != str2->chroma_height || - str1->tile_width != str2->tile_width || str1->tile_height != str2->tile_height || - str1->luma_bands != str2->luma_bands || str1->chroma_bands != str2->chroma_bands; -} - -/** calculate number of tiles in a stride */ -#define IVI_NUM_TILES(stride, tile_size) (((stride) + (tile_size) - 1) / (tile_size)) - -/** calculate number of macroblocks in a tile */ -#define IVI_MBs_PER_TILE(tile_width, tile_height, mb_size) \ - ((((tile_width) + (mb_size) - 1) / (mb_size)) * (((tile_height) + (mb_size) - 1) / (mb_size))) - -/** convert unsigned values into signed ones (the sign is in the LSB) */ -#define IVI_TOSIGNED(val) (-(((val) >> 1) ^ -((val) & 1))) - -/** scale motion vector */ -static inline int ivi_scale_mv(int mv, int mv_scale) -{ - return (mv + (mv > 0) + (mv_scale - 1)) >> mv_scale; -} - -/** - * Initialize static codes used for macroblock and block decoding. - */ -void ff_ivi_init_static_vlc(void); - -/** - * Decode a huffman codebook descriptor from the bitstream - * and select specified huffman table. - * - * @param[in,out] gb the GetBit context - * @param[in] desc_coded flag signalling if table descriptor was coded - * @param[in] which_tab codebook purpose (IVI_MB_HUFF or IVI_BLK_HUFF) - * @param[out] huff_tab pointer to the descriptor of the selected table - * @param[in] avctx AVCodecContext pointer - * @return zero on success, negative value otherwise - */ -int ff_ivi_dec_huff_desc(GetBitContext *gb, int desc_coded, int which_tab, - IVIHuffTab *huff_tab, AVCodecContext *avctx); - -/** - * Initialize planes (prepares descriptors, allocates buffers etc). 
- * - * @param[in,out] planes pointer to the array of the plane descriptors - * @param[in] cfg pointer to the ivi_pic_config structure describing picture layout - * @param[in] is_indeo4 flag signalling if it is Indeo 4 or not - * @return result code: 0 - OK - */ -int ff_ivi_init_planes(AVCodecContext *avctx, IVIPlaneDesc *planes, - const IVIPicConfig *cfg, int is_indeo4); - -/** - * Initialize tile and macroblock descriptors. - * - * @param[in,out] planes pointer to the array of the plane descriptors - * @param[in] tile_width tile width - * @param[in] tile_height tile height - * @return result code: 0 - OK - */ -int ff_ivi_init_tiles(IVIPlaneDesc *planes, int tile_width, int tile_height); - -int ff_ivi_decode_frame(AVCodecContext *avctx, AVFrame *data, - int *got_frame, AVPacket *avpkt); -int ff_ivi_decode_close(AVCodecContext *avctx); - -#endif /* AVCODEC_IVI_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopusdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopusdec.c deleted file mode 100644 index 9b9a6103430828fc7f98efbdd6c59a9b7de845c1..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopusdec.c +++ /dev/null @@ -1,252 +0,0 @@ -/* - * Opus decoder using libopus - * Copyright (c) 2012 Nicolas George - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include -#include - -#include "libavutil/internal.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/ffmath.h" -#include "libavutil/opt.h" - -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "internal.h" -#include "mathops.h" -#include "libopus.h" -#include "vorbis_data.h" - -struct libopus_context { - AVClass *class; - OpusMSDecoder *dec; - int pre_skip; -#ifndef OPUS_SET_GAIN - union { int i; double d; } gain; -#endif -#ifdef OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST - int apply_phase_inv; -#endif -}; - -#define OPUS_HEAD_SIZE 19 - -static av_cold int libopus_decode_init(AVCodecContext *avc) -{ - struct libopus_context *opus = avc->priv_data; - int ret, channel_map = 0, gain_db = 0, nb_streams, nb_coupled, channels; - uint8_t mapping_arr[8] = { 0, 1 }, *mapping; - - channels = avc->extradata_size >= 10 ? avc->extradata[9] : (avc->ch_layout.nb_channels == 1) ? 1 : 2; - if (channels <= 0) { - av_log(avc, AV_LOG_WARNING, - "Invalid number of channels %d, defaulting to stereo\n", channels); - channels = 2; - } - - avc->sample_rate = 48000; - avc->sample_fmt = avc->request_sample_fmt == AV_SAMPLE_FMT_FLT ? 
- AV_SAMPLE_FMT_FLT : AV_SAMPLE_FMT_S16; - av_channel_layout_uninit(&avc->ch_layout); - if (channels > 8) { - avc->ch_layout.order = AV_CHANNEL_ORDER_UNSPEC; - avc->ch_layout.nb_channels = channels; - } else { - av_channel_layout_copy(&avc->ch_layout, &ff_vorbis_ch_layouts[channels - 1]); - } - - if (avc->extradata_size >= OPUS_HEAD_SIZE) { - opus->pre_skip = AV_RL16(avc->extradata + 10); - gain_db = sign_extend(AV_RL16(avc->extradata + 16), 16); - channel_map = AV_RL8 (avc->extradata + 18); - } - if (avc->extradata_size >= OPUS_HEAD_SIZE + 2 + channels) { - nb_streams = avc->extradata[OPUS_HEAD_SIZE + 0]; - nb_coupled = avc->extradata[OPUS_HEAD_SIZE + 1]; - if (nb_streams + nb_coupled != channels) - av_log(avc, AV_LOG_WARNING, "Inconsistent channel mapping.\n"); - mapping = avc->extradata + OPUS_HEAD_SIZE + 2; - } else { - if (channels > 2 || channel_map) { - av_log(avc, AV_LOG_ERROR, - "No channel mapping for %d channels.\n", channels); - return AVERROR(EINVAL); - } - nb_streams = 1; - nb_coupled = channels > 1; - mapping = mapping_arr; - } - - if (channels > 2 && channels <= 8) { - const uint8_t *vorbis_offset = ff_vorbis_channel_layout_offsets[channels - 1]; - int ch; - - /* Remap channels from Vorbis order to ffmpeg order */ - for (ch = 0; ch < channels; ch++) - mapping_arr[ch] = mapping[vorbis_offset[ch]]; - mapping = mapping_arr; - } - - opus->dec = opus_multistream_decoder_create(avc->sample_rate, channels, - nb_streams, nb_coupled, - mapping, &ret); - if (!opus->dec) { - av_log(avc, AV_LOG_ERROR, "Unable to create decoder: %s\n", - opus_strerror(ret)); - return ff_opus_error_to_averror(ret); - } - -#ifdef OPUS_SET_GAIN - ret = opus_multistream_decoder_ctl(opus->dec, OPUS_SET_GAIN(gain_db)); - if (ret != OPUS_OK) - av_log(avc, AV_LOG_WARNING, "Failed to set gain: %s\n", - opus_strerror(ret)); -#else - { - double gain_lin = ff_exp10(gain_db / (20.0 * 256)); - if (avc->sample_fmt == AV_SAMPLE_FMT_FLT) - opus->gain.d = gain_lin; - else - opus->gain.i = FFMIN(gain_lin * 65536, INT_MAX); - } -#endif - -#ifdef OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST - ret = opus_multistream_decoder_ctl(opus->dec, - OPUS_SET_PHASE_INVERSION_DISABLED(!opus->apply_phase_inv)); - if (ret != OPUS_OK) - av_log(avc, AV_LOG_WARNING, - "Unable to set phase inversion: %s\n", - opus_strerror(ret)); -#endif - - /* Decoder delay (in samples) at 48kHz */ - avc->delay = avc->internal->skip_samples = opus->pre_skip; - - return 0; -} - -static av_cold int libopus_decode_close(AVCodecContext *avc) -{ - struct libopus_context *opus = avc->priv_data; - - if (opus->dec) { - opus_multistream_decoder_destroy(opus->dec); - opus->dec = NULL; - } - return 0; -} - -#define MAX_FRAME_SIZE (960 * 6) - -static int libopus_decode(AVCodecContext *avc, AVFrame *frame, - int *got_frame_ptr, AVPacket *pkt) -{ - struct libopus_context *opus = avc->priv_data; - int ret, nb_samples; - - frame->nb_samples = MAX_FRAME_SIZE; - if ((ret = ff_get_buffer(avc, frame, 0)) < 0) - return ret; - - if (avc->sample_fmt == AV_SAMPLE_FMT_S16) - nb_samples = opus_multistream_decode(opus->dec, pkt->data, pkt->size, - (opus_int16 *)frame->data[0], - frame->nb_samples, 0); - else - nb_samples = opus_multistream_decode_float(opus->dec, pkt->data, pkt->size, - (float *)frame->data[0], - frame->nb_samples, 0); - - if (nb_samples < 0) { - av_log(avc, AV_LOG_ERROR, "Decoding error: %s\n", - opus_strerror(nb_samples)); - return ff_opus_error_to_averror(nb_samples); - } - -#ifndef OPUS_SET_GAIN - { - int i = avc->ch_layout.nb_channels * nb_samples; - if 
(avc->sample_fmt == AV_SAMPLE_FMT_FLT) { - float *pcm = (float *)frame->data[0]; - for (; i > 0; i--, pcm++) - *pcm = av_clipf(*pcm * opus->gain.d, -1, 1); - } else { - int16_t *pcm = (int16_t *)frame->data[0]; - for (; i > 0; i--, pcm++) - *pcm = av_clip_int16(((int64_t)opus->gain.i * *pcm) >> 16); - } - } -#endif - - frame->nb_samples = nb_samples; - *got_frame_ptr = 1; - - return pkt->size; -} - -static void libopus_flush(AVCodecContext *avc) -{ - struct libopus_context *opus = avc->priv_data; - - opus_multistream_decoder_ctl(opus->dec, OPUS_RESET_STATE); - /* The stream can have been extracted by a tool that is not Opus-aware. - Therefore, any packet can become the first of the stream. */ - avc->internal->skip_samples = opus->pre_skip; -} - - -#define OFFSET(x) offsetof(struct libopus_context, x) -#define FLAGS AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_DECODING_PARAM -static const AVOption libopusdec_options[] = { -#ifdef OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST - { "apply_phase_inv", "Apply intensity stereo phase inversion", OFFSET(apply_phase_inv), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, FLAGS }, -#endif - { NULL }, -}; - -static const AVClass libopusdec_class = { - .class_name = "libopusdec", - .item_name = av_default_item_name, - .option = libopusdec_options, - .version = LIBAVUTIL_VERSION_INT, -}; - - -const FFCodec ff_libopus_decoder = { - .p.name = "libopus", - CODEC_LONG_NAME("libopus Opus"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_OPUS, - .priv_data_size = sizeof(struct libopus_context), - .init = libopus_decode_init, - .close = libopus_decode_close, - FF_CODEC_DECODE_CB(libopus_decode), - .flush = libopus_flush, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE | - FF_CODEC_CAP_INIT_CLEANUP, - .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_FLT, - AV_SAMPLE_FMT_S16, - AV_SAMPLE_FMT_NONE }, - .p.priv_class = &libopusdec_class, - .p.wrapper_name = "libopus", -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/m101.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/m101.c deleted file mode 100644 index 3def577b746f6c5bba37b0c68d5c96d8ae159c44..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/m101.c +++ /dev/null @@ -1,115 +0,0 @@ -/* - * Copyright (c) 2016 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/intreadwrite.h" - -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" - - -static av_cold int m101_decode_init(AVCodecContext *avctx) -{ - if (avctx->extradata_size < 6*4) { - avpriv_request_sample(avctx, "Missing or too small extradata (size %d)", avctx->extradata_size); - return AVERROR_INVALIDDATA; - } - - if (avctx->extradata[2*4] == 10) - avctx->pix_fmt = AV_PIX_FMT_YUV422P10; - else if (avctx->extradata[2*4] == 8) { - avctx->pix_fmt = AV_PIX_FMT_YUYV422; - } else { - avpriv_request_sample(avctx, "BPS %d", avctx->extradata[2*4]); - return AVERROR_INVALIDDATA; - } - - return 0; -} - -static int m101_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int stride, ret; - int x, y; - int min_stride = 2 * avctx->width; - int bits = avctx->extradata[2*4]; - - stride = AV_RL32(avctx->extradata + 5*4); - - if (avctx->pix_fmt == AV_PIX_FMT_YUV422P10) - min_stride = (avctx->width + 15) / 16 * 40; - - if (stride < min_stride || avpkt->size < stride * (uint64_t)avctx->height) { - av_log(avctx, AV_LOG_ERROR, "stride (%d) is invalid for packet sized %d\n", - stride, avpkt->size); - return AVERROR_INVALIDDATA; - } - - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - frame->pict_type = AV_PICTURE_TYPE_I; - frame->key_frame = 1; - frame->interlaced_frame = ((avctx->extradata[3*4] & 3) != 3); - if (frame->interlaced_frame) - frame->top_field_first = avctx->extradata[3*4] & 1; - - for (y = 0; y < avctx->height; y++) { - int src_y = y; - if (frame->interlaced_frame) - src_y = ((y&1)^frame->top_field_first) ? 
y/2 : (y/2 + avctx->height/2); - if (bits == 8) { - uint8_t *line = frame->data[0] + y*frame->linesize[0]; - memcpy(line, buf + src_y*stride, 2*avctx->width); - } else { - int block; - uint16_t *luma = (uint16_t*)&frame->data[0][y*frame->linesize[0]]; - uint16_t *cb = (uint16_t*)&frame->data[1][y*frame->linesize[1]]; - uint16_t *cr = (uint16_t*)&frame->data[2][y*frame->linesize[2]]; - for (block = 0; 16*block < avctx->width; block ++) { - const uint8_t *buf_src = buf + src_y*stride + 40*block; - for (x = 0; x < 16 && x + 16*block < avctx->width; x++) { - int xd = x + 16*block; - if (x&1) { - luma [xd] = (4*buf_src[2*x + 0]) + ((buf_src[32 + (x>>1)]>>4)&3); - } else { - luma [xd] = (4*buf_src[2*x + 0]) + (buf_src[32 + (x>>1)] &3); - cb[xd>>1] = (4*buf_src[2*x + 1]) + ((buf_src[32 + (x>>1)]>>2)&3); - cr[xd>>1] = (4*buf_src[2*x + 3]) + (buf_src[32 + (x>>1)]>>6); - } - } - } - } - } - - *got_frame = 1; - return avpkt->size; -} - -const FFCodec ff_m101_decoder = { - .p.name = "m101", - CODEC_LONG_NAME("Matrox Uncompressed SD"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_M101, - .init = m101_decode_init, - FF_CODEC_DECODE_CB(m101_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/6play apk Everything You Need to Know About the Best Streaming Platform in France.md b/spaces/congsaPfin/Manga-OCR/logs/6play apk Everything You Need to Know About the Best Streaming Platform in France.md deleted file mode 100644 index 1dd75a18fde617f0d7189aeed67a40a56b5c0d3d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/6play apk Everything You Need to Know About the Best Streaming Platform in France.md +++ /dev/null @@ -1,119 +0,0 @@ -
-

6play apk: A Streaming App for Live and Replay TV

-

If you are looking for a streaming app that lets you watch live and replay TV from various channels, you might want to check out 6play apk. This app is developed by M6 Distribution Digital, a French media company that owns several TV channels, such as M6, W9, 6ter, Gulli, Paris Première, and Téva. With 6play apk, you can enjoy unlimited access to more than 6,500 hours of exclusive programs (TV & Digital) and a unique personalized experience. You can also discover original programs available exclusively on 6play.

-

6play apk


DOWNLOAD ->>> https://urlca.com/2uOaZV



-

What is 6play apk?

-

6play apk is an Android app that allows you to watch live and replay TV from the M6 Group channels and other partners. You can also access free 24/24 6play channels that offer continuous streaming of your favorite programs. You can also subscribe to the premium option to enjoy ad-free viewing, offline download, cast to TV, and connected TV features.

-

Features of 6play apk

-

Here are some of the features that make 6play apk a great streaming app for live and replay TV:

-

Live TV

-

You can watch M6, W9, 6ter, Gulli, Téva, and Paris Première live on your Android device, including your series, major sporting events, entertainment, kids' programs, and news magazines as they air.

-

Streaming

-

You can find all your favorite programs on demand, such as Love Island, New house for a new life, An almost perfect dinner, Married at first sight, and many more. You can also resume playing your programs on all your screens where you left off.

-

Free 24/24 6play Channels

-

You can enjoy your favorite programs continuously 24/24 with the free 6play channels, such as Konbini 24/24, Forbidden Zone 24/24, Criminal Investigations 24/24, Telenovelas 24/24, One day a story 24/24, Vice 24/24, Love Island France.

-

6play apk download
-6play apk mod
-6play apk android
-6play apk latest version
-6play apk for pc
-6play apk mirror
-6play apk uptodown
-6play apk pure
-6play apk old version
-6play apk cracked
-6play tv replay and streaming apk
-6play tv en direct et replay apk
-6play m6 w9 6ter gulli paris premiere teva apk
-6play live tv and catch up apk
-6play france tv app apk
-6play max ad-free streaming apk
-6play premium subscription apk
-6play original programs apk
-6play konbini channel apk
-6play cast to tv option apk
-how to install 6play apk on android
-how to watch 6play apk outside france
-how to update 6play apk on android tv
-how to download 6play apk on firestick
-how to use 6play apk on samsung smart tv
-is 6play apk safe and legal
-is 6play apk compatible with chromecast
-is 6play apk available in english
-is 6play apk free or paid
-is 6play apk working on android box
-what is new in 6play apk version 5.26.5
-what is the size of 6play apk file
-what is the rating of 6play apk on google play store
-what is the best alternative to 6play apk for android
-what is the difference between 6play and molotov tv apks
-why is 6play apk not working on my device
-why is 6play apk asking for permissions
-why is 6play apk showing ads and how to remove them
-why is 6play apk not available in my country and how to access it
-why is 6play apk slow and how to fix it

-

Original Programs

-

You can discover original programs available exclusively on 6play, such as Married at First Sight: Life After, VIP House Tour, and Fan of... You can also watch cult series like NCIS Hawaii and 9-1-1, movies like Seven Years in Tibet, and documentaries like Lady Diana and Harry and Meghan: The Big Unboxing.

-

Live by 6play Channel

-

You can experience all the emotions of live events thanks to the Live by 6play channel, with major sporting events such as Cage Warriors MMA as well as exclusive concerts.

-

Recommendations

-

You can enjoy a personalized experience with selections of programs recommended for you. You can also discover collections designed for you: Konbini, K for Korea, History in Series, The Best of Reality Series...

-

Preferences

-

You can manage your preferences in a single click thanks to "My List" and reach your favorite programs instantly from your personalized space.

-

Multi-Screen Recovery

-

You can start playing a program on your mobile, tablet or computer and finish it on another screen.

-

The Premium

-

You can enjoy the Paris Première and Téva channels, live or in replay, whenever you want and on any screen, with a no-commitment subscription.

How to download and install 6play apk?

-

To download and install 6play apk on your Android device, you need to follow these steps:

-
    -
  1. Go to the Google Play Store and search for 6play, TV, Replay & Streaming.
  2. -
  3. Tap on the Install button and wait for the app to download.
  4. -
  5. Once the app is installed, open it and sign in with your 6play account or create one for free.
  6. -
  7. Enjoy watching live and replay TV from various channels and original programs on 6play.
  8. -
-

If you want to download and install 6play apk on your Android TV, you need to follow these steps:

-
    -
  1. Go to the Google Play Store on your Android TV and search for 6Play for ANDROID TV.
  2. -
  3. Tap on the Install button and wait for the app to download.
  4. -
  5. Once the app is installed, open it and sign in with your 6play account or create one for free.
  6. -
  7. Enjoy watching live and replay TV from various channels and original programs on 6play.
  8. -
-

Pros and cons of 6play apk

-

Like any streaming app, 6play apk has its pros and cons. Here are some of them:

| Pros | Cons |
| --- | --- |
| A wide range of programs from various channels and genres | Some programs are geo-restricted or require a premium subscription |
| Free 24/24 6play channels that offer continuous streaming of your favorite programs | Ads may interrupt your viewing experience unless you subscribe to the premium option |
| Original programs available exclusively on 6play | The app may not be compatible with some devices or regions |
| A personalized experience with recommendations and preferences | The app may have some bugs or glitches that affect its performance |
| A multi-screen recovery feature that lets you resume playing your programs on any device | The app may consume a lot of data or battery if you stream a lot of content |
| A premium option that offers ad-free viewing, offline download, cast to TV, and connected TV features | The premium option costs €1.99 per month per channel, which may be expensive for some users |
-

Conclusion

-

6play apk is a streaming app that lets you watch live and replay TV from various channels, such as M6, W9, 6ter, Gulli, Paris Première, and Téva. You can also access free 24/24 6play channels that offer continuous streaming of your favorite programs. You can also discover original programs available exclusively on 6play. You can enjoy a personalized experience with recommendations and preferences. You can also resume playing your programs on any device with the multi-screen recovery feature. You can also subscribe to the premium option to enjoy ad-free viewing, offline download, cast to TV, and connected TV features.

-

If you are looking for a streaming app that lets you watch live and replay TV from various channels, you might want to check out 6play apk. You can download it from the Google Play Store for your Android device or Android TV. You can also visit the official website of 6play for more information.

-

FAQs

-

Here are some frequently asked questions about 6play apk:

-
    -
  1. Q: Is 6play apk safe to use?
  2. -A: Yes, 6play apk is safe to use as long as you download it from the official sources, such as the Google Play Store or the official website of 6play. You should also avoid downloading any modded or hacked versions of the app as they may contain malware or viruses.
  3. Q: Is 6play apk legal?
  4. -A: Yes, 6play apk is legal as long as you use it in accordance with the terms and conditions of the app and the content providers. You should also respect the intellectual property rights of the creators and owners of the programs you watch on the app.
  5. Q: How can I contact the support team of 6play apk?
  6. -A: If you have any comments, questions, or issues with the app, you can contact the support team of 6play apk by sending an email to contact@6play.fr or by filling out the contact form on the app or the website. You can also check the FAQ section on the app or the website for more information.
  7. Q: How can I cancel my premium subscription to 6play apk?
  8. -A: If you want to cancel your premium subscription to 6play apk, you need to follow these steps:
      -
    1. Go to the Google Play Store and tap on the Menu icon.
    2. -
    3. Tap on Subscriptions and find your 6play premium subscription.
    4. -
    5. Tap on Cancel subscription and follow the instructions.
    6. -
    7. You will receive a confirmation email from Google Play.
    8. -
    -

    Note that you can still access your premium features until the end of your current billing cycle.

    -
  9. Q: How can I cast 6play apk to my TV?
  10. -A: If you want to cast 6play apk to your TV, you need to have a Chromecast device or a compatible smart TV. You also need to have the 6play app installed on your Android device and connected to the same Wi-Fi network as your TV. Then, you need to follow these steps:
      -
    1. Open the 6play app on your Android device and select the program you want to watch.
    2. -
    3. Tap on the Cast icon on the top right corner of the screen.
    4. -
    5. Select your TV from the list of available devices.
    6. -
    7. The program will start playing on your TV.
    8. -
    -

    To stop casting, tap on the Cast icon again and select Disconnect.

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Resources Pubg Mobile How to Get the Best Gaming Experience.md b/spaces/congsaPfin/Manga-OCR/logs/Download Resources Pubg Mobile How to Get the Best Gaming Experience.md deleted file mode 100644 index 7ac53663237bdcd1a844c5b2409bdbeb55ee5eb1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Resources Pubg Mobile How to Get the Best Gaming Experience.md +++ /dev/null @@ -1,121 +0,0 @@ -
-

How to Download Resources PUBG Mobile: A Complete Guide

-

PUBG Mobile is one of the most popular and addictive battle royale games in the world. It offers a variety of maps, modes, weapons, skins, and other features that make it fun and exciting. However, to enjoy all these features, you need to download some additional resources from the game or from external sources. In this article, we will show you how to download resources PUBG mobile easily and safely.

-

What are Resources PUBG Mobile?

-

Resources PUBG mobile are files that contain data and graphics for different aspects of the game. They include maps, modes, skins, sounds, effects, and more. They are essential for running the game smoothly and enhancing your gaming experience.

-

download resources pubg mobile


Download Filehttps://urlca.com/2uO9Tf



-

Why do you need to download resources PUBG mobile?

-

You need to download resources PUBG mobile for several reasons:

-
    -
  • To access new maps and modes that are added in every update
  • -
  • To customize your character and weapons with different skins and outfits
  • -
  • To improve the game performance and reduce lagging issues
  • -
  • To get rewards and benefits from downloading certain resource packs
  • -
-

What are the types of resources PUBG mobile?

-

There are several types of resources PUBG mobile that you can download from the game or from external sources. Here are some of them:

| Type | Description | Size |
| --- | --- | --- |
| Recommended Resource Pack | Contains core resources for PUBG mobile. Download for a better gaming experience. | 776 MB |
| Classic Maps | Contains classic mode maps that haven't been downloaded. | Varies depending on the map |
| Themed Modes | Contains new game modes from the latest version that haven't been downloaded. | Varies depending on the mode |
| Arena Maps | Contains arena mode maps that haven't been downloaded. | Varies depending on the map |
| System Resource Pack | Contains system resources from the latest version that haven't been downloaded. | Varies depending on the version |
| Classic Graphics Pack | Contains graphics resources from older game versions. | 826 MB |
-

How to download resources PUBG mobile from the game?

-

The easiest way to download resources PUBG mobile is from the game itself. Here are the steps you need to follow:

-

How to download resource pack pubg mobile
-Pubg mobile download resources not working
-Download resources pubg mobile lite
-Pubg mobile download resources stuck
-Download resources pubg mobile error
-Download resources pubg mobile new update
-Pubg mobile download resources failed
-Download resources pubg mobile 2023
-Download resources pubg mobile apk
-Download resources pubg mobile obb
-Pubg mobile download resources slow
-Download resources pubg mobile pc
-Download resources pubg mobile ios
-Pubg mobile download resources problem
-Download resources pubg mobile kr
-Pubg mobile download resources faster
-Download resources pubg mobile vietnam
-Pubg mobile download resources tips
-Download resources pubg mobile classic mode
-Pubg mobile download resources guide
-Download resources pubg mobile themed mode
-Pubg mobile download resources rewards
-Download resources pubg mobile arena mode
-Pubg mobile download resources size
-Download resources pubg mobile system resource pack
-Pubg mobile download resources location
-Download resources pubg mobile classic graphics pack
-Pubg mobile download resources lightweight installation function
-Download resources pubg mobile excitement resource pack
-Pubg mobile download resources vintage resource pack
-Download resources pubg mobile korea superconducting tokamak advanced research experiment
-Pubg mobile download resources youtube tutorial
-Download resources pubg mobile help center
-Pubg mobile download resources kumparan.com article
-Download resources pubg mobile sportskeeda.com article
-Pubg mobile download resources expansion pack free fire max problem gaming extra video
-Download resources pubg mobile how to play like a pro tagalog explanation video
-Pubg mobile download resources how to draw map of nepal video
-Pubg mobile download resources royale adventure m9 excalibur umbra blue punisher video
-Pubg mobile download resources new update new halloween mode munno gaming video
-Pubg mobile download resources both pov jonathan vs vivone 1v1 tdm novem yt video
-Pubg mobile download resources how i achieved top 1 on the global leaderboards shibe video
-Pubg mobile download resources how to use shopify metaobjects coding with jan video
-Pubg mobile download resources tiktok see peoples liked videos foxy tech tips video
-Pubg mobile download resources snapchat how to hide your story from people foxy tech tips video

-

Step 1: Open the game and go to settings

-

Launch PUBG mobile on your device and tap on the up arrow button on the bottom right corner of the screen. Then, tap on settings.

-


Step 2: Select the download option and choose the resource packs you want

-

On the settings menu, tap on the download option. You will see a list of resource packs that are available for download. You can tap on each pack to see its description, size, and rewards. Select the packs you want to download by tapping on the download button next to them.

-

Step 3: Wait for the download to complete and collect your rewards

-

After you have selected the resource packs you want, wait for the download to complete. You can see the progress bar and the remaining time on the screen. You can also pause or resume the download at any time. Once the download is finished, you can collect your rewards by tapping on the claim button. You will get some items such as silver fragments, BP, and coupons.

-

How to download resources PUBG mobile from external sources?

-

Another way to download resources PUBG mobile is from external sources. This method is useful if you want to save some data or if you have trouble downloading from the game. However, you need to be careful and only use trusted and verified websites that offer PUBG mobile OBB files. Here are the steps you need to follow:

-

Step 1: Find a reliable website that offers PUBG mobile OBB files

-

An OBB file is a data file that contains additional resources for PUBG mobile. You can find many websites that offer PUBG mobile OBB files for different versions of the game. However, not all of them are safe and secure. You need to do some research and check the reviews and ratings of the website before downloading anything. Some of the reputable websites that offer PUBG mobile OBB files are APKPure, APKMirror, and APKCombo.

-

Step 2: Download the OBB file and copy it to your device storage

-

Once you have found a reliable website, choose the OBB file that matches your game version and device compatibility. Download the OBB file to your computer or directly to your device. If you download it to your computer, you need to copy it to your device storage using a USB cable or a file manager app. The OBB file should be placed in the Android/OBB/com.tencent.ig folder on your device storage.
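
If you downloaded the OBB file to a computer, one convenient way to copy it over USB is adb, the Android debug bridge. Below is a minimal Python sketch of that copy step; it assumes adb is installed and USB debugging is enabled on the phone, and the file names are placeholders you would replace with the OBB (and optionally the APK) you actually downloaded.

```python
import subprocess
from pathlib import Path

# Placeholder file names -- replace them with the files you actually downloaded.
OBB_FILE = Path("main.12345.com.tencent.ig.obb")
APK_FILE = Path("pubg_mobile.apk")

# The folder PUBG Mobile reads its OBB data from on the device's shared storage.
OBB_DIR = "/sdcard/Android/obb/com.tencent.ig/"


def run(cmd):
    """Print and run one adb command, stopping if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Make sure the destination folder exists, then copy the OBB file over USB.
    run(["adb", "shell", "mkdir", "-p", OBB_DIR])
    run(["adb", "push", str(OBB_FILE), OBB_DIR])

    # Optional: install the matching APK over USB as well (see Step 3 below).
    if APK_FILE.exists():
        run(["adb", "install", "-r", str(APK_FILE)])
```

A file manager app on the phone works just as well; the script is only a convenience when the files already sit on your computer.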

-

Step 3: Install the APK file and launch the game

-

In addition to the OBB file, you also need to install the APK file of PUBG mobile on your device. The APK file is an application file that contains the game itself. You can download it from the same website as the OBB file or from the official PUBG mobile website. After downloading the APK file, install it on your device by allowing unknown sources in your settings. Then, launch the game and enjoy.

-

Tips and tricks for downloading resources PUBG mobile

-

To make sure that you download resources PUBG mobile successfully and efficiently, here are some tips and tricks that you can follow:

-

Use a stable and fast internet connection

-

The most important thing for downloading resources PUBG mobile is having a good internet connection. You need a stable and fast internet connection to avoid interruptions, errors, or corruption of files. You can use Wi-Fi or mobile data, but make sure that you have enough data allowance and signal strength.

-

Check your device storage space before downloading

-

Another important thing for downloading resources PUBG mobile is having enough storage space on your device. You need to check how much space you have left before downloading any resource pack or OBB file. You can do this by going to settings > storage on your device. If you don't have enough space, you need to delete some unwanted or unnecessary files or apps from your device.

-

Delete unwanted or outdated resource packs to save space

-

If you have downloaded many resource packs or OBB files in the past, you may not need them anymore or they may be outdated. You can delete them from your device to save some space and avoid cluttering your storage. You can do this by going to settings > download in PUBG mobile and tapping on the delete button next to each resource pack or OBB file.

-

Conclusion

-

Downloading resources PUBG mobile is a simple and easy process that can enhance your gaming experience and performance. You can download resources PUBG mobile from the game itself or from external sources, depending on your preference and convenience. However, you need to be careful and only use trusted and verified websites that offer PUBG mobile OBB files. You also need to have a stable and fast internet connection, enough storage space, and delete unwanted or outdated resource packs or OBB files. We hope this article has helped you learn how to download resources PUBG mobile easily and safely.

-

FAQs

-

Here are some frequently asked questions about downloading resources PUBG mobile:

-

Q: How long does it take to download resources PUBG mobile?

-

A: The time it takes to download resources PUBG mobile depends on several factors, such as the size of the resource pack or OBB file, the speed of your internet connection, and the performance of your device. Generally, it can take from a few minutes to a few hours to download resources PUBG mobile.

-

Q: How can I update my PUBG mobile to the latest version?

-

A: You can update your PUBG mobile to the latest version by going to the Google Play Store or the App Store and tapping on the update button. Alternatively, you can download the latest APK file and OBB file from a reliable website and install them on your device.

-

Q: What are the benefits of downloading resources PUBG mobile?

-

A: The benefits of downloading resources PUBG mobile are:

-
    -
  • You can access new maps and modes that are added in every update
  • -
  • You can customize your character and weapons with different skins and outfits
  • -
  • You can improve the game performance and reduce lagging issues
  • -
  • You can get rewards and benefits from downloading certain resource packs
  • -
-

Q: What are the risks of downloading resources PUBG mobile from external sources?

-

A: The risks of downloading resources PUBG mobile from external sources are:

-
    -
  • You may download corrupted or infected files that can harm your device or compromise your data
  • -
  • You may download outdated or incompatible files that can cause errors or crashes in the game
  • -
  • You may violate the terms and conditions of PUBG mobile and get banned from the game
  • -
-

Q: How can I delete resources PUBG mobile from my device?

-

A: You can delete resources PUBG mobile from your device by going to settings > download in PUBG mobile and tapping on the delete button next to each resource pack or OBB file. You can also delete them manually by going to Android/OBB/com.tencent.ig folder on your device storage and deleting the unwanted files.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Umlando Download Music The Best Amapiano Songs of 2022.md b/spaces/congsaPfin/Manga-OCR/logs/Umlando Download Music The Best Amapiano Songs of 2022.md deleted file mode 100644 index 2cb98e3ca4829bcde9be56458fd17b8f57983bf6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Umlando Download Music The Best Amapiano Songs of 2022.md +++ /dev/null @@ -1,114 +0,0 @@ -
-

Umlando Download Music: How to Enjoy Free and Legal Music from South Africa

-

If you are a fan of South African music, you might have heard of Umlando, a popular dance style that features catchy beats and vocals. Umlando music is a fusion of traditional and modern influences, and it has become a sensation among music lovers around the world. But how can you download Umlando music for free and legally? And where can you listen to Umlando music online? In this article, we will answer these questions and more.

-

umlando download music


Download ★★★★★ https://urlca.com/2uOe5X



-

What is Umlando?

-

Umlando is a Zulu word that means history. It is used to refer to the study of the past, especially the history of people. The word is also used in the context of dance, where it refers to a popular dance style that originated in South Africa.

-

The meaning of Umlando

-

Umlando music is a genre of dance music that incorporates elements of traditional Zulu music, such as drums, chants, and melodies, with modern influences, such as electronic beats, synths, and vocals. Umlando music is inspired by the history and culture of the Zulu people, as well as their struggles and achievements. Umlando music celebrates the diversity and richness of South African music, and it aims to connect people across generations and backgrounds.

-

The popularity of Umlando music

-

Umlando music has gained popularity in recent years, thanks to the efforts of talented artists and producers who have created catchy and innovative songs. Some of the most famous Umlando artists include 9umba, Toss, Mdoovar, Sir Trill, Sino Msolo, Lady Du, Young Stunna, Slade, and many more. Their songs have been featured on various platforms, such as Apple Music, Wynk Music, YouTube, and others. Umlando music has also attracted fans from different countries, who enjoy the upbeat and energetic vibe of the genre.

-

How to download Umlando music for free and legally

-

If you want to download Umlando music for free and legally, you have several options to choose from. There are many websites that offer free music downloads, and some of them specialize in Umlando music or other genres of South African music. Here are some of the best free music download sites for Umlando music:

-

The best free music download sites for Umlando music

-
    -
  • Free Music Archive: This website provides free access to thousands of songs that can be downloaded or streamed online. You can search for Umlando music by using tags like "South Africa", "Zulu", "Amapiano", or "Dance". You can also browse through curated collections or trending tracks. You can download songs in MP3 format without creating an account.
  • -
  • Jamendo Music: This website allows artists to upload their music under Creative Commons licenses, which means that they give permission for anyone to download or stream their songs for free. You can find Umlando music by using filters like "Genre", "Mood", or "Instrument". You can also listen to online radio channels or playlists that feature Umlando music. You can download songs in MP3 format after creating a free account.
  • -
  • Internet Archive: This website is a digital library that archives various types of media, including audio files. You can find Umlando music by searching through categories like "Audio", "Music", or "Live Music Archive". You can also use keywords like "Umlando", "South Africa", or "Zulu". You can download songs in various formats, such as MP3, OGG, or FLAC, without creating an account (a minimal scripted-download sketch follows this list).
  • -
-
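
If you prefer to script the download itself once you have copied a direct MP3 link from any of the sites above, here is a minimal Python sketch using the requests library; the URL and output file name are placeholders, not real tracks.

```python
import requests

# Placeholder link -- paste the direct MP3 URL you copied from one of the sites above.
TRACK_URL = "https://example.org/path/to/some-umlando-track.mp3"
OUT_FILE = "umlando-track.mp3"


def download(url: str, out_path: str, chunk_size: int = 64 * 1024) -> None:
    """Stream the file to disk so large tracks never have to fit in memory."""
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()  # fail loudly on 404s, rate limits, etc.
        with open(out_path, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                fh.write(chunk)


if __name__ == "__main__":
    download(TRACK_URL, OUT_FILE)
    print(f"Saved {OUT_FILE}")
```

Only use it with links the sites themselves offer for free download, so that everything stays free and legal.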

The benefits of downloading Umlando music

-

Downloading Umlando music for free and legally has many benefits, such as:

-
    -
  • Supporting the artists: By downloading Umlando music from legitimate sources, you are showing your appreciation and respect for the artists who created the music. You are also helping them to gain more exposure and recognition for their work.
  • -
  • Enjoying offline access: By downloading Umlando music to your device, you can enjoy listening to it anytime and anywhere, even without an internet connection. You can also create your own playlists and share them with your friends.
  • -
  • Controlling the quality: By downloading Umlando music in high-quality formats, such as MP3 or FLAC, you can ensure that you get the best sound experience possible. You can also adjust the volume and equalizer settings to suit your preferences.
  • -
-

How to listen to Umlando music online

-

If you prefer to listen to Umlando music online, you have many options as well. There are many streaming services that offer access to Umlando music, and some of them are free or have free trials. Here are some of the best streaming services for Umlando music:

-

The best streaming services for Umlando music

Popular options include Apple Music, Wynk Music, YouTube, Spotify, Deezer, and SoundCloud, all of which carry Umlando releases and offer either free tiers or free trials.

The advantages of streaming Umlando music

-

Streaming Umlando music online has many advantages, such as:

-

umlando mp3 download free
-umlando by 9umba, toss and mdoovar
-umlando feat sir trill, sino msolo, lady du, young stunna and slade
-umlando apple music
-umlando youtube video
-umlando wynk music
-umlando amapiano song
-umlando lyrics and translation
-umlando remix download
-umlando instrumental download
-umlando fakaza music
-umlando zamusic download
-umlando datafilehost download
-umlando hiphopza download
-umlando sahiphop download
-umlando afrohouseking download
-umlando bamoza download
-umlando hitvibes download
-umlando flexyjam download
-umlando zulujam download
-umlando mp3 juice download
-umlando tubidy download
-umlando waploaded download
-umlando naijaloaded download
-umlando tooxclusive download
-umlando notjustok download
-umlando 9jaflaver download
-umlando audiomack download
-umlando soundcloud download
-umlando spotify download
-umlando deezer download
-umlando tidal download
-umlando amazon music download
-umlando pandora music download
-umlando shazam music download
-umlando genius music download
-umlando musixmatch music download
-umlando songmeanings music download
-umlando azlyrics music download
-umlando metrolyrics music download

-
    -
  • Exploring new music: By streaming Umlando music online, you can discover new songs and artists that you might not find otherwise. You can also listen to different genres and styles of music that are related to Umlando music.
  • -
  • Saving storage space: By streaming Umlando music online, you can avoid using up your device's storage space with downloaded files. You can also access your music from any device that has an internet connection.
  • -
  • Staying updated: By streaming Umlando music online, you can always listen to the latest releases and trends in the genre. You can also get notified when your favorite artists drop new songs or albums.
  • -
-

Conclusion

-

Umlando music is a genre of dance music that originated in South Africa. It is a fusion of traditional Zulu music and modern influences, and it celebrates the history and culture of the Zulu people. Umlando music is popular among music lovers around the world, who enjoy its catchy beats and vocals. You can download or stream Umlando music for free and legally from various websites or services, depending on your preferences. Whether you download or stream Umlando music, you will surely enjoy the music and have fun.

-

FAQs

-

Here are some frequently asked questions about Umlando music and how to download or stream it:

-
    -
  1. What is the difference between Umlando and Amapiano?
    Amapiano is another genre of dance music that originated in South Africa. It is similar to Umlando in some aspects, such as using electronic beats and synths, but it also has influences from jazz, soul, and kwaito. Amapiano is more mellow and smooth than Umlando, which is more upbeat and energetic.
  2. -
  3. Is Umlando music legal to download or stream?
    Yes, Umlando music is legal to download or stream, as long as you use legitimate sources that have the permission of the artists or the rights holders. You should avoid using illegal or pirated websites or services that may infringe on the intellectual property rights of the creators.
  4. -
  5. How can I support Umlando artists?
    You can support Umlando artists by downloading or streaming their music from official platforms, such as Apple Music, Wynk Music, YouTube, Spotify, Apple Music, Deezer, or SoundCloud. You can also follow them on social media, share their music with your friends, or buy their merchandise or tickets to their shows.
  6. -
  7. What are some of the best Umlando songs to listen to?
    There are many great Umlando songs to listen to, but here are some of the most popular ones:
    - 9umba & Toss - "uThixo"
    - Mdoovar - "Ntwana Ka God"
    - Sir Trill & Sino Msolo - "Isibonelo"
    - Lady Du & Young Stunna - "Catalia"
    - Slade - "Barman"
  8. -
  9. Where can I learn more about Umlando music and culture?
    You can learn more about Umlando music and culture by visiting websites or blogs that cover South African music, such as SA Music Mag, Zkhiphani, or Fakaza. You can also watch documentaries or videos that feature Umlando artists or dancers, such as "Umlando: The History of Dance in South Africa" or "Umlando: The Dance That Moves South Africa".
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Barcode Generator And Overprinter V6610 TOP Crack.md b/spaces/contluForse/HuggingGPT/assets/Barcode Generator And Overprinter V6610 TOP Crack.md deleted file mode 100644 index ce2266bb5f0179121e068d4a618bc629f92fcf1e..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Barcode Generator And Overprinter V6610 TOP Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

Barcode Generator And Overprinter V6610 Crack


Download File »»» https://ssurll.com/2uzyj0



- -Barcode Generator And Overprinter V6610 Crack · thriller michael jackson 1080p vs 720p · system programming and operating system d m ... 4d29de3e1b
-
-
-

diff --git a/spaces/contluForse/HuggingGPT/assets/CHESS Chessbase Fritz Powerbook.rar.md b/spaces/contluForse/HuggingGPT/assets/CHESS Chessbase Fritz Powerbook.rar.md deleted file mode 100644 index bdcfbc9a8e39f7400e8ac8d7f34b8317a9b5c5c3..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/CHESS Chessbase Fritz Powerbook.rar.md +++ /dev/null @@ -1,60 +0,0 @@ -

CHESS Chessbase Fritz Powerbook.rar


Download ··· https://ssurll.com/2uzwbg



-
-com Chessbase Fritz Powerbook.rar (99K) $8.00 Download: Chessbase Fritz Powerbook.rar (298K) $20.00 Download: Chessbase Fritz Powerbook.rar. ZIP version in English Chessbase Fritz Powerbook.zip. ZIP version in English Chessbase Fritz Powerbook.zip. Download: Chessbase Fritz Powerbook.zip (209K) $8.00 Download: Chessbase Fritz Powerbook.zip. ZIP version in English Chessbase Fritz Powerbook.zip. ZIP version in English Download: Chessbase Fritz Powerbook.zip. ZIP version in English Chessbase Fritz Powerbook.zip. ZIP version in English Download: Chessbase Fritz Powerbook.zip. ZIP version in English $25.00 Download: The ChessBase Manual.pdf. (10.7M) - -"In the world of psychotherapy you never get it right the first time" - -The Matrix Series - -Video: Music Video - -Audio: Music - -Transcript: Music - -Disclaimer: Do not use this content or similar content without - -express permission. All video and music files are under the property - -of The Matrix Collective. No reproduction is permitted without our - -permission. If you are a legal copyright holder and believe that anything - -on this site violates your copyright, please contact us.Review: Chaos;Child by Marceline - -There was a problem with this transaction. Please try again later. - -Chaos;Child - -This is one of those difficult-to-define books. Marceline - -Simonson’s Chaos;Child has so many different characters - -within its pages, is filled with so much intrigue and - -drama, that it could easily become disjointed. It’s - -definitely not one of those easy reads. - -In the opening paragraph, Kate Kidston, the protagonist of - -the novel, leaves her home in Whitefield, Washington and - -travels to a new town called Rekall, where she lives for - -the remainder of the book. She is immediately recruited - -by a company called Rekall, which has an unusual - -philosophy. All of their clients are “born again,” - -recreated in an “experimentation chamber.” To - -survive there, a person must “live the life” of - -another. - -The clients are � 4fefd39f24
-
-
-

diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/fileio/handlers/json_handler.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/fileio/handlers/json_handler.py deleted file mode 100644 index 18d4f15f74139d20adff18b20be5529c592a66b6..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/fileio/handlers/json_handler.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json - -import numpy as np - -from .base import BaseFileHandler - - -def set_default(obj): - """Set default json values for non-serializable values. - - It helps convert ``set``, ``range`` and ``np.ndarray`` data types to list. - It also converts ``np.generic`` (including ``np.int32``, ``np.float32``, - etc.) into plain numbers of plain python built-in types. - """ - if isinstance(obj, (set, range)): - return list(obj) - elif isinstance(obj, np.ndarray): - return obj.tolist() - elif isinstance(obj, np.generic): - return obj.item() - raise TypeError(f'{type(obj)} is unsupported for json dump') - - -class JsonHandler(BaseFileHandler): - - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('default', set_default) - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('default', set_default) - return json.dumps(obj, **kwargs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/utils/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/utils/__init__.py deleted file mode 100644 index f2678b321c295bcceaef945111ac3524be19d6e4..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/utils/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .misc import add_prefix - -__all__ = ['add_prefix'] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/retinanet.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/retinanet.py deleted file mode 100644 index 46e0fda48254f2d1e6b8c796e00467df669e4216..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/retinanet.py +++ /dev/null @@ -1,439 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import math -from typing import List, Tuple -import torch -from fvcore.nn import sigmoid_focal_loss_jit -from torch import Tensor, nn -from torch.nn import functional as F - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.layers import CycleBatchNormList, ShapeSpec, batched_nms, cat, get_norm -from annotator.oneformer.detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from annotator.oneformer.detectron2.utils.events import get_event_storage - -from ..anchor_generator import build_anchor_generator -from ..backbone import Backbone, build_backbone -from ..box_regression import Box2BoxTransform, _dense_box_regression_loss -from ..matcher import Matcher -from .build import META_ARCH_REGISTRY -from .dense_detector import DenseDetector, permute_to_N_HWA_K # noqa - -__all__ = ["RetinaNet"] - - -logger = logging.getLogger(__name__) - - -@META_ARCH_REGISTRY.register() -class RetinaNet(DenseDetector): - """ - Implement RetinaNet in :paper:`RetinaNet`. - """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - head: nn.Module, - head_in_features, - anchor_generator, - box2box_transform, - anchor_matcher, - num_classes, - focal_loss_alpha=0.25, - focal_loss_gamma=2.0, - smooth_l1_beta=0.0, - box_reg_loss_type="smooth_l1", - test_score_thresh=0.05, - test_topk_candidates=1000, - test_nms_thresh=0.5, - max_detections_per_image=100, - pixel_mean, - pixel_std, - vis_period=0, - input_format="BGR", - ): - """ - NOTE: this interface is experimental. - - Args: - backbone: a backbone module, must follow detectron2's backbone interface - head (nn.Module): a module that predicts logits and regression deltas - for each level from a list of per-level features - head_in_features (Tuple[str]): Names of the input feature maps to be used in head - anchor_generator (nn.Module): a module that creates anchors from a - list of features. Usually an instance of :class:`AnchorGenerator` - box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to - instance boxes - anchor_matcher (Matcher): label the anchors by matching them with ground truth. - num_classes (int): number of classes. Used to label background proposals. - - # Loss parameters: - focal_loss_alpha (float): focal_loss_alpha - focal_loss_gamma (float): focal_loss_gamma - smooth_l1_beta (float): smooth_l1_beta - box_reg_loss_type (str): Options are "smooth_l1", "giou", "diou", "ciou" - - # Inference parameters: - test_score_thresh (float): Inference cls score threshold, only anchors with - score > INFERENCE_TH are considered for inference (to improve speed) - test_topk_candidates (int): Select topk candidates before NMS - test_nms_thresh (float): Overlap threshold used for non-maximum suppression - (suppress boxes with IoU >= this threshold) - max_detections_per_image (int): - Maximum number of detections to return per image during inference - (100 is based on the limit established for the COCO dataset). - - pixel_mean, pixel_std: see :class:`DenseDetector`. 
- """ - super().__init__( - backbone, head, head_in_features, pixel_mean=pixel_mean, pixel_std=pixel_std - ) - self.num_classes = num_classes - - # Anchors - self.anchor_generator = anchor_generator - self.box2box_transform = box2box_transform - self.anchor_matcher = anchor_matcher - - # Loss parameters: - self.focal_loss_alpha = focal_loss_alpha - self.focal_loss_gamma = focal_loss_gamma - self.smooth_l1_beta = smooth_l1_beta - self.box_reg_loss_type = box_reg_loss_type - # Inference parameters: - self.test_score_thresh = test_score_thresh - self.test_topk_candidates = test_topk_candidates - self.test_nms_thresh = test_nms_thresh - self.max_detections_per_image = max_detections_per_image - # Vis parameters - self.vis_period = vis_period - self.input_format = input_format - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - backbone_shape = backbone.output_shape() - feature_shapes = [backbone_shape[f] for f in cfg.MODEL.RETINANET.IN_FEATURES] - head = RetinaNetHead(cfg, feature_shapes) - anchor_generator = build_anchor_generator(cfg, feature_shapes) - return { - "backbone": backbone, - "head": head, - "anchor_generator": anchor_generator, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RETINANET.BBOX_REG_WEIGHTS), - "anchor_matcher": Matcher( - cfg.MODEL.RETINANET.IOU_THRESHOLDS, - cfg.MODEL.RETINANET.IOU_LABELS, - allow_low_quality_matches=True, - ), - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES, - "head_in_features": cfg.MODEL.RETINANET.IN_FEATURES, - # Loss parameters: - "focal_loss_alpha": cfg.MODEL.RETINANET.FOCAL_LOSS_ALPHA, - "focal_loss_gamma": cfg.MODEL.RETINANET.FOCAL_LOSS_GAMMA, - "smooth_l1_beta": cfg.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA, - "box_reg_loss_type": cfg.MODEL.RETINANET.BBOX_REG_LOSS_TYPE, - # Inference parameters: - "test_score_thresh": cfg.MODEL.RETINANET.SCORE_THRESH_TEST, - "test_topk_candidates": cfg.MODEL.RETINANET.TOPK_CANDIDATES_TEST, - "test_nms_thresh": cfg.MODEL.RETINANET.NMS_THRESH_TEST, - "max_detections_per_image": cfg.TEST.DETECTIONS_PER_IMAGE, - # Vis parameters - "vis_period": cfg.VIS_PERIOD, - "input_format": cfg.INPUT.FORMAT, - } - - def forward_training(self, images, features, predictions, gt_instances): - # Transpose the Hi*Wi*A dimension to the middle: - pred_logits, pred_anchor_deltas = self._transpose_dense_predictions( - predictions, [self.num_classes, 4] - ) - anchors = self.anchor_generator(features) - gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances) - return self.losses(anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes) - - def losses(self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes): - """ - Args: - anchors (list[Boxes]): a list of #feature level Boxes - gt_labels, gt_boxes: see output of :meth:`RetinaNet.label_anchors`. - Their shapes are (N, R) and (N, R, 4), respectively, where R is - the total number of anchors across levels, i.e. sum(Hi x Wi x Ai) - pred_logits, pred_anchor_deltas: both are list[Tensor]. Each element in the - list corresponds to one level and has shape (N, Hi * Wi * Ai, K or 4). - Where K is the number of classes used in `pred_logits`. - - Returns: - dict[str, Tensor]: - mapping from a named loss to a scalar tensor storing the loss. - Used during training only. 
The dict keys are: "loss_cls" and "loss_box_reg" - """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (N, R) - - valid_mask = gt_labels >= 0 - pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes) - num_pos_anchors = pos_mask.sum().item() - get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images) - normalizer = self._ema_update("loss_normalizer", max(num_pos_anchors, 1), 100) - - # classification and regression loss - gt_labels_target = F.one_hot(gt_labels[valid_mask], num_classes=self.num_classes + 1)[ - :, :-1 - ] # no loss for the last (background) class - loss_cls = sigmoid_focal_loss_jit( - cat(pred_logits, dim=1)[valid_mask], - gt_labels_target.to(pred_logits[0].dtype), - alpha=self.focal_loss_alpha, - gamma=self.focal_loss_gamma, - reduction="sum", - ) - - loss_box_reg = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type=self.box_reg_loss_type, - smooth_l1_beta=self.smooth_l1_beta, - ) - - return { - "loss_cls": loss_cls / normalizer, - "loss_box_reg": loss_box_reg / normalizer, - } - - @torch.no_grad() - def label_anchors(self, anchors, gt_instances): - """ - Args: - anchors (list[Boxes]): A list of #feature level Boxes. - The Boxes contains anchors of this image on the specific feature level. - gt_instances (list[Instances]): a list of N `Instances`s. The i-th - `Instances` contains the ground-truth per-instance annotations - for the i-th input image. - - Returns: - list[Tensor]: List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across all feature maps (sum(Hi * Wi * A)). - Label values are in {-1, 0, ..., K}, with -1 means ignore, and K means background. - - list[Tensor]: i-th element is a Rx4 tensor, where R is the total number of anchors - across feature maps. The values are the matched gt boxes for each anchor. - Values are undefined for those anchors not labeled as foreground. - """ - anchors = Boxes.cat(anchors) # Rx4 - - gt_labels = [] - matched_gt_boxes = [] - for gt_per_image in gt_instances: - match_quality_matrix = pairwise_iou(gt_per_image.gt_boxes, anchors) - matched_idxs, anchor_labels = self.anchor_matcher(match_quality_matrix) - del match_quality_matrix - - if len(gt_per_image) > 0: - matched_gt_boxes_i = gt_per_image.gt_boxes.tensor[matched_idxs] - - gt_labels_i = gt_per_image.gt_classes[matched_idxs] - # Anchors with label 0 are treated as background. - gt_labels_i[anchor_labels == 0] = self.num_classes - # Anchors with label -1 are ignored. 
- gt_labels_i[anchor_labels == -1] = -1 - else: - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - gt_labels_i = torch.zeros_like(matched_idxs) + self.num_classes - - gt_labels.append(gt_labels_i) - matched_gt_boxes.append(matched_gt_boxes_i) - - return gt_labels, matched_gt_boxes - - def forward_inference( - self, images: ImageList, features: List[Tensor], predictions: List[List[Tensor]] - ): - pred_logits, pred_anchor_deltas = self._transpose_dense_predictions( - predictions, [self.num_classes, 4] - ) - anchors = self.anchor_generator(features) - - results: List[Instances] = [] - for img_idx, image_size in enumerate(images.image_sizes): - scores_per_image = [x[img_idx].sigmoid_() for x in pred_logits] - deltas_per_image = [x[img_idx] for x in pred_anchor_deltas] - results_per_image = self.inference_single_image( - anchors, scores_per_image, deltas_per_image, image_size - ) - results.append(results_per_image) - return results - - def inference_single_image( - self, - anchors: List[Boxes], - box_cls: List[Tensor], - box_delta: List[Tensor], - image_size: Tuple[int, int], - ): - """ - Single-image inference. Return bounding-box detection results by thresholding - on scores and applying non-maximum suppression (NMS). - - Arguments: - anchors (list[Boxes]): list of #feature levels. Each entry contains - a Boxes object, which contains all the anchors in that feature level. - box_cls (list[Tensor]): list of #feature levels. Each entry contains - tensor of size (H x W x A, K) - box_delta (list[Tensor]): Same shape as 'box_cls' except that K becomes 4. - image_size (tuple(H, W)): a tuple of the image height and width. - - Returns: - Same as `inference`, but for only one image. - """ - pred = self._decode_multi_level_predictions( - anchors, - box_cls, - box_delta, - self.test_score_thresh, - self.test_topk_candidates, - image_size, - ) - keep = batched_nms( # per-class NMS - pred.pred_boxes.tensor, pred.scores, pred.pred_classes, self.test_nms_thresh - ) - return pred[keep[: self.max_detections_per_image]] - - -class RetinaNetHead(nn.Module): - """ - The head used in RetinaNet for object classification and box regression. - It has two subnets for the two tasks, with a common structure but separate parameters. - """ - - @configurable - def __init__( - self, - *, - input_shape: List[ShapeSpec], - num_classes, - num_anchors, - conv_dims: List[int], - norm="", - prior_prob=0.01, - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (List[ShapeSpec]): input shape - num_classes (int): number of classes. Used to label background proposals. - num_anchors (int): number of generated anchors - conv_dims (List[int]): dimensions for each convolution layer - norm (str or callable): - Normalization for conv layers except for the two output layers. - See :func:`detectron2.layers.get_norm` for supported types. - prior_prob (float): Prior weight for computing bias - """ - super().__init__() - - self._num_features = len(input_shape) - if norm == "BN" or norm == "SyncBN": - logger.info( - f"Using domain-specific {norm} in RetinaNetHead with len={self._num_features}." - ) - bn_class = nn.BatchNorm2d if norm == "BN" else nn.SyncBatchNorm - - def norm(c): - return CycleBatchNormList( - length=self._num_features, bn_class=bn_class, num_features=c - ) - - else: - norm_name = str(type(get_norm(norm, 32))) - if "BN" in norm_name: - logger.warning( - f"Shared BatchNorm (type={norm_name}) may not work well in RetinaNetHead." 
- ) - - cls_subnet = [] - bbox_subnet = [] - for in_channels, out_channels in zip( - [input_shape[0].channels] + list(conv_dims), conv_dims - ): - cls_subnet.append( - nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - ) - if norm: - cls_subnet.append(get_norm(norm, out_channels)) - cls_subnet.append(nn.ReLU()) - bbox_subnet.append( - nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - ) - if norm: - bbox_subnet.append(get_norm(norm, out_channels)) - bbox_subnet.append(nn.ReLU()) - - self.cls_subnet = nn.Sequential(*cls_subnet) - self.bbox_subnet = nn.Sequential(*bbox_subnet) - self.cls_score = nn.Conv2d( - conv_dims[-1], num_anchors * num_classes, kernel_size=3, stride=1, padding=1 - ) - self.bbox_pred = nn.Conv2d( - conv_dims[-1], num_anchors * 4, kernel_size=3, stride=1, padding=1 - ) - - # Initialization - for modules in [self.cls_subnet, self.bbox_subnet, self.cls_score, self.bbox_pred]: - for layer in modules.modules(): - if isinstance(layer, nn.Conv2d): - torch.nn.init.normal_(layer.weight, mean=0, std=0.01) - torch.nn.init.constant_(layer.bias, 0) - - # Use prior in model initialization to improve stability - bias_value = -(math.log((1 - prior_prob) / prior_prob)) - torch.nn.init.constant_(self.cls_score.bias, bias_value) - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - num_anchors = build_anchor_generator(cfg, input_shape).num_cell_anchors - assert ( - len(set(num_anchors)) == 1 - ), "Using different number of anchors between levels is not currently supported!" - num_anchors = num_anchors[0] - - return { - "input_shape": input_shape, - "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES, - "conv_dims": [input_shape[0].channels] * cfg.MODEL.RETINANET.NUM_CONVS, - "prior_prob": cfg.MODEL.RETINANET.PRIOR_PROB, - "norm": cfg.MODEL.RETINANET.NORM, - "num_anchors": num_anchors, - } - - def forward(self, features: List[Tensor]): - """ - Arguments: - features (list[Tensor]): FPN feature map tensors in high to low resolution. - Each tensor in the list correspond to different feature levels. - - Returns: - logits (list[Tensor]): #lvl tensors, each has shape (N, AxK, Hi, Wi). - The tensor predicts the classification probability - at each spatial position for each of the A anchors and K object - classes. - bbox_reg (list[Tensor]): #lvl tensors, each has shape (N, Ax4, Hi, Wi). - The tensor predicts 4-vector (dx,dy,dw,dh) box - regression values for every anchor. These values are the - relative offset between the anchor and the ground truth box. 
- """ - assert len(features) == self._num_features - logits = [] - bbox_reg = [] - for feature in features: - logits.append(self.cls_score(self.cls_subnet(feature))) - bbox_reg.append(self.bbox_pred(self.bbox_subnet(feature))) - return logits, bbox_reg diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/__init__.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/__init__.py deleted file mode 100644 index c7ffcccd7fc0f33b59d99d73d0436d60e561b0fc..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# https://github.com/xinntao/BasicSR -# flake8: noqa -from .archs import * -from .data import * -from .losses import * -from .metrics import * -from .models import * -from .ops import * -from .train import * -from .utils import * -from .version import __gitsha__, __version__ diff --git a/spaces/cscan/vocal_remover/lib/utils.py b/spaces/cscan/vocal_remover/lib/utils.py deleted file mode 100644 index 20d5bf0d2da027fd447b4b6501d7011020ca06b3..0000000000000000000000000000000000000000 --- a/spaces/cscan/vocal_remover/lib/utils.py +++ /dev/null @@ -1,30 +0,0 @@ -import os - -import cv2 -import numpy as np - - -def imread(filename, flags=cv2.IMREAD_COLOR, dtype=np.uint8): - try: - n = np.fromfile(filename, dtype) - img = cv2.imdecode(n, flags) - return img - except Exception as e: - print(e) - return None - - -def imwrite(filename, img, params=None): - try: - ext = os.path.splitext(filename)[1] - result, n = cv2.imencode(ext, img, params) - - if result: - with open(filename, mode='w+b') as f: - n.tofile(f) - return True - else: - return False - except Exception as e: - print(e) - return False diff --git a/spaces/cvlab/zero123-live/CLIP/data/prompts.md b/spaces/cvlab/zero123-live/CLIP/data/prompts.md deleted file mode 100644 index 6d8aaf7b13f04031e7ea00d58a1c131b98bdfe20..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/CLIP/data/prompts.md +++ /dev/null @@ -1,3401 +0,0 @@ -# Prompts for Image Classification - -Below are the class names and templates that are used for collecting the zero-shot classification scores in the paper. Each dataset has two lists `classes` and `templates`, where the string `{}` in the template is to be replaced with the corresponding class names. For the Facial Emotion Recognition 2013 dataset specifically, we used multiple class names for certain classes. - -This file contains prompt data for 26 of the 27 datasets shown in Table 9 of the paper; the text prompts for ImageNet (as well as other [ImageNet Testbed](https://modestyachts.github.io/imagenet-testbed/) datasets in Figure 13) can be found in [this notebook](https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb), as well as how to ensemble predictions from multiple prompts using these templates. - -If you are viewing this document on GitHub, use the table of contents icon at the upper left to browse the datasets. 
- - -## Birdsnap - -```bash -classes = [ - 'Acadian Flycatcher', - 'Acorn Woodpecker', - 'Alder Flycatcher', - 'Allens Hummingbird', - 'Altamira Oriole', - 'American Avocet', - 'American Bittern', - 'American Black Duck', - 'American Coot', - 'American Crow', - 'American Dipper', - 'American Golden Plover', - 'American Goldfinch', - 'American Kestrel', - 'American Oystercatcher', - 'American Pipit', - 'American Redstart', - 'American Robin', - 'American Three toed Woodpecker', - 'American Tree Sparrow', - 'American White Pelican', - 'American Wigeon', - 'American Woodcock', - 'Anhinga', - 'Annas Hummingbird', - 'Arctic Tern', - 'Ash throated Flycatcher', - 'Audubons Oriole', - 'Bairds Sandpiper', - 'Bald Eagle', - 'Baltimore Oriole', - 'Band tailed Pigeon', - 'Barn Swallow', - 'Barred Owl', - 'Barrows Goldeneye', - 'Bay breasted Warbler', - 'Bells Vireo', - 'Belted Kingfisher', - 'Bewicks Wren', - 'Black Guillemot', - 'Black Oystercatcher', - 'Black Phoebe', - 'Black Rosy Finch', - 'Black Scoter', - 'Black Skimmer', - 'Black Tern', - 'Black Turnstone', - 'Black Vulture', - 'Black and white Warbler', - 'Black backed Woodpecker', - 'Black bellied Plover', - 'Black billed Cuckoo', - 'Black billed Magpie', - 'Black capped Chickadee', - 'Black chinned Hummingbird', - 'Black chinned Sparrow', - 'Black crested Titmouse', - 'Black crowned Night Heron', - 'Black headed Grosbeak', - 'Black legged Kittiwake', - 'Black necked Stilt', - 'Black throated Blue Warbler', - 'Black throated Gray Warbler', - 'Black throated Green Warbler', - 'Black throated Sparrow', - 'Blackburnian Warbler', - 'Blackpoll Warbler', - 'Blue Grosbeak', - 'Blue Jay', - 'Blue gray Gnatcatcher', - 'Blue headed Vireo', - 'Blue winged Teal', - 'Blue winged Warbler', - 'Boat tailed Grackle', - 'Bobolink', - 'Bohemian Waxwing', - 'Bonapartes Gull', - 'Boreal Chickadee', - 'Brandts Cormorant', - 'Brant', - 'Brewers Blackbird', - 'Brewers Sparrow', - 'Bridled Titmouse', - 'Broad billed Hummingbird', - 'Broad tailed Hummingbird', - 'Broad winged Hawk', - 'Bronzed Cowbird', - 'Brown Creeper', - 'Brown Pelican', - 'Brown Thrasher', - 'Brown capped Rosy Finch', - 'Brown crested Flycatcher', - 'Brown headed Cowbird', - 'Brown headed Nuthatch', - 'Bufflehead', - 'Bullocks Oriole', - 'Burrowing Owl', - 'Bushtit', - 'Cackling Goose', - 'Cactus Wren', - 'California Gull', - 'California Quail', - 'California Thrasher', - 'California Towhee', - 'Calliope Hummingbird', - 'Canada Goose', - 'Canada Warbler', - 'Canvasback', - 'Canyon Towhee', - 'Canyon Wren', - 'Cape May Warbler', - 'Carolina Chickadee', - 'Carolina Wren', - 'Caspian Tern', - 'Cassins Finch', - 'Cassins Kingbird', - 'Cassins Sparrow', - 'Cassins Vireo', - 'Cattle Egret', - 'Cave Swallow', - 'Cedar Waxwing', - 'Cerulean Warbler', - 'Chestnut backed Chickadee', - 'Chestnut collared Longspur', - 'Chestnut sided Warbler', - 'Chihuahuan Raven', - 'Chimney Swift', - 'Chipping Sparrow', - 'Cinnamon Teal', - 'Clapper Rail', - 'Clarks Grebe', - 'Clarks Nutcracker', - 'Clay colored Sparrow', - 'Cliff Swallow', - 'Common Black Hawk', - 'Common Eider', - 'Common Gallinule', - 'Common Goldeneye', - 'Common Grackle', - 'Common Ground Dove', - 'Common Loon', - 'Common Merganser', - 'Common Murre', - 'Common Nighthawk', - 'Common Raven', - 'Common Redpoll', - 'Common Tern', - 'Common Yellowthroat', - 'Connecticut Warbler', - 'Coopers Hawk', - 'Cordilleran Flycatcher', - 'Costas Hummingbird', - 'Couchs Kingbird', - 'Crested Caracara', - 'Curve billed Thrasher', - 'Dark eyed Junco', - 'Dickcissel', - 
'Double crested Cormorant', - 'Downy Woodpecker', - 'Dunlin', - 'Dusky Flycatcher', - 'Dusky Grouse', - 'Eared Grebe', - 'Eastern Bluebird', - 'Eastern Kingbird', - 'Eastern Meadowlark', - 'Eastern Phoebe', - 'Eastern Screech Owl', - 'Eastern Towhee', - 'Eastern Wood Pewee', - 'Elegant Trogon', - 'Elf Owl', - 'Eurasian Collared Dove', - 'Eurasian Wigeon', - 'European Starling', - 'Evening Grosbeak', - 'Ferruginous Hawk', - 'Ferruginous Pygmy Owl', - 'Field Sparrow', - 'Fish Crow', - 'Florida Scrub Jay', - 'Forsters Tern', - 'Fox Sparrow', - 'Franklins Gull', - 'Fulvous Whistling Duck', - 'Gadwall', - 'Gambels Quail', - 'Gila Woodpecker', - 'Glaucous Gull', - 'Glaucous winged Gull', - 'Glossy Ibis', - 'Golden Eagle', - 'Golden crowned Kinglet', - 'Golden crowned Sparrow', - 'Golden fronted Woodpecker', - 'Golden winged Warbler', - 'Grasshopper Sparrow', - 'Gray Catbird', - 'Gray Flycatcher', - 'Gray Jay', - 'Gray Kingbird', - 'Gray cheeked Thrush', - 'Gray crowned Rosy Finch', - 'Great Black backed Gull', - 'Great Blue Heron', - 'Great Cormorant', - 'Great Crested Flycatcher', - 'Great Egret', - 'Great Gray Owl', - 'Great Horned Owl', - 'Great Kiskadee', - 'Great tailed Grackle', - 'Greater Prairie Chicken', - 'Greater Roadrunner', - 'Greater Sage Grouse', - 'Greater Scaup', - 'Greater White fronted Goose', - 'Greater Yellowlegs', - 'Green Jay', - 'Green tailed Towhee', - 'Green winged Teal', - 'Groove billed Ani', - 'Gull billed Tern', - 'Hairy Woodpecker', - 'Hammonds Flycatcher', - 'Harlequin Duck', - 'Harriss Hawk', - 'Harriss Sparrow', - 'Heermanns Gull', - 'Henslows Sparrow', - 'Hepatic Tanager', - 'Hermit Thrush', - 'Herring Gull', - 'Hoary Redpoll', - 'Hooded Merganser', - 'Hooded Oriole', - 'Hooded Warbler', - 'Horned Grebe', - 'Horned Lark', - 'House Finch', - 'House Sparrow', - 'House Wren', - 'Huttons Vireo', - 'Iceland Gull', - 'Inca Dove', - 'Indigo Bunting', - 'Killdeer', - 'King Rail', - 'Ladder backed Woodpecker', - 'Lapland Longspur', - 'Lark Bunting', - 'Lark Sparrow', - 'Laughing Gull', - 'Lazuli Bunting', - 'Le Contes Sparrow', - 'Least Bittern', - 'Least Flycatcher', - 'Least Grebe', - 'Least Sandpiper', - 'Least Tern', - 'Lesser Goldfinch', - 'Lesser Nighthawk', - 'Lesser Scaup', - 'Lesser Yellowlegs', - 'Lewiss Woodpecker', - 'Limpkin', - 'Lincolns Sparrow', - 'Little Blue Heron', - 'Loggerhead Shrike', - 'Long billed Curlew', - 'Long billed Dowitcher', - 'Long billed Thrasher', - 'Long eared Owl', - 'Long tailed Duck', - 'Louisiana Waterthrush', - 'Magnificent Frigatebird', - 'Magnolia Warbler', - 'Mallard', - 'Marbled Godwit', - 'Marsh Wren', - 'Merlin', - 'Mew Gull', - 'Mexican Jay', - 'Mississippi Kite', - 'Monk Parakeet', - 'Mottled Duck', - 'Mountain Bluebird', - 'Mountain Chickadee', - 'Mountain Plover', - 'Mourning Dove', - 'Mourning Warbler', - 'Muscovy Duck', - 'Mute Swan', - 'Nashville Warbler', - 'Nelsons Sparrow', - 'Neotropic Cormorant', - 'Northern Bobwhite', - 'Northern Cardinal', - 'Northern Flicker', - 'Northern Gannet', - 'Northern Goshawk', - 'Northern Harrier', - 'Northern Hawk Owl', - 'Northern Mockingbird', - 'Northern Parula', - 'Northern Pintail', - 'Northern Rough winged Swallow', - 'Northern Saw whet Owl', - 'Northern Shrike', - 'Northern Waterthrush', - 'Nuttalls Woodpecker', - 'Oak Titmouse', - 'Olive Sparrow', - 'Olive sided Flycatcher', - 'Orange crowned Warbler', - 'Orchard Oriole', - 'Osprey', - 'Ovenbird', - 'Pacific Golden Plover', - 'Pacific Loon', - 'Pacific Wren', - 'Pacific slope Flycatcher', - 'Painted Bunting', - 'Painted 
Redstart', - 'Palm Warbler', - 'Pectoral Sandpiper', - 'Peregrine Falcon', - 'Phainopepla', - 'Philadelphia Vireo', - 'Pied billed Grebe', - 'Pigeon Guillemot', - 'Pileated Woodpecker', - 'Pine Grosbeak', - 'Pine Siskin', - 'Pine Warbler', - 'Piping Plover', - 'Plumbeous Vireo', - 'Prairie Falcon', - 'Prairie Warbler', - 'Prothonotary Warbler', - 'Purple Finch', - 'Purple Gallinule', - 'Purple Martin', - 'Purple Sandpiper', - 'Pygmy Nuthatch', - 'Pyrrhuloxia', - 'Red Crossbill', - 'Red Knot', - 'Red Phalarope', - 'Red bellied Woodpecker', - 'Red breasted Merganser', - 'Red breasted Nuthatch', - 'Red breasted Sapsucker', - 'Red cockaded Woodpecker', - 'Red eyed Vireo', - 'Red headed Woodpecker', - 'Red naped Sapsucker', - 'Red necked Grebe', - 'Red necked Phalarope', - 'Red shouldered Hawk', - 'Red tailed Hawk', - 'Red throated Loon', - 'Red winged Blackbird', - 'Reddish Egret', - 'Redhead', - 'Ring billed Gull', - 'Ring necked Duck', - 'Ring necked Pheasant', - 'Rock Pigeon', - 'Rock Ptarmigan', - 'Rock Sandpiper', - 'Rock Wren', - 'Rose breasted Grosbeak', - 'Roseate Tern', - 'Rosss Goose', - 'Rough legged Hawk', - 'Royal Tern', - 'Ruby crowned Kinglet', - 'Ruby throated Hummingbird', - 'Ruddy Duck', - 'Ruddy Turnstone', - 'Ruffed Grouse', - 'Rufous Hummingbird', - 'Rufous crowned Sparrow', - 'Rusty Blackbird', - 'Sage Thrasher', - 'Saltmarsh Sparrow', - 'Sanderling', - 'Sandhill Crane', - 'Sandwich Tern', - 'Says Phoebe', - 'Scaled Quail', - 'Scarlet Tanager', - 'Scissor tailed Flycatcher', - 'Scotts Oriole', - 'Seaside Sparrow', - 'Sedge Wren', - 'Semipalmated Plover', - 'Semipalmated Sandpiper', - 'Sharp shinned Hawk', - 'Sharp tailed Grouse', - 'Short billed Dowitcher', - 'Short eared Owl', - 'Snail Kite', - 'Snow Bunting', - 'Snow Goose', - 'Snowy Egret', - 'Snowy Owl', - 'Snowy Plover', - 'Solitary Sandpiper', - 'Song Sparrow', - 'Sooty Grouse', - 'Sora', - 'Spotted Owl', - 'Spotted Sandpiper', - 'Spotted Towhee', - 'Spruce Grouse', - 'Stellers Jay', - 'Stilt Sandpiper', - 'Summer Tanager', - 'Surf Scoter', - 'Surfbird', - 'Swainsons Hawk', - 'Swainsons Thrush', - 'Swallow tailed Kite', - 'Swamp Sparrow', - 'Tennessee Warbler', - 'Thayers Gull', - 'Townsends Solitaire', - 'Townsends Warbler', - 'Tree Swallow', - 'Tricolored Heron', - 'Tropical Kingbird', - 'Trumpeter Swan', - 'Tufted Titmouse', - 'Tundra Swan', - 'Turkey Vulture', - 'Upland Sandpiper', - 'Varied Thrush', - 'Veery', - 'Verdin', - 'Vermilion Flycatcher', - 'Vesper Sparrow', - 'Violet green Swallow', - 'Virginia Rail', - 'Wandering Tattler', - 'Warbling Vireo', - 'Western Bluebird', - 'Western Grebe', - 'Western Gull', - 'Western Kingbird', - 'Western Meadowlark', - 'Western Sandpiper', - 'Western Screech Owl', - 'Western Scrub Jay', - 'Western Tanager', - 'Western Wood Pewee', - 'Whimbrel', - 'White Ibis', - 'White breasted Nuthatch', - 'White crowned Sparrow', - 'White eyed Vireo', - 'White faced Ibis', - 'White headed Woodpecker', - 'White rumped Sandpiper', - 'White tailed Hawk', - 'White tailed Kite', - 'White tailed Ptarmigan', - 'White throated Sparrow', - 'White throated Swift', - 'White winged Crossbill', - 'White winged Dove', - 'White winged Scoter', - 'Wild Turkey', - 'Willet', - 'Williamsons Sapsucker', - 'Willow Flycatcher', - 'Willow Ptarmigan', - 'Wilsons Phalarope', - 'Wilsons Plover', - 'Wilsons Snipe', - 'Wilsons Warbler', - 'Winter Wren', - 'Wood Stork', - 'Wood Thrush', - 'Worm eating Warbler', - 'Wrentit', - 'Yellow Warbler', - 'Yellow bellied Flycatcher', - 'Yellow bellied Sapsucker', - 'Yellow 
billed Cuckoo', - 'Yellow billed Magpie', - 'Yellow breasted Chat', - 'Yellow crowned Night Heron', - 'Yellow eyed Junco', - 'Yellow headed Blackbird', - 'Yellow rumped Warbler', - 'Yellow throated Vireo', - 'Yellow throated Warbler', - 'Zone tailed Hawk', -] - -templates = [ - 'a photo of a {}, a type of bird.', -] -``` - - - -## CIFAR10 - -```bash -classes = [ - 'airplane', - 'automobile', - 'bird', - 'cat', - 'deer', - 'dog', - 'frog', - 'horse', - 'ship', - 'truck', -] - -templates = [ - 'a photo of a {}.', - 'a blurry photo of a {}.', - 'a black and white photo of a {}.', - 'a low contrast photo of a {}.', - 'a high contrast photo of a {}.', - 'a bad photo of a {}.', - 'a good photo of a {}.', - 'a photo of a small {}.', - 'a photo of a big {}.', - 'a photo of the {}.', - 'a blurry photo of the {}.', - 'a black and white photo of the {}.', - 'a low contrast photo of the {}.', - 'a high contrast photo of the {}.', - 'a bad photo of the {}.', - 'a good photo of the {}.', - 'a photo of the small {}.', - 'a photo of the big {}.', -] -``` - - - -## CIFAR100 - -```bash -classes = [ - 'apple', - 'aquarium fish', - 'baby', - 'bear', - 'beaver', - 'bed', - 'bee', - 'beetle', - 'bicycle', - 'bottle', - 'bowl', - 'boy', - 'bridge', - 'bus', - 'butterfly', - 'camel', - 'can', - 'castle', - 'caterpillar', - 'cattle', - 'chair', - 'chimpanzee', - 'clock', - 'cloud', - 'cockroach', - 'couch', - 'crab', - 'crocodile', - 'cup', - 'dinosaur', - 'dolphin', - 'elephant', - 'flatfish', - 'forest', - 'fox', - 'girl', - 'hamster', - 'house', - 'kangaroo', - 'keyboard', - 'lamp', - 'lawn mower', - 'leopard', - 'lion', - 'lizard', - 'lobster', - 'man', - 'maple tree', - 'motorcycle', - 'mountain', - 'mouse', - 'mushroom', - 'oak tree', - 'orange', - 'orchid', - 'otter', - 'palm tree', - 'pear', - 'pickup truck', - 'pine tree', - 'plain', - 'plate', - 'poppy', - 'porcupine', - 'possum', - 'rabbit', - 'raccoon', - 'ray', - 'road', - 'rocket', - 'rose', - 'sea', - 'seal', - 'shark', - 'shrew', - 'skunk', - 'skyscraper', - 'snail', - 'snake', - 'spider', - 'squirrel', - 'streetcar', - 'sunflower', - 'sweet pepper', - 'table', - 'tank', - 'telephone', - 'television', - 'tiger', - 'tractor', - 'train', - 'trout', - 'tulip', - 'turtle', - 'wardrobe', - 'whale', - 'willow tree', - 'wolf', - 'woman', - 'worm', -] - -templates = [ - 'a photo of a {}.', - 'a blurry photo of a {}.', - 'a black and white photo of a {}.', - 'a low contrast photo of a {}.', - 'a high contrast photo of a {}.', - 'a bad photo of a {}.', - 'a good photo of a {}.', - 'a photo of a small {}.', - 'a photo of a big {}.', - 'a photo of the {}.', - 'a blurry photo of the {}.', - 'a black and white photo of the {}.', - 'a low contrast photo of the {}.', - 'a high contrast photo of the {}.', - 'a bad photo of the {}.', - 'a good photo of the {}.', - 'a photo of the small {}.', - 'a photo of the big {}.', -] -``` - - - -## CLEVRCounts - -```bash -classes = [ - '10', - '3', - '4', - '5', - '6', - '7', - '8', - '9', -] - -templates = [ - 'a photo of {} objects.', -] -``` - - - -## Caltech101 - -```bash -classes = [ - 'background', - 'off-center face', - 'centered face', - 'leopard', - 'motorbike', - 'accordion', - 'airplane', - 'anchor', - 'ant', - 'barrel', - 'bass', - 'beaver', - 'binocular', - 'bonsai', - 'brain', - 'brontosaurus', - 'buddha', - 'butterfly', - 'camera', - 'cannon', - 'side of a car', - 'ceiling fan', - 'cellphone', - 'chair', - 'chandelier', - 'body of a cougar cat', - 'face of a cougar cat', - 'crab', - 'crayfish', - 'crocodile', - 
'head of a crocodile', - 'cup', - 'dalmatian', - 'dollar bill', - 'dolphin', - 'dragonfly', - 'electric guitar', - 'elephant', - 'emu', - 'euphonium', - 'ewer', - 'ferry', - 'flamingo', - 'head of a flamingo', - 'garfield', - 'gerenuk', - 'gramophone', - 'grand piano', - 'hawksbill', - 'headphone', - 'hedgehog', - 'helicopter', - 'ibis', - 'inline skate', - 'joshua tree', - 'kangaroo', - 'ketch', - 'lamp', - 'laptop', - 'llama', - 'lobster', - 'lotus', - 'mandolin', - 'mayfly', - 'menorah', - 'metronome', - 'minaret', - 'nautilus', - 'octopus', - 'okapi', - 'pagoda', - 'panda', - 'pigeon', - 'pizza', - 'platypus', - 'pyramid', - 'revolver', - 'rhino', - 'rooster', - 'saxophone', - 'schooner', - 'scissors', - 'scorpion', - 'sea horse', - 'snoopy (cartoon beagle)', - 'soccer ball', - 'stapler', - 'starfish', - 'stegosaurus', - 'stop sign', - 'strawberry', - 'sunflower', - 'tick', - 'trilobite', - 'umbrella', - 'watch', - 'water lilly', - 'wheelchair', - 'wild cat', - 'windsor chair', - 'wrench', - 'yin and yang symbol', -] - -templates = [ - 'a photo of a {}.', - 'a painting of a {}.', - 'a plastic {}.', - 'a sculpture of a {}.', - 'a sketch of a {}.', - 'a tattoo of a {}.', - 'a toy {}.', - 'a rendition of a {}.', - 'a embroidered {}.', - 'a cartoon {}.', - 'a {} in a video game.', - 'a plushie {}.', - 'a origami {}.', - 'art of a {}.', - 'graffiti of a {}.', - 'a drawing of a {}.', - 'a doodle of a {}.', - 'a photo of the {}.', - 'a painting of the {}.', - 'the plastic {}.', - 'a sculpture of the {}.', - 'a sketch of the {}.', - 'a tattoo of the {}.', - 'the toy {}.', - 'a rendition of the {}.', - 'the embroidered {}.', - 'the cartoon {}.', - 'the {} in a video game.', - 'the plushie {}.', - 'the origami {}.', - 'art of the {}.', - 'graffiti of the {}.', - 'a drawing of the {}.', - 'a doodle of the {}.', -] -``` - - - -## Country211 - -```bash -classes = [ - 'Andorra', - 'United Arab Emirates', - 'Afghanistan', - 'Antigua and Barbuda', - 'Anguilla', - 'Albania', - 'Armenia', - 'Angola', - 'Antarctica', - 'Argentina', - 'Austria', - 'Australia', - 'Aruba', - 'Aland Islands', - 'Azerbaijan', - 'Bosnia and Herzegovina', - 'Barbados', - 'Bangladesh', - 'Belgium', - 'Burkina Faso', - 'Bulgaria', - 'Bahrain', - 'Benin', - 'Bermuda', - 'Brunei Darussalam', - 'Bolivia', - 'Bonaire, Saint Eustatius and Saba', - 'Brazil', - 'Bahamas', - 'Bhutan', - 'Botswana', - 'Belarus', - 'Belize', - 'Canada', - 'DR Congo', - 'Central African Republic', - 'Switzerland', - "Cote d'Ivoire", - 'Cook Islands', - 'Chile', - 'Cameroon', - 'China', - 'Colombia', - 'Costa Rica', - 'Cuba', - 'Cabo Verde', - 'Curacao', - 'Cyprus', - 'Czech Republic', - 'Germany', - 'Denmark', - 'Dominica', - 'Dominican Republic', - 'Algeria', - 'Ecuador', - 'Estonia', - 'Egypt', - 'Spain', - 'Ethiopia', - 'Finland', - 'Fiji', - 'Falkland Islands', - 'Faeroe Islands', - 'France', - 'Gabon', - 'United Kingdom', - 'Grenada', - 'Georgia', - 'French Guiana', - 'Guernsey', - 'Ghana', - 'Gibraltar', - 'Greenland', - 'Gambia', - 'Guadeloupe', - 'Greece', - 'South Georgia and South Sandwich Is.', - 'Guatemala', - 'Guam', - 'Guyana', - 'Hong Kong', - 'Honduras', - 'Croatia', - 'Haiti', - 'Hungary', - 'Indonesia', - 'Ireland', - 'Israel', - 'Isle of Man', - 'India', - 'Iraq', - 'Iran', - 'Iceland', - 'Italy', - 'Jersey', - 'Jamaica', - 'Jordan', - 'Japan', - 'Kenya', - 'Kyrgyz Republic', - 'Cambodia', - 'St. Kitts and Nevis', - 'North Korea', - 'South Korea', - 'Kuwait', - 'Cayman Islands', - 'Kazakhstan', - 'Laos', - 'Lebanon', - 'St. 
Lucia', - 'Liechtenstein', - 'Sri Lanka', - 'Liberia', - 'Lithuania', - 'Luxembourg', - 'Latvia', - 'Libya', - 'Morocco', - 'Monaco', - 'Moldova', - 'Montenegro', - 'Saint-Martin', - 'Madagascar', - 'Macedonia', - 'Mali', - 'Myanmar', - 'Mongolia', - 'Macau', - 'Martinique', - 'Mauritania', - 'Malta', - 'Mauritius', - 'Maldives', - 'Malawi', - 'Mexico', - 'Malaysia', - 'Mozambique', - 'Namibia', - 'New Caledonia', - 'Nigeria', - 'Nicaragua', - 'Netherlands', - 'Norway', - 'Nepal', - 'New Zealand', - 'Oman', - 'Panama', - 'Peru', - 'French Polynesia', - 'Papua New Guinea', - 'Philippines', - 'Pakistan', - 'Poland', - 'Puerto Rico', - 'Palestine', - 'Portugal', - 'Palau', - 'Paraguay', - 'Qatar', - 'Reunion', - 'Romania', - 'Serbia', - 'Russia', - 'Rwanda', - 'Saudi Arabia', - 'Solomon Islands', - 'Seychelles', - 'Sudan', - 'Sweden', - 'Singapore', - 'St. Helena', - 'Slovenia', - 'Svalbard and Jan Mayen Islands', - 'Slovakia', - 'Sierra Leone', - 'San Marino', - 'Senegal', - 'Somalia', - 'South Sudan', - 'El Salvador', - 'Sint Maarten', - 'Syria', - 'Eswatini', - 'Togo', - 'Thailand', - 'Tajikistan', - 'Timor-Leste', - 'Turkmenistan', - 'Tunisia', - 'Tonga', - 'Turkey', - 'Trinidad and Tobago', - 'Taiwan', - 'Tanzania', - 'Ukraine', - 'Uganda', - 'United States', - 'Uruguay', - 'Uzbekistan', - 'Vatican', - 'Venezuela', - 'British Virgin Islands', - 'United States Virgin Islands', - 'Vietnam', - 'Vanuatu', - 'Samoa', - 'Kosovo', - 'Yemen', - 'South Africa', - 'Zambia', - 'Zimbabwe', -] - -templates = [ - 'a photo i took in {}.', - 'a photo i took while visiting {}.', - 'a photo from my home country of {}.', - 'a photo from my visit to {}.', - 'a photo showing the country of {}.', -] -``` - - - -## DescribableTextures - -```bash -classes = [ - 'banded', - 'blotchy', - 'braided', - 'bubbly', - 'bumpy', - 'chequered', - 'cobwebbed', - 'cracked', - 'crosshatched', - 'crystalline', - 'dotted', - 'fibrous', - 'flecked', - 'freckled', - 'frilly', - 'gauzy', - 'grid', - 'grooved', - 'honeycombed', - 'interlaced', - 'knitted', - 'lacelike', - 'lined', - 'marbled', - 'matted', - 'meshed', - 'paisley', - 'perforated', - 'pitted', - 'pleated', - 'polka-dotted', - 'porous', - 'potholed', - 'scaly', - 'smeared', - 'spiralled', - 'sprinkled', - 'stained', - 'stratified', - 'striped', - 'studded', - 'swirly', - 'veined', - 'waffled', - 'woven', - 'wrinkled', - 'zigzagged', -] - -templates = [ - 'a photo of a {} texture.', - 'a photo of a {} pattern.', - 'a photo of a {} thing.', - 'a photo of a {} object.', - 'a photo of the {} texture.', - 'a photo of the {} pattern.', - 'a photo of the {} thing.', - 'a photo of the {} object.', -] -``` - - - -## EuroSAT - -```bash -classes = [ - 'forest', - 'permanent crop land', - 'residential buildings or homes or apartments', - 'river', - 'pasture land', - 'lake or sea', - 'brushland or shrubland', - 'annual crop land', - 'industrial buildings or commercial buildings', - 'highway or road', -] - -templates = [ - 'a centered satellite photo of {}.', - 'a centered satellite photo of a {}.', - 'a centered satellite photo of the {}.', -] -``` - - - -## FGVCAircraft - -```bash -classes = [ - '707-320', - '727-200', - '737-200', - '737-300', - '737-400', - '737-500', - '737-600', - '737-700', - '737-800', - '737-900', - '747-100', - '747-200', - '747-300', - '747-400', - '757-200', - '757-300', - '767-200', - '767-300', - '767-400', - '777-200', - '777-300', - 'A300B4', - 'A310', - 'A318', - 'A319', - 'A320', - 'A321', - 'A330-200', - 'A330-300', - 'A340-200', - 'A340-300', - 
'A340-500', - 'A340-600', - 'A380', - 'ATR-42', - 'ATR-72', - 'An-12', - 'BAE 146-200', - 'BAE 146-300', - 'BAE-125', - 'Beechcraft 1900', - 'Boeing 717', - 'C-130', - 'C-47', - 'CRJ-200', - 'CRJ-700', - 'CRJ-900', - 'Cessna 172', - 'Cessna 208', - 'Cessna 525', - 'Cessna 560', - 'Challenger 600', - 'DC-10', - 'DC-3', - 'DC-6', - 'DC-8', - 'DC-9-30', - 'DH-82', - 'DHC-1', - 'DHC-6', - 'DHC-8-100', - 'DHC-8-300', - 'DR-400', - 'Dornier 328', - 'E-170', - 'E-190', - 'E-195', - 'EMB-120', - 'ERJ 135', - 'ERJ 145', - 'Embraer Legacy 600', - 'Eurofighter Typhoon', - 'F-16A/B', - 'F/A-18', - 'Falcon 2000', - 'Falcon 900', - 'Fokker 100', - 'Fokker 50', - 'Fokker 70', - 'Global Express', - 'Gulfstream IV', - 'Gulfstream V', - 'Hawk T1', - 'Il-76', - 'L-1011', - 'MD-11', - 'MD-80', - 'MD-87', - 'MD-90', - 'Metroliner', - 'Model B200', - 'PA-28', - 'SR-20', - 'Saab 2000', - 'Saab 340', - 'Spitfire', - 'Tornado', - 'Tu-134', - 'Tu-154', - 'Yak-42', -] - -templates = [ - 'a photo of a {}, a type of aircraft.', - 'a photo of the {}, a type of aircraft.', -] -``` - - - -## FacialEmotionRecognition2013 - -```bash -classes = [ - ['angry'], - ['disgusted'], - ['fearful'], - ['happy', 'smiling'], - ['sad', 'depressed'], - ['surprised', 'shocked', 'spooked'], - ['neutral', 'bored'], -] - -templates = [ - 'a photo of a {} looking face.', - 'a photo of a face showing the emotion: {}.', - 'a photo of a face looking {}.', - 'a face that looks {}.', - 'they look {}.', - 'look at how {} they are.', -] -``` - - - -## Flowers102 - -```bash -classes = [ - 'pink primrose', - 'hard-leaved pocket orchid', - 'canterbury bells', - 'sweet pea', - 'english marigold', - 'tiger lily', - 'moon orchid', - 'bird of paradise', - 'monkshood', - 'globe thistle', - 'snapdragon', - "colt's foot", - 'king protea', - 'spear thistle', - 'yellow iris', - 'globe flower', - 'purple coneflower', - 'peruvian lily', - 'balloon flower', - 'giant white arum lily', - 'fire lily', - 'pincushion flower', - 'fritillary', - 'red ginger', - 'grape hyacinth', - 'corn poppy', - 'prince of wales feathers', - 'stemless gentian', - 'artichoke', - 'sweet william', - 'carnation', - 'garden phlox', - 'love in the mist', - 'mexican aster', - 'alpine sea holly', - 'ruby-lipped cattleya', - 'cape flower', - 'great masterwort', - 'siam tulip', - 'lenten rose', - 'barbeton daisy', - 'daffodil', - 'sword lily', - 'poinsettia', - 'bolero deep blue', - 'wallflower', - 'marigold', - 'buttercup', - 'oxeye daisy', - 'common dandelion', - 'petunia', - 'wild pansy', - 'primula', - 'sunflower', - 'pelargonium', - 'bishop of llandaff', - 'gaura', - 'geranium', - 'orange dahlia', - 'pink and yellow dahlia', - 'cautleya spicata', - 'japanese anemone', - 'black-eyed susan', - 'silverbush', - 'californian poppy', - 'osteospermum', - 'spring crocus', - 'bearded iris', - 'windflower', - 'tree poppy', - 'gazania', - 'azalea', - 'water lily', - 'rose', - 'thorn apple', - 'morning glory', - 'passion flower', - 'lotus', - 'toad lily', - 'anthurium', - 'frangipani', - 'clematis', - 'hibiscus', - 'columbine', - 'desert-rose', - 'tree mallow', - 'magnolia', - 'cyclamen', - 'watercress', - 'canna lily', - 'hippeastrum', - 'bee balm', - 'air plant', - 'foxglove', - 'bougainvillea', - 'camellia', - 'mallow', - 'mexican petunia', - 'bromelia', - 'blanket flower', - 'trumpet creeper', - 'blackberry lily', -] - -templates = [ - 'a photo of a {}, a type of flower.', -] -``` - - - -## Food101 - -```bash -classes = [ - 'apple pie', - 'baby back ribs', - 'baklava', - 'beef carpaccio', - 'beef 
tartare', - 'beet salad', - 'beignets', - 'bibimbap', - 'bread pudding', - 'breakfast burrito', - 'bruschetta', - 'caesar salad', - 'cannoli', - 'caprese salad', - 'carrot cake', - 'ceviche', - 'cheese plate', - 'cheesecake', - 'chicken curry', - 'chicken quesadilla', - 'chicken wings', - 'chocolate cake', - 'chocolate mousse', - 'churros', - 'clam chowder', - 'club sandwich', - 'crab cakes', - 'creme brulee', - 'croque madame', - 'cup cakes', - 'deviled eggs', - 'donuts', - 'dumplings', - 'edamame', - 'eggs benedict', - 'escargots', - 'falafel', - 'filet mignon', - 'fish and chips', - 'foie gras', - 'french fries', - 'french onion soup', - 'french toast', - 'fried calamari', - 'fried rice', - 'frozen yogurt', - 'garlic bread', - 'gnocchi', - 'greek salad', - 'grilled cheese sandwich', - 'grilled salmon', - 'guacamole', - 'gyoza', - 'hamburger', - 'hot and sour soup', - 'hot dog', - 'huevos rancheros', - 'hummus', - 'ice cream', - 'lasagna', - 'lobster bisque', - 'lobster roll sandwich', - 'macaroni and cheese', - 'macarons', - 'miso soup', - 'mussels', - 'nachos', - 'omelette', - 'onion rings', - 'oysters', - 'pad thai', - 'paella', - 'pancakes', - 'panna cotta', - 'peking duck', - 'pho', - 'pizza', - 'pork chop', - 'poutine', - 'prime rib', - 'pulled pork sandwich', - 'ramen', - 'ravioli', - 'red velvet cake', - 'risotto', - 'samosa', - 'sashimi', - 'scallops', - 'seaweed salad', - 'shrimp and grits', - 'spaghetti bolognese', - 'spaghetti carbonara', - 'spring rolls', - 'steak', - 'strawberry shortcake', - 'sushi', - 'tacos', - 'takoyaki', - 'tiramisu', - 'tuna tartare', - 'waffles', -] - -templates = [ - 'a photo of {}, a type of food.', -] -``` - - - -## GTSRB - -```bash -classes = [ - 'red and white circle 20 kph speed limit', - 'red and white circle 30 kph speed limit', - 'red and white circle 50 kph speed limit', - 'red and white circle 60 kph speed limit', - 'red and white circle 70 kph speed limit', - 'red and white circle 80 kph speed limit', - 'end / de-restriction of 80 kph speed limit', - 'red and white circle 100 kph speed limit', - 'red and white circle 120 kph speed limit', - 'red and white circle red car and black car no passing', - 'red and white circle red truck and black car no passing', - 'red and white triangle road intersection warning', - 'white and yellow diamond priority road', - 'red and white upside down triangle yield right-of-way', - 'stop', - 'empty red and white circle', - 'red and white circle no truck entry', - 'red circle with white horizonal stripe no entry', - 'red and white triangle with exclamation mark warning', - 'red and white triangle with black left curve approaching warning', - 'red and white triangle with black right curve approaching warning', - 'red and white triangle with black double curve approaching warning', - 'red and white triangle rough / bumpy road warning', - 'red and white triangle car skidding / slipping warning', - 'red and white triangle with merging / narrow lanes warning', - 'red and white triangle with person digging / construction / road work warning', - 'red and white triangle with traffic light approaching warning', - 'red and white triangle with person walking warning', - 'red and white triangle with child and person walking warning', - 'red and white triangle with bicyle warning', - 'red and white triangle with snowflake / ice warning', - 'red and white triangle with deer warning', - 'white circle with gray strike bar no speed limit', - 'blue circle with white right turn arrow mandatory', - 'blue circle with white left 
turn arrow mandatory', - 'blue circle with white forward arrow mandatory', - 'blue circle with white forward or right turn arrow mandatory', - 'blue circle with white forward or left turn arrow mandatory', - 'blue circle with white keep right arrow mandatory', - 'blue circle with white keep left arrow mandatory', - 'blue circle with white arrows indicating a traffic circle', - 'white circle with gray strike bar indicating no passing for cars has ended', - 'white circle with gray strike bar indicating no passing for trucks has ended', -] - -templates = [ - 'a zoomed in photo of a "{}" traffic sign.', - 'a centered photo of a "{}" traffic sign.', - 'a close up photo of a "{}" traffic sign.', -] -``` - - - -## HatefulMemes - -```bash -classes = [ - 'meme', - 'hatespeech meme', -] - -templates = [ - 'a {}.', -] -``` - - - -## KITTI - -```bash -classes = [ - 'a photo i took of a car on my left or right side.', - 'a photo i took with a car nearby.', - 'a photo i took with a car in the distance.', - 'a photo i took with no car.', -] - -templates = [ - '{}', -] -``` - - - -## Kinetics700 - -```bash -classes = [ - 'abseiling', - 'acting in play', - 'adjusting glasses', - 'air drumming', - 'alligator wrestling', - 'answering questions', - 'applauding', - 'applying cream', - 'archaeological excavation', - 'archery', - 'arguing', - 'arm wrestling', - 'arranging flowers', - 'arresting', - 'assembling bicycle', - 'assembling computer', - 'attending conference', - 'auctioning', - 'baby waking up', - 'backflip (human)', - 'baking cookies', - 'bandaging', - 'barbequing', - 'bartending', - 'base jumping', - 'bathing dog', - 'battle rope training', - 'beatboxing', - 'bee keeping', - 'being excited', - 'being in zero gravity', - 'belly dancing', - 'bench pressing', - 'bending back', - 'bending metal', - 'biking through snow', - 'blasting sand', - 'blending fruit', - 'blowdrying hair', - 'blowing bubble gum', - 'blowing glass', - 'blowing leaves', - 'blowing nose', - 'blowing out candles', - 'bobsledding', - 'bodysurfing', - 'bookbinding', - 'bottling', - 'bouncing ball (not juggling)', - 'bouncing on bouncy castle', - 'bouncing on trampoline', - 'bowling', - 'braiding hair', - 'breading or breadcrumbing', - 'breakdancing', - 'breaking boards', - 'breaking glass', - 'breathing fire', - 'brush painting', - 'brushing floor', - 'brushing hair', - 'brushing teeth', - 'building cabinet', - 'building lego', - 'building sandcastle', - 'building shed', - 'bulldozing', - 'bungee jumping', - 'burping', - 'busking', - 'calculating', - 'calligraphy', - 'canoeing or kayaking', - 'capoeira', - 'capsizing', - 'card stacking', - 'card throwing', - 'carrying baby', - 'carrying weight', - 'cartwheeling', - 'carving ice', - 'carving marble', - 'carving pumpkin', - 'carving wood with a knife', - 'casting fishing line', - 'catching fish', - 'catching or throwing baseball', - 'catching or throwing frisbee', - 'catching or throwing softball', - 'celebrating', - 'changing gear in car', - 'changing oil', - 'changing wheel (not on bike)', - 'chasing', - 'checking tires', - 'checking watch', - 'cheerleading', - 'chewing gum', - 'chiseling stone', - 'chiseling wood', - 'chopping meat', - 'chopping wood', - 'clam digging', - 'clapping', - 'clay pottery making', - 'clean and jerk', - 'cleaning gutters', - 'cleaning pool', - 'cleaning shoes', - 'cleaning toilet', - 'cleaning windows', - 'climbing a rope', - 'climbing ladder', - 'climbing tree', - 'closing door', - 'coloring in', - 'combing hair', - 'contact juggling', - 'contorting', - 
'cooking chicken', - 'cooking egg', - 'cooking on campfire', - 'cooking sausages (not on barbeque)', - 'cooking scallops', - 'cosplaying', - 'coughing', - 'counting money', - 'country line dancing', - 'cracking back', - 'cracking knuckles', - 'cracking neck', - 'crawling baby', - 'crocheting', - 'crossing eyes', - 'crossing river', - 'crying', - 'cumbia', - 'curling (sport)', - 'curling eyelashes', - 'curling hair', - 'cutting apple', - 'cutting cake', - 'cutting nails', - 'cutting orange', - 'cutting pineapple', - 'cutting watermelon', - 'dancing ballet', - 'dancing charleston', - 'dancing gangnam style', - 'dancing macarena', - 'deadlifting', - 'dealing cards', - 'decorating the christmas tree', - 'decoupage', - 'delivering mail', - 'digging', - 'dining', - 'directing traffic', - 'disc golfing', - 'diving cliff', - 'docking boat', - 'dodgeball', - 'doing aerobics', - 'doing jigsaw puzzle', - 'doing laundry', - 'doing nails', - 'doing sudoku', - 'drawing', - 'dribbling basketball', - 'drinking shots', - 'driving car', - 'driving tractor', - 'drooling', - 'drop kicking', - 'drumming fingers', - 'dumpster diving', - 'dunking basketball', - 'dyeing eyebrows', - 'dyeing hair', - 'eating burger', - 'eating cake', - 'eating carrots', - 'eating chips', - 'eating doughnuts', - 'eating hotdog', - 'eating ice cream', - 'eating nachos', - 'eating spaghetti', - 'eating watermelon', - 'egg hunting', - 'embroidering', - 'entering church', - 'exercising arm', - 'exercising with an exercise ball', - 'extinguishing fire', - 'faceplanting', - 'falling off bike', - 'falling off chair', - 'feeding birds', - 'feeding fish', - 'feeding goats', - 'fencing (sport)', - 'fidgeting', - 'filling cake', - 'filling eyebrows', - 'finger snapping', - 'fixing bicycle', - 'fixing hair', - 'flint knapping', - 'flipping bottle', - 'flipping pancake', - 'fly tying', - 'flying kite', - 'folding clothes', - 'folding napkins', - 'folding paper', - 'front raises', - 'frying vegetables', - 'gargling', - 'geocaching', - 'getting a haircut', - 'getting a piercing', - 'getting a tattoo', - 'giving or receiving award', - 'gold panning', - 'golf chipping', - 'golf driving', - 'golf putting', - 'gospel singing in church', - 'grinding meat', - 'grooming cat', - 'grooming dog', - 'grooming horse', - 'gymnastics tumbling', - 'hammer throw', - 'hand washing clothes', - 'head stand', - 'headbanging', - 'headbutting', - 'helmet diving', - 'herding cattle', - 'high fiving', - 'high jump', - 'high kick', - 'historical reenactment', - 'hitting baseball', - 'hockey stop', - 'holding snake', - 'home roasting coffee', - 'hopscotch', - 'hoverboarding', - 'huddling', - 'hugging (not baby)', - 'hugging baby', - 'hula hooping', - 'hurdling', - 'hurling (sport)', - 'ice climbing', - 'ice fishing', - 'ice skating', - 'ice swimming', - 'inflating balloons', - 'installing carpet', - 'ironing', - 'ironing hair', - 'javelin throw', - 'jaywalking', - 'jetskiing', - 'jogging', - 'juggling balls', - 'juggling fire', - 'juggling soccer ball', - 'jumping bicycle', - 'jumping into pool', - 'jumping jacks', - 'jumping sofa', - 'jumpstyle dancing', - 'karaoke', - 'kicking field goal', - 'kicking soccer ball', - 'kissing', - 'kitesurfing', - 'knitting', - 'krumping', - 'land sailing', - 'laughing', - 'lawn mower racing', - 'laying bricks', - 'laying concrete', - 'laying decking', - 'laying stone', - 'laying tiles', - 'leatherworking', - 'letting go of balloon', - 'licking', - 'lifting hat', - 'lighting candle', - 'lighting fire', - 'listening with headphones', - 
'lock picking', - 'long jump', - 'longboarding', - 'looking at phone', - 'looking in mirror', - 'luge', - 'lunge', - 'making a cake', - 'making a sandwich', - 'making balloon shapes', - 'making bubbles', - 'making cheese', - 'making horseshoes', - 'making jewelry', - 'making latte art', - 'making paper aeroplanes', - 'making pizza', - 'making slime', - 'making snowman', - 'making sushi', - 'making tea', - 'making the bed', - 'marching', - 'marriage proposal', - 'massaging back', - 'massaging feet', - 'massaging legs', - 'massaging neck', - "massaging person's head", - 'metal detecting', - 'milking cow', - 'milking goat', - 'mixing colours', - 'moon walking', - 'mopping floor', - 'mosh pit dancing', - 'motorcycling', - 'mountain climber (exercise)', - 'moving baby', - 'moving child', - 'moving furniture', - 'mowing lawn', - 'mushroom foraging', - 'needle felting', - 'news anchoring', - 'opening bottle (not wine)', - 'opening coconuts', - 'opening door', - 'opening present', - 'opening refrigerator', - 'opening wine bottle', - 'packing', - 'paragliding', - 'parasailing', - 'parkour', - 'passing American football (in game)', - 'passing American football (not in game)', - 'passing soccer ball', - 'peeling apples', - 'peeling banana', - 'peeling potatoes', - 'person collecting garbage', - 'petting animal (not cat)', - 'petting cat', - 'petting horse', - 'photobombing', - 'photocopying', - 'picking apples', - 'picking blueberries', - 'pillow fight', - 'pinching', - 'pirouetting', - 'planing wood', - 'planting trees', - 'plastering', - 'playing accordion', - 'playing american football', - 'playing badminton', - 'playing bagpipes', - 'playing basketball', - 'playing bass guitar', - 'playing beer pong', - 'playing billiards', - 'playing blackjack', - 'playing cards', - 'playing cello', - 'playing checkers', - 'playing chess', - 'playing clarinet', - 'playing controller', - 'playing cricket', - 'playing cymbals', - 'playing darts', - 'playing didgeridoo', - 'playing dominoes', - 'playing drums', - 'playing field hockey', - 'playing flute', - 'playing gong', - 'playing guitar', - 'playing hand clapping games', - 'playing harmonica', - 'playing harp', - 'playing ice hockey', - 'playing keyboard', - 'playing kickball', - 'playing laser tag', - 'playing lute', - 'playing mahjong', - 'playing maracas', - 'playing marbles', - 'playing monopoly', - 'playing netball', - 'playing nose flute', - 'playing oboe', - 'playing ocarina', - 'playing organ', - 'playing paintball', - 'playing pan pipes', - 'playing piano', - 'playing piccolo', - 'playing pinball', - 'playing ping pong', - 'playing poker', - 'playing polo', - 'playing recorder', - 'playing road hockey', - 'playing rounders', - 'playing rubiks cube', - 'playing saxophone', - 'playing scrabble', - 'playing shuffleboard', - 'playing slot machine', - 'playing squash or racquetball', - 'playing tennis', - 'playing trombone', - 'playing trumpet', - 'playing ukulele', - 'playing violin', - 'playing volleyball', - 'playing with trains', - 'playing xylophone', - 'poaching eggs', - 'poking bellybutton', - 'pole vault', - 'polishing furniture', - 'polishing metal', - 'popping balloons', - 'pouring beer', - 'pouring milk', - 'pouring wine', - 'preparing salad', - 'presenting weather forecast', - 'pretending to be a statue', - 'pull ups', - 'pulling espresso shot', - 'pulling rope (game)', - 'pumping fist', - 'pumping gas', - 'punching bag', - 'punching person (boxing)', - 'push up', - 'pushing car', - 'pushing cart', - 'pushing wheelbarrow', - 'pushing 
wheelchair', - 'putting in contact lenses', - 'putting on eyeliner', - 'putting on foundation', - 'putting on lipstick', - 'putting on mascara', - 'putting on sari', - 'putting on shoes', - 'putting wallpaper on wall', - 'raising eyebrows', - 'reading book', - 'reading newspaper', - 'recording music', - 'repairing puncture', - 'riding a bike', - 'riding camel', - 'riding elephant', - 'riding mechanical bull', - 'riding mule', - 'riding or walking with horse', - 'riding scooter', - 'riding snow blower', - 'riding unicycle', - 'ripping paper', - 'roasting marshmallows', - 'roasting pig', - 'robot dancing', - 'rock climbing', - 'rock scissors paper', - 'roller skating', - 'rolling eyes', - 'rolling pastry', - 'rope pushdown', - 'running on treadmill', - 'sailing', - 'salsa dancing', - 'saluting', - 'sanding floor', - 'sanding wood', - 'sausage making', - 'sawing wood', - 'scrambling eggs', - 'scrapbooking', - 'scrubbing face', - 'scuba diving', - 'seasoning food', - 'separating eggs', - 'setting table', - 'sewing', - 'shaking hands', - 'shaking head', - 'shaping bread dough', - 'sharpening knives', - 'sharpening pencil', - 'shaving head', - 'shaving legs', - 'shearing sheep', - 'shining flashlight', - 'shining shoes', - 'shoot dance', - 'shooting basketball', - 'shooting goal (soccer)', - 'shooting off fireworks', - 'shopping', - 'shot put', - 'shouting', - 'shoveling snow', - 'shredding paper', - 'shucking oysters', - 'shuffling cards', - 'shuffling feet', - 'side kick', - 'sieving', - 'sign language interpreting', - 'silent disco', - 'singing', - 'sipping cup', - 'situp', - 'skateboarding', - 'ski ballet', - 'ski jumping', - 'skiing crosscountry', - 'skiing mono', - 'skiing slalom', - 'skipping rope', - 'skipping stone', - 'skydiving', - 'slacklining', - 'slapping', - 'sled dog racing', - 'sleeping', - 'slicing onion', - 'smashing', - 'smelling feet', - 'smoking', - 'smoking hookah', - 'smoking pipe', - 'snatch weight lifting', - 'sneezing', - 'snorkeling', - 'snowboarding', - 'snowkiting', - 'snowmobiling', - 'somersaulting', - 'spelunking', - 'spinning plates', - 'spinning poi', - 'splashing water', - 'spray painting', - 'spraying', - 'springboard diving', - 'square dancing', - 'squat', - 'squeezing orange', - 'stacking cups', - 'stacking dice', - 'standing on hands', - 'staring', - 'steer roping', - 'steering car', - 'sticking tongue out', - 'stomping grapes', - 'stretching arm', - 'stretching leg', - 'sucking lolly', - 'surfing crowd', - 'surfing water', - 'surveying', - 'sweeping floor', - 'swimming backstroke', - 'swimming breast stroke', - 'swimming butterfly stroke', - 'swimming front crawl', - 'swimming with dolphins', - 'swimming with sharks', - 'swing dancing', - 'swinging baseball bat', - 'swinging on something', - 'sword fighting', - 'sword swallowing', - 'tackling', - 'tagging graffiti', - 'tai chi', - 'taking photo', - 'talking on cell phone', - 'tango dancing', - 'tap dancing', - 'tapping guitar', - 'tapping pen', - 'tasting beer', - 'tasting food', - 'tasting wine', - 'testifying', - 'texting', - 'threading needle', - 'throwing axe', - 'throwing ball (not baseball or American football)', - 'throwing discus', - 'throwing knife', - 'throwing snowballs', - 'throwing tantrum', - 'throwing water balloon', - 'tickling', - 'tie dying', - 'tightrope walking', - 'tiptoeing', - 'tobogganing', - 'tossing coin', - 'tossing salad', - 'training dog', - 'trapezing', - 'treating wood', - 'trimming or shaving beard', - 'trimming shrubs', - 'trimming trees', - 'triple jump', - 'twiddling 
fingers', - 'tying bow tie', - 'tying knot (not on a tie)', - 'tying necktie', - 'tying shoe laces', - 'unboxing', - 'uncorking champagne', - 'unloading truck', - 'using a microscope', - 'using a paint roller', - 'using a power drill', - 'using a sledge hammer', - 'using a wrench', - 'using atm', - 'using bagging machine', - 'using circular saw', - 'using inhaler', - 'using megaphone', - 'using puppets', - 'using remote controller (not gaming)', - 'using segway', - 'vacuuming car', - 'vacuuming floor', - 'visiting the zoo', - 'wading through mud', - 'wading through water', - 'waiting in line', - 'waking up', - 'walking on stilts', - 'walking the dog', - 'walking through snow', - 'walking with crutches', - 'washing dishes', - 'washing feet', - 'washing hair', - 'washing hands', - 'watching tv', - 'water skiing', - 'water sliding', - 'watering plants', - 'waving hand', - 'waxing armpits', - 'waxing back', - 'waxing chest', - 'waxing eyebrows', - 'waxing legs', - 'weaving basket', - 'weaving fabric', - 'welding', - 'whistling', - 'windsurfing', - 'winking', - 'wood burning (art)', - 'wrapping present', - 'wrestling', - 'writing', - 'yarn spinning', - 'yawning', - 'yoga', - 'zumba' -] - -templates = [ - 'a photo of {}.', - 'a photo of a person {}.', - 'a photo of a person using {}.', - 'a photo of a person doing {}.', - 'a photo of a person during {}.', - 'a photo of a person performing {}.', - 'a photo of a person practicing {}.', - 'a video of {}.', - 'a video of a person {}.', - 'a video of a person using {}.', - 'a video of a person doing {}.', - 'a video of a person during {}.', - 'a video of a person performing {}.', - 'a video of a person practicing {}.', - 'a example of {}.', - 'a example of a person {}.', - 'a example of a person using {}.', - 'a example of a person doing {}.', - 'a example of a person during {}.', - 'a example of a person performing {}.', - 'a example of a person practicing {}.', - 'a demonstration of {}.', - 'a demonstration of a person {}.', - 'a demonstration of a person using {}.', - 'a demonstration of a person doing {}.', - 'a demonstration of a person during {}.', - 'a demonstration of a person performing {}.', - 'a demonstration of a person practicing {}.', -] -``` - - - -## MNIST - -```bash -classes = [ - '0', - '1', - '2', - '3', - '4', - '5', - '6', - '7', - '8', - '9', -] - -templates = [ - 'a photo of the number: "{}".', -] -``` - - - -## OxfordPets - -```bash -classes = [ - 'Abyssinian', - 'Bengal', - 'Birman', - 'Bombay', - 'British Shorthair', - 'Egyptian Mau', - 'Maine Coon', - 'Persian', - 'Ragdoll', - 'Russian Blue', - 'Siamese', - 'Sphynx', - 'american bulldog', - 'american pit bull terrier', - 'basset hound', - 'beagle', - 'boxer', - 'chihuahua', - 'english cocker spaniel', - 'english setter', - 'german shorthaired', - 'great pyrenees', - 'havanese', - 'japanese chin', - 'keeshond', - 'leonberger', - 'miniature pinscher', - 'newfoundland', - 'pomeranian', - 'pug', - 'saint bernard', - 'samoyed', - 'scottish terrier', - 'shiba inu', - 'staffordshire bull terrier', - 'wheaten terrier', - 'yorkshire terrier', -] - -templates = [ - 'a photo of a {}, a type of pet.', -] -``` - - - -## PascalVOC2007 - -```bash -classes = [ - 'aeroplane', - 'bicycle', - 'bird', - 'boat', - 'bottle', - 'bus', - 'car', - 'cat', - 'chair', - 'cow', - 'dog', - 'horse', - 'motorbike', - 'person', - 'sheep', - 'sofa', - 'diningtable', - 'pottedplant', - 'train', - 'tvmonitor', -] - -templates = [ - 'a photo of a {}.', -] -``` - - - -## PatchCamelyon - -```bash -classes = [ - 
'lymph node', - 'lymph node containing metastatic tumor tissue', -] - -templates = [ - 'this is a photo of {}', -] -``` - - - -## RESISC45 - -```bash -classes = [ - 'airplane', - 'airport', - 'baseball diamond', - 'basketball court', - 'beach', - 'bridge', - 'chaparral', - 'church', - 'circular farmland', - 'cloud', - 'commercial area', - 'dense residential', - 'desert', - 'forest', - 'freeway', - 'golf course', - 'ground track field', - 'harbor', - 'industrial area', - 'intersection', - 'island', - 'lake', - 'meadow', - 'medium residential', - 'mobile home park', - 'mountain', - 'overpass', - 'palace', - 'parking lot', - 'railway', - 'railway station', - 'rectangular farmland', - 'river', - 'roundabout', - 'runway', - 'sea ice', - 'ship', - 'snowberg', - 'sparse residential', - 'stadium', - 'storage tank', - 'tennis court', - 'terrace', - 'thermal power station', - 'wetland', -] - -templates = [ - 'satellite imagery of {}.', - 'aerial imagery of {}.', - 'satellite photo of {}.', - 'aerial photo of {}.', - 'satellite view of {}.', - 'aerial view of {}.', - 'satellite imagery of a {}.', - 'aerial imagery of a {}.', - 'satellite photo of a {}.', - 'aerial photo of a {}.', - 'satellite view of a {}.', - 'aerial view of a {}.', - 'satellite imagery of the {}.', - 'aerial imagery of the {}.', - 'satellite photo of the {}.', - 'aerial photo of the {}.', - 'satellite view of the {}.', - 'aerial view of the {}.', -] -``` - - - -## SST2 - -```bash -classes = [ - 'negative', - 'positive', -] - -templates = [ - 'a {} review of a movie.', -] -``` - - - -## STL10 - -```bash -classes = [ - 'airplane', - 'bird', - 'car', - 'cat', - 'deer', - 'dog', - 'horse', - 'monkey', - 'ship', - 'truck', -] - -templates = [ - 'a photo of a {}.', - 'a photo of the {}.', -] -``` - - - -## SUN397 - -```bash -classes = [ - 'abbey', - 'airplane cabin', - 'airport terminal', - 'alley', - 'amphitheater', - 'amusement arcade', - 'amusement park', - 'anechoic chamber', - 'apartment building outdoor', - 'apse indoor', - 'aquarium', - 'aqueduct', - 'arch', - 'archive', - 'arrival gate outdoor', - 'art gallery', - 'art school', - 'art studio', - 'assembly line', - 'athletic field outdoor', - 'atrium public', - 'attic', - 'auditorium', - 'auto factory', - 'badlands', - 'badminton court indoor', - 'baggage claim', - 'bakery shop', - 'balcony exterior', - 'balcony interior', - 'ball pit', - 'ballroom', - 'bamboo forest', - 'banquet hall', - 'bar', - 'barn', - 'barndoor', - 'baseball field', - 'basement', - 'basilica', - 'basketball court outdoor', - 'bathroom', - 'batters box', - 'bayou', - 'bazaar indoor', - 'bazaar outdoor', - 'beach', - 'beauty salon', - 'bedroom', - 'berth', - 'biology laboratory', - 'bistro indoor', - 'boardwalk', - 'boat deck', - 'boathouse', - 'bookstore', - 'booth indoor', - 'botanical garden', - 'bow window indoor', - 'bow window outdoor', - 'bowling alley', - 'boxing ring', - 'brewery indoor', - 'bridge', - 'building facade', - 'bullring', - 'burial chamber', - 'bus interior', - 'butchers shop', - 'butte', - 'cabin outdoor', - 'cafeteria', - 'campsite', - 'campus', - 'canal natural', - 'canal urban', - 'candy store', - 'canyon', - 'car interior backseat', - 'car interior frontseat', - 'carrousel', - 'casino indoor', - 'castle', - 'catacomb', - 'cathedral indoor', - 'cathedral outdoor', - 'cavern indoor', - 'cemetery', - 'chalet', - 'cheese factory', - 'chemistry lab', - 'chicken coop indoor', - 'chicken coop outdoor', - 'childs room', - 'church indoor', - 'church outdoor', - 'classroom', - 'clean room', - 
'cliff', - 'cloister indoor', - 'closet', - 'clothing store', - 'coast', - 'cockpit', - 'coffee shop', - 'computer room', - 'conference center', - 'conference room', - 'construction site', - 'control room', - 'control tower outdoor', - 'corn field', - 'corral', - 'corridor', - 'cottage garden', - 'courthouse', - 'courtroom', - 'courtyard', - 'covered bridge exterior', - 'creek', - 'crevasse', - 'crosswalk', - 'cubicle office', - 'dam', - 'delicatessen', - 'dentists office', - 'desert sand', - 'desert vegetation', - 'diner indoor', - 'diner outdoor', - 'dinette home', - 'dinette vehicle', - 'dining car', - 'dining room', - 'discotheque', - 'dock', - 'doorway outdoor', - 'dorm room', - 'driveway', - 'driving range outdoor', - 'drugstore', - 'electrical substation', - 'elevator door', - 'elevator interior', - 'elevator shaft', - 'engine room', - 'escalator indoor', - 'excavation', - 'factory indoor', - 'fairway', - 'fastfood restaurant', - 'field cultivated', - 'field wild', - 'fire escape', - 'fire station', - 'firing range indoor', - 'fishpond', - 'florist shop indoor', - 'food court', - 'forest broadleaf', - 'forest needleleaf', - 'forest path', - 'forest road', - 'formal garden', - 'fountain', - 'galley', - 'game room', - 'garage indoor', - 'garbage dump', - 'gas station', - 'gazebo exterior', - 'general store indoor', - 'general store outdoor', - 'gift shop', - 'golf course', - 'greenhouse indoor', - 'greenhouse outdoor', - 'gymnasium indoor', - 'hangar indoor', - 'hangar outdoor', - 'harbor', - 'hayfield', - 'heliport', - 'herb garden', - 'highway', - 'hill', - 'home office', - 'hospital', - 'hospital room', - 'hot spring', - 'hot tub outdoor', - 'hotel outdoor', - 'hotel room', - 'house', - 'hunting lodge outdoor', - 'ice cream parlor', - 'ice floe', - 'ice shelf', - 'ice skating rink indoor', - 'ice skating rink outdoor', - 'iceberg', - 'igloo', - 'industrial area', - 'inn outdoor', - 'islet', - 'jacuzzi indoor', - 'jail cell', - 'jail indoor', - 'jewelry shop', - 'kasbah', - 'kennel indoor', - 'kennel outdoor', - 'kindergarden classroom', - 'kitchen', - 'kitchenette', - 'labyrinth outdoor', - 'lake natural', - 'landfill', - 'landing deck', - 'laundromat', - 'lecture room', - 'library indoor', - 'library outdoor', - 'lido deck outdoor', - 'lift bridge', - 'lighthouse', - 'limousine interior', - 'living room', - 'lobby', - 'lock chamber', - 'locker room', - 'mansion', - 'manufactured home', - 'market indoor', - 'market outdoor', - 'marsh', - 'martial arts gym', - 'mausoleum', - 'medina', - 'moat water', - 'monastery outdoor', - 'mosque indoor', - 'mosque outdoor', - 'motel', - 'mountain', - 'mountain snowy', - 'movie theater indoor', - 'museum indoor', - 'music store', - 'music studio', - 'nuclear power plant outdoor', - 'nursery', - 'oast house', - 'observatory outdoor', - 'ocean', - 'office', - 'office building', - 'oil refinery outdoor', - 'oilrig', - 'operating room', - 'orchard', - 'outhouse outdoor', - 'pagoda', - 'palace', - 'pantry', - 'park', - 'parking garage indoor', - 'parking garage outdoor', - 'parking lot', - 'parlor', - 'pasture', - 'patio', - 'pavilion', - 'pharmacy', - 'phone booth', - 'physics laboratory', - 'picnic area', - 'pilothouse indoor', - 'planetarium outdoor', - 'playground', - 'playroom', - 'plaza', - 'podium indoor', - 'podium outdoor', - 'pond', - 'poolroom establishment', - 'poolroom home', - 'power plant outdoor', - 'promenade deck', - 'pub indoor', - 'pulpit', - 'putting green', - 'racecourse', - 'raceway', - 'raft', - 'railroad track', - 
'rainforest', - 'reception', - 'recreation room', - 'residential neighborhood', - 'restaurant', - 'restaurant kitchen', - 'restaurant patio', - 'rice paddy', - 'riding arena', - 'river', - 'rock arch', - 'rope bridge', - 'ruin', - 'runway', - 'sandbar', - 'sandbox', - 'sauna', - 'schoolhouse', - 'sea cliff', - 'server room', - 'shed', - 'shoe shop', - 'shopfront', - 'shopping mall indoor', - 'shower', - 'skatepark', - 'ski lodge', - 'ski resort', - 'ski slope', - 'sky', - 'skyscraper', - 'slum', - 'snowfield', - 'squash court', - 'stable', - 'stadium baseball', - 'stadium football', - 'stage indoor', - 'staircase', - 'street', - 'subway interior', - 'subway station platform', - 'supermarket', - 'sushi bar', - 'swamp', - 'swimming pool indoor', - 'swimming pool outdoor', - 'synagogue indoor', - 'synagogue outdoor', - 'television studio', - 'temple east asia', - 'temple south asia', - 'tennis court indoor', - 'tennis court outdoor', - 'tent outdoor', - 'theater indoor procenium', - 'theater indoor seats', - 'thriftshop', - 'throne room', - 'ticket booth', - 'toll plaza', - 'topiary garden', - 'tower', - 'toyshop', - 'track outdoor', - 'train railway', - 'train station platform', - 'tree farm', - 'tree house', - 'trench', - 'underwater coral reef', - 'utility room', - 'valley', - 'van interior', - 'vegetable garden', - 'veranda', - 'veterinarians office', - 'viaduct', - 'videostore', - 'village', - 'vineyard', - 'volcano', - 'volleyball court indoor', - 'volleyball court outdoor', - 'waiting room', - 'warehouse indoor', - 'water tower', - 'waterfall block', - 'waterfall fan', - 'waterfall plunge', - 'watering hole', - 'wave', - 'wet bar', - 'wheat field', - 'wind farm', - 'windmill', - 'wine cellar barrel storage', - 'wine cellar bottle storage', - 'wrestling ring indoor', - 'yard', - 'youth hostel', -] - -templates = [ - 'a photo of a {}.', - 'a photo of the {}.', -] -``` - - - -## StanfordCars - -```bash -classes = [ - 'AM General Hummer SUV 2000', - 'Acura RL Sedan 2012', - 'Acura TL Sedan 2012', - 'Acura TL Type-S 2008', - 'Acura TSX Sedan 2012', - 'Acura Integra Type R 2001', - 'Acura ZDX Hatchback 2012', - 'Aston Martin V8 Vantage Convertible 2012', - 'Aston Martin V8 Vantage Coupe 2012', - 'Aston Martin Virage Convertible 2012', - 'Aston Martin Virage Coupe 2012', - 'Audi RS 4 Convertible 2008', - 'Audi A5 Coupe 2012', - 'Audi TTS Coupe 2012', - 'Audi R8 Coupe 2012', - 'Audi V8 Sedan 1994', - 'Audi 100 Sedan 1994', - 'Audi 100 Wagon 1994', - 'Audi TT Hatchback 2011', - 'Audi S6 Sedan 2011', - 'Audi S5 Convertible 2012', - 'Audi S5 Coupe 2012', - 'Audi S4 Sedan 2012', - 'Audi S4 Sedan 2007', - 'Audi TT RS Coupe 2012', - 'BMW ActiveHybrid 5 Sedan 2012', - 'BMW 1 Series Convertible 2012', - 'BMW 1 Series Coupe 2012', - 'BMW 3 Series Sedan 2012', - 'BMW 3 Series Wagon 2012', - 'BMW 6 Series Convertible 2007', - 'BMW X5 SUV 2007', - 'BMW X6 SUV 2012', - 'BMW M3 Coupe 2012', - 'BMW M5 Sedan 2010', - 'BMW M6 Convertible 2010', - 'BMW X3 SUV 2012', - 'BMW Z4 Convertible 2012', - 'Bentley Continental Supersports Conv. 
Convertible 2012', - 'Bentley Arnage Sedan 2009', - 'Bentley Mulsanne Sedan 2011', - 'Bentley Continental GT Coupe 2012', - 'Bentley Continental GT Coupe 2007', - 'Bentley Continental Flying Spur Sedan 2007', - 'Bugatti Veyron 16.4 Convertible 2009', - 'Bugatti Veyron 16.4 Coupe 2009', - 'Buick Regal GS 2012', - 'Buick Rainier SUV 2007', - 'Buick Verano Sedan 2012', - 'Buick Enclave SUV 2012', - 'Cadillac CTS-V Sedan 2012', - 'Cadillac SRX SUV 2012', - 'Cadillac Escalade EXT Crew Cab 2007', - 'Chevrolet Silverado 1500 Hybrid Crew Cab 2012', - 'Chevrolet Corvette Convertible 2012', - 'Chevrolet Corvette ZR1 2012', - 'Chevrolet Corvette Ron Fellows Edition Z06 2007', - 'Chevrolet Traverse SUV 2012', - 'Chevrolet Camaro Convertible 2012', - 'Chevrolet HHR SS 2010', - 'Chevrolet Impala Sedan 2007', - 'Chevrolet Tahoe Hybrid SUV 2012', - 'Chevrolet Sonic Sedan 2012', - 'Chevrolet Express Cargo Van 2007', - 'Chevrolet Avalanche Crew Cab 2012', - 'Chevrolet Cobalt SS 2010', - 'Chevrolet Malibu Hybrid Sedan 2010', - 'Chevrolet TrailBlazer SS 2009', - 'Chevrolet Silverado 2500HD Regular Cab 2012', - 'Chevrolet Silverado 1500 Classic Extended Cab 2007', - 'Chevrolet Express Van 2007', - 'Chevrolet Monte Carlo Coupe 2007', - 'Chevrolet Malibu Sedan 2007', - 'Chevrolet Silverado 1500 Extended Cab 2012', - 'Chevrolet Silverado 1500 Regular Cab 2012', - 'Chrysler Aspen SUV 2009', - 'Chrysler Sebring Convertible 2010', - 'Chrysler Town and Country Minivan 2012', - 'Chrysler 300 SRT-8 2010', - 'Chrysler Crossfire Convertible 2008', - 'Chrysler PT Cruiser Convertible 2008', - 'Daewoo Nubira Wagon 2002', - 'Dodge Caliber Wagon 2012', - 'Dodge Caliber Wagon 2007', - 'Dodge Caravan Minivan 1997', - 'Dodge Ram Pickup 3500 Crew Cab 2010', - 'Dodge Ram Pickup 3500 Quad Cab 2009', - 'Dodge Sprinter Cargo Van 2009', - 'Dodge Journey SUV 2012', - 'Dodge Dakota Crew Cab 2010', - 'Dodge Dakota Club Cab 2007', - 'Dodge Magnum Wagon 2008', - 'Dodge Challenger SRT8 2011', - 'Dodge Durango SUV 2012', - 'Dodge Durango SUV 2007', - 'Dodge Charger Sedan 2012', - 'Dodge Charger SRT-8 2009', - 'Eagle Talon Hatchback 1998', - 'FIAT 500 Abarth 2012', - 'FIAT 500 Convertible 2012', - 'Ferrari FF Coupe 2012', - 'Ferrari California Convertible 2012', - 'Ferrari 458 Italia Convertible 2012', - 'Ferrari 458 Italia Coupe 2012', - 'Fisker Karma Sedan 2012', - 'Ford F-450 Super Duty Crew Cab 2012', - 'Ford Mustang Convertible 2007', - 'Ford Freestar Minivan 2007', - 'Ford Expedition EL SUV 2009', - 'Ford Edge SUV 2012', - 'Ford Ranger SuperCab 2011', - 'Ford GT Coupe 2006', - 'Ford F-150 Regular Cab 2012', - 'Ford F-150 Regular Cab 2007', - 'Ford Focus Sedan 2007', - 'Ford E-Series Wagon Van 2012', - 'Ford Fiesta Sedan 2012', - 'GMC Terrain SUV 2012', - 'GMC Savana Van 2012', - 'GMC Yukon Hybrid SUV 2012', - 'GMC Acadia SUV 2012', - 'GMC Canyon Extended Cab 2012', - 'Geo Metro Convertible 1993', - 'HUMMER H3T Crew Cab 2010', - 'HUMMER H2 SUT Crew Cab 2009', - 'Honda Odyssey Minivan 2012', - 'Honda Odyssey Minivan 2007', - 'Honda Accord Coupe 2012', - 'Honda Accord Sedan 2012', - 'Hyundai Veloster Hatchback 2012', - 'Hyundai Santa Fe SUV 2012', - 'Hyundai Tucson SUV 2012', - 'Hyundai Veracruz SUV 2012', - 'Hyundai Sonata Hybrid Sedan 2012', - 'Hyundai Elantra Sedan 2007', - 'Hyundai Accent Sedan 2012', - 'Hyundai Genesis Sedan 2012', - 'Hyundai Sonata Sedan 2012', - 'Hyundai Elantra Touring Hatchback 2012', - 'Hyundai Azera Sedan 2012', - 'Infiniti G Coupe IPL 2012', - 'Infiniti QX56 SUV 2011', - 'Isuzu Ascender SUV 2008', - 'Jaguar XK 
XKR 2012', - 'Jeep Patriot SUV 2012', - 'Jeep Wrangler SUV 2012', - 'Jeep Liberty SUV 2012', - 'Jeep Grand Cherokee SUV 2012', - 'Jeep Compass SUV 2012', - 'Lamborghini Reventon Coupe 2008', - 'Lamborghini Aventador Coupe 2012', - 'Lamborghini Gallardo LP 570-4 Superleggera 2012', - 'Lamborghini Diablo Coupe 2001', - 'Land Rover Range Rover SUV 2012', - 'Land Rover LR2 SUV 2012', - 'Lincoln Town Car Sedan 2011', - 'MINI Cooper Roadster Convertible 2012', - 'Maybach Landaulet Convertible 2012', - 'Mazda Tribute SUV 2011', - 'McLaren MP4-12C Coupe 2012', - 'Mercedes-Benz 300-Class Convertible 1993', - 'Mercedes-Benz C-Class Sedan 2012', - 'Mercedes-Benz SL-Class Coupe 2009', - 'Mercedes-Benz E-Class Sedan 2012', - 'Mercedes-Benz S-Class Sedan 2012', - 'Mercedes-Benz Sprinter Van 2012', - 'Mitsubishi Lancer Sedan 2012', - 'Nissan Leaf Hatchback 2012', - 'Nissan NV Passenger Van 2012', - 'Nissan Juke Hatchback 2012', - 'Nissan 240SX Coupe 1998', - 'Plymouth Neon Coupe 1999', - 'Porsche Panamera Sedan 2012', - 'Ram C/V Cargo Van Minivan 2012', - 'Rolls-Royce Phantom Drophead Coupe Convertible 2012', - 'Rolls-Royce Ghost Sedan 2012', - 'Rolls-Royce Phantom Sedan 2012', - 'Scion xD Hatchback 2012', - 'Spyker C8 Convertible 2009', - 'Spyker C8 Coupe 2009', - 'Suzuki Aerio Sedan 2007', - 'Suzuki Kizashi Sedan 2012', - 'Suzuki SX4 Hatchback 2012', - 'Suzuki SX4 Sedan 2012', - 'Tesla Model S Sedan 2012', - 'Toyota Sequoia SUV 2012', - 'Toyota Camry Sedan 2012', - 'Toyota Corolla Sedan 2012', - 'Toyota 4Runner SUV 2012', - 'Volkswagen Golf Hatchback 2012', - 'Volkswagen Golf Hatchback 1991', - 'Volkswagen Beetle Hatchback 2012', - 'Volvo C30 Hatchback 2012', - 'Volvo 240 Sedan 1993', - 'Volvo XC90 SUV 2007', - 'smart fortwo Convertible 2012', -] - -templates = [ - 'a photo of a {}.', - 'a photo of the {}.', - 'a photo of my {}.', - 'i love my {}!', - 'a photo of my dirty {}.', - 'a photo of my clean {}.', - 'a photo of my new {}.', - 'a photo of my old {}.', -] -``` - - - -## UCF101 - -```bash -classes = [ - 'Apply Eye Makeup', - 'Apply Lipstick', - 'Archery', - 'Baby Crawling', - 'Balance Beam', - 'Band Marching', - 'Baseball Pitch', - 'Basketball', - 'Basketball Dunk', - 'Bench Press', - 'Biking', - 'Billiards', - 'Blow Dry Hair', - 'Blowing Candles', - 'Body Weight Squats', - 'Bowling', - 'Boxing Punching Bag', - 'Boxing Speed Bag', - 'Breast Stroke', - 'Brushing Teeth', - 'Clean And Jerk', - 'Cliff Diving', - 'Cricket Bowling', - 'Cricket Shot', - 'Cutting In Kitchen', - 'Diving', - 'Drumming', - 'Fencing', - 'Field Hockey Penalty', - 'Floor Gymnastics', - 'Frisbee Catch', - 'Front Crawl', - 'Golf Swing', - 'Haircut', - 'Hammer Throw', - 'Hammering', - 'Hand Stand Pushups', - 'Handstand Walking', - 'Head Massage', - 'High Jump', - 'Horse Race', - 'Horse Riding', - 'Hula Hoop', - 'Ice Dancing', - 'Javelin Throw', - 'Juggling Balls', - 'Jump Rope', - 'Jumping Jack', - 'Kayaking', - 'Knitting', - 'Long Jump', - 'Lunges', - 'Military Parade', - 'Mixing', - 'Mopping Floor', - 'Nunchucks', - 'Parallel Bars', - 'Pizza Tossing', - 'Playing Cello', - 'Playing Daf', - 'Playing Dhol', - 'Playing Flute', - 'Playing Guitar', - 'Playing Piano', - 'Playing Sitar', - 'Playing Tabla', - 'Playing Violin', - 'Pole Vault', - 'Pommel Horse', - 'Pull Ups', - 'Punch', - 'Push Ups', - 'Rafting', - 'Rock Climbing Indoor', - 'Rope Climbing', - 'Rowing', - 'Salsa Spin', - 'Shaving Beard', - 'Shotput', - 'Skate Boarding', - 'Skiing', - 'Skijet', - 'Sky Diving', - 'Soccer Juggling', - 'Soccer Penalty', - 'Still Rings', - 
'Sumo Wrestling', - 'Surfing', - 'Swing', - 'Table Tennis Shot', - 'Tai Chi', - 'Tennis Swing', - 'Throw Discus', - 'Trampoline Jumping', - 'Typing', - 'Uneven Bars', - 'Volleyball Spiking', - 'Walking With Dog', - 'Wall Pushups', - 'Writing On Board', - 'Yo Yo', -] - -templates = [ - 'a photo of a person {}.', - 'a video of a person {}.', - 'a example of a person {}.', - 'a demonstration of a person {}.', - 'a photo of the person {}.', - 'a video of the person {}.', - 'a example of the person {}.', - 'a demonstration of the person {}.', - 'a photo of a person using {}.', - 'a video of a person using {}.', - 'a example of a person using {}.', - 'a demonstration of a person using {}.', - 'a photo of the person using {}.', - 'a video of the person using {}.', - 'a example of the person using {}.', - 'a demonstration of the person using {}.', - 'a photo of a person doing {}.', - 'a video of a person doing {}.', - 'a example of a person doing {}.', - 'a demonstration of a person doing {}.', - 'a photo of the person doing {}.', - 'a video of the person doing {}.', - 'a example of the person doing {}.', - 'a demonstration of the person doing {}.', - 'a photo of a person during {}.', - 'a video of a person during {}.', - 'a example of a person during {}.', - 'a demonstration of a person during {}.', - 'a photo of the person during {}.', - 'a video of the person during {}.', - 'a example of the person during {}.', - 'a demonstration of the person during {}.', - 'a photo of a person performing {}.', - 'a video of a person performing {}.', - 'a example of a person performing {}.', - 'a demonstration of a person performing {}.', - 'a photo of the person performing {}.', - 'a video of the person performing {}.', - 'a example of the person performing {}.', - 'a demonstration of the person performing {}.', - 'a photo of a person practicing {}.', - 'a video of a person practicing {}.', - 'a example of a person practicing {}.', - 'a demonstration of a person practicing {}.', - 'a photo of the person practicing {}.', - 'a video of the person practicing {}.', - 'a example of the person practicing {}.', - 'a demonstration of the person practicing {}.', -] -``` - - diff --git a/spaces/cybercorejapan/human-detection-docker/projects/human_detection/export_onnx_trt/detection_tensorrt-fp16_dynamic.py b/spaces/cybercorejapan/human-detection-docker/projects/human_detection/export_onnx_trt/detection_tensorrt-fp16_dynamic.py deleted file mode 100644 index 894efc7591795da05dc84167e56ad8a324b4c745..0000000000000000000000000000000000000000 --- a/spaces/cybercorejapan/human-detection-docker/projects/human_detection/export_onnx_trt/detection_tensorrt-fp16_dynamic.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = ['mmyolo::deploy/base_dynamic.py'] - -onnx_config = dict( - dynamic_axes={ - 'input': { - 0: 'batch', - 2: 'height', - 3: 'width' - }, - 'dets': { - 0: 'batch', - 1: 'num_dets' - }, - 'labels': { - 0: 'batch', - 1: 'num_dets' - } - }, - ) -# input_shape=[640, 800] -backend_config = dict( - type='tensorrt', - common_config=dict(fp16_mode=True, max_workspace_size=1 << 40), - model_inputs=[ - dict( - input_shapes=dict( - input=dict( - # Change into any shape you want, but recommend to use standard shape (480,640), (640,800), (768,1280), - # min_shape=[1, 3, 192, 192], - # opt_shape=[1, 3, 640, 640], - # max_shape=[1, 3, 960, 960]))) - min_shape=[1, 3, 640, 800], - opt_shape=[32, 3, 640, 800], - max_shape=[32, 3, 640, 800]))) - ]) -use_efficientnms = True # whether native NMS of TRT instead of plugin mmdeploy:TRTBatchedNMS 
# noqa E501 \ No newline at end of file diff --git "a/spaces/dakaiye/dky_xuexi/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" "b/spaces/dakaiye/dky_xuexi/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" deleted file mode 100644 index a564f21d231cd65c29b539573929ca5d2df63203..0000000000000000000000000000000000000000 --- "a/spaces/dakaiye/dky_xuexi/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" +++ /dev/null @@ -1,54 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - -def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - - i_say = f'请对下面的程序文件做一个概述,并对文件中的所有函数生成注释,使用markdown表格输出结果,文件名是{os.path.relpath(fp, project_folder)},文件内容是 ```{file_content}```' - i_say_show_user = f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述,并对文件中的所有函数生成注释: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - if not fast_debug: - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] - - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/darkCat/Anime-image-classification/src/utils.py b/spaces/darkCat/Anime-image-classification/src/utils.py deleted file mode 100644 index f2cdc268410e5ea45c69b84daf78e83c44abbb40..0000000000000000000000000000000000000000 --- a/spaces/darkCat/Anime-image-classification/src/utils.py +++ /dev/null @@ -1,8 +0,0 @@ -import os - -def list_models(path): - classes = [] - for cls in os.listdir(path): - if os.path.isfile(path +'/' + cls): - classes.append(cls) - return classes \ No newline at end of file diff --git a/spaces/dawood/chatbot-guide-multimodal/app.py b/spaces/dawood/chatbot-guide-multimodal/app.py deleted file 
mode 100644 index 17706d5065e44e298eb2573c79742b0d6c01ed43..0000000000000000000000000000000000000000 --- a/spaces/dawood/chatbot-guide-multimodal/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import gradio as gr - -def add_text(state, text): - state = state + [(text, text + "?")] - return state, state - -def add_image(state, image): - state = state + [(f"![](/file={image.name})", "Cool pic!")] - return state, state - - -with gr.Blocks(css="#chatbot .overflow-y-auto{height:500px}") as demo: - chatbot = gr.Chatbot(elem_id="chatbot") - state = gr.State([]) - - with gr.Row(): - with gr.Column(scale=0.85): - txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter, or upload an image").style(container=False) - with gr.Column(scale=0.15, min_width=0): - btn = gr.UploadButton("🖼️", file_types=["image"]) - - txt.submit(add_text, [state, txt], [state, chatbot]) - txt.submit(lambda :"", None, txt) - btn.upload(add_image, [state, btn], [state, chatbot]) - -demo.launch() \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/_version.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/_version.py deleted file mode 100644 index 6849410aae0a8010e76d5f0a44ced13d750b0989..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/_version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "1.1.0" diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/interface.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/interface.py deleted file mode 100644 index 211206c33ed8d75f6cefe6d45049f130b76dbeba..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/interface.py +++ /dev/null @@ -1,923 +0,0 @@ -""" -This file defines two useful high-level abstractions to build Gradio apps: Interface and TabbedInterface. -""" - -from __future__ import annotations - -import inspect -import json -import os -import warnings -import weakref -from typing import TYPE_CHECKING, Any, Callable - -from gradio_client.documentation import document, set_documentation_group - -from gradio import Examples, external, interpretation, utils -from gradio.blocks import Blocks -from gradio.components import ( - Button, - ClearButton, - DuplicateButton, - Interpretation, - IOComponent, - Markdown, - State, - get_component_instance, -) -from gradio.data_classes import InterfaceTypes -from gradio.deprecation import warn_deprecation -from gradio.events import Changeable, Streamable, Submittable -from gradio.flagging import CSVLogger, FlaggingCallback, FlagMethod -from gradio.layouts import Column, Row, Tab, Tabs -from gradio.pipelines import load_from_pipeline -from gradio.themes import ThemeClass as Theme - -set_documentation_group("interface") - -if TYPE_CHECKING: # Only import for type checking (is False at runtime). - from transformers.pipelines.base import Pipeline - - -@document("launch", "load", "from_pipeline", "integrate", "queue") -class Interface(Blocks): - """ - Interface is Gradio's main high-level class, and allows you to create a web-based GUI / demo - around a machine learning model (or any Python function) in a few lines of code. - You must specify three parameters: (1) the function to create a GUI for (2) the desired input components and - (3) the desired output components. 
Additional parameters can be used to control the appearance - and behavior of the demo. - - Example: - import gradio as gr - - def image_classifier(inp): - return {'cat': 0.3, 'dog': 0.7} - - demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label") - demo.launch() - Demos: hello_world, hello_world_3, gpt2_xl - Guides: quickstart, key-features, sharing-your-app, interface-state, reactive-interfaces, advanced-interface-features, setting-up-a-gradio-demo-for-maximum-performance - """ - - # stores references to all currently existing Interface instances - instances: weakref.WeakSet = weakref.WeakSet() - - @classmethod - def get_instances(cls) -> list[Interface]: - """ - :return: list of all current instances. - """ - return list(Interface.instances) - - @classmethod - def load( - cls, - name: str, - src: str | None = None, - api_key: str | None = None, - alias: str | None = None, - **kwargs, - ) -> Blocks: - """ - Warning: this method will be deprecated. Use the equivalent `gradio.load()` instead. This is a class - method that constructs a Blocks from a Hugging Face repo. Can accept - model repos (if src is "models") or Space repos (if src is "spaces"). The input - and output components are automatically loaded from the repo. - Parameters: - name: the name of the model (e.g. "gpt2" or "facebook/bart-base") or space (e.g. "flax-community/spanish-gpt2"), can include the `src` as prefix (e.g. "models/facebook/bart-base") - src: the source of the model: `models` or `spaces` (or leave empty if source is provided as a prefix in `name`) - api_key: optional access token for loading private Hugging Face Hub models or spaces. Find your token here: https://huggingface.co/settings/tokens. Warning: only provide this if you are loading a trusted private Space as it can be read by the Space you are loading. - alias: optional string used as the name of the loaded model instead of the default name (only applies if loading a Space running Gradio 2.x) - Returns: - a Gradio Interface object for the given model - """ - warn_deprecation( - "gr.Interface.load() will be deprecated. Use gr.load() instead." - ) - return external.load( - name=name, src=src, hf_token=api_key, alias=alias, **kwargs - ) - - @classmethod - def from_pipeline(cls, pipeline: Pipeline, **kwargs) -> Interface: - """ - Class method that constructs an Interface from a Hugging Face transformers.Pipeline object. - The input and output components are automatically determined from the pipeline. - Parameters: - pipeline: the pipeline object to use. 
- Returns: - a Gradio Interface object from the given Pipeline - Example: - import gradio as gr - from transformers import pipeline - pipe = pipeline("image-classification") - gr.Interface.from_pipeline(pipe).launch() - """ - interface_info = load_from_pipeline(pipeline) - kwargs = dict(interface_info, **kwargs) - interface = cls(**kwargs) - return interface - - def __init__( - self, - fn: Callable, - inputs: str | IOComponent | list[str | IOComponent] | None, - outputs: str | IOComponent | list[str | IOComponent] | None, - examples: list[Any] | list[list[Any]] | str | None = None, - cache_examples: bool | None = None, - examples_per_page: int = 10, - live: bool = False, - interpretation: Callable | str | None = None, - num_shap: float = 2.0, - title: str | None = None, - description: str | None = None, - article: str | None = None, - thumbnail: str | None = None, - theme: Theme | str | None = None, - css: str | None = None, - allow_flagging: str | None = None, - flagging_options: list[str] | list[tuple[str, str]] | None = None, - flagging_dir: str = "flagged", - flagging_callback: FlaggingCallback = CSVLogger(), - analytics_enabled: bool | None = None, - batch: bool = False, - max_batch_size: int = 4, - _api_mode: bool = False, - allow_duplication: bool = False, - **kwargs, - ): - """ - Parameters: - fn: the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component. - inputs: a single Gradio component, or list of Gradio components. Components can either be passed as instantiated objects, or referred to by their string shortcuts. The number of input components should match the number of parameters in fn. If set to None, then only the output components will be displayed. - outputs: a single Gradio component, or list of Gradio components. Components can either be passed as instantiated objects, or referred to by their string shortcuts. The number of output components should match the number of values returned by fn. If set to None, then only the input components will be displayed. - examples: sample inputs for the function; if provided, appear below the UI components and can be clicked to populate the interface. Should be nested list, in which the outer list consists of samples and each inner list consists of an input corresponding to each input component. A string path to a directory of examples can also be provided, but it should be within the directory with the python file running the gradio app. If there are multiple input components and a directory is provided, a log.csv file must be present in the directory to link corresponding inputs. - cache_examples: If True, caches examples in the server for fast runtime in examples. If `fn` is a generator function, then the last yielded value will be used as the output. The default option in HuggingFace Spaces is True. The default option elsewhere is False. - examples_per_page: If examples are provided, how many to display per page. - live: whether the interface should automatically rerun if any of the inputs change. - interpretation: function that provides interpretation explaining prediction output. Pass "default" to use simple built-in interpreter, "shap" to use a built-in shapley-based interpreter, or your own custom interpretation function. 
For more information on the different interpretation methods, see the Advanced Interface Features guide. - num_shap: a multiplier that determines how many examples are computed for shap-based interpretation. Increasing this value will increase shap runtime, but improve results. Only applies if interpretation is "shap". - title: a title for the interface; if provided, appears above the input and output components in large font. Also used as the tab title when opened in a browser window. - description: a description for the interface; if provided, appears above the input and output components and beneath the title in regular font. Accepts Markdown and HTML content. - article: an expanded article explaining the interface; if provided, appears below the input and output components in regular font. Accepts Markdown and HTML content. - thumbnail: path or url to image to use as display image when the web demo is shared on social media. - theme: Theme to use, loaded from gradio.themes. - css: custom css or path to custom css file to use with interface. - allow_flagging: one of "never", "auto", or "manual". If "never" or "auto", users will not see a button to flag an input and output. If "manual", users will see a button to flag. If "auto", every input the user submits will be automatically flagged (outputs are not flagged). If "manual", both the input and outputs are flagged when the user clicks flag button. This parameter can be set with environmental variable GRADIO_ALLOW_FLAGGING; otherwise defaults to "manual". - flagging_options: if provided, allows user to select from the list of options when flagging. Only applies if allow_flagging is "manual". Can either be a list of tuples of the form (label, value), where label is the string that will be displayed on the button and value is the string that will be stored in the flagging CSV; or it can be a list of strings ["X", "Y"], in which case the values will be the list of strings and the labels will ["Flag as X", "Flag as Y"], etc. - flagging_dir: what to name the directory where flagged data is stored. - flagging_callback: An instance of a subclass of FlaggingCallback which will be called when a sample is flagged. By default logs to a local CSV file. - analytics_enabled: Whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable if defined, or default to True. - batch: If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component. - max_batch_size: Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True) - allow_duplication: If True, then will show a 'Duplicate Spaces' button on Hugging Face Spaces. - """ - super().__init__( - analytics_enabled=analytics_enabled, - mode="interface", - css=css, - title=title or "Gradio", - theme=theme, - **kwargs, - ) - - if isinstance(fn, list): - raise DeprecationWarning( - "The `fn` parameter only accepts a single function, support for a list " - "of functions has been deprecated. Please use gradio.mix.Parallel " - "instead." 
- ) - - self.interface_type = InterfaceTypes.STANDARD - if (inputs is None or inputs == []) and (outputs is None or outputs == []): - raise ValueError("Must provide at least one of `inputs` or `outputs`") - elif outputs is None or outputs == []: - outputs = [] - self.interface_type = InterfaceTypes.INPUT_ONLY - elif inputs is None or inputs == []: - inputs = [] - self.interface_type = InterfaceTypes.OUTPUT_ONLY - - assert isinstance(inputs, (str, list, IOComponent)) - assert isinstance(outputs, (str, list, IOComponent)) - - if not isinstance(inputs, list): - inputs = [inputs] - if not isinstance(outputs, list): - outputs = [outputs] - - if self.space_id and cache_examples is None: - self.cache_examples = True - else: - self.cache_examples = cache_examples or False - - state_input_indexes = [ - idx for idx, i in enumerate(inputs) if i == "state" or isinstance(i, State) - ] - state_output_indexes = [ - idx for idx, o in enumerate(outputs) if o == "state" or isinstance(o, State) - ] - - if len(state_input_indexes) == 0 and len(state_output_indexes) == 0: - pass - elif len(state_input_indexes) != 1 or len(state_output_indexes) != 1: - raise ValueError( - "If using 'state', there must be exactly one state input and one state output." - ) - else: - state_input_index = state_input_indexes[0] - state_output_index = state_output_indexes[0] - if inputs[state_input_index] == "state": - default = utils.get_default_args(fn)[state_input_index] - state_variable = State(value=default) # type: ignore - else: - state_variable = inputs[state_input_index] - - inputs[state_input_index] = state_variable - outputs[state_output_index] = state_variable - - if cache_examples: - warnings.warn( - "Cache examples cannot be used with state inputs and outputs." - "Setting cache_examples to False." - ) - self.cache_examples = False - - self.input_components = [ - get_component_instance(i, render=False) for i in inputs # type: ignore - ] - self.output_components = [ - get_component_instance(o, render=False) for o in outputs # type: ignore - ] - - for component in self.input_components + self.output_components: - if not (isinstance(component, IOComponent)): - raise ValueError( - f"{component} is not a valid input/output component for Interface." 
- ) - - if len(self.input_components) == len(self.output_components): - same_components = [ - i is o for i, o in zip(self.input_components, self.output_components) - ] - if all(same_components): - self.interface_type = InterfaceTypes.UNIFIED - - if self.interface_type in [ - InterfaceTypes.STANDARD, - InterfaceTypes.OUTPUT_ONLY, - ]: - for o in self.output_components: - assert isinstance(o, IOComponent) - if o.interactive is None: - # Unless explicitly otherwise specified, force output components to - # be non-interactive - o.interactive = False - if ( - interpretation is None - or isinstance(interpretation, list) - or callable(interpretation) - ): - self.interpretation = interpretation - elif isinstance(interpretation, str): - self.interpretation = [ - interpretation.lower() for _ in self.input_components - ] - else: - raise ValueError("Invalid value for parameter: interpretation") - - self.api_mode = _api_mode - self.fn = fn - self.fn_durations = [0, 0] - self.__name__ = getattr(fn, "__name__", "fn") - self.live = live - self.title = title - - md = utils.get_markdown_parser() - simple_description: str | None = None - if description is not None: - description = md.render(description) - simple_description = utils.remove_html_tags(description) - self.simple_description = simple_description - self.description = description - if article is not None: - article = utils.readme_to_html(article) - article = md.render(article) - self.article = article - - self.thumbnail = thumbnail - - self.examples = examples - self.num_shap = num_shap - self.examples_per_page = examples_per_page - - self.simple_server = None - - # For allow_flagging: (1) first check for parameter, - # (2) check for env variable, (3) default to True/"manual" - if allow_flagging is None: - allow_flagging = os.getenv("GRADIO_ALLOW_FLAGGING", "manual") - if allow_flagging is True: - warnings.warn( - "The `allow_flagging` parameter in `Interface` now" - "takes a string value ('auto', 'manual', or 'never')" - ", not a boolean. Setting parameter to: 'manual'." - ) - self.allow_flagging = "manual" - elif allow_flagging == "manual": - self.allow_flagging = "manual" - elif allow_flagging is False: - warnings.warn( - "The `allow_flagging` parameter in `Interface` now" - "takes a string value ('auto', 'manual', or 'never')" - ", not a boolean. Setting parameter to: 'never'." - ) - self.allow_flagging = "never" - elif allow_flagging == "never": - self.allow_flagging = "never" - elif allow_flagging == "auto": - self.allow_flagging = "auto" - else: - raise ValueError( - "Invalid value for `allow_flagging` parameter." - "Must be: 'auto', 'manual', or 'never'." - ) - - if flagging_options is None: - self.flagging_options = [("Flag", "")] - elif not (isinstance(flagging_options, list)): - raise ValueError( - "flagging_options must be a list of strings or list of (string, string) tuples." - ) - elif all(isinstance(x, str) for x in flagging_options): - self.flagging_options = [(f"Flag as {x}", x) for x in flagging_options] - elif all(isinstance(x, tuple) for x in flagging_options): - self.flagging_options = flagging_options - else: - raise ValueError( - "flagging_options must be a list of strings or list of (string, string) tuples." 
- ) - - self.flagging_callback = flagging_callback - self.flagging_dir = flagging_dir - self.batch = batch - self.max_batch_size = max_batch_size - self.allow_duplication = allow_duplication - - self.share = None - self.share_url = None - self.local_url = None - - self.favicon_path = None - Interface.instances.add(self) - - param_types = utils.get_type_hints(self.fn) - param_names = inspect.getfullargspec(self.fn)[0] - if len(param_names) > 0 and inspect.ismethod(self.fn): - param_names = param_names[1:] - for param_name in param_names.copy(): - if utils.is_special_typed_parameter(param_name, param_types): - param_names.remove(param_name) - for component, param_name in zip(self.input_components, param_names): - assert isinstance(component, IOComponent) - if component.label is None: - component.label = param_name - for i, component in enumerate(self.output_components): - assert isinstance(component, IOComponent) - if component.label is None: - if len(self.output_components) == 1: - component.label = "output" - else: - component.label = f"output {i}" - - if self.allow_flagging != "never": - if ( - self.interface_type == InterfaceTypes.UNIFIED - or self.allow_flagging == "auto" - ): - self.flagging_callback.setup(self.input_components, self.flagging_dir) # type: ignore - elif self.interface_type == InterfaceTypes.INPUT_ONLY: - pass - else: - self.flagging_callback.setup( - self.input_components + self.output_components, self.flagging_dir # type: ignore - ) - - # Render the Gradio UI - with self: - self.render_title_description() - - submit_btn, clear_btn, stop_btn, flag_btns, duplicate_btn = ( - None, - None, - None, - None, - None, - ) - interpretation_btn, interpretation_set = None, None - input_component_column, interpret_component_column = None, None - - with Row(equal_height=False): - if self.interface_type in [ - InterfaceTypes.STANDARD, - InterfaceTypes.INPUT_ONLY, - InterfaceTypes.UNIFIED, - ]: - ( - submit_btn, - clear_btn, - stop_btn, - flag_btns, - input_component_column, - interpret_component_column, - interpretation_set, - ) = self.render_input_column() - if self.interface_type in [ - InterfaceTypes.STANDARD, - InterfaceTypes.OUTPUT_ONLY, - ]: - ( - submit_btn_out, - clear_btn_2_out, - duplicate_btn, - stop_btn_2_out, - flag_btns_out, - interpretation_btn, - ) = self.render_output_column(submit_btn) - submit_btn = submit_btn or submit_btn_out - clear_btn = clear_btn or clear_btn_2_out - stop_btn = stop_btn or stop_btn_2_out - flag_btns = flag_btns or flag_btns_out - - assert clear_btn is not None, "Clear button not rendered" - - self.attach_submit_events(submit_btn, stop_btn) - self.attach_clear_events( - clear_btn, input_component_column, interpret_component_column - ) - if duplicate_btn is not None: - duplicate_btn.activate() - self.attach_interpretation_events( - interpretation_btn, - interpretation_set, - input_component_column, - interpret_component_column, - ) - - self.attach_flagging_events(flag_btns, clear_btn) - self.render_examples() - self.render_article() - - self.config = self.get_config_file() - - def render_title_description(self) -> None: - if self.title: - Markdown( - f"
<h1 style='text-align: center; margin-bottom: 1rem'>{self.title}</h1>
" - ) - if self.description: - Markdown(self.description) - - def render_flag_btns(self) -> list[Button]: - return [Button(label) for label, _ in self.flagging_options] - - def render_input_column( - self, - ) -> tuple[ - Button | None, - ClearButton | None, - Button | None, - list[Button] | None, - Column, - Column | None, - list[Interpretation] | None, - ]: - submit_btn, clear_btn, stop_btn, flag_btns = None, None, None, None - interpret_component_column, interpretation_set = None, None - - with Column(variant="panel"): - input_component_column = Column() - with input_component_column: - for component in self.input_components: - component.render() - if self.interpretation: - interpret_component_column = Column(visible=False) - interpretation_set = [] - with interpret_component_column: - for component in self.input_components: - interpretation_set.append(Interpretation(component)) - with Row(): - if self.interface_type in [ - InterfaceTypes.STANDARD, - InterfaceTypes.INPUT_ONLY, - ]: - clear_btn = ClearButton() - if not self.live: - submit_btn = Button("Submit", variant="primary") - # Stopping jobs only works if the queue is enabled - # We don't know if the queue is enabled when the interface - # is created. We use whether a generator function is provided - # as a proxy of whether the queue will be enabled. - # Using a generator function without the queue will raise an error. - if inspect.isgeneratorfunction( - self.fn - ) or inspect.isasyncgenfunction(self.fn): - stop_btn = Button("Stop", variant="stop", visible=False) - elif self.interface_type == InterfaceTypes.UNIFIED: - clear_btn = ClearButton() - submit_btn = Button("Submit", variant="primary") - if ( - inspect.isgeneratorfunction(self.fn) - or inspect.isasyncgenfunction(self.fn) - ) and not self.live: - stop_btn = Button("Stop", variant="stop") - if self.allow_flagging == "manual": - flag_btns = self.render_flag_btns() - elif self.allow_flagging == "auto": - flag_btns = [submit_btn] - return ( - submit_btn, - clear_btn, - stop_btn, - flag_btns, - input_component_column, - interpret_component_column, - interpretation_set, - ) - - def render_output_column( - self, - submit_btn_in: Button | None, - ) -> tuple[ - Button | None, - ClearButton | None, - DuplicateButton, - Button | None, - list | None, - Button | None, - ]: - submit_btn = submit_btn_in - interpretation_btn, clear_btn, duplicate_btn, flag_btns, stop_btn = ( - None, - None, - None, - None, - None, - ) - - with Column(variant="panel"): - for component in self.output_components: - if not (isinstance(component, State)): - component.render() - with Row(): - if self.interface_type == InterfaceTypes.OUTPUT_ONLY: - clear_btn = ClearButton() - submit_btn = Button("Generate", variant="primary") - if ( - inspect.isgeneratorfunction(self.fn) - or inspect.isasyncgenfunction(self.fn) - ) and not self.live: - # Stopping jobs only works if the queue is enabled - # We don't know if the queue is enabled when the interface - # is created. We use whether a generator function is provided - # as a proxy of whether the queue will be enabled. - # Using a generator function without the queue will raise an error. 
- stop_btn = Button("Stop", variant="stop", visible=False) - if self.allow_flagging == "manual": - flag_btns = self.render_flag_btns() - elif self.allow_flagging == "auto": - assert submit_btn is not None, "Submit button not rendered" - flag_btns = [submit_btn] - - if self.interpretation: - interpretation_btn = Button("Interpret") - - if self.allow_duplication: - duplicate_btn = DuplicateButton(scale=1, size="lg", _activate=False) - - return ( - submit_btn, - clear_btn, - duplicate_btn, - stop_btn, - flag_btns, - interpretation_btn, - ) - - def render_article(self): - if self.article: - Markdown(self.article) - - def attach_submit_events(self, submit_btn: Button | None, stop_btn: Button | None): - if self.live: - if self.interface_type == InterfaceTypes.OUTPUT_ONLY: - assert submit_btn is not None, "Submit button not rendered" - super().load(self.fn, None, self.output_components) - # For output-only interfaces, the user probably still want a "generate" - # button even if the Interface is live - submit_btn.click( - self.fn, - None, - self.output_components, - api_name="predict", - preprocess=not (self.api_mode), - postprocess=not (self.api_mode), - batch=self.batch, - max_batch_size=self.max_batch_size, - ) - else: - for component in self.input_components: - if isinstance(component, Streamable) and component.streaming: - component.stream( - self.fn, - self.input_components, - self.output_components, - api_name="predict", - preprocess=not (self.api_mode), - postprocess=not (self.api_mode), - ) - continue - if isinstance(component, Changeable): - component.change( - self.fn, - self.input_components, - self.output_components, - api_name="predict", - preprocess=not (self.api_mode), - postprocess=not (self.api_mode), - ) - else: - assert submit_btn is not None, "Submit button not rendered" - fn = self.fn - extra_output = [] - - triggers = [submit_btn.click] + [ - component.submit - for component in self.input_components - if isinstance(component, Submittable) - ] - predict_events = [] - - if stop_btn: - extra_output = [submit_btn, stop_btn] - - def cleanup(): - return [Button.update(visible=True), Button.update(visible=False)] - - for i, trigger in enumerate(triggers): - predict_event = trigger( - lambda: ( - submit_btn.update(visible=False), - stop_btn.update(visible=True), - ), - inputs=None, - outputs=[submit_btn, stop_btn], - queue=False, - ).then( - self.fn, - self.input_components, - self.output_components, - api_name="predict" if i == 0 else None, - scroll_to_output=True, - preprocess=not (self.api_mode), - postprocess=not (self.api_mode), - batch=self.batch, - max_batch_size=self.max_batch_size, - ) - predict_events.append(predict_event) - - predict_event.then( - cleanup, - inputs=None, - outputs=extra_output, # type: ignore - queue=False, - ) - - stop_btn.click( - cleanup, - inputs=None, - outputs=[submit_btn, stop_btn], - cancels=predict_events, - queue=False, - ) - else: - for i, trigger in enumerate(triggers): - predict_events.append( - trigger( - fn, - self.input_components, - self.output_components, - api_name="predict" if i == 0 else None, - scroll_to_output=True, - preprocess=not (self.api_mode), - postprocess=not (self.api_mode), - batch=self.batch, - max_batch_size=self.max_batch_size, - ) - ) - - def attach_clear_events( - self, - clear_btn: ClearButton, - input_component_column: Column | None, - interpret_component_column: Column | None, - ): - clear_btn.add(self.input_components + self.output_components) - clear_btn.click( - None, - [], - ( - ([input_component_column] if 
input_component_column else []) - + ([interpret_component_column] if self.interpretation else []) - ), # type: ignore - _js=f"""() => {json.dumps( - ( - [Column.update(visible=True)] - if self.interface_type - in [ - InterfaceTypes.STANDARD, - InterfaceTypes.INPUT_ONLY, - InterfaceTypes.UNIFIED, - ] - else [] - ) - + ([Column.update(visible=False)] if self.interpretation else []) - )} - """, - ) - - def attach_interpretation_events( - self, - interpretation_btn: Button | None, - interpretation_set: list[Interpretation] | None, - input_component_column: Column | None, - interpret_component_column: Column | None, - ): - if interpretation_btn: - interpretation_btn.click( - self.interpret_func, - inputs=self.input_components + self.output_components, - outputs=(interpretation_set or []) + [input_component_column, interpret_component_column], # type: ignore - preprocess=False, - ) - - def attach_flagging_events( - self, flag_btns: list[Button] | None, clear_btn: ClearButton - ): - if not ( - flag_btns - and self.interface_type - in ( - InterfaceTypes.STANDARD, - InterfaceTypes.OUTPUT_ONLY, - InterfaceTypes.UNIFIED, - ) - ): - return - - if self.allow_flagging == "auto": - flag_method = FlagMethod( - self.flagging_callback, "", "", visual_feedback=False - ) - flag_btns[0].click( # flag_btns[0] is just the "Submit" button - flag_method, - inputs=self.input_components, - outputs=None, - preprocess=False, - queue=False, - ) - return - - if self.interface_type == InterfaceTypes.UNIFIED: - flag_components = self.input_components - else: - flag_components = self.input_components + self.output_components - - for flag_btn, (label, value) in zip(flag_btns, self.flagging_options): - assert isinstance(value, str) - flag_method = FlagMethod(self.flagging_callback, label, value) - flag_btn.click( - lambda: Button.update(value="Saving...", interactive=False), - None, - flag_btn, - queue=False, - ) - flag_btn.click( - flag_method, - inputs=flag_components, - outputs=flag_btn, - preprocess=False, - queue=False, - ) - clear_btn.click( - flag_method.reset, - None, - flag_btn, - queue=False, - ) - - def render_examples(self): - if self.examples: - non_state_inputs = [ - c for c in self.input_components if not isinstance(c, State) - ] - non_state_outputs = [ - c for c in self.output_components if not isinstance(c, State) - ] - self.examples_handler = Examples( - examples=self.examples, - inputs=non_state_inputs, # type: ignore - outputs=non_state_outputs, # type: ignore - fn=self.fn, - cache_examples=self.cache_examples, - examples_per_page=self.examples_per_page, - _api_mode=self.api_mode, - batch=self.batch, - ) - - def __str__(self): - return self.__repr__() - - def __repr__(self): - repr = f"Gradio Interface for: {self.__name__}" - repr += f"\n{'-' * len(repr)}" - repr += "\ninputs:" - for component in self.input_components: - repr += f"\n|-{component}" - repr += "\noutputs:" - for component in self.output_components: - repr += f"\n|-{component}" - return repr - - async def interpret_func(self, *args): - return await self.interpret(list(args)) + [ - Column.update(visible=False), - Column.update(visible=True), - ] - - async def interpret(self, raw_input: list[Any]) -> list[Any]: - return [ - {"original": raw_value, "interpretation": interpretation} - for interpretation, raw_value in zip( - (await interpretation.run_interpret(self, raw_input))[0], raw_input - ) - ] - - def test_launch(self) -> None: - """ - Deprecated. 
- """ - warn_deprecation("The Interface.test_launch() function is deprecated.") - - -@document() -class TabbedInterface(Blocks): - """ - A TabbedInterface is created by providing a list of Interfaces, each of which gets - rendered in a separate tab. - Demos: stt_or_tts - """ - - def __init__( - self, - interface_list: list[Interface], - tab_names: list[str] | None = None, - title: str | None = None, - theme: Theme | None = None, - analytics_enabled: bool | None = None, - css: str | None = None, - ): - """ - Parameters: - interface_list: a list of interfaces to be rendered in tabs. - tab_names: a list of tab names. If None, the tab names will be "Tab 1", "Tab 2", etc. - title: a title for the interface; if provided, appears above the input and output components in large font. Also used as the tab title when opened in a browser window. - analytics_enabled: whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable or default to True. - css: custom css or path to custom css file to apply to entire Blocks - Returns: - a Gradio Tabbed Interface for the given interfaces - """ - super().__init__( - title=title or "Gradio", - theme=theme, - analytics_enabled=analytics_enabled, - mode="tabbed_interface", - css=css, - ) - if tab_names is None: - tab_names = [f"Tab {i}" for i in range(len(interface_list))] - with self: - if title: - Markdown( - f"
<h1 style='text-align: center; margin-bottom: 1rem'>{title}</h1>
" - ) - with Tabs(): - for interface, tab_name in zip(interface_list, tab_names): - with Tab(label=tab_name): - interface.render() - - -def close_all(verbose: bool = True) -> None: - for io in Interface.get_instances(): - io.close(verbose) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/strings.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/strings.py deleted file mode 100644 index d85bc052969438e1e05dbf3abd9c75c8effc7d03..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/strings.py +++ /dev/null @@ -1,48 +0,0 @@ -import os -import threading -from typing import Dict - -import requests - -from gradio import wasm_utils - -MESSAGING_API_ENDPOINT = "https://api.gradio.app/gradio-messaging/en" - -en = { - "RUNNING_LOCALLY": "Running on local URL: {}", - "RUNNING_LOCALLY_SEPARATED": "Running on local URL: {}://{}:{}", - "SHARE_LINK_DISPLAY": "Running on public URL: {}", - "COULD_NOT_GET_SHARE_LINK": "\nCould not create share link. Please check your internet connection or our status page: https://status.gradio.app.", - "COULD_NOT_GET_SHARE_LINK_MISSING_FILE": "\nCould not create share link. Missing file: {}. \n\nPlease check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: \n\n1. Download this file: {}\n2. Rename the downloaded file to: {}\n3. Move the file to this location: {}", - "COLAB_NO_LOCAL": "Cannot display local interface on google colab, public link created.", - "PUBLIC_SHARE_TRUE": "\nTo create a public link, set `share=True` in `launch()`.", - "MODEL_PUBLICLY_AVAILABLE_URL": "Model available publicly at: {} (may take up to a minute for link to be usable)", - "GENERATING_PUBLIC_LINK": "Generating public link (may take a few seconds...):", - "BETA_INVITE": "\nThanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB", - "COLAB_DEBUG_TRUE": "Colab notebook detected. This cell will run indefinitely so that you can see errors and logs. " - "To turn off, set debug=False in launch().", - "COLAB_DEBUG_FALSE": "Colab notebook detected. To show errors in colab notebook, set debug=True in launch()", - "COLAB_WARNING": "Note: opening Chrome Inspector may crash demo inside Colab notebooks.", - "SHARE_LINK_MESSAGE": "\nThis share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)", - "INLINE_DISPLAY_BELOW": "Interface loading below...", - "TIPS": [ - "You can add authentication to your app with the `auth=` kwarg in the `launch()` command; for example: `gr.Interface(...).launch(auth=('username', 'password'))`", - "Let users specify why they flagged input with the `flagging_options=` kwarg; for example: `gr.Interface(..., flagging_options=['too slow', 'incorrect output', 'other'])`", - "You can show or hide the button for flagging with the `allow_flagging=` kwarg; for example: gr.Interface(..., allow_flagging=False)", - "The inputs and outputs flagged by the users are stored in the flagging directory, specified by the flagging_dir= kwarg. 
You can view this data through the interface by setting the examples= kwarg to the flagging directory; for example gr.Interface(..., examples='flagged')", - "You can add a title and description to your interface using the `title=` and `description=` kwargs. The `article=` kwarg can be used to add a description under the interface; for example gr.Interface(..., title='My app', description='Lorem ipsum'). Try using Markdown!", - "For a classification or regression model, set `interpretation='default'` to see why the model made a prediction.", - ], -} - - -def get_updated_messaging(en: Dict): - try: - updated_messaging = requests.get(MESSAGING_API_ENDPOINT, timeout=3).json() - en.update(updated_messaging) - except Exception: # Use default messaging - pass - - -if os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True" and not wasm_utils.IS_WASM: - threading.Thread(target=get_updated_messaging, args=(en,)).start() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/state_inline.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/state_inline.py deleted file mode 100644 index c0c491c4b7c9ae4117d60f447fdbf3c742f66f48..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/state_inline.py +++ /dev/null @@ -1,166 +0,0 @@ -from __future__ import annotations - -from collections import namedtuple -from dataclasses import dataclass -from typing import TYPE_CHECKING, Any, Literal - -from .._compat import DATACLASS_KWARGS -from ..common.utils import isMdAsciiPunct, isPunctChar, isWhiteSpace -from ..ruler import StateBase -from ..token import Token -from ..utils import EnvType - -if TYPE_CHECKING: - from markdown_it import MarkdownIt - - -@dataclass(**DATACLASS_KWARGS) -class Delimiter: - # Char code of the starting marker (number). - marker: int - - # Total length of these series of delimiters. - length: int - - # A position of the token this delimiter corresponds to. - token: int - - # If this delimiter is matched as a valid opener, `end` will be - # equal to its position, otherwise it's `-1`. - end: int - - # Boolean flags that determine if this delimiter could open or close - # an emphasis. - open: bool - close: bool - - level: bool | None = None - - -Scanned = namedtuple("Scanned", ["can_open", "can_close", "length"]) - - -class StateInline(StateBase): - def __init__( - self, src: str, md: MarkdownIt, env: EnvType, outTokens: list[Token] - ) -> None: - self.src = src - self.env = env - self.md = md - self.tokens = outTokens - self.tokens_meta: list[dict[str, Any] | None] = [None] * len(outTokens) - - self.pos = 0 - self.posMax = len(self.src) - self.level = 0 - self.pending = "" - self.pendingLevel = 0 - - # Stores { start: end } pairs. Useful for backtrack - # optimization of pairs parse (emphasis, strikes). 
- self.cache: dict[int, int] = {} - - # List of emphasis-like delimiters for current tag - self.delimiters: list[Delimiter] = [] - - # Stack of delimiter lists for upper level tags - self._prev_delimiters: list[list[Delimiter]] = [] - - # backticklength => last seen position - self.backticks: dict[int, int] = {} - self.backticksScanned = False - - # Counter used to disable inline linkify-it execution - # inside and markdown links - self.linkLevel = 0 - - def __repr__(self) -> str: - return ( - f"{self.__class__.__name__}" - f"(pos=[{self.pos} of {self.posMax}], token={len(self.tokens)})" - ) - - def pushPending(self) -> Token: - token = Token("text", "", 0) - token.content = self.pending - token.level = self.pendingLevel - self.tokens.append(token) - self.pending = "" - return token - - def push(self, ttype: str, tag: str, nesting: Literal[-1, 0, 1]) -> Token: - """Push new token to "stream". - If pending text exists - flush it as text token - """ - if self.pending: - self.pushPending() - - token = Token(ttype, tag, nesting) - token_meta = None - - if nesting < 0: - # closing tag - self.level -= 1 - self.delimiters = self._prev_delimiters.pop() - - token.level = self.level - - if nesting > 0: - # opening tag - self.level += 1 - self._prev_delimiters.append(self.delimiters) - self.delimiters = [] - token_meta = {"delimiters": self.delimiters} - - self.pendingLevel = self.level - self.tokens.append(token) - self.tokens_meta.append(token_meta) - return token - - def scanDelims(self, start: int, canSplitWord: bool) -> Scanned: - """ - Scan a sequence of emphasis-like markers, and determine whether - it can start an emphasis sequence or end an emphasis sequence. - - - start - position to scan from (it should point at a valid marker); - - canSplitWord - determine if these markers can be found inside a word - - """ - pos = start - maximum = self.posMax - marker = self.src[start] - - # treat beginning of the line as a whitespace - lastChar = self.src[start - 1] if start > 0 else " " - - while pos < maximum and self.src[pos] == marker: - pos += 1 - - count = pos - start - - # treat end of the line as a whitespace - nextChar = self.src[pos] if pos < maximum else " " - - isLastPunctChar = isMdAsciiPunct(ord(lastChar)) or isPunctChar(lastChar) - isNextPunctChar = isMdAsciiPunct(ord(nextChar)) or isPunctChar(nextChar) - - isLastWhiteSpace = isWhiteSpace(ord(lastChar)) - isNextWhiteSpace = isWhiteSpace(ord(nextChar)) - - left_flanking = not ( - isNextWhiteSpace - or (isNextPunctChar and not (isLastWhiteSpace or isLastPunctChar)) - ) - right_flanking = not ( - isLastWhiteSpace - or (isLastPunctChar and not (isNextWhiteSpace or isNextPunctChar)) - ) - - if not canSplitWord: - can_open = left_flanking and ((not right_flanking) or isLastPunctChar) - can_close = right_flanking and ((not left_flanking) or isNextPunctChar) - else: - can_open = left_flanking - can_close = right_flanking - - return Scanned(can_open, can_close, count) diff --git a/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/app.py b/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/app.py deleted file mode 100644 index 62f7b4fdbf3048016caa077f0a1bfc6c366c203e..0000000000000000000000000000000000000000 --- a/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/app.py +++ /dev/null @@ -1,192 +0,0 @@ -#!/usr/bin/env python -from __future__ import annotations - -import os -import random -import time - -import gradio as gr -import numpy as np -import PIL.Image - -from huggingface_hub import snapshot_download -from diffusers import 
DiffusionPipeline - -from lcm_scheduler import LCMScheduler -from lcm_ov_pipeline import OVLatentConsistencyModelPipeline - -from optimum.intel.openvino.modeling_diffusion import OVModelVaeDecoder, OVBaseModel - -import os -from tqdm import tqdm -import gradio_user_history as gr_user_history - -from concurrent.futures import ThreadPoolExecutor -import uuid - -DESCRIPTION = '''# Latent Consistency Model OpenVino CPU -Based on [Latency Consistency Model](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model) HF space - -Distilled from [Dreamshaper v7](https://huggingface.co/Lykon/dreamshaper-7) fine-tune of [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) with only 4,000 training iterations (~32 A100 GPU Hours). [Project page](https://latent-consistency-models.github.io) - -

Running on CPU 🥶.

-''' - -MAX_SEED = np.iinfo(np.int32).max -CACHE_EXAMPLES = os.getenv("CACHE_EXAMPLES") == "1" - -model_id = "deinferno/LCM_Dreamshaper_v7-openvino" -batch_size = 1 -width = int(os.getenv("IMAGE_WIDTH", "512")) -height = int(os.getenv("IMAGE_HEIGHT", "512")) -num_images = int(os.getenv("NUM_IMAGES", "1")) - -class CustomOVModelVaeDecoder(OVModelVaeDecoder): - def __init__( - self, model: openvino.runtime.Model, parent_model: OVBaseModel, ov_config: Optional[Dict[str, str]] = None, model_dir: str = None, - ): - super(OVModelVaeDecoder, self).__init__(model, parent_model, ov_config, "vae_decoder", model_dir) - -scheduler = LCMScheduler.from_pretrained(model_id, subfolder="scheduler") -pipe = OVLatentConsistencyModelPipeline.from_pretrained(model_id, scheduler = scheduler, compile = False, ov_config = {"CACHE_DIR":""}) - -# Inject TAESD - -taesd_dir = snapshot_download(repo_id="deinferno/taesd-openvino") -pipe.vae_decoder = CustomOVModelVaeDecoder(model = OVBaseModel.load_model(f"{taesd_dir}/vae_decoder/openvino_model.xml"), parent_model = pipe, model_dir = taesd_dir) - -pipe.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images) -pipe.compile() - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - -def save_image(img, profile: gr.OAuthProfile | None, metadata: dict): - unique_name = str(uuid.uuid4()) + '.png' - img.save(unique_name) - gr_user_history.save_image(label=metadata["prompt"], image=img, profile=profile, metadata=metadata) - return unique_name - -def save_images(image_array, profile: gr.OAuthProfile | None, metadata: dict): - paths = [] - with ThreadPoolExecutor() as executor: - paths = list(executor.map(save_image, image_array, [profile]*len(image_array), [metadata]*len(image_array))) - return paths - -def generate( - prompt: str, - seed: int = 0, - guidance_scale: float = 8.0, - num_inference_steps: int = 4, - randomize_seed: bool = False, - progress = gr.Progress(track_tqdm=True), - profile: gr.OAuthProfile | None = None, -) -> PIL.Image.Image: - global batch_size - global width - global height - global num_images - - seed = randomize_seed_fn(seed, randomize_seed) - np.random.seed(seed) - start_time = time.time() - result = pipe( - prompt=prompt, - width=width, - height=height, - guidance_scale=guidance_scale, - num_inference_steps=num_inference_steps, - num_images_per_prompt=num_images, - output_type="pil", - ).images - paths = save_images(result, profile, metadata={"prompt": prompt, "seed": seed, "width": width, "height": height, "guidance_scale": guidance_scale, "num_inference_steps": num_inference_steps}) - print(time.time() - start_time) - return paths, seed - -examples = [ - "portrait photo of a girl, photograph, highly detailed face, depth of field, moody light, golden hour, style by Dan Winters, Russell James, Steve McCurry, centered, extremely detailed, Nikon D850, award winning photography", - "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k", - "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", - "A photo of beautiful mountain with realistic sunset and blue lake, highly detailed, masterpiece", -] - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Group(): - with gr.Row(): - prompt = gr.Text( - label="Prompt", - 
show_label=False, - max_lines=1, - placeholder="Enter your prompt", - container=False, - ) - run_button = gr.Button("Run", scale=0) - result = gr.Gallery( - label="Generated images", show_label=False, elem_id="gallery", grid=[2] - ) - with gr.Accordion("Advanced options", open=False): - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - randomize=True - ) - randomize_seed = gr.Checkbox(label="Randomize seed across runs", value=True) - with gr.Row(): - guidance_scale = gr.Slider( - label="Guidance scale for base", - minimum=2, - maximum=14, - step=0.1, - value=8.0, - ) - num_inference_steps = gr.Slider( - label="Number of inference steps for base", - minimum=1, - maximum=8, - step=1, - value=4, - ) - - with gr.Accordion("Past generations", open=False): - gr_user_history.render() - - gr.Examples( - examples=examples, - inputs=prompt, - outputs=result, - fn=generate, - cache_examples=CACHE_EXAMPLES, - ) - - gr.on( - triggers=[ - prompt.submit, - run_button.click, - ], - fn=generate, - inputs=[ - prompt, - seed, - guidance_scale, - num_inference_steps, - randomize_seed - ], - outputs=[result, seed], - api_name="run", - ) - -if __name__ == "__main__": - demo.queue(api_open=False) - # demo.queue(max_size=20).launch() - demo.launch() diff --git a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h b/spaces/derful/Chatgpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h deleted file mode 100644 index f88ab5d8cb343f97026966b402eaeed8831e356a..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h +++ /dev/null @@ -1,25 +0,0 @@ -#pragma once - -#include - -#include "libipc/def.h" -#include "libipc/prod_cons.h" - -#include "libipc/circ/elem_array.h" - -namespace ipc { -namespace policy { - -template
<table>
<tr><th>Streaming Service</th><th>Features</th><th>Price</th></tr>
<tr><td>Spotify</td><td>- Offers a large library of Umlando music and other genres<br>- Allows you to create and follow playlists and podcasts<br>- Provides personalized recommendations and curated radio stations<br>- Supports offline mode and cross-device syncing</td><td>- Free with ads and limited skips<br>- $9.99/month for Premium with no ads and unlimited skips</td></tr>
<tr><td>Apple Music</td><td>- Offers a large library of Umlando music and other genres<br>- Allows you to create and follow playlists and podcasts<br>- Provides personalized recommendations and curated radio stations<br>- Supports offline mode and cross-device syncing</td><td>- Free for 3 months, then $9.99/month</td></tr>
<tr><td>Deezer</td><td>- Offers a large library of Umlando music and other genres<br>- Allows you to create and follow playlists and podcasts<br>- Provides personalized recommendations and curated radio stations<br>- Supports offline mode and cross-device syncing</td><td>- Free with ads and limited skips<br>- $9.99/month for Premium with no ads and unlimited skips</td></tr>
<tr><td>SoundCloud</td><td>- Offers a large library of Umlando music and other genres<br>- Allows you to upload your own music and discover new artists<br>- Provides personalized recommendations and curated radio stations<br>- Supports offline mode and cross-device syncing</td><td>- Free with ads and limited skips<br>- $9.99/month for SoundCloud Go+ with no ads and unlimited skips</td></tr>
</table>