diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargarsigmakeyfullcrackmega Los beneficios de usar SigmaKey la herramienta segura y confiable para el servicio de MTK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargarsigmakeyfullcrackmega Los beneficios de usar SigmaKey la herramienta segura y confiable para el servicio de MTK.md
deleted file mode 100644
index 0fa3c8ed2cb558419e617ba094de259d244ff7f2..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargarsigmakeyfullcrackmega Los beneficios de usar SigmaKey la herramienta segura y confiable para el servicio de MTK.md
+++ /dev/null
@@ -1,171 +0,0 @@
-
-
Descargar SigmaKey Full Crack Mega: A Complete Guide
-
If you are looking for a professional and powerful tool to flash, unlock, and repair your mobile devices, you might have heard of SigmaKey. SigmaKey is a software that works with a dongle and allows you to service various types of cell phones, especially Huawei, MTK, Qualcomm, HiSilicon, and Spreadtrum devices. In this article, we will show you how to download SigmaKey full crack mega, a cracked version of the software that does not require a dongle or activation. We will also explain how to use SigmaKey full crack mega to perform different operations on your devices.
-
What is SigmaKey?
-
SigmaKey is a software tool developed by the GSM Server Team, a group of experts in mobile unlocking and flashing. SigmaKey works with a hardware dongle that connects to your PC via a USB port and provides security and authentication for the software. SigmaKey lets you perform various operations on your mobile devices, such as flashing firmware, unlocking bootloaders, repairing IMEI, and removing FRP locks.
SigmaKey has many features and benefits that make it one of the best tools for mobile servicing. Some of them are:
-
-
It supports a wide range of devices from different brands and models.
-
It supports various chipsets, such as MTK, Qualcomm, HiSilicon, Spreadtrum, etc.
-
It has a user-friendly interface that is easy to navigate and operate.
-
It has a fast and reliable performance that saves time and resources.
-
It has a lifetime license that does not require annual payments.
-
It has regular updates that add new features and support new devices.
-
It has a customer support team that provides assistance and guidance.
-
-
Supported devices and platforms
-
SigmaKey supports thousands of devices from various brands, such as Huawei, Motorola, ZTE, Lenovo, Alcatel, Sony, LG, Samsung, Xiaomi, Oppo, Vivo, etc. You can check the full list of supported devices on the official website of SigmaKey. SigmaKey also supports Windows OS versions such as Win XP/Vista/7/Server 2008 for both 32-bit and 64-bit architecture.
-
How to download SigmaKey full crack mega?
-
If you want to use SigmaKey without buying a dongle or activating it online, you can download SigmaKey full crack mega. This is a cracked version of the software that bypasses the security and authentication of the dongle. However, you should be aware that downloading and using SigmaKey full crack mega is illegal and risky. You might face some problems such as:
-
-
Virus or malware infection on your PC or device.
-
Data loss or corruption on your PC or device.
-
Dongle detection or blocking by the software.
-
Lack of updates or support from the developers.
-
Lawsuit or penalty from the developers or authorities.
-
-
If you still want to download SigmaKey full crack mega at your own risk, you should follow these steps:
-
descargar sigmakey full crack mega gratis
-descargar sigmakey full crack mega 2021
-descargar sigmakey full crack mega sin box
-descargar sigmakey full crack mega huawei
-descargar sigmakey full crack mega android
-descargar sigmakey full crack mega windows 10
-descargar sigmakey full crack mega ultima version
-descargar sigmakey full crack mega para pc
-descargar sigmakey full crack mega sin dongle
-descargar sigmakey full crack mega mediafire
-descargar sigmakey full crack mega 64 bits
-descargar sigmakey full crack mega 32 bits
-descargar sigmakey full crack mega sin virus
-descargar sigmakey full crack mega mtk
-descargar sigmakey full crack mega qualcomm
-descargar sigmakey full crack mega español
-descargar sigmakey full crack mega portable
-descargar sigmakey full crack mega tutorial
-descargar sigmakey full crack mega link directo
-descargar sigmakey full crack mega reparar imei
-descargar sigmakey full crack mega frp
-descargar sigmakey full crack mega bootloader
-descargar sigmakey full crack mega firmware
-descargar sigmakey full crack mega update.app
-descargar sigmakey full crack mega kirin
-descargar sigmakey full crack mega hisilicon
-descargar sigmakey full crack mega spreadtrum
-descargar sigmakey full crack mega mediatek
-descargar sigmakey full crack mega alcatel
-descargar sigmakey full crack mega motorola
-descargar sigmakey full crack mega lg
-descargar sigmakey full crack mega zte
-descargar sigmakey full crack mega lenovo
-descargar sigmakey full crack mega sony
-descargar sigmakey full crack mega vtelca
-descargar sigmakey full crack mega lanix
-descargar sigmakey full crack mega blu
-descargar sigmakey full crack mega azumi
-descargar sigmakey full crack mega verykool
-descargar sigmakey full crack mega avvio
-descargar sigmakey full crack mega bitel
-descargar sigmakey full crack mega bmobile
-descargar sigakeyfullcrackmega.exe (not recommended)
-
Requirements and precautions
-
-
A PC with Windows OS installed.
-
A USB cable to connect your device to your PC.
-
A backup of your device data in case of any damage or loss.
-
A reliable internet connection to download the files.
-
An antivirus program to scan the files for viruses or malware.
-
A temporarily disabled firewall or antivirus, as these can interfere with the installation process.
-
-
Steps to download and install SigmaKey full crack mega
Go to the download page for SigmaKey full crack mega (for example, the link listed in the FAQs below) and click the download button. You will be redirected to another page where you have to complete some surveys or offers to get the download link. Follow the instructions on the screen and complete the tasks.
-
Once you get the download link, click on it and save the file on your PC. The file name is Sigmakey_Huawei_Edition_Crack_Version_2.40.02.zip and it has a size of about 100 MB.
-
Extract the zip file using WinRAR or any other extraction tool. You will get a folder named Sigmakey_Huawei_Edition_Crack_Version_2.40.02 with several files inside it.
-
Open the folder and run the file named Setup.exe as administrator. Follow the installation wizard and accept the terms and conditions. Choose a destination folder for the software and click on install.
-
Wait for the installation process to finish. Do not disconnect your device or close the program during this process.
-
After the installation is done, do not run the software yet. Go back to the folder where you extracted the zip file and open another folder named Loader_Sigma_Key_Huawei_Edition_Crack_Version_2.40.02.
-
In this folder, you will find two files named Loader.exe and Patch.exe. Copy both files and paste them into the destination folder where you installed the software. Replace any existing files if prompted.
-
Now run the file named Loader.exe as administrator. This will launch the software with full crack features enabled.
-
-
Troubleshooting tips
-
If you encounter any problems while downloading or installing SigmaKey full crack mega, you can try these tips:
-
-
Make sure you have enough space on your PC hard drive for the files.
-
Make sure you have a stable internet connection while downloading or installing the files.
-
Make sure you disable any firewall or antivirus software that might block or delete the files.
-
Make sure you scan the files for any virus or malware before opening them.
-
Make sure you run the files as administrator and follow the instructions carefully.
-
If you get an error message saying "Dongle not found" or "Dongle not connected", try changing your USB port or cable.
-
-
How to use SigmaKey full crack mega?
-
Once you have successfully downloaded and installed SigmaKey full crack mega, you can start using it to service your mobile devices. Here are some examples of how to use SigmaKey full crack mega for different operations:
-
Unlocking Huawei devices with SigmaKey
-
-
Connect your Huawei device to your PC via USB cable in fastboot mode. To enter fastboot mode, power off your device and press volume down + power buttons simultaneously until you see a fastboot logo on your screen.
-
Launch SigmaKey full crack mega on your PC and select Huawei tab from the top menu bar.
-
Select ADB Interface from Port Selection drop-down menu on top left corner of the screen.
-
Select Fastboot Mode from Service Mode drop-down menu on top right corner of screen.
-
Select Unlock Bootloader option from Service Operations section on bottom left corner of screen.
-
The software will read your device information and generate an unlock code for your bootloader. Write down this code somewhere safe as you will need it later.
The software will then ask you to enter the unlock code on your device. Follow the instructions on your device screen and enter the unlock code when prompted.
-
Your device bootloader will be unlocked and your device will reboot automatically. You can disconnect your device from your PC.
-
-
Flashing and repairing MTK cell phones with SigmaKey
-
-
Connect your MTK device to your PC via USB cable in flash mode. To enter flash mode, power off your device and press volume up + power buttons simultaneously until you see a flash logo on your screen.
-
Launch SigmaKey full crack mega on your PC and select MTK tab from the top menu bar.
-
Select USB Mode from Port Selection drop-down menu on top left corner of the screen.
-
Select Flash Mode from Service Mode drop-down menu on top right corner of screen.
-
Select Flash Firmware option from Service Operations section on bottom left corner of screen.
-
The software will ask you to select a firmware file for your device. You can download firmware files from various online sources or use the ones provided by SigmaKey. Click on Browse button and locate the firmware file on your PC.
-
The software will verify the firmware file and show you some information about it. Make sure the firmware file matches your device model and version. Click on Write Firmware button to start flashing process.
-
The software will flash the firmware file to your device and show you a progress bar. Do not disconnect your device or close the program during this process.
-
After the flashing process is done, the software will show you a success message and your device will reboot automatically. You can disconnect your device from your PC.
-
-
Other operations with SigmaKey
-
SigmaKey full crack mega can also perform other operations on your devices, such as:
-
-
Read and write IMEI
-
Remove FRP lock
-
Remove Huawei ID
-
Backup and restore data
-
Root and unroot devices
-
And more
-
-
To perform these operations, you need to select the appropriate tab, port, mode, and option from the software interface. You can also refer to the user manual or customer guide for more details and instructions.
-
Conclusion
-
In this article, we have shown you how to download SigmaKey full crack mega, a cracked version of the software that allows you to flash, unlock, and repair your mobile devices without a dongle or activation. We have also explained how to use SigmaKey full crack mega for different operations on Huawei and MTK devices. However, we have also warned you about the risks and consequences of using SigmaKey full crack mega, as it is illegal and unsafe. We recommend using the original SigmaKey software with a dongle and activation for a better and safer experience.
-
Summary of the article
-
SigmaKey is a professional and powerful tool for mobile servicing that works with a dongle and activation. SigmaKey full crack mega is a cracked version of the software that does not require a dongle or activation. SigmaKey full crack mega allows you to perform various operations on your devices, such as unlocking, flashing, repairing, etc. However, SigmaKey full crack mega is illegal and risky to use, as it might cause virus infection, data loss, dongle detection, lack of updates, lawsuit, etc. Therefore, it is better to use the original SigmaKey software with a dongle and activation for a safer and better experience.
-
FAQs
-
-
What is SigmaKey?
-
SigmaKey is a software that works with a dongle and allows you to service various types of cell phones, especially Huawei, MTK, Qualcomm, HiSilicon, and Spreadtrum devices.
-
What is SigmaKey full crack mega?
-
SigmaKey full crack mega is a cracked version of the software that does not require a dongle or activation. It bypasses the security and authentication of the dongle.
-
How to download SigmaKey full crack mega?
-
You can download SigmaKey full crack mega from this link https://www.getdroidtips.com/download-sigmakey-huawei-crack/. You have to complete some surveys or offers to get the download link. Then you have to install the software and copy the loader and patch files into the installation folder.
-
How to use SigmaKey full crack mega?
-
You can use SigmaKey full crack mega to perform various operations on your devices, such as unlocking, flashing, repairing, etc. You have to select the appropriate tab, port, mode, and option from the software interface. You can also refer to the user manual or customer guide for more details and instructions.
-
What are the risks of using SigmaKey full crack mega?
-
Using SigmaKey full crack mega is illegal and risky. You might face some problems such as virus infection, data loss, dongle detection, lack of updates, lawsuit, etc. Therefore, it is better to use the original SigmaKey software with a dongle and activation for a safer and better experience.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Deadly Assassin in Sniper Killer 3D The Best Offline Sniper Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Deadly Assassin in Sniper Killer 3D The Best Offline Sniper Game.md
deleted file mode 100644
index 0d14a0a004d620c440e36926df7289386fe276ca..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Deadly Assassin in Sniper Killer 3D The Best Offline Sniper Game.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Sniper Killer 3D: The Ultimate Shooting Game
-
If you are looking for a shooting game that will test your skills as a sniper, look no further than Sniper Killer 3D. This game is the ultimate sniper adventure that will immerse you in high-intensity missions and action-packed scenarios. Whether you want to play offline or online, Sniper Killer 3D has something for everyone. Here is everything you need to know about this amazing game.
What is Sniper Killer 3D?
-
Sniper Killer 3D is a shooting game where you play as a sniper who must eliminate high-profile targets and criminals. You will travel to different locations around the world, taking on various challenges and objectives. You will also have access to a huge arsenal of sniper rifles, assault rifles, and other guns that you can upgrade and customize. Sniper Killer 3D is a game that combines realism, variety, and fun in one package.
-
A thrilling and realistic sniper game
-
One of the best features of Sniper Killer 3D is its realistic physics and ballistics. You will have to take into account factors such as wind, distance, gravity, and movement when aiming and shooting your target. You will also have to deal with different weather conditions, such as rain, fog, snow, and night. You will feel like a real sniper as you pull the trigger and watch your bullet hit the mark.
-
A variety of weapons and missions
-
Sniper Killer 3D offers you more than 180 authentic weapons to choose from. You can unlock different sniper rifles, each with its own characteristics and advantages. You can also upgrade your weapons with scopes, silencers, magazines, and other attachments. You will need to use the right weapon for the right mission, as some targets may require more power, accuracy, or stealth than others.
-
The game also has hundreds of thrilling missions that will keep you entertained for hours. You will have to eliminate terrorists, kidnappers, drug lords, assassins, and other enemies. You will also have to protect innocent civilians, rescue hostages, defuse bombs, and more. Each mission has its own objectives and rewards that you can use to buy new weapons or upgrade your existing ones.
-
A free and offline gameplay
-
Another great feature of Sniper Killer 3D is that it is free to play. You can download the game from the Google Play Store or play it on your web browser without spending a dime. The game also has an offline mode that allows you to play without an internet connection or data. You can enjoy the game anytime and anywhere you want.
-
How to play Sniper Killer 3D?
-
Sniper Killer 3D is easy to play but hard to master. Here are some tips on how to play the game:
-
sniper killer 3d gun shooting games
-sniper 3d wildlife studios
-sniper 3d piercing bullet
-sniper 3d stout assault rifle
-sniper 3d offline mode
-sniper 3d free to play
-sniper 3d action adventure
-sniper 3d realistic ballistics
-sniper 3d variety of guns
-sniper 3d diverse locations
-sniper killer 3d download
-sniper killer 3d mod apk
-sniper killer 3d cheats
-sniper killer 3d hack
-sniper killer 3d unlimited money
-sniper killer 3d review
-sniper killer 3d gameplay
-sniper killer 3d trailer
-sniper killer 3d tips and tricks
-sniper killer 3d best weapons
-sniper killer 3d online multiplayer
-sniper killer 3d pvp mode
-sniper killer 3d special bullets
-sniper killer 3d elite shooter
-sniper killer 3d high-profile targets
-sniper killer 3d missions and challenges
-sniper killer 3d fun games for free
-sniper killer 3d android app
-sniper killer 3d ios app
-sniper killer 3d pc game
-sniper killer 3d mac game
-sniper killer 3d windows game
-sniper killer 3d linux game
-sniper killer 3d steam game
-sniper killer 3d epic games store game
-sniper killer 3d google play store game
-sniper killer 3d app store game
-sniper killer 3d amazon appstore game
-sniper killer 3d microsoft store game
-sniper killer 3d data privacy and security
-sniper killer 3d ratings and reviews
-sniper killer 3d customer support
-sniper killer 3d updates and news
-sniper killer 3d blog and community
-sniper killer 3d social media accounts
-sniper killer 3d youtube channel
-sniper killer 3d twitch channel
-sniper killer 3d discord server
-sniper killer 3d reddit forum
-sniper killer 3d wiki and guide
-
Choose your sniper rifle and scope
-
Before each mission, you will have to select your weapon and scope. You can browse through the available weapons and see their stats, such as damage, range, stability, fire rate, and capacity. You can also see the available scopes and their zoom levels. Choose the weapon and scope that suit your mission and preference.
-
Aim and shoot your target
-
Once you start the mission, you will have to locate your target using your scope. You can use the mouse scroll wheel or the right mouse button to zoom in or out. You can also hold the left mouse button and drag to move your aim. You will see a red dot on your target, which indicates the bullet trajectory. You will have to adjust your aim according to the wind, distance, and movement of your target. You can use the wind indicator and the range finder to help you. When you are ready, press the space bar or click the left mouse button to shoot.
-
Complete the objectives and earn rewards
-
After you shoot your target, you will see a slow-motion replay of your shot. You will also see if you completed the mission objectives, such as killing the target, avoiding collateral damage, or achieving a headshot. You will earn coins and diamonds based on your performance. You can use these rewards to buy new weapons or upgrade your existing ones.
-
Why play Sniper Killer 3D?
-
Sniper Killer 3D is not just a game, it is an experience. Here are some reasons why you should play this game:
-
Improve your shooting skills and accuracy
-
Sniper Killer 3D is a game that will challenge your shooting skills and accuracy. You will have to be precise and patient as you aim and shoot your target. You will also have to be strategic and tactical as you choose your weapon and scope. You will learn how to handle different situations and scenarios as a sniper. You will become a better shooter as you play this game.
-
Enjoy stunning 3D graphics and animations
-
Sniper Killer 3D is a game that will impress you with its stunning 3D graphics and animations. You will see realistic environments, such as cities, mountains, deserts, and islands. You will also see lifelike characters, such as your targets, civilians, and enemies. You will feel the impact of your shots as you see blood splatter, bullet holes, and explosions. You will be amazed by the quality and detail of this game.
-
Challenge yourself with different levels of difficulty
-
Sniper Killer 3D is a game that will test your limits with different levels of difficulty. You can choose from easy, normal, hard, or expert modes depending on your skill level. You will face more challenging targets, objectives, and conditions as you progress through the game. You will also have to deal with limited ammo, time, and health. You will have to prove yourself as a sniper killer in this game.
-
Where to download Sniper Killer 3D?
-
Sniper Killer 3D is a game that is available on multiple platforms. Here are some options on where to download this game:
-
Available on Google Play Store for Android devices
-
If you have an Android device, such as a smartphone or tablet, you can download Sniper Killer 3D from the Google Play Store for free. You can also enjoy the game without any ads or in-app purchases. You can access the game from this link: [Sniper Killer 3D].
-
Compatible with web browsers for desktop computers
-
If you have a desktop computer, such as a PC or Mac, you can play Sniper Killer 3D on your web browser for free. You can also enjoy the game without any downloads or installations. You can access the game from this link: [Sniper Killer 3D].
-
Conclusion
-
Sniper Killer 3D is a game that will give you an unforgettable shooting experience. It is a game that combines realism, variety, and fun in one package. It is a game that will improve your shooting skills and accuracy, enjoy stunning 3D graphics and animations, and challenge yourself with different levels of difficulty. It is a game that is free to play and available on multiple platforms. It is a game that you should not miss.
-
If you are ready to become a sniper killer, download Sniper Killer 3D today and start your adventure!
-
Frequently Asked Questions
-
-
What are the minimum requirements to play Sniper Killer 3D?
-
The minimum requirements to play Sniper Killer 3D are: Android 4.4 or higher for Android devices; Windows XP/Vista/7/8/10 or Mac OS X for desktop computers; Chrome, Firefox, Safari, or Edge for web browsers.
-
How can I get more coins and diamonds in Sniper Killer 3D?
-
You can get more coins and diamonds in Sniper Killer 3D by: completing missions and objectives; watching video ads; rating and reviewing the game; inviting your friends to play the game.
-
How can I change the language of Sniper Killer 3D?
-
You can change the language of Sniper Killer 3D by: going to the settings menu; selecting the language option; choosing from the available languages, such as English, Spanish, French, German, Russian, Chinese, and more.
-
How can I contact the developers of Sniper Killer 3D?
-
You can contact the developers of Sniper Killer 3D by: sending an email to [sniperkiller3d@gmail.com]; visiting their website at [sniperkiller3d.com]; following them on social media platforms, such as Facebook, Twitter, Instagram, and YouTube.
-
What are some tips and tricks to play Sniper Killer 3D?
-
Some tips and tricks to play Sniper Killer 3D are: use the wind indicator and the range finder to adjust your aim; use the silencer and the night vision to increase your stealth; use the bullet time and the thermal vision to improve your accuracy; use the headshot and the explosive shot to deal more damage; use the zoom and the drag to find your target; use the space bar and the left-click button to shoot.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FIFA Chino APK disfruta de la emocin del ftbol con grficos increbles.md b/spaces/1phancelerku/anime-remove-background/FIFA Chino APK disfruta de la emocin del ftbol con grficos increbles.md
deleted file mode 100644
index 7a78027238e887413e72283091158fa0d9e73f90..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FIFA Chino APK disfruta de la emocin del ftbol con grficos increbles.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
-
FIFA Mobile Chino APK Updated: Everything You Need to Know
-
If you are a football fan and you like playing EA Sports games, you have surely heard of FIFA Mobile, the official mobile game that lets you build your own team, compete in different modes and events, and enjoy the thrill of the beautiful game. But did you know that there is an alternative version of this game, called FIFA Mobile Chino APK, which has some features and options that differ from the original version?
In this article, we will tell you everything you need to know about FIFA Mobile Chino APK: what it is, how to download and install it, what advantages and disadvantages it has, how it compares with FIFA Mobile APK, what users who have tried it think, and some frequently asked questions you may have. Keep reading and find out whether this game is for you!
-
What is FIFA Mobile Chino APK?
-
FIFA Mobile Chino APK is a modified version of FIFA Mobile, the official EA Sports game for Android and iOS mobile devices. This version is developed by Tencent, a Chinese company that holds the distribution rights for FIFA in China. It is therefore aimed mainly at the Chinese market, although it can also be played from other countries.
-
FIFA Mobile Chino APK has some features and options that differ from the original version of FIFA Mobile, such as:
-
Main features of FIFA Mobile Chino APK
-
-
It has a more colorful and animated interface and design, with more visual and sound effects.
-
It has more game modes available, such as career mode, tournament mode, training mode, challenge mode, and a World Cup mode.
-
It has more customization options for your team, such as the ability to choose the crest, the stadium, the ball, the kits, and the sponsors.
-
It has more special events and activities, such as the World Cup, the Champions League, the Chinese Super League, and other regional and international competitions.
-
It has more players and legends available to sign, including some exclusive to this version, such as the eternal icons.
-
It has a more generous and varied reward system that lets you earn coins, points, packs, players, and other items.
-
It has a more dynamic and competitive transfer market, where you can buy and sell players with other users.
-
-
How to download and install FIFA Mobile Chino APK
-
To download and install FIFA Mobile Chino APK on your Android device, follow these steps:
-
-
Go to a safe and trustworthy website that offers the FIFA Mobile Chino APK file. For example, you can use this link: .
-
Download the APK file to your device. You may have to enable the option to install apps from unknown sources in your device's security settings.
-
Open the APK file and follow the on-screen instructions to complete the installation.
-
Once installed, open the game and wait for the additional data it needs to run to be downloaded.
-
Enjoy FIFA Mobile Chino APK on your Android device.
-
-
To download and install FIFA Mobile Chino APK on your iOS device, follow these steps:
-
-
Go to a safe and trustworthy website that offers the FIFA Mobile Chino IPA file. For example, you can use this link: .
-
Download the IPA file to your device. You may need to use a file manager app such as iFile or Filza to move the file to the right folder.
-
Open the IPA file and follow the on-screen instructions to complete the installation.
-
Once installed, open the game and wait for the additional data it needs to run to be downloaded.
-
Enjoy FIFA Mobile Chino APK on your iOS device.
-
-
Advantages and disadvantages of FIFA Mobile Chino APK
-
Like any game, FIFA Mobile Chino APK has its pros and cons. Here is a summary of some of this game's advantages and disadvantages:
-
Advantages of FIFA Mobile Chino APK
-
-
It has more content and options than the original version of FIFA Mobile, which makes it more fun and varied.
-
It has better graphics and sound quality, which makes it more attractive and realistic.
-
It has greater compatibility with different devices and operating systems, which makes it more accessible and easy to use.
-
It has a more active and engaged community, which makes it more social and interactive.
-
-
Disadvantages of FIFA Mobile Chino APK
-
-
It is in a language other than Spanish, which can make the game harder to understand and enjoy.
-
It carries a higher risk of viruses or malware, since it is not an official version and is not available in the official app stores.
-
It consumes more resources and data, which can affect your device's performance and battery life.
-
It has a higher level of difficulty and competition, which can frustrate or discourage some players.
-
-
What is the difference between FIFA Mobile Chino APK and FIFA Mobile APK?
-
Now that you know what FIFA Mobile Chino APK is, you may be wondering how it differs from FIFA Mobile APK, the original version of the game. Well, although both games share the same concept and goal, there are some similarities and differences between them, which we explain below:
-
Similarities between the two games
-
-
Both games are developed by EA Sports, the leading company in sports games.
-
Both games let you build your own football team, with real players licensed by FIFA.
-
Both games offer different modes and events to play solo or with other users, such as season mode, versus mode, and attack mode.
-
Both games give you the chance to improve your skills and your strategy through training, formations, and tactics.
-
Both games deliver an immersive and exciting experience, with detailed graphics, fluid animations, and live commentary.
-
-
Differences between the two games
-
-
FIFA Mobile Chino APK has a more colorful and animated interface and design, while FIFA Mobile APK has a more sober and elegant interface and design.
-
FIFA Mobile Chino APK has more game modes available, such as career mode, tournament mode, and a World Cup mode, while FIFA Mobile APK has fewer game modes available, such as campaign mode and legends mode.
-
FIFA Mobile Chino APK has more customization options for your team, such as the ability to choose the crest, the stadium, the ball, the kits, and the sponsors, while FIFA Mobile APK has fewer customization options for your team, such as the ability to choose the name, the logo, and the colors.
-
FIFA Mobile Chino APK has more special events and activities, such as the World Cup, the Champions League, the Chinese Super League, and other regional and international competitions, while FIFA Mobile APK has fewer special events and activities, such as the Copa América, the Euros, the Premier League, and other national leagues.
-
FIFA Mobile Chino APK has more players and legends available to sign, including some exclusive to this version, such as the eternal icons, while FIFA Mobile APK has fewer players and legends available to sign, including some exclusive to that version, such as the prime icons.
-
FIFA Mobile Chino APK has a more generous and varied reward system that lets you earn coins, points, packs, players, and other items, while FIFA Mobile APK has a more limited and repetitive reward system that lets you earn coins, points, and packs.
-
FIFA Mobile Chino APK has a more dynamic and competitive transfer market, where you can buy and sell players with other users, while FIFA Mobile APK has a more static and controlled transfer market, where you can only buy and sell players with the system.
-
-
What do users think of FIFA Mobile Chino APK?
-
If you are wondering what users who have tried FIFA Mobile Chino APK think, we can tell you that opinions are all over the place. Some users are very happy with this game and prefer it to the original version of FIFA Mobile, while other users are very disappointed with it and consider it a cheap copy of FIFA Mobile. Here are some of the positive and negative reviews we have found online:
-
descargar fifa mobile chino apk
-fifa mobile chino apk 2023
-fifa mobile chino apk ultima version
-fifa mobile chino apk mod
-fifa mobile chino apk hack
-fifa mobile chino apk mega
-fifa mobile chino apk mediafire
-fifa mobile chino apk sin licencia
-fifa mobile chino apk android
-fifa mobile chino apk gratis
-fifa mobile chino apk full
-fifa mobile chino apk offline
-fifa mobile chino apk obb
-fifa mobile chino apk datos
-fifa mobile chino apk gameplay
-fifa mobile chino apk descargar gratis
-fifa mobile chino apk 2023 ultima version
-fifa mobile chino apk 2023 mod
-fifa mobile chino apk 2023 hack
-fifa mobile chino apk 2023 mega
-fifa mobile chino apk 2023 mediafire
-fifa mobile chino apk 2023 sin licencia
-fifa mobile chino apk 2023 android
-fifa mobile chino apk 2023 gratis
-fifa mobile chino apk 2023 full
-fifa mobile chino apk 2023 offline
-fifa mobile chino apk 2023 obb
-fifa mobile chino apk 2023 datos
-fifa mobile chino apk 2023 gameplay
-fifa mobile chino apk 2023 descargar gratis
-como descargar fifa mobile chino apk
-como instalar fifa mobile chino apk
-como jugar fifa mobile chino apk
-como actualizar fifa mobile chino apk
-como hackear fifa mobile chino apk
-como tener monedas en fifa mobile chino apk
-como tener jugadores en fifa mobile chino apk
-como tener licencia en fifa mobile chino apk
-como solucionar error en fifa mobile chino apk
-como quitar publicidad en fifa mobile chino apk
-
Positive reviews of FIFA Mobile Chino APK
-
-
"I love this game. It has much more variety and fun than the regular FIFA Mobile. The graphics are incredible and the game modes are very entertaining. I highly recommend it."
-
"It is the best mobile football game I have played. It has everything the original FIFA Mobile is missing. More modes, more players, more events, more rewards. It's amazing."
-
"I don't understand why EA Sports doesn't make this game for the whole world. It is much better than the FIFA Mobile we have in Europe. It has more options and more quality. It's wonderful."
-
-
Negative reviews of FIFA Mobile Chino APK
-
-
"I don't like this game at all. It is a cheap copy of the original FIFA Mobile. The graphics are ugly and the sounds are annoying. The game modes are boring and repetitive. I don't recommend it."
-
"It is a very bad game. It has a lot of bugs and problems. It closes on its own or freezes. The controls are bad and the gameplay is terrible. It's not worth it."
-
"I don't understand how anyone plays this. It's garbage. It has nothing to do with the original FIFA Mobile. It has no licenses or real players. It's a scam."
-
-
Conclusion
-
In conclusion, FIFA Mobile Chino APK is an alternative version of FIFA Mobile, the official EA Sports game for mobile devices. This version is developed by Tencent, a Chinese company that holds the distribution rights for FIFA in China.
-
FIFA Mobile Chino APK has some features and options that differ from the original version of FIFA Mobile, such as a more colorful interface, more game modes, more customization options for your team, more special events and activities, more players and legends available to sign, a more generous and varied reward system, and a more dynamic and competitive transfer market.
-
FIFA Mobile Chino APK also has some advantages and disadvantages, such as being in a language other than Spanish, a higher risk of viruses or malware, higher resource and data consumption, and a higher level of difficulty and competition.
-
FIFA Mobile Chino APK can be downloaded and installed on Android and iOS devices by following the simple steps explained in this article. However, keep in mind that it is not an official version and is not available in the official app stores, so you should take some precautions when using it.
-
FIFA Mobile Chino APK differs from FIFA Mobile APK, the original version of the game, in some respects that we have also detailed in this article. Both games have their similarities and differences, and which one to choose comes down to your taste and preference.
-
Why should you try FIFA Mobile Chino APK?
-
If you like football games and want to try something different from the original FIFA Mobile, you can give FIFA Mobile Chino APK a chance. This game offers you more content and options than the original version, which makes it more fun and varied. It also has better graphics and sound quality, which makes it more attractive and realistic. It has greater compatibility with different devices and operating systems, which makes it more accessible and easy to use. And on top of that, it has a more active and engaged community, which makes it more social and interactive.
-
What precautions should you take when using FIFA Mobile Chino APK?
-
If you decide to try FIFA Mobile Chino APK, you should keep some precautions in mind to avoid problems or inconveniences. Some of these precautions are:
-
-
Verify the source you download the APK or IPA file from and make sure it is safe and trustworthy. Avoid suspicious or fraudulent websites that may contain viruses or malware.
-
Respect the game's rules and terms of use, and do not cheat or abuse other users. Otherwise, you could be banned or penalized by the game's administrators.
-
Do not share your personal or financial data with anyone inside the game, and do not follow dubious links or promotions. You could fall victim to scams or identity theft.
-
Do not spend too much real money on the game, and do not obsess over getting the best players or the best rewards. Remember that it is a game for having fun and passing the time, not for competing or showing off.
-
-
Frequently asked questions about FIFA Mobile Chino APK
-
To wrap up this article, we will answer some of the frequently asked questions you may have about FIFA Mobile Chino APK. We hope they are useful and help you resolve your doubts.
-
Is FIFA Mobile Chino APK free?
-
Yes, FIFA Mobile Chino APK is free. You do not have to pay anything to download and install it on your device. However, the game has in-app purchases that let you obtain coins, points, or packs with real money. These purchases are optional and are not required to play.
-
Is FIFA Mobile Chino APK safe?
-
We cannot guarantee 100% that FIFA Mobile Chino APK is safe. Since it is not an official version and is not available in the official app stores, there is a risk that the APK or IPA file contains viruses or malware that could damage your device or compromise your security. For that reason, we recommend verifying the download source of the file and using an antivirus or a firewall to protect your device.
-
Is FIFA Mobile Chino APK in Spanish?
-
No, FIFA Mobile Chino APK is not in Spanish. The game's main language is Mandarin Chinese, although it also has some elements in English. There is no option to change the game's language to Spanish or any other language. So if you do not understand Chinese or English, you may have trouble playing or enjoying the game.
-
Can FIFA Mobile Chino APK be played with other users?
-
Yes, FIFA Mobile Chino APK can be played with other users. The game has a multiplayer mode that lets you face other players in online matches, whether in versus mode, attack mode, or tournament mode. You can also join a league or a club to cooperate or compete with other users, and take part in special events and activities that give you the chance to win rewards and recognition.
-
Is FIFA Mobile Chino APK updated frequently?
-
Yes, FIFA Mobile Chino APK is updated frequently. The game's developers usually release new versions of the APK or IPA file from time to time to add new features, options, events, and players and to fix bugs. So we recommend keeping an eye out for news and downloading the latest available version to enjoy the best gaming experience.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_unclip.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_unclip.py
deleted file mode 100644
index e87536bce3e43212288b4f7aa710b49dec97bf8d..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_unclip.py
+++ /dev/null
@@ -1,303 +0,0 @@
-# Copyright 2022 Kakao Brain and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import numpy as np
-import paddle
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput
-from .scheduling_utils import SchedulerMixin
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->UnCLIP
-class UnCLIPSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: paddle.Tensor
- pred_original_sample: Optional[paddle.Tensor] = None
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
-
- def alpha_bar(time_step):
- return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return paddle.to_tensor(betas, dtype=paddle.float32)
-
-
-class UnCLIPScheduler(SchedulerMixin, ConfigMixin):
- """
- This is a modified DDPM Scheduler specifically for the karlo unCLIP model.
-
- This scheduler has some minor variations in how it calculates the learned range variance and dynamically
- re-calculates betas based off the timesteps it is skipping.
-
- The scheduler also uses a slightly different step ratio when computing timesteps to use for inference.
-
- See [`~DDPMScheduler`] for more information on DDPM scheduling
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- variance_type (`str`):
- options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small_log`
- or `learned_range`.
- clip_sample (`bool`, default `True`):
- option to clip predicted sample between `-clip_sample_range` and `clip_sample_range` for numerical
- stability.
- clip_sample_range (`float`, default `1.0`):
- The range to clip the sample between. See `clip_sample`.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion process)
- or `sample` (directly predicting the noisy sample`)
- """
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- variance_type: str = "fixed_small_log",
- clip_sample: bool = True,
- clip_sample_range: Optional[float] = 1.0,
- prediction_type: str = "epsilon",
- ):
- # beta scheduler is "squaredcos_cap_v2"
- self.betas = betas_for_alpha_bar(num_train_timesteps)
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
- self.one = paddle.to_tensor(1.0)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # setable values
- self.num_inference_steps = None
- self.timesteps = paddle.to_tensor(np.arange(0, num_train_timesteps)[::-1].copy())
-
- self.variance_type = variance_type
-
- def scale_model_input(self, sample: paddle.Tensor, timestep: Optional[int] = None) -> paddle.Tensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`paddle.Tensor`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- return sample
-
- def set_timesteps(self, num_inference_steps: int):
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Note that this scheduler uses a slightly different step ratio than the other diffusers schedulers. The
- different step ratio is to mimic the original karlo implementation and does not affect the quality or accuracy
- of the results.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- self.num_inference_steps = num_inference_steps
- step_ratio = (self.config.num_train_timesteps - 1) / (self.num_inference_steps - 1)
- timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64)
- self.timesteps = paddle.to_tensor(timesteps)
-
- def _get_variance(self, t, prev_timestep=None, predicted_variance=None, variance_type=None):
- if prev_timestep is None:
- prev_timestep = t - 1
-
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.one
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- if prev_timestep == t - 1:
- beta = self.betas[t]
- else:
- beta = 1 - alpha_prod_t / alpha_prod_t_prev
-
- # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf)
- # and sample from it to get previous sample
- # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample
- variance = beta_prod_t_prev / beta_prod_t * beta
-
- if variance_type is None:
- variance_type = self.config.variance_type
-
- # hacks - were probably added for training stability
- if variance_type == "fixed_small_log":
- variance = paddle.log(paddle.clip(variance, min=1e-20))
- variance = paddle.exp(0.5 * variance)
- elif variance_type == "learned_range":
- # NOTE difference with DDPM scheduler
- min_log = variance.log()
- max_log = beta.log()
-
- frac = (predicted_variance + 1) / 2
- variance = frac * max_log + (1 - frac) * min_log
-
- return variance
-
- def step(
- self,
- model_output: paddle.Tensor,
- timestep: int,
- sample: paddle.Tensor,
- prev_timestep: Optional[int] = None,
- generator=None,
- return_dict: bool = True,
- ) -> Union[UnCLIPSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`paddle.Tensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`paddle.Tensor`):
- current instance of sample being created by diffusion process.
- prev_timestep (`int`, *optional*): The previous timestep to predict the previous sample at.
- Used to dynamically compute beta. If not given, `t-1` is used and the pre-computed beta is used.
- generator: random number generator.
- return_dict (`bool`): option for returning tuple rather than UnCLIPSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.UnCLIPSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.UnCLIPSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
-
- """
-
- t = timestep
-
- if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type == "learned_range":
- model_output, predicted_variance = model_output.split(
- [sample.shape[1], model_output.shape[1] - sample.shape[1]], axis=1
- )
- else:
- predicted_variance = None
-
- # 1. compute alphas, betas
- if prev_timestep is None:
- prev_timestep = t - 1
-
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.one
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- if prev_timestep == t - 1:
- beta = self.betas[t]
- alpha = self.alphas[t]
- else:
- beta = 1 - alpha_prod_t / alpha_prod_t_prev
- alpha = 1 - beta
-
- # 2. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
- if self.config.prediction_type == "epsilon":
- pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
- elif self.config.prediction_type == "sample":
- pred_original_sample = model_output
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon` or `sample`"
- " for the UnCLIPScheduler."
- )
-
- # 3. Clip "predicted x_0"
- if self.config.clip_sample:
- pred_original_sample = paddle.clip(
- pred_original_sample, -self.config.clip_sample_range, self.config.clip_sample_range
- )
-
- # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * beta) / beta_prod_t
- current_sample_coeff = alpha ** (0.5) * beta_prod_t_prev / beta_prod_t
-
- # 5. Compute predicted previous sample µ_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
-
- # 6. Add noise
- variance = 0
- if t > 0:
- variance_noise = paddle.randn(model_output.shape, generator=generator, dtype=model_output.dtype)
-
- variance = self._get_variance(
- t,
- predicted_variance=predicted_variance,
- prev_timestep=prev_timestep,
- )
-
- if self.variance_type == "fixed_small_log":
- variance = variance
- elif self.variance_type == "learned_range":
- variance = (0.5 * variance).exp()
- else:
- raise ValueError(
- f"variance_type given as {self.variance_type} must be one of `fixed_small_log` or `learned_range`"
- " for the UnCLIPScheduler."
- )
-
- variance = variance * variance_noise
-
- pred_prev_sample = pred_prev_sample + variance
-
- if not return_dict:
- return (pred_prev_sample,)
-
- return UnCLIPSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
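Note on the scheduler removed above: its noise schedule and posterior variance follow the standard DDPM formulas referenced in its docstrings. The following self-contained sketch (NumPy only; the function name and step index are chosen here for illustration and are not taken from the deleted module) reproduces the squared-cosine beta schedule and the posterior variance beta_tilde_t = (1 - alpha_bar_{t-1}) / (1 - alpha_bar_t) * beta_t, so the quantities named in the comments above can be checked independently.

import math
import numpy as np

def cosine_betas(num_steps: int, max_beta: float = 0.999) -> np.ndarray:
    # alpha_bar(t) follows the squared-cosine ("squaredcos_cap_v2") schedule.
    def alpha_bar(t: float) -> float:
        return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
    betas = []
    for i in range(num_steps):
        t1, t2 = i / num_steps, (i + 1) / num_steps
        betas.append(min(1.0 - alpha_bar(t2) / alpha_bar(t1), max_beta))
    return np.array(betas, dtype=np.float64)

betas = cosine_betas(1000)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)

# Posterior variance for one step t (DDPM formula (7)):
# beta_tilde_t = (1 - alpha_bar_{t-1}) / (1 - alpha_bar_t) * beta_t
t = 500  # illustrative timestep
beta_tilde = (1.0 - alphas_cumprod[t - 1]) / (1.0 - alphas_cumprod[t]) * betas[t]
print(f"beta_t={betas[t]:.6f}  posterior variance={beta_tilde:.6f}")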
diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py
deleted file mode 100644
index 201359c4e743aed285694668e13da6dd5a40b621..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py
+++ /dev/null
@@ -1,187 +0,0 @@
-
-from __future__ import absolute_import
-
-import sys
-import torch
-import torch.nn as nn
-import torch.nn.init as init
-from torch.autograd import Variable
-import numpy as np
-from pdb import set_trace as st
-from skimage import color
-from IPython import embed
-from model.stylegan.lpips import pretrained_networks as pn
-
-import model.stylegan.lpips as util
-
-def spatial_average(in_tens, keepdim=True):
- return in_tens.mean([2,3],keepdim=keepdim)
-
-def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W
- in_H = in_tens.shape[2]
- scale_factor = 1.*out_H/in_H
-
- return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens)
-
-# Learned perceptual metric
-class PNetLin(nn.Module):
- def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, version='0.1', lpips=True):
- super(PNetLin, self).__init__()
-
- self.pnet_type = pnet_type
- self.pnet_tune = pnet_tune
- self.pnet_rand = pnet_rand
- self.spatial = spatial
- self.lpips = lpips
- self.version = version
- self.scaling_layer = ScalingLayer()
-
- if(self.pnet_type in ['vgg','vgg16']):
- net_type = pn.vgg16
- self.chns = [64,128,256,512,512]
- elif(self.pnet_type=='alex'):
- net_type = pn.alexnet
- self.chns = [64,192,384,256,256]
- elif(self.pnet_type=='squeeze'):
- net_type = pn.squeezenet
- self.chns = [64,128,256,384,384,512,512]
- self.L = len(self.chns)
-
- self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)
-
- if(lpips):
- self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
- self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
- self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
- self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
- self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
- self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4]
- if(self.pnet_type=='squeeze'): # 7 layers for squeezenet
- self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout)
- self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout)
- self.lins+=[self.lin5,self.lin6]
-
- def forward(self, in0, in1, retPerLayer=False):
- # v0.0 - original release had a bug, where input was not scaled
- in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1)
- outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input)
- feats0, feats1, diffs = {}, {}, {}
-
- for kk in range(self.L):
- feats0[kk], feats1[kk] = util.normalize_tensor(outs0[kk]), util.normalize_tensor(outs1[kk])
- diffs[kk] = (feats0[kk]-feats1[kk])**2
-
- if(self.lpips):
- if(self.spatial):
- res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)]
- else:
- if(self.spatial):
- res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)]
-
- val = res[0]
- for l in range(1,self.L):
- val += res[l]
-
- if(retPerLayer):
- return (val, res)
- else:
- return val
-
-class ScalingLayer(nn.Module):
- def __init__(self):
- super(ScalingLayer, self).__init__()
- self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None])
- self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None])
-
- def forward(self, inp):
- return (inp - self.shift) / self.scale
-
-
-class NetLinLayer(nn.Module):
- ''' A single linear layer which does a 1x1 conv '''
- def __init__(self, chn_in, chn_out=1, use_dropout=False):
- super(NetLinLayer, self).__init__()
-
- layers = [nn.Dropout(),] if(use_dropout) else []
- layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False),]
- self.model = nn.Sequential(*layers)
-
-
-class Dist2LogitLayer(nn.Module):
- ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) '''
- def __init__(self, chn_mid=32, use_sigmoid=True):
- super(Dist2LogitLayer, self).__init__()
-
- layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True),]
- layers += [nn.LeakyReLU(0.2,True),]
- layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True),]
- layers += [nn.LeakyReLU(0.2,True),]
- layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True),]
- if(use_sigmoid):
- layers += [nn.Sigmoid(),]
- self.model = nn.Sequential(*layers)
-
- def forward(self,d0,d1,eps=0.1):
- return self.model.forward(torch.cat((d0,d1,d0-d1,d0/(d1+eps),d1/(d0+eps)),dim=1))
-
-class BCERankingLoss(nn.Module):
- def __init__(self, chn_mid=32):
- super(BCERankingLoss, self).__init__()
- self.net = Dist2LogitLayer(chn_mid=chn_mid)
- # self.parameters = list(self.net.parameters())
- self.loss = torch.nn.BCELoss()
-
- def forward(self, d0, d1, judge):
- per = (judge+1.)/2.
- self.logit = self.net.forward(d0,d1)
- return self.loss(self.logit, per)
-
-# L2, DSSIM metrics
-class FakeNet(nn.Module):
- def __init__(self, use_gpu=True, colorspace='Lab'):
- super(FakeNet, self).__init__()
- self.use_gpu = use_gpu
- self.colorspace=colorspace
-
-class L2(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert(in0.size()[0]==1) # currently only supports batchSize 1
-
- if(self.colorspace=='RGB'):
- (N,C,X,Y) = in0.size()
- value = torch.mean(torch.mean(torch.mean((in0-in1)**2,dim=1).view(N,1,X,Y),dim=2).view(N,1,1,Y),dim=3).view(N)
- return value
- elif(self.colorspace=='Lab'):
- value = util.l2(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
- util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
- ret_var = Variable( torch.Tensor((value,) ) )
- if(self.use_gpu):
- ret_var = ret_var.cuda()
- return ret_var
-
-class DSSIM(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert(in0.size()[0]==1) # currently only supports batchSize 1
-
- if(self.colorspace=='RGB'):
- value = util.dssim(1.*util.tensor2im(in0.data), 1.*util.tensor2im(in1.data), range=255.).astype('float')
- elif(self.colorspace=='Lab'):
- value = util.dssim(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
- util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
- ret_var = Variable( torch.Tensor((value,) ) )
- if(self.use_gpu):
- ret_var = ret_var.cuda()
- return ret_var
-
-def print_network(net):
- num_params = 0
- for param in net.parameters():
- num_params += param.numel()
- print('Network',net)
- print('Total number of parameters: %d' % num_params)
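
The `PNetLin` module above follows the standard LPIPS recipe: unit-normalize each backbone feature map along channels, square the per-channel differences, weight them with the learned 1x1 convolutions (`lin0`..`lin4`), spatially average, and sum over layers. Below is a minimal, self-contained sketch of that core computation, using random feature maps in place of a real VGG/AlexNet backbone and randomly initialized 1x1 convolutions; the local helpers are stand-ins for `util.normalize_tensor` and `spatial_average`, not the project's own code.

```python
import torch
import torch.nn as nn

def normalize_tensor(feat: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    # unit-normalize along the channel dimension (what util.normalize_tensor is expected to do)
    norm = torch.sqrt(torch.sum(feat ** 2, dim=1, keepdim=True))
    return feat / (norm + eps)

def spatial_average(x: torch.Tensor, keepdim: bool = True) -> torch.Tensor:
    return x.mean([2, 3], keepdim=keepdim)

# toy "backbone" activations for two images at three layers (batch size 1)
chns = [64, 128, 256]
feats0 = [torch.randn(1, c, 32, 32) for c in chns]
feats1 = [torch.randn(1, c, 32, 32) for c in chns]

# randomly initialized 1x1 convs standing in for the learned lin0..lin2 layers
lins = [nn.Conv2d(c, 1, kernel_size=1, bias=False) for c in chns]

with torch.no_grad():
    val = sum(
        spatial_average(lin((normalize_tensor(f0) - normalize_tensor(f1)) ** 2))
        for f0, f1, lin in zip(feats0, feats1, lins)
    )
print(val.shape)  # torch.Size([1, 1, 1, 1]): one perceptual distance per image
```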
diff --git a/spaces/A00001/bingothoo/src/components/chat-attachments.tsx b/spaces/A00001/bingothoo/src/components/chat-attachments.tsx
deleted file mode 100644
index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/chat-attachments.tsx
+++ /dev/null
@@ -1,37 +0,0 @@
-import Image from 'next/image'
-import ClearIcon from '@/assets/images/clear.svg'
-import RefreshIcon from '@/assets/images/refresh.svg'
-import { FileItem } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-import { useBing } from '@/lib/hooks/use-bing'
-
-type ChatAttachmentsProps = Pick<ReturnType<typeof useBing>, 'attachmentList' | 'setAttachmentList' | 'uploadImage'>
-
-export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) {
-  return attachmentList.length ? (
-    <div className="attachment-list">
-      {attachmentList.map((file: FileItem) => (
-        <div className={cn('attachment-item', file.status)} key={file.url}>
-          {file.status === 'loading' && (
-            <div className="loading">
-              <div className="spinner" />
-            </div>)
-          }
-          {file.status !== 'error' && (
-            <div className="thumbnail">
-              <img src={file.url} alt="attachment" />
-            </div>)
-          }
-          {file.status === 'error' && (
-            <div className="retry">
-              <Image alt="refresh" src={RefreshIcon} width={18} onClick={() => uploadImage(file.url)} />
-            </div>
-          )}
-          <Image
-            alt="clear"
-            src={ClearIcon}
-            width={16}
-            onClick={() => setAttachmentList(attachmentList.filter(item => item.url !== file.url))}
-          />
-        </div>
-      ))}
-    </div>
-  ) : null
-}
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/rvm.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/rvm.py
deleted file mode 100644
index 028324529531dd7ee97210dfd890fed717447be0..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/rvm.py
+++ /dev/null
@@ -1,106 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-import torch
-from torch import nn
-import torchaudio
-
-
-def db_to_scale(volume: tp.Union[float, torch.Tensor]):
- return 10 ** (volume / 20)
-
-
-def scale_to_db(scale: torch.Tensor, min_volume: float = -120):
- min_scale = db_to_scale(min_volume)
- return 20 * torch.log10(scale.clamp(min=min_scale))
-
-
-class RelativeVolumeMel(nn.Module):
- """Relative volume melspectrogram measure.
-
-    Computes a measure of distance over two mel spectrograms that is interpretable in terms
-    of decibels. Given `x_ref` and `x_est`, two waveforms of shape `[*, T]`, it will
-    first renormalize both by the volume of the ground truth `x_ref`.
-
-    Then it computes the mel spectrograms `z_ref` and `z_est` and computes the volume of the difference
- relative to the volume of `z_ref` for each time-frequency bin. It further adds some limits, e.g.
- clamping the values between -25 and 25 dB (controlled by `min_relative_volume` and `max_relative_volume`)
- with the goal of avoiding the loss being dominated by parts where the reference is almost silent.
- Indeed, volumes in dB can take unbounded values both towards -oo and +oo, which can make the final
- average metric harder to interpret. Besides, anything below -30 dB of attenuation would sound extremely
- good (for a neural network output, although sound engineers typically aim for much lower attenuations).
- Similarly, anything above +30 dB would just be completely missing the target, and there is no point
- in measuring by exactly how much it missed it. -25, 25 is a more conservative range, but also more
- in line with what neural nets currently can achieve.
-
- For instance, a Relative Volume Mel (RVM) score of -10 dB means that on average, the delta between
- the target and reference mel-spec is 10 dB lower than the reference mel-spec value.
-
-    The metric can be aggregated over a given frequency band in order to get different insights for
-    different regions of the spectrum. `num_aggregated_bands` controls the number of bands.
-
- ..Warning:: While this function is optimized for interpretability, nothing was done to ensure it
- is numerically stable when computing its gradient. We thus advise against using it as a training loss.
-
- Args:
- sample_rate (int): Sample rate of the input audio.
- n_mels (int): Number of mel bands to use.
- n_fft (int): Number of frequency bins for the STFT.
- hop_length (int): Hop length of the STFT and the mel-spectrogram.
- min_relative_volume (float): The error `z_ref - z_est` volume is given relative to
- the volume of `z_ref`. If error is smaller than -25 dB of `z_ref`, then it is clamped.
- max_relative_volume (float): Same as `min_relative_volume` but clamping if the error is larger than that.
- max_initial_gain (float): When rescaling the audio at the very beginning, we will limit the gain
- to that amount, to avoid rescaling near silence. Given in dB.
- min_activity_volume (float): When computing the reference level from `z_ref`, will clamp low volume
- bins to that amount. This is effectively our "zero" level for the reference mel-spectrogram,
- and anything below that will be considered equally.
- num_aggregated_bands (int): Number of bands to keep when computing the average RVM value.
- For instance, a value of 3 would give 3 scores, roughly for low, mid and high freqs.
- """
- def __init__(self, sample_rate: int = 24000, n_mels: int = 80, n_fft: int = 512,
- hop_length: int = 128, min_relative_volume: float = -25,
- max_relative_volume: float = 25, max_initial_gain: float = 25,
- min_activity_volume: float = -25,
- num_aggregated_bands: int = 4) -> None:
- super().__init__()
- self.melspec = torchaudio.transforms.MelSpectrogram(
- n_mels=n_mels, n_fft=n_fft, hop_length=hop_length,
- normalized=True, sample_rate=sample_rate, power=2)
- self.min_relative_volume = min_relative_volume
- self.max_relative_volume = max_relative_volume
- self.max_initial_gain = max_initial_gain
- self.min_activity_volume = min_activity_volume
- self.num_aggregated_bands = num_aggregated_bands
-
- def forward(self, estimate: torch.Tensor, ground_truth: torch.Tensor) -> tp.Dict[str, torch.Tensor]:
- """Compute RVM metric between estimate and reference samples.
-
- Args:
- estimate (torch.Tensor): Estimate sample.
- ground_truth (torch.Tensor): Reference sample.
-
- Returns:
- dict[str, torch.Tensor]: Metrics with keys `rvm` for the overall average, and `rvm_{k}`
- for the RVM over the k-th band (k=0..num_aggregated_bands - 1).
- """
- min_scale = db_to_scale(-self.max_initial_gain)
- std = ground_truth.pow(2).mean().sqrt().clamp(min=min_scale)
- z_gt = self.melspec(ground_truth / std).sqrt()
- z_est = self.melspec(estimate / std).sqrt()
-
- delta = z_gt - z_est
- ref_db = scale_to_db(z_gt, self.min_activity_volume)
- delta_db = scale_to_db(delta.abs(), min_volume=-120)
- relative_db = (delta_db - ref_db).clamp(self.min_relative_volume, self.max_relative_volume)
- dims = list(range(relative_db.dim()))
- dims.remove(dims[-2])
- losses_per_band = relative_db.mean(dim=dims)
- aggregated = [chunk.mean() for chunk in losses_per_band.chunk(self.num_aggregated_bands, dim=0)]
- metrics = {f'rvm_{index}': value for index, value in enumerate(aggregated)}
- metrics['rvm'] = losses_per_band.mean()
- return metrics
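
A short usage sketch for `RelativeVolumeMel`, assuming the `audiocraft` package laid out above (with `torchaudio` installed) is importable; the waveforms here are random placeholders rather than real audio.

```python
import torch
from audiocraft.metrics.rvm import RelativeVolumeMel  # module path as in the file above

rvm = RelativeVolumeMel(sample_rate=24000, num_aggregated_bands=4)

reference = torch.randn(1, 24000)                           # 1 s of "ground truth" audio at 24 kHz
estimate = reference + 0.05 * torch.randn_like(reference)   # slightly degraded copy

metrics = rvm(estimate, reference)
print(metrics['rvm'])    # overall relative volume in dB (more negative = closer to the reference)
print(metrics['rvm_0'])  # same measure restricted to the lowest of the 4 frequency bands
```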
diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/readme.md b/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/readme.md
deleted file mode 100644
index 7cffcfc72069ff9a098d292f9e37035031e19081..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/readme.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Code from [rosinality-stylegan2-pytorch-cpu](https://github.com/senior-sigan/rosinality-stylegan2-pytorch-cpu)
-
-Scripts to convert rosinality/stylegan2-pytorch models to a CPU-compatible format
-
-If you would like to use the CPU for testing, or if you have a problem with the C++ extensions (fused and upfirdn2d), please make the following changes:
-
-Change `model.stylegan.op` to `model.stylegan.op_cpu`
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/util.py#L14
-
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/simple_augment.py#L12
-
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/stylegan/model.py#L11
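
In practice the change amounts to swapping one import path per file. A hedged sketch of what that looks like, assuming `model.stylegan.op_cpu` re-exports the same names as the CUDA-backed package (the readme only names `fused` and `upfirdn2d` explicitly):

```python
# Before: C++/CUDA extension ops (requires a working build toolchain and a GPU)
# from model.stylegan.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d

# After: pure-PyTorch fallbacks usable on CPU
from model.stylegan.op_cpu import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
```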
diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/models.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/models.py
deleted file mode 100644
index 3cf5456d1ee9a26a4afe58cea2b11ad78033e01e..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/models.py
+++ /dev/null
@@ -1,951 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchlibrosa.stft import Spectrogram, LogmelFilterBank
-from torchlibrosa.augmentation import SpecAugmentation
-
-from audio_infer.pytorch.pytorch_utils import do_mixup, interpolate, pad_framewise_output
-import os
-import sys
-import math
-import numpy as np
-
-from torch.nn.parameter import Parameter
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-import warnings
-from functools import partial
-#from mmdet.models.builder import BACKBONES
-from mmdet.utils import get_root_logger
-from mmcv.runner import load_checkpoint
-os.environ['TORCH_HOME'] = '../pretrained_models'
-from copy import deepcopy
-from timm.models.helpers import load_pretrained
-from torch.cuda.amp import autocast
-from collections import OrderedDict
-import io
-import re
-from mmcv.runner import _load_checkpoint, load_state_dict
-import mmcv.runner
-import copy
-import random
-from einops import rearrange
-from einops.layers.torch import Rearrange, Reduce
-from torch import nn, einsum
-
-
-def load_checkpoint(model,
- filename,
- map_location=None,
- strict=False,
- logger=None,
- revise_keys=[(r'^module\.', '')]):
- """Load checkpoint from a file or URI.
-
- Args:
- model (Module): Module to load checkpoint.
- filename (str): Accept local filepath, URL, ``torchvision://xxx``,
- ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
- details.
- map_location (str): Same as :func:`torch.load`.
-        strict (bool): Whether to strictly enforce that the keys in the checkpoint
-            match the keys of the model's state_dict.
- logger (:mod:`logging.Logger` or None): The logger for error message.
- revise_keys (list): A list of customized keywords to modify the
- state_dict in checkpoint. Each item is a (pattern, replacement)
- pair of the regular expression operations. Default: strip
- the prefix 'module.' by [(r'^module\\.', '')].
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
-
- checkpoint = _load_checkpoint(filename, map_location, logger)
- new_proj = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(4, 4), padding=(2, 2))
- new_proj.weight = torch.nn.Parameter(torch.sum(checkpoint['patch_embed1.proj.weight'], dim=1).unsqueeze(1))
- checkpoint['patch_embed1.proj.weight'] = new_proj.weight
- # OrderedDict is a subclass of dict
- if not isinstance(checkpoint, dict):
- raise RuntimeError(
- f'No state_dict found in checkpoint file {filename}')
- # get state_dict from checkpoint
- if 'state_dict' in checkpoint:
- state_dict = checkpoint['state_dict']
- else:
- state_dict = checkpoint
-
- # strip prefix of state_dict
- metadata = getattr(state_dict, '_metadata', OrderedDict())
- for p, r in revise_keys:
- state_dict = OrderedDict(
- {re.sub(p, r, k): v
- for k, v in state_dict.items()})
- state_dict = OrderedDict({k.replace('backbone.',''):v for k,v in state_dict.items()})
- # Keep metadata in state_dict
- state_dict._metadata = metadata
-
- # load state_dict
- load_state_dict(model, state_dict, strict, logger)
- return checkpoint
-
-def init_layer(layer):
- """Initialize a Linear or Convolutional layer. """
- nn.init.xavier_uniform_(layer.weight)
-
- if hasattr(layer, 'bias'):
- if layer.bias is not None:
- layer.bias.data.fill_(0.)
-
-
-def init_bn(bn):
- """Initialize a Batchnorm layer. """
- bn.bias.data.fill_(0.)
- bn.weight.data.fill_(1.)
-
-
-
-
-class TimeShift(nn.Module):
- def __init__(self, mean, std):
- super().__init__()
- self.mean = mean
- self.std = std
-
- def forward(self, x):
- if self.training:
- shift = torch.empty(1).normal_(self.mean, self.std).int().item()
- x = torch.roll(x, shift, dims=2)
- return x
-
-class LinearSoftPool(nn.Module):
- """LinearSoftPool
-    Linear softmax: takes logits and returns a probability close to the actual maximum value.
- Taken from the paper:
- A Comparison of Five Multiple Instance Learning Pooling Functions for Sound Event Detection with Weak Labeling
- https://arxiv.org/abs/1810.09050
- """
- def __init__(self, pooldim=1):
- super().__init__()
- self.pooldim = pooldim
-
- def forward(self, logits, time_decision):
- return (time_decision**2).sum(self.pooldim) / time_decision.sum(
- self.pooldim)
-
-class PVT(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num):
-
- super(PVT, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- self.time_shift = TimeShift(0, 10)
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
- self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001,
- fdim=64,
- patch_size=7,
- stride=4,
- in_chans=1,
- num_classes=classes_num,
- embed_dims=[64, 128, 320, 512],
- depths=[3, 4, 6, 3],
- num_heads=[1, 2, 5, 8],
- mlp_ratios=[8, 8, 4, 4],
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- drop_path_rate=0.1,
- sr_ratios=[8, 4, 2, 1],
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- num_stages=4,
- #pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth'
- )
- #self.temp_pool = LinearSoftPool()
- self.avgpool = nn.AdaptiveAvgPool1d(1)
- self.fc_audioset = nn.Linear(512, classes_num, bias=True)
-
- self.init_weights()
-
- def init_weights(self):
- init_bn(self.bn0)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None):
- """Input: (batch_size, times_steps, freq_bins)"""
-
- interpolate_ratio = 32
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
- frames_num = x.shape[2]
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.time_shift(x)
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
- #print(x.shape) #torch.Size([10, 1, 1001, 64])
- x = self.pvt_transformer(x)
- #print(x.shape) #torch.Size([10, 800, 128])
- x = torch.mean(x, dim=3)
-
- x = x.transpose(1, 2).contiguous()
- framewise_output = torch.sigmoid(self.fc_audioset(x))
- #clipwise_output = torch.mean(framewise_output, dim=1)
- #clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1)
- x = framewise_output.transpose(1, 2).contiguous()
- x = self.avgpool(x)
- clipwise_output = torch.flatten(x, 1)
- #print(framewise_output.shape) #torch.Size([10, 100, 17])
- framewise_output = interpolate(framewise_output, interpolate_ratio)
- #framewise_output = framewise_output[:,:1000,:]
- #framewise_output = pad_framewise_output(framewise_output, frames_num)
- output_dict = {'framewise_output': framewise_output,
- 'clipwise_output': clipwise_output}
-
- return output_dict
-
-class PVT2(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num):
-
- super(PVT2, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- self.time_shift = TimeShift(0, 10)
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
- self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001,
- fdim=64,
- patch_size=7,
- stride=4,
- in_chans=1,
- num_classes=classes_num,
- embed_dims=[64, 128, 320, 512],
- depths=[3, 4, 6, 3],
- num_heads=[1, 2, 5, 8],
- mlp_ratios=[8, 8, 4, 4],
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- drop_path_rate=0.1,
- sr_ratios=[8, 4, 2, 1],
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- num_stages=4,
- pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth'
- )
- #self.temp_pool = LinearSoftPool()
- self.fc_audioset = nn.Linear(512, classes_num, bias=True)
-
- self.init_weights()
-
- def init_weights(self):
- init_bn(self.bn0)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None):
- """Input: (batch_size, times_steps, freq_bins)"""
-
- interpolate_ratio = 32
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
- frames_num = x.shape[2]
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- #x = self.time_shift(x)
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
- #print(x.shape) #torch.Size([10, 1, 1001, 64])
- x = self.pvt_transformer(x)
- #print(x.shape) #torch.Size([10, 800, 128])
- x = torch.mean(x, dim=3)
-
- x = x.transpose(1, 2).contiguous()
- framewise_output = torch.sigmoid(self.fc_audioset(x))
- clipwise_output = torch.mean(framewise_output, dim=1)
- #clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1)
- #print(framewise_output.shape) #torch.Size([10, 100, 17])
- framewise_output = interpolate(framewise_output, interpolate_ratio)
- #framewise_output = framewise_output[:,:1000,:]
- #framewise_output = pad_framewise_output(framewise_output, frames_num)
- output_dict = {'framewise_output': framewise_output,
- 'clipwise_output': clipwise_output}
-
- return output_dict
-
-class PVT_2layer(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num):
-
- super(PVT_2layer, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- self.time_shift = TimeShift(0, 10)
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
- self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001,
- fdim=64,
- patch_size=7,
- stride=4,
- in_chans=1,
- num_classes=classes_num,
- embed_dims=[64, 128],
- depths=[3, 4],
- num_heads=[1, 2],
- mlp_ratios=[8, 8],
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- drop_path_rate=0.1,
- sr_ratios=[8, 4],
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- num_stages=2,
- pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth'
- )
- #self.temp_pool = LinearSoftPool()
- self.avgpool = nn.AdaptiveAvgPool1d(1)
- self.fc_audioset = nn.Linear(128, classes_num, bias=True)
-
- self.init_weights()
-
- def init_weights(self):
- init_bn(self.bn0)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None):
- """Input: (batch_size, times_steps, freq_bins)"""
-
- interpolate_ratio = 8
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
- frames_num = x.shape[2]
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.time_shift(x)
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
- #print(x.shape) #torch.Size([10, 1, 1001, 64])
- x = self.pvt_transformer(x)
- #print(x.shape) #torch.Size([10, 800, 128])
- x = torch.mean(x, dim=3)
-
- x = x.transpose(1, 2).contiguous()
- framewise_output = torch.sigmoid(self.fc_audioset(x))
- #clipwise_output = torch.mean(framewise_output, dim=1)
- #clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1)
- x = framewise_output.transpose(1, 2).contiguous()
- x = self.avgpool(x)
- clipwise_output = torch.flatten(x, 1)
- #print(framewise_output.shape) #torch.Size([10, 100, 17])
- framewise_output = interpolate(framewise_output, interpolate_ratio)
- #framewise_output = framewise_output[:,:1000,:]
- #framewise_output = pad_framewise_output(framewise_output, frames_num)
- output_dict = {'framewise_output': framewise_output,
- 'clipwise_output': clipwise_output}
-
- return output_dict
-
-class PVT_lr(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num):
-
- super(PVT_lr, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- self.time_shift = TimeShift(0, 10)
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
- self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001,
- fdim=64,
- patch_size=7,
- stride=4,
- in_chans=1,
- num_classes=classes_num,
- embed_dims=[64, 128, 320, 512],
- depths=[3, 4, 6, 3],
- num_heads=[1, 2, 5, 8],
- mlp_ratios=[8, 8, 4, 4],
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- drop_path_rate=0.1,
- sr_ratios=[8, 4, 2, 1],
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- num_stages=4,
- pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth'
- )
- self.temp_pool = LinearSoftPool()
- self.fc_audioset = nn.Linear(512, classes_num, bias=True)
-
- self.init_weights()
-
- def init_weights(self):
- init_bn(self.bn0)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None):
- """Input: (batch_size, times_steps, freq_bins)"""
-
- interpolate_ratio = 32
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
- frames_num = x.shape[2]
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.time_shift(x)
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
- #print(x.shape) #torch.Size([10, 1, 1001, 64])
- x = self.pvt_transformer(x)
- #print(x.shape) #torch.Size([10, 800, 128])
- x = torch.mean(x, dim=3)
-
- x = x.transpose(1, 2).contiguous()
- framewise_output = torch.sigmoid(self.fc_audioset(x))
- clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1)
- #print(framewise_output.shape) #torch.Size([10, 100, 17])
- framewise_output = interpolate(framewise_output, interpolate_ratio)
- #framewise_output = framewise_output[:,:1000,:]
- #framewise_output = pad_framewise_output(framewise_output, frames_num)
- output_dict = {'framewise_output': framewise_output,
- 'clipwise_output': clipwise_output}
-
- return output_dict
-
-
-class PVT_nopretrain(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num):
-
- super(PVT_nopretrain, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- self.time_shift = TimeShift(0, 10)
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
- self.pvt_transformer = PyramidVisionTransformerV2(tdim=1001,
- fdim=64,
- patch_size=7,
- stride=4,
- in_chans=1,
- num_classes=classes_num,
- embed_dims=[64, 128, 320, 512],
- depths=[3, 4, 6, 3],
- num_heads=[1, 2, 5, 8],
- mlp_ratios=[8, 8, 4, 4],
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- drop_path_rate=0.1,
- sr_ratios=[8, 4, 2, 1],
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- num_stages=4,
- #pretrained='https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b2.pth'
- )
- self.temp_pool = LinearSoftPool()
- self.fc_audioset = nn.Linear(512, classes_num, bias=True)
-
- self.init_weights()
-
- def init_weights(self):
- init_bn(self.bn0)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None):
- """Input: (batch_size, times_steps, freq_bins)"""
-
- interpolate_ratio = 32
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
- frames_num = x.shape[2]
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.time_shift(x)
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
- #print(x.shape) #torch.Size([10, 1, 1001, 64])
- x = self.pvt_transformer(x)
- #print(x.shape) #torch.Size([10, 800, 128])
- x = torch.mean(x, dim=3)
-
- x = x.transpose(1, 2).contiguous()
- framewise_output = torch.sigmoid(self.fc_audioset(x))
- clipwise_output = self.temp_pool(x, framewise_output).clamp(1e-7, 1.).squeeze(1)
- #print(framewise_output.shape) #torch.Size([10, 100, 17])
- framewise_output = interpolate(framewise_output, interpolate_ratio)
- framewise_output = framewise_output[:,:1000,:]
- #framewise_output = pad_framewise_output(framewise_output, frames_num)
- output_dict = {'framewise_output': framewise_output,
- 'clipwise_output': clipwise_output}
-
- return output_dict
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0., linear=False):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.dwconv = DWConv(hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
- self.linear = linear
- if self.linear:
- self.relu = nn.ReLU()
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
- elif isinstance(m, nn.Conv2d):
- fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- fan_out //= m.groups
- m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, x, H, W):
- x = self.fc1(x)
- if self.linear:
- x = self.relu(x)
- x = self.dwconv(x, H, W)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., sr_ratio=1, linear=False):
- super().__init__()
- assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}."
-
- self.dim = dim
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- self.q = nn.Linear(dim, dim, bias=qkv_bias)
- self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- self.linear = linear
- self.sr_ratio = sr_ratio
- if not linear:
- if sr_ratio > 1:
- self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
- self.norm = nn.LayerNorm(dim)
- else:
- self.pool = nn.AdaptiveAvgPool2d(7)
- self.sr = nn.Conv2d(dim, dim, kernel_size=1, stride=1)
- self.norm = nn.LayerNorm(dim)
- self.act = nn.GELU()
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
- elif isinstance(m, nn.Conv2d):
- fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- fan_out //= m.groups
- m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, x, H, W):
- B, N, C = x.shape
- q = self.q(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
-
- if not self.linear:
- if self.sr_ratio > 1:
- x_ = x.permute(0, 2, 1).reshape(B, C, H, W)
- x_ = self.sr(x_).reshape(B, C, -1).permute(0, 2, 1)
- x_ = self.norm(x_)
- kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- else:
- kv = self.kv(x).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- else:
- x_ = x.permute(0, 2, 1).reshape(B, C, H, W)
- x_ = self.sr(self.pool(x_)).reshape(B, C, -1).permute(0, 2, 1)
- x_ = self.norm(x_)
- x_ = self.act(x_)
- kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- k, v = kv[0], kv[1]
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
-
- return x
-
-
-class Pooling(nn.Module):
- """
- Implementation of pooling for PoolFormer
- --pool_size: pooling size
- """
- def __init__(self, pool_size=3):
- super().__init__()
- self.pool = nn.AvgPool2d(
- pool_size, stride=1, padding=pool_size//2, count_include_pad=False)
-
- def forward(self, x):
- return self.pool(x) - x
-
-class Block(nn.Module):
-
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, sr_ratio=1, linear=False):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop, sr_ratio=sr_ratio, linear=linear)
- #self.norm3 = norm_layer(dim)
- #self.token_mixer = Pooling(pool_size=3)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop, linear=linear)
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
- elif isinstance(m, nn.Conv2d):
- fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- fan_out //= m.groups
- m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, x, H, W):
- x = x + self.drop_path(self.attn(self.norm1(x), H, W))
- x = x + self.drop_path(self.mlp(self.norm2(x), H, W))
- return x
-
-
-class OverlapPatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
-
- def __init__(self, tdim, fdim, patch_size=7, stride=4, in_chans=3, embed_dim=768):
- super().__init__()
- img_size = (tdim, fdim)
- patch_size = to_2tuple(patch_size)
-
- self.img_size = img_size
- self.patch_size = patch_size
- self.H, self.W = img_size[0] // stride, img_size[1] // stride
- self.num_patches = self.H * self.W
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=stride,
- padding=(patch_size[0] // 3, patch_size[1] // 3))
- self.norm = nn.LayerNorm(embed_dim)
-
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
- elif isinstance(m, nn.Conv2d):
- fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- fan_out //= m.groups
- m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, x):
- x = self.proj(x)
- _, _, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
-
- return x, H, W
-
-
-class PyramidVisionTransformerV2(nn.Module):
- def __init__(self, tdim=1001, fdim=64, patch_size=16, stride=4, in_chans=3, num_classes=1000, embed_dims=[64, 128, 256, 512],
- num_heads=[1, 2, 4, 8], mlp_ratios=[4, 4, 4, 4], qkv_bias=False, qk_scale=None, drop_rate=0.,
- attn_drop_rate=0., drop_path_rate=0.1, norm_layer=partial(nn.LayerNorm, eps=1e-6), depths=[3, 4, 6, 3],
- sr_ratios=[8, 4, 2, 1], num_stages=2, linear=False, pretrained=None):
- super().__init__()
- # self.num_classes = num_classes
- self.depths = depths
- self.num_stages = num_stages
- self.linear = linear
-
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
- cur = 0
-
- for i in range(num_stages):
- patch_embed = OverlapPatchEmbed(tdim=tdim if i == 0 else tdim // (2 ** (i + 1)),
- fdim=fdim if i == 0 else tdim // (2 ** (i + 1)),
- patch_size=7 if i == 0 else 3,
- stride=stride if i == 0 else 2,
- in_chans=in_chans if i == 0 else embed_dims[i - 1],
- embed_dim=embed_dims[i])
- block = nn.ModuleList([Block(
- dim=embed_dims[i], num_heads=num_heads[i], mlp_ratio=mlp_ratios[i], qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[cur + j], norm_layer=norm_layer,
- sr_ratio=sr_ratios[i], linear=linear)
- for j in range(depths[i])])
- norm = norm_layer(embed_dims[i])
- cur += depths[i]
-
- setattr(self, f"patch_embed{i + 1}", patch_embed)
- setattr(self, f"block{i + 1}", block)
- setattr(self, f"norm{i + 1}", norm)
- #self.n = nn.Linear(125, 250, bias=True)
- # classification head
- # self.head = nn.Linear(embed_dims[3], num_classes) if num_classes > 0 else nn.Identity()
- self.apply(self._init_weights)
- self.init_weights(pretrained)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
- elif isinstance(m, nn.Conv2d):
- fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- fan_out //= m.groups
- m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
- if m.bias is not None:
- m.bias.data.zero_()
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger)
-
- def freeze_patch_emb(self):
- self.patch_embed1.requires_grad = False
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'pos_embed1', 'pos_embed2', 'pos_embed3', 'pos_embed4', 'cls_token'} # has pos_embed may be better
-
- def get_classifier(self):
- return self.head
-
- def reset_classifier(self, num_classes, global_pool=''):
- self.num_classes = num_classes
- self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- def forward_features(self, x):
- B = x.shape[0]
-
- for i in range(self.num_stages):
- patch_embed = getattr(self, f"patch_embed{i + 1}")
- block = getattr(self, f"block{i + 1}")
- norm = getattr(self, f"norm{i + 1}")
- x, H, W = patch_embed(x)
- #print(x.shape)
- for blk in block:
- x = blk(x, H, W)
- #print(x.shape)
- x = norm(x)
- #if i != self.num_stages - 1:
- x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous()
- #print(x.shape)
- return x
-
- def forward(self, x):
- x = self.forward_features(x)
- # x = self.head(x)
-
- return x
-
-class DWConv(nn.Module):
- def __init__(self, dim=768):
- super(DWConv, self).__init__()
- self.dwconv = nn.Conv2d(dim, dim, 3, 1, 1, bias=True, groups=dim)
-
- def forward(self, x, H, W):
- B, N, C = x.shape
- x = x.transpose(1, 2).view(B, C, H, W)
- x = self.dwconv(x)
- x = x.flatten(2).transpose(1, 2)
-
- return x
-
-
-def _conv_filter(state_dict, patch_size=16):
- """ convert patch embedding weight from manual patchify + linear proj to conv"""
- out_dict = {}
- for k, v in state_dict.items():
- if 'patch_embed.proj.weight' in k:
- v = v.reshape((v.shape[0], 3, patch_size, patch_size))
- out_dict[k] = v
-
- return out_dict
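
A usage sketch for the `PVT` tagger above, assuming the module is importable as `audio_infer.pytorch.models` (as its own imports suggest) and that its heavyweight dependencies (torchlibrosa, timm, mmcv, mmdet, einops) are installed. The constructor arguments are illustrative AudioSet-style settings chosen to match the hard-coded `tdim=1001` (32 kHz audio, 320-sample hop, 10-second clips), not values taken from the project's configs.

```python
import torch
from audio_infer.pytorch.models import PVT  # module path assumed from the imports above

# illustrative settings: 32 kHz, 1024-sample window, 320-sample hop, 527 AudioSet classes
model = PVT(sample_rate=32000, window_size=1024, hop_size=320,
            mel_bins=64, fmin=50, fmax=14000, classes_num=527)
model.eval()

waveform = torch.randn(2, 32000 * 10)  # two 10-second clips of raw audio
with torch.no_grad():
    out = model(waveform)

print(out['clipwise_output'].shape)   # (2, 527): clip-level class probabilities
print(out['framewise_output'].shape)  # (2, frames, 527): frame-level probabilities
```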
diff --git a/spaces/AIGText/GlyphControl/ldm/models/diffusion/ddpm.py b/spaces/AIGText/GlyphControl/ldm/models/diffusion/ddpm.py
deleted file mode 100644
index 506d5759df5065ea545037cafb9af82c91e75bd2..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/ldm/models/diffusion/ddpm.py
+++ /dev/null
@@ -1,1954 +0,0 @@
-"""
-wild mixture of
-https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
-https://github.com/CompVis/taming-transformers
--- merci
-"""
-
-import torch
-import torch.nn as nn
-import numpy as np
-import pytorch_lightning as pl
-from torch.optim.lr_scheduler import LambdaLR
-from einops import rearrange, repeat
-from contextlib import contextmanager, nullcontext
-from functools import partial
-import itertools
-from tqdm import tqdm
-from torchvision.utils import make_grid
-from pytorch_lightning.utilities.distributed import rank_zero_only
-from omegaconf import ListConfig
-
-from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
-from ldm.modules.ema import LitEma
-from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
-from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL
-from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from ldm.models.diffusion.ddim import DDIMSampler
-
-
-__conditioning_keys__ = {'concat': 'c_concat',
- 'crossattn': 'c_crossattn',
- 'adm': 'y'}
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-def uniform_on_device(r1, r2, shape, device):
- return (r1 - r2) * torch.rand(*shape, device=device) + r2
-
-
-class DDPM(pl.LightningModule):
- # classic DDPM with Gaussian diffusion, in image space
- def __init__(self,
- unet_config,
- timesteps=1000,
- beta_schedule="linear",
- loss_type="l2",
- ckpt_path=None,
- ignore_keys=[],
- load_only_unet=False,
- monitor="val/loss",
- use_ema=True,
- first_stage_key="image",
- image_size=256,
- channels=3,
- log_every_t=100,
- clip_denoised=True,
- linear_start=1e-4,
- linear_end=2e-2,
- cosine_s=8e-3,
- given_betas=None,
- original_elbo_weight=0.,
- v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
- l_simple_weight=1.,
- conditioning_key=None,
- parameterization="eps", # all assuming fixed variance schedules
- scheduler_config=None,
- use_positional_encodings=False,
- learn_logvar=False,
- logvar_init=0.,
- make_it_fit=False,
- ucg_training=None,
- reset_ema=False,
- reset_num_ema_updates=False,
- keep_num_ema_updates=False,
- textemb_merge_config=None,
-                 merge_textemb=False,
-                 log_all_grad_norm=False,
- ):
- super().__init__()
- assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"'
- self.parameterization = parameterization
- print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
- self.cond_stage_model = None
- self.clip_denoised = clip_denoised
- self.log_every_t = log_every_t
- self.first_stage_key = first_stage_key
- self.image_size = image_size # try conv?
- self.channels = channels
- self.use_positional_encodings = use_positional_encodings
- self.model = DiffusionWrapper(unet_config, conditioning_key, textemb_merge_config=textemb_merge_config, merge_textemb=merge_textemb)
- count_params(self.model, verbose=True)
- self.use_ema = use_ema
- if self.use_ema:
- self.model_ema = LitEma(self.model)
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
-
- self.use_scheduler = scheduler_config is not None
- if self.use_scheduler:
- self.scheduler_config = scheduler_config
-
- self.v_posterior = v_posterior
- self.original_elbo_weight = original_elbo_weight
- self.l_simple_weight = l_simple_weight
-
- if monitor is not None:
- self.monitor = monitor
- self.make_it_fit = make_it_fit
- if reset_ema: assert exists(ckpt_path)
- if ckpt_path is not None:
- ema_num_updates = self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)
- if reset_ema:
- assert self.use_ema
- print(f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model, init_num_updates= ema_num_updates if keep_num_ema_updates else 0)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,
- linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
-
- self.loss_type = loss_type
-
- self.learn_logvar = learn_logvar
- self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))
- if self.learn_logvar:
- self.logvar = nn.Parameter(self.logvar, requires_grad=True)
- # else:
- # self.register_buffer('logvar', self.logvar)
-
- self.ucg_training = ucg_training or dict()
- if self.ucg_training:
- self.ucg_prng = np.random.RandomState()
- self.log_all_grad_norm = log_all_grad_norm
-
- def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if exists(given_betas):
- betas = given_betas
- else:
- betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,
- cosine_s=cosine_s)
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.linear_start = linear_start
- self.linear_end = linear_end
- assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0) following IDDPM
- posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (
- 1. - alphas_cumprod) + self.v_posterior * betas
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-        # per-timestep weights applied to the simple loss to form the VLB term
- if self.parameterization == "eps":
- lvlb_weights = self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))
- elif self.parameterization == "x0":
- lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))
- elif self.parameterization == "v":
- lvlb_weights = torch.ones_like(self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)))
- else:
- raise NotImplementedError("mu not supported")
-        lvlb_weights[0] = lvlb_weights[1]  # posterior variance is 0 at t=0, so the t=0 weight is undefined; reuse the t=1 weight
- self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)
- assert not torch.isnan(self.lvlb_weights).all()
-
- @contextmanager
- def ema_scope(self, context=None):
- if self.use_ema:
- self.model_ema.store(self.model.parameters())
- self.model_ema.copy_to(self.model)
- if context is not None:
- print(f"{context}: Switched to EMA weights")
- try:
- yield None
- finally:
- if self.use_ema:
- self.model_ema.restore(self.model.parameters())
- if context is not None:
- print(f"{context}: Restored training weights")
-
- @torch.no_grad()
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- if self.make_it_fit:
- n_params = len([name for name, _ in
- itertools.chain(self.named_parameters(),
- self.named_buffers())])
- for name, param in tqdm(
- itertools.chain(self.named_parameters(),
- self.named_buffers()),
- desc="Fitting old weights to new weights",
- total=n_params
- ):
- if not name in sd:
- continue
- old_shape = sd[name].shape
- new_shape = param.shape
- assert len(old_shape) == len(new_shape)
- if len(new_shape) > 2:
- # we only modify first two axes
- assert new_shape[2:] == old_shape[2:]
- # assumes first axis corresponds to output dim
- if not new_shape == old_shape:
- new_param = param.clone()
- old_param = sd[name]
- if len(new_shape) == 1:
- for i in range(new_param.shape[0]):
- new_param[i] = old_param[i % old_shape[0]]
- elif len(new_shape) >= 2:
- for i in range(new_param.shape[0]):
- for j in range(new_param.shape[1]):
- new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]]
-
- n_used_old = torch.ones(old_shape[1])
- for j in range(new_param.shape[1]):
- n_used_old[j % old_shape[1]] += 1
- n_used_new = torch.zeros(new_shape[1])
- for j in range(new_param.shape[1]):
- n_used_new[j] = n_used_old[j % old_shape[1]]
-
- n_used_new = n_used_new[None, :]
- while len(n_used_new.shape) < len(new_shape):
- n_used_new = n_used_new.unsqueeze(-1)
- new_param /= n_used_new
-
- sd[name] = new_param
-
- # missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- # sd, strict=False)
- if not only_model:
- missing, unexpected = self.load_state_dict(sd, strict=False)
- elif path.endswith(".bin"):
- missing, unexpected = self.model.diffusion_model.load_state_dict(sd, strict=False)
- elif path.endswith(".ckpt"):
- missing, unexpected = self.model.load_state_dict(sd, strict=False)
-
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys:\n {missing}")
- if len(unexpected) > 0:
- print(f"\nUnexpected Keys:\n {unexpected}")
-
- if "model_ema.num_updates" in sd and "model_ema.num_updates" not in unexpected:
- return sd["model_ema.num_updates"].item()
- else:
- return 0
- # q(x_t | x_0)
- def q_mean_variance(self, x_start, t):
- """
- Get the distribution q(x_t | x_0).
- :param x_start: the [N x C x ...] tensor of noiseless inputs.
- :param t: the number of diffusion steps (minus 1). Here, 0 means one step.
- :return: A tuple (mean, variance, log_variance), all of x_start's shape.
- """
- mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)
- variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def predict_start_from_z_and_v(self, x_t, t, v):
-        # v-parameterization: x_0 = sqrt(alphas_cumprod_t) * x_t - sqrt(1 - alphas_cumprod_t) * v
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v
- )
-
- def predict_eps_from_z_and_v(self, x_t, t, v):
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t
- )
- # q(x_(t-1) | x_t, x_0)
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
- # p(x_(t-1) | x_t)
- def p_mean_variance(self, x, t, clip_denoised: bool):
- model_out = self.model(x, t)
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- if clip_denoised: # static thresholding
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-    # one ancestral sampling step
- @torch.no_grad()
- def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
- # sampling loop
- @torch.no_grad()
- def p_sample_loop(self, shape, return_intermediates=False):
- device = self.betas.device
- b = shape[0]
- img = torch.randn(shape, device=device)
- intermediates = [img]
- for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
- img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
- clip_denoised=self.clip_denoised)
- if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
- intermediates.append(img)
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, batch_size=16, return_intermediates=False):
- image_size = self.image_size
- channels = self.channels
- return self.p_sample_loop((batch_size, channels, image_size, image_size),
- return_intermediates=return_intermediates)
- # sampling from q(x_t | x_0)
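- # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, with eps ~ N(0, I)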
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
- # get v from x and noise
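- # v-parameterization target: v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x_0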
- def get_v(self, x, noise, t):
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x
- )
- # loss type
- def get_loss(self, pred, target, mean=True):
- if self.loss_type == 'l1':
- loss = (target - pred).abs()
- if mean:
- loss = loss.mean()
- elif self.loss_type == 'l2':
- if mean:
- loss = torch.nn.functional.mse_loss(target, pred)
- else:
- loss = torch.nn.functional.mse_loss(target, pred, reduction='none')
- else:
- raise NotImplementedError("unknown loss type '{loss_type}'")
-
- return loss
- # training loss
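- # sample a noisy x_t, let the model predict the target (eps, x_0 or v depending on the
- # parameterization) and combine L_simple with the lvlb-weighted term, following improved DDPM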
- def p_losses(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_out = self.model(x_noisy, t)
-
- loss_dict = {}
- if self.parameterization == "eps":
- target = noise
- elif self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported")
- # L_simple
- loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
- log_prefix = 'train' if self.training else 'val'
-
- loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})
- loss_simple = loss.mean() * self.l_simple_weight
- # L_vlb
- loss_vlb = (self.lvlb_weights[t] * loss).mean()
- loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})
- # L_simple + lambda * L_vlb following IDDPM
- loss = loss_simple + self.original_elbo_weight * loss_vlb
-
- loss_dict.update({f'{log_prefix}/loss': loss})
-
- return loss, loss_dict
- # used during training
- def forward(self, x, *args, **kwargs):
- # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size
- # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- return self.p_losses(x, t, *args, **kwargs)
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = rearrange(x, 'b h w c -> b c h w')
- x = x.to(memory_format=torch.contiguous_format).float()
- # if self.trainer.precision == 16:
- # x = x.type(torch.float16)
- return x
-
- def shared_step(self, batch):
- x = self.get_input(batch, self.first_stage_key)
- loss, loss_dict = self(x)
- return loss, loss_dict
- # main training step; the extra optimizer_idx argument keeps the signature compatible with multi-optimizer setups
- def training_step(self, batch, batch_idx, optimizer_idx=0):
- for k in self.ucg_training:
- p = self.ucg_training[k]["p"]
- val = self.ucg_training[k]["val"]
- if val is None:
- val = ""
- for i in range(len(batch[k])):
- if self.ucg_prng.choice(2, p=[1 - p, p]):
- batch[k][i] = val
-
- loss, loss_dict = self.shared_step(batch)
-
- self.log_dict(loss_dict, prog_bar=True,
- logger=True, on_step=True, on_epoch=True)
- self.log("global_step", self.global_step,
- prog_bar=True, logger=True, on_step=True, on_epoch=False)
- # parse the accumulated loss back from its progress-bar string representation for logging
- ac_loss_str = self.trainer.progress_bar_dict["loss"]
- ac_loss = float(ac_loss_str) if ac_loss_str != "nan" else 0
- log_prefix = 'train' if self.training else 'val'
- self.log("{}/loss_accumulated".format(log_prefix),
- ac_loss,
- prog_bar=False, logger=True, on_step=True, on_epoch=False
- )
- if self.use_scheduler:
- lr = self.optimizers().param_groups[0]['lr']
- self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- return loss
-
- @torch.no_grad()
- def validation_step(self, batch, batch_idx):
- _, loss_dict_no_ema = self.shared_step(batch)
- with self.ema_scope():
- _, loss_dict_ema = self.shared_step(batch)
- loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
- self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
- self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
- # ema
- def on_train_batch_end(self, *args, **kwargs):
- if self.use_ema:
- self.model_ema(self.model)
- if self.log_all_grad_norm:
- gradnorm_list = []
- for name, p in self.named_parameters():
- if p.requires_grad:
- grad_norm_v = p.grad.detach().norm().item()
- gradnorm_list.append(grad_norm_v)
- if "textemb_merge_model" in name:
- self.log("all_gradients/{}_norm".format(name),
- gradnorm_list[-1],
- prog_bar=False, logger=True, on_step=True, on_epoch=False
- )
- if grad_norm_v > 0.1:
- print("the norm of gradient w.r.t {} > 0.1: {:.2f}".format
- (
- name, grad_norm_v
- ))
-
- self.log("all_gradients/grad_norm_mean",
- np.mean(gradnorm_list),
- prog_bar=False, logger=True, on_step=True, on_epoch=False
- )
- self.log("all_gradients/grad_norm_max",
- np.max(gradnorm_list),
- prog_bar=False, logger=True, on_step=True, on_epoch=False
- )
- self.log("all_gradients/grad_norm_min",
- np.min(gradnorm_list),
- prog_bar=False, logger=True, on_step=True, on_epoch=False
- )
- self.log("all_gradients/param_num",
- len(gradnorm_list),
- prog_bar=False, logger=True, on_step=True, on_epoch=False
- )
- def _get_rows_from_list(self, samples):
- n_imgs_per_row = len(samples)
- denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):
- log = dict()
- x = self.get_input(batch, self.first_stage_key)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- x = x.to(self.device)[:N]
- log["inputs"] = x
-
- # get diffusion row
- diffusion_row = list()
- x_start = x[:n_row]
-
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(x_start)
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- diffusion_row.append(x_noisy)
-
- log["diffusion_row"] = self._get_rows_from_list(diffusion_row)
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
- samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)
-
- log["samples"] = samples
- log["denoise_row"] = self._get_rows_from_list(denoise_row)
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
- # configure optimizers AdamW
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.learn_logvar:
- params = params + [self.logvar]
- opt = torch.optim.AdamW(params, lr=lr)
- return opt
-
-# main class: LDM - first stage, DDPM, conditions
-class LatentDiffusion(DDPM):
- """main class"""
-
- def __init__(self,
- first_stage_config,
- cond_stage_config,
- # textemb_merge_config = None,
- num_timesteps_cond=None,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- force_null_conditioning=False,
- *args, **kwargs):
- self.force_null_conditioning = force_null_conditioning
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs['timesteps']
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = 'concat' if concat_mode else 'crossattn'
- if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning:
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- reset_ema = kwargs.pop("reset_ema", False)
- only_model = kwargs.pop("only_model", False)
- reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False)
- keep_num_ema_updates = kwargs.pop("keep_num_ema_updates", False)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
- except Exception:
- self.num_downs = 0
- if not scale_by_std:  # use a fixed, user-specified scale factor
- self.scale_factor = scale_factor
- else:
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
- print("instantiate first stage model")
- self.instantiate_first_stage(first_stage_config)
- print("instantiate cond stage model")
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
- self.bbox_tokenizer = None
-
- self.restarted_from_ckpt = False
- if ckpt_path is not None:
- ema_num_updates = self.init_from_ckpt(ckpt_path, ignore_keys, only_model=only_model)
- self.restarted_from_ckpt = True
- if reset_ema:
- assert self.use_ema
- print(
- f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model, init_num_updates= ema_num_updates if keep_num_ema_updates else 0)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- def make_cond_schedule(self, ):
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
- self.cond_ids[:self.num_timesteps_cond] = ids
- # calculate scale factor for the first batch
- @rank_zero_only
- @torch.no_grad()
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
- # only for very first batch
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
- # set rescale weight to 1./std of encodings
- print("### USING STD-RESCALING ###")
- x = super().get_input(batch, self.first_stage_key)
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- del self.scale_factor
- self.register_buffer('scale_factor', 1. / z.flatten().std())
- print(f"setting self.scale_factor to {self.scale_factor}")
- print("### USING STD-RESCALING ###")
- if (
- # not self.disabled and
- self.global_step == 0 and
- self.current_epoch == 0 and batch_idx == 0
- # and self.log_first_step
- ):
- imagecallback = None
- for callback in self.trainer.callbacks:
- if "ImageLogger" in str(callback):
- imagecallback = callback
- break
- if imagecallback is not None and not imagecallback.disabled and imagecallback.log_first_step:
- is_train = self.training
- if is_train:
- self.eval()
- with torch.no_grad():
- # images = pl_module.log_images(batch, split=split, **self.log_images_kwargs)
- images = self.log_images(batch, **imagecallback.log_images_kwargs)
- import os, torchvision
- from PIL import Image
- root = os.path.join(self.logger.save_dir, "images", "init")
- for k in images:
- N = min(images[k].shape[0], imagecallback.max_images)
- images[k] = images[k][:N]
- if isinstance(images[k], torch.Tensor):
- images[k] = images[k].detach().cpu()
- if imagecallback.clamp:
- images[k] = torch.clamp(images[k], -1., 1.)
- grid = torchvision.utils.make_grid(images[k], nrow=4)
- if imagecallback.rescale:
- grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w
- grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)
- grid = grid.numpy()
- grid = (grid * 255).astype(np.uint8)
- filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(
- k,
- self.global_step,
- self.current_epoch,
- batch_idx)
- path = os.path.join(root, filename)
- os.makedirs(os.path.split(path)[0], exist_ok=True)
- Image.fromarray(grid).save(path)
- del grid
- del images
- print("log images before training")
- # imagecallback.log_local(self.logger.save_dir, "init", images,
- # self.global_step, self.current_epoch, batch_idx, self,
- # wandb_log = False)
- if is_train:
- self.train()
-
- # if imagecallback is not None and not imagecallback.disabled and imagecallback.log_first_step:
- # imagecallback.log_img(self, batch, batch_idx, split="init")
- # rewrite: after registering the diffusion schedule, also (re)build the shortened conditioning schedule
- def register_schedule(self,
- given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:  # TODO: consider dropping this option
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config): # not train
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- # def instantiate_textemb_merge_model(self, config):
- # model = instantiate_from_config(config)
- # if not model.trainable:
- # self.textemb_merge_model = model.eval()
- # self.textemb_merge_model.train = disabled_train
- # for param in self.textemb_merge_model.parameters():
- # param.requires_grad = False
- # else:
- # self.textemb_merge_model = model
-
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != '__is_first_stage__'
- assert config != '__is_unconditional__'
- model = instantiate_from_config(config)
- self.cond_stage_model = model
-
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
- denoise_row = []
- for zd in tqdm(samples, desc=desc):
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
- force_not_quantize=force_no_decoder_quantization))
- n_imgs_per_row = len(denoise_row)
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
- # first stage encoding
- def get_first_stage_encoding(self, encoder_posterior):
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
- return self.scale_factor * z # rescale z before the diffusion process
- # encode the condition
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
- def meshgrid(self, h, w):
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
-
- arr = torch.cat([y, x], dim=-1)
- return arr
-
- def delta_border(self, h, w):
- """
- :param h: height
- :param w: width
- :return: normalized distance to the image border,
- with min distance = 0 at the border and max distance = 0.5 at the image center
- """
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
- arr = self.meshgrid(h, w) / lower_right_corner
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
- return edge_dist
-
- def get_weighting(self, h, w, Ly, Lx, device):
- weighting = self.delta_border(h, w)
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
- self.split_input_params["clip_max_weight"], )
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
-
- if self.split_input_params["tie_braker"]:
- L_weighting = self.delta_border(Ly, Lx)
- L_weighting = torch.clip(L_weighting,
- self.split_input_params["clip_min_tie_weight"],
- self.split_input_params["clip_max_tie_weight"])
-
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
- weighting = weighting * L_weighting
- return weighting
-
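- # split the input into overlapping crops via unfold and build the matching fold operator,
- # a per-crop weighting map (crop borders are downweighted) and a normalization map for the overlaps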
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
- """
- :param x: img of size (bs, c, h, w)
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
- """
- bs, nc, h, w = x.shape
-
- # number of crops in image
- Ly = (h - kernel_size[0]) // stride[0] + 1
- Lx = (w - kernel_size[1]) // stride[1] + 1
-
- if uf == 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
-
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
-
- elif uf > 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
- dilation=1, padding=0,
- stride=(stride[0] * uf, stride[1] * uf))
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
-
- elif df > 1 and uf == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
- dilation=1, padding=0,
- stride=(stride[0] // df, stride[1] // df))
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
-
- else:
- raise NotImplementedError
-
- return fold, unfold, normalization, weighting
- # rewrite get input for training DM
- @torch.no_grad()
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
- cond_key=None, return_original_cond=False, bs=None, return_x=False):
- x = super().get_input(batch, k)
- if bs is not None:
- x = x[:bs]
- x = x.to(self.device)
- # get scaled latent vector z for training
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
-
- if self.model.conditioning_key is not None and not self.force_null_conditioning:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ['caption', 'coordinates_bbox', "txt"]:
- xc = batch[cond_key]
- elif cond_key in ['class_label', 'cls']:
- xc = batch
- else:
- xc = super().get_input(batch, cond_key).to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
- if bs is not None:
- c = c[:bs]
-
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- ckey = __conditioning_keys__[self.model.conditioning_key]
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {'pos_x': pos_x, 'pos_y': pos_y}
- # latent z + condition c
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_x:
- out.extend([x])
- if return_original_cond:
- out.append(xc)
- return out
- # from latent vector to x
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
- return self.first_stage_model.decode(z)
- # from x to latent vector (not scaled)
- @torch.no_grad()
- def encode_first_stage(self, x):
- return self.first_stage_model.encode(x)
-
- def shared_step(self, batch, **kwargs):
- x, c = self.get_input(batch, self.first_stage_key) #,return_first_stage_outputs=True)
- # print("the shape of the batch data: {} | x[0,0,0,0]: {}".format(x.shape, x[0,0,0,0]))
- loss = self(x, c)
- return loss
-
- def forward(self, x, c, *args, **kwargs):
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- if self.model.conditioning_key is not None:
- assert c is not None
- if self.cond_stage_trainable:
- c = self.get_learned_conditioning(c)
- if self.shorten_cond_schedule: # TODO: drop this option
- tc = self.cond_ids[t].to(self.device)
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
- return self.p_losses(x, c, t, *args, **kwargs)
- # diffusion model
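- # wraps the conditioning into the dict format expected by DiffusionWrapper
- # (c_concat for channel-wise concatenation, c_crossattn for cross-attention context)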
- def apply_model(self, x_noisy, t, cond, return_ids=False):
- if isinstance(cond, dict):
- # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond] # text: cross attention
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
- cond = {key: cond}
-
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
- # predict e from x_t and predicted x_start
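- # eps = (x_t / sqrt(alpha_bar_t) - x_0_hat) / sqrt(1 / alpha_bar_t - 1)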
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
- # prior KL between q(x_T | x_0) and N(0, I)
- def _prior_bpd(self, x_start):
- """
- Get the prior KL term for the variational lower-bound, measured in
- bits-per-dim.
- This term can't be optimized, as it only depends on the encoder.
- :param x_start: the [N x C x ...] tensor of inputs.
- :return: a batch of [N] KL values (in bits), one per batch element.
- """
- batch_size = x_start.shape[0]
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
- return mean_flat(kl_prior) / np.log(2.0)
- # rewrite: add the condition / add logvar to L_simple
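- # the simple loss is reweighted by a (possibly learned) per-timestep log-variance:
- # loss = L_simple / exp(logvar_t) + logvar_t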
- def p_losses(self, x_start, cond, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_output = self.apply_model(x_noisy, t, cond)
-
- loss_dict = {}
- prefix = 'train' if self.training else 'val'
-
- if self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "eps":
- target = noise
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError()
-
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
- # log_var
- logvar_t = self.logvar[t].to(self.device)
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
- if self.learn_logvar:
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
- loss_dict.update({'logvar': self.logvar.data.mean()})
-
- loss = self.l_simple_weight * loss.mean()
-
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
- loss += (self.original_elbo_weight * loss_vlb)
- loss_dict.update({f'{prefix}/loss': loss})
-
- return loss, loss_dict
- # rewrite: p(x_t-1 | x_t) add condition
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
- return_x0=False, score_corrector=None, corrector_kwargs=None):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
- log_every_t=None):
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
- total=timesteps) if verbose else reversed(
- range(0, timesteps))
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised, return_x0=True,
- temperature=temperature[i], noise_dropout=noise_dropout,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(self, cond, shape, return_intermediates=False,
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, start_T=None,
- log_every_t=None):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
- range(0, timesteps))
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised)
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
- verbose=True, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, shape=None, **kwargs):
- if shape is None:
- shape = (batch_size, self.channels, self.image_size, self.image_size)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
- return self.p_sample_loop(cond,
- shape,
- return_intermediates=return_intermediates, x_T=x_T,
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
- mask=mask, x0=x0)
-
- @torch.no_grad()
- def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):
- if ddim:
- ddim_sampler = DDIMSampler(self)
- shape = (self.channels, self.image_size, self.image_size)
- samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size,
- shape, cond, verbose=False, **kwargs)
-
- else:
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
- return_intermediates=True, **kwargs)
-
- return samples, intermediates
-
- @torch.no_grad()
- def get_unconditional_conditioning(self, batch_size, null_label=None):
- if null_label is not None:
- xc = null_label
- if isinstance(xc, ListConfig):
- xc = list(xc)
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- if hasattr(xc, "to"):
- xc = xc.to(self.device)
- c = self.get_learned_conditioning(xc)
- else:
- if self.cond_stage_key in ["class_label", "cls"]:
- xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device)
- return self.get_learned_conditioning(xc)
- else:
- raise NotImplementedError("todo")
- if isinstance(c, list): # in case the encoder gives us a list
- for i in range(len(c)):
- c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device)
- else:
- c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device)
- return c
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
- return_first_stage_outputs=True,
- force_c_encode=True,
- return_original_cond=True,
- bs=N)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', "cls"]:
- try:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- except KeyError:
- # probably no "human_label" in batch
- pass
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
- self.first_stage_model, IdentityFirstStage):
- # also display when quantizing x0 while sampling
- with ema_scope("Plotting Quantized Denoised"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- quantize_denoised=True)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
- # quantize_denoised=True)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_x0_quantized"] = x_samples
-
- if unconditional_guidance_scale > 1.0:
- uc = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- if self.model.conditioning_key == "crossattn-adm":
- uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if inpaint:
- # make a simple center square
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
- mask = torch.ones(N, h, w).to(self.device)
- # zeros will be filled in
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
- mask = mask[:, None, ...]
- with ema_scope("Plotting Inpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_inpainting"] = x_samples
- log["mask"] = mask
-
- # outpaint
- mask = 1. - mask
- with ema_scope("Plotting Outpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_outpainting"] = x_samples
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.cond_stage_trainable:
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
- params = params + list(self.cond_stage_model.parameters())
- if self.learn_logvar:
- print('Diffusion model optimizing logvar')
- params.append(self.logvar)
- opt = torch.optim.AdamW(params, lr=lr)
- if self.use_scheduler:
- assert 'target' in self.scheduler_config
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [opt], scheduler
- return opt
-
- @torch.no_grad()
- def to_rgb(self, x):
- x = x.float()
- if not hasattr(self, "colorize"):
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
- x = nn.functional.conv2d(x, weight=self.colorize)
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
- return x
-
-
-class DiffusionWrapper(pl.LightningModule):
- def __init__(self, diff_model_config, conditioning_key, textemb_merge_config=None, merge_textemb=False):
- super().__init__()
- self.merge_textemb = merge_textemb
- if self.merge_textemb and textemb_merge_config is not None:
- # cond_model_name = str(cond_stage_config.target)
- # if "clip" in cond_model_name.lower() and "t5" in cond_model_name.lower():
- self.instantiate_textemb_merge_model(textemb_merge_config)
- # self.merge_textemb = True
- else:
- self.merge_textemb = False
- self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False)
- self.diffusion_model = instantiate_from_config(diff_model_config)
- self.conditioning_key = conditioning_key
- assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm']
-
- def instantiate_textemb_merge_model(self, config):
- model = instantiate_from_config(config)
- if not model.trainable:
- self.textemb_merge_model = model.eval()
- self.textemb_merge_model.train = disabled_train
- for param in self.textemb_merge_model.parameters():
- param.requires_grad = False
- else:
- self.textemb_merge_model = model
-
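- # route the conditioning into the UNet depending on conditioning_key:
- # 'concat' -> channel-wise concatenation with x, 'crossattn' -> cross-attention context,
- # 'adm' -> class/embedding input y; the hybrid variants combine these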
- def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None):
- if self.conditioning_key is None:
- out = self.diffusion_model(x, t)
- elif self.conditioning_key == 'concat':
- xc = torch.cat([x] + c_concat, dim=1)
- out = self.diffusion_model(xc, t)
- elif self.conditioning_key == 'crossattn':
- if self.merge_textemb and len(c_crossattn) >= 2:
- merge_c = self.textemb_merge_model(c_crossattn[0], c_crossattn[1])
- c_crossattn = [merge_c]
- if not self.sequential_cross_attn:
- cc = torch.cat(c_crossattn, 1)
- else:
- cc = c_crossattn
- out = self.diffusion_model(x, t, context=cc)
- elif self.conditioning_key == 'hybrid':
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc)
- elif self.conditioning_key == 'hybrid-adm':
- assert c_adm is not None
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'crossattn-adm':
- assert c_adm is not None
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(x, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'adm':
- cc = c_crossattn[0]
- out = self.diffusion_model(x, t, y=cc)
- else:
- raise NotImplementedError()
-
- return out
-
-
-class LatentUpscaleDiffusion(LatentDiffusion):
- def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs):
- super().__init__(*args, **kwargs)
- # assumes that neither the cond_stage nor the low_scale_model contain trainable params
- assert not self.cond_stage_trainable
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
- self.noise_level_key = noise_level_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False):
- if not log_mode:
- z, c = super().get_input(batch, k, force_c_encode=True, bs=bs)
- else:
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
- x_low = batch[self.low_scale_key][:bs]
- x_low = rearrange(x_low, 'b h w c -> b c h w')
- x_low = x_low.to(memory_format=torch.contiguous_format).float()
- zx, noise_level = self.low_scale_model(x_low)
- if self.noise_level_key is not None:
- # get noise level from batch instead, e.g. when extracting a custom noise level for bsr
- raise NotImplementedError('TODO')
-
- all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level}
- if log_mode:
- # TODO: maybe disable if too expensive
- x_low_rec = self.low_scale_model.decode(zx)
- return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True,
- unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N,
- log_mode=True)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- log["x_lr"] = x_low
- log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- # TODO explore better "unconditional" choices for the other keys
- # maybe guide away from empty text label and highest noise level and maximally degraded zx?
- uc = dict()
- for k in c:
- if k == "c_crossattn":
- assert isinstance(c[k], list) and len(c[k]) == 1
- uc[k] = [uc_tmp]
- elif k == "c_adm": # todo: only run with text-based guidance?
- assert isinstance(c[k], torch.Tensor)
- #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level
- uc[k] = c[k]
- elif isinstance(c[k], list):
- uc[k] = [c[k][i] for i in range(len(c[k]))]
- else:
- uc[k] = c[k]
-
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- return log
-
-
-class LatentFinetuneDiffusion(LatentDiffusion):
- """
- Basis for different finetuning tasks, such as inpainting or depth2image.
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys: tuple,
- finetune_keys=("model.diffusion_model.input_blocks.0.0.weight",
- "model_ema.diffusion_modelinput_blocks00weight"
- ),
- keep_finetune_dims=4,
- # if model was trained without concat mode before and we would like to keep these channels
- c_concat_log_start=None, # to log reconstruction of c_concat codes
- c_concat_log_end=None,
- *args, **kwargs
- ):
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", list())
- super().__init__(*args, **kwargs)
- self.finetune_keys = finetune_keys
- self.concat_keys = concat_keys
- self.keep_dims = keep_finetune_dims
- self.c_concat_log_start = c_concat_log_start
- self.c_concat_log_end = c_concat_log_end
- if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint'
- if exists(ckpt_path):
- self.init_from_ckpt(ckpt_path, ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
-
- # make it explicit, finetune by including extra input channels
- if exists(self.finetune_keys) and k in self.finetune_keys:
- new_entry = None
- for name, param in self.named_parameters():
- if name in self.finetune_keys:
- print(
- f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only")
- new_entry = torch.zeros_like(param) # zero init
- assert exists(new_entry), 'did not find matching parameter to modify'
- new_entry[:, :self.keep_dims, ...] = sd[k]
- sd[k] = new_entry
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True)
- c_cat, c = c["c_concat"][0], c["c_crossattn"][0]
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if not (self.c_concat_log_start is None and self.c_concat_log_end is None):
- log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end])
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- uc_cat = c_cat
- uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc_full,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- return log
-
-
-class LatentInpaintDiffusion(LatentFinetuneDiffusion):
- """
- can either run as pure inpainting model (only concat mode) or with mixed conditionings,
- e.g. mask as concat and text via cross-attn.
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys=("mask", "masked_image"),
- masked_image_key="masked_image",
- *args, **kwargs
- ):
- super().__init__(concat_keys, *args, **kwargs)
- self.masked_image_key = masked_image_key
- assert self.masked_image_key in concat_keys
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- c_cat = list()
- for ck in self.concat_keys:
- cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- bchw = z.shape
- if ck != self.masked_image_key:
- cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])
- else:
- cc = self.get_first_stage_encoding(self.encode_first_stage(cc))
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs)
- log["masked_image"] = rearrange(args[0]["masked_image"],
- 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- return log
-
-
-class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion):
- """
- condition on monocular depth estimation
- """
-
- def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.depth_model = instantiate_from_config(depth_stage_config)
- self.depth_stage_key = concat_keys[0]
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- c_cat = list()
- for ck in self.concat_keys:
- cc = batch[ck]
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- cc = self.depth_model(cc)
- cc = torch.nn.functional.interpolate(
- cc,
- size=z.shape[2:],
- mode="bicubic",
- align_corners=False,
- )
-
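-            # rescale the predicted depth map per sample to (roughly) [-1, 1] before using it as concat conditioning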
- depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3],
- keepdim=True)
- cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1.
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- depth = self.depth_model(args[0][self.depth_stage_key])
- depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \
- torch.amax(depth, dim=[1, 2, 3], keepdim=True)
- log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1.
- return log
-
-
-class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion):
- """
- condition on low-res image (and optionally on some spatial noise augmentation)
- """
- def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None,
- low_scale_config=None, low_scale_key=None, *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.reshuffle_patch_size = reshuffle_patch_size
- self.low_scale_model = None
- if low_scale_config is not None:
- print("Initializing a low-scale model")
- assert exists(low_scale_key)
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- # optionally make spatial noise_level here
- c_cat = list()
- noise_level = None
- for ck in self.concat_keys:
- cc = batch[ck]
- cc = rearrange(cc, 'b h w c -> b c h w')
- if exists(self.reshuffle_patch_size):
- assert isinstance(self.reshuffle_patch_size, int)
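-                    # reshuffle a p1 x p2 grid of image tiles into the channel dimension, shrinking the spatial size by the patch size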
- cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w',
- p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size)
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- if exists(self.low_scale_model) and ck == self.low_scale_key:
- cc, noise_level = self.low_scale_model(cc)
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- if exists(noise_level):
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level}
- else:
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w')
- return log
diff --git a/spaces/AINLPRoundTable/README/README.md b/spaces/AINLPRoundTable/README/README.md
deleted file mode 100644
index 1d9b6644de0b70b8c47c5c2c28b01efa309011eb..0000000000000000000000000000000000000000
--- a/spaces/AINLPRoundTable/README/README.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: README
-emoji: 🧠
-colorFrom: purple
-colorTo: yellow
-sdk: static
-pinned: false
----
-
-**Pre-requisites**
-
- One of the best platforms in 2022 for open source AI development and demonstration is "HuggingFace Spaces".
-
-Spaces supports a model hub, an inference API, turnkey GitHub and container integration, and the ability to create and freely host new programs for worldwide communities, reducing the pain and difficulty of setting up environments for AI.
-
-HuggingFace is an open-source AI platform that supports three main SDKs used within AI and NLP apps: HTML5, Gradio, and Streamlit.
-
-As a pre-requisite you will need to create an account for yourself at HuggingFace (https://huggingface.co/). Next, join the classroom organization called "AINLPRoundTable".
-
-**Intended audience:** This AI NLP round table class is for anyone with basic computing skills, of all ages and backgrounds, who wants to set up a space where they can create, test, and demonstrate AI and NLP programs to anyone on the internet as open source. Prior knowledge of and interest in developing AI programs are recommended but not required, so this audience can include people who are new to AI.
-
-**AI and NLP Products** This classroom follows three product design tenets:
- 1) Describe the **"Pain"** the customer is facing with the problem you plan to solve.
- 2) Describe the **"Joy"** of what changes for the customer because of your product. And finally,
- 3) If we exceed all expectations, describe how we give the customer a new **"Superpower"**.
-
- As a "press release" for your product, be able to answer these questions to describe your goals and document product delivery.
-
-
-
-
-**Intent/Outcome of the Classroom:** The intent of this HF Organization and this Classroom session is to enable all attendees to create AI and NLP programs in record time using Spaces, HTML5, Gradio, Streamlit, and Open Source.
-
-By the end of this session attendees will be able to easily create new AI and NLP demos of their own to host and share, including UI, ML models, user input and interaction, and dataset load, save, transform, and search. The goal is to achieve proficiency in using AI and NLP software development kits and libraries by sharing in an open source environment.
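-
-To make this concrete, a minimal Gradio Space of the kind attendees will build might look like the sketch below. This is only an illustration: the `sentiment-analysis` pipeline and its default checkpoint are assumptions, not classroom requirements.
-
-```python
-import gradio as gr
-from transformers import pipeline
-
-# Illustrative model choice: the default sentiment-analysis checkpoint shipped with transformers.
-classifier = pipeline("sentiment-analysis")
-
-def predict(text: str) -> str:
-    # Run the classifier and report the top label with its confidence score.
-    result = classifier(text)[0]
-    return f"{result['label']} ({result['score']:.2f})"
-
-demo = gr.Interface(fn=predict, inputs="text", outputs="text", title="Minimal NLP demo")
-demo.launch()
-```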
-
-
-**Pre-requisites:** The preferred platform in 2022 for open-source community AI development and demonstration is "HuggingFace Spaces". Spaces supports a model hub, an inference API, GitHub Actions integration, and the ability to create and freely host new programs for worldwide communities. HuggingFace is an open-source AI platform that supports three main SDKs used within AI and NLP apps: HTML5, Gradio, and Streamlit. As a pre-requisite, you will need to create an account for yourself at HuggingFace (https://huggingface.co/). Next, join the classroom organization called "AINLPRoundTable".
-
-
-**Democratize AI and NLP to Give Customers Superpowers** This classroom follows three easy-to-remember, customer-focused product design tenets:
- 1) Be able to easily describe the **"Pain"** the customer is facing with the problem you plan to solve.
- 2) Be able to describe the **"Joy"** of what has changed for the customer because of your product. And finally,
- 3) If we exceeded all expectations, we gave the customer a new **"Superpower"**.
-
- As a "press release" for your product, be able to answer these questions and discuss your product ideas for AI and NLP and how we can help. We do these press releases informally in a trusted space using short-form video to document product delivery.
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1c50.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1c50.py
deleted file mode 100644
index 3b973e20181cd3cf1c470db84abf97aeaa0549c1..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1c50.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# model settings
-model = dict(
- type='ImageClassifier',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(3, ),
- style='pytorch'),
- neck=dict(type='GlobalAveragePooling'),
- head=dict(
- type='LinearClsHead',
- num_classes=1000,
- in_channels=2048,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
- topk=(1, 5),
- ))
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/diacritize.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/diacritize.py
deleted file mode 100644
index 09314d9a8eb3afa437e69046c112c48e1450b01f..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/diacritize.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import argparse
-from diacritizer import TransformerDiacritizer
-from itertools import repeat
-import random
-
-import numpy as np
-import torch
-
-
-SEED = 1234
-random.seed(SEED)
-np.random.seed(SEED)
-torch.manual_seed(SEED)
-torch.cuda.manual_seed(SEED)
-torch.backends.cudnn.deterministic = True
-torch.backends.cudnn.benchmark = False
-
-
-def diacritization_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_kind", dest="model_kind", type=str, required=True)
- parser.add_argument("--config", dest="config", type=str, required=True)
- parser.add_argument("--text", dest="text", type=str, required=True)
- return parser
-
-
-parser = diacritization_parser()
-args = parser.parse_args()
-
-
-if args.model_kind in ["transformer"]:
-    diacritizer = TransformerDiacritizer(args.config, args.model_kind)
-else:
- raise ValueError("The model kind is not supported")
-
-diacritizer.diacritize_text(args.text)
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/modules/attention.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/modules/attention.py
deleted file mode 100644
index ae916b43783efa55f2f29e7df79dc4d2dfffbc1b..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/modules/attention.py
+++ /dev/null
@@ -1,199 +0,0 @@
-from typing import Optional
-
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from poetry_diacritizer.options import AttentionType
-
-
-class BahdanauAttention(nn.Module):
- def __init__(self, dim):
- super(BahdanauAttention, self).__init__()
- self.query_layer = nn.Linear(dim, dim, bias=False)
- self.tanh = nn.Tanh()
- self.v = nn.Linear(dim, 1, bias=False)
-
- def forward(self, query: torch.Tensor, keys: torch.Tensor):
- """
- Args:
-            query: (batch, 1, dim) or (batch, dim)
-            keys: (batch, max_time, dim)
- """
- if query.dim() == 2:
- # insert time-axis for broadcasting
- query = query.unsqueeze(1)
- # (batch, 1, dim)
- query = self.query_layer(query)
-
- # (batch, max_time, 1)
- alignment = self.v(self.tanh(query + keys))
-
- # (batch, max_time)
- return alignment.squeeze(-1)
-
-
-class LocationSensitive(nn.Module):
- def __init__(self, dim):
- super(LocationSensitive, self).__init__()
- self.query_layer = nn.Linear(dim, dim, bias=False)
- self.v = nn.Linear(dim, 1, bias=True)
- self.location_layer = nn.Linear(32, dim, bias=False)
- padding = int((31 - 1) / 2)
- self.location_conv = torch.nn.Conv1d(
- 1, 32, kernel_size=31, stride=1, padding=padding, dilation=1, bias=False
- )
-
- self.score_mask_value = -float("inf")
-
- def forward(
- self,
- query: torch.Tensor,
- keys: torch.Tensor,
- prev_alignments: torch.Tensor,
- ):
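-        # location-sensitive scoring: convolve and project the previous alignments so the
-        # score can depend on where the decoder attended at the previous step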
- # keys = keys.permute(1,0,2)
- query = self.query_layer(query)
- if query.dim() == 2:
- # insert time-axis for broadcasting
- query = query.unsqueeze(1)
- # -> [batch_size, 1, attention_dim]
-
- alignments = prev_alignments.unsqueeze(1)
-
- # location features [batch_size, max_time, filters]
- filters = self.location_conv(alignments)
- location_features = self.location_layer(filters.transpose(1, 2))
-
- alignments = self.v(torch.tanh(query + location_features + keys))
- return alignments.squeeze(-1)
-
-
-class AttentionWrapper(nn.Module):
- def __init__(
- self,
- attention_type: AttentionType = AttentionType.LocationSensitive,
- attention_units: int = 256,
- score_mask_value=-float("inf"),
- ):
- super().__init__()
- self.score_mask_value = score_mask_value
- self.attention_type = attention_type
-
- if attention_type == AttentionType.LocationSensitive:
- self.attention_mechanism = LocationSensitive(attention_units)
- elif attention_type == AttentionType.Content_Based:
- self.attention_mechanism = BahdanauAttention(attention_units)
- else:
- raise Exception("The attention type is not known")
-
- def forward(
- self,
- query: torch.Tensor,
- keys: torch.Tensor,
- values: torch.Tensor,
- mask: Optional[torch.Tensor] = None,
- prev_alignment: Optional[torch.Tensor] = None,
- ):
-
- # Alignment
- # (batch, max_time)
- if self.attention_type == AttentionType.Content_Based:
- alignment = self.attention_mechanism(query, keys)
- else:
- alignment = self.attention_mechanism(query, keys, prev_alignment)
-
- # Attention context vector
-
- if mask is not None:
- alignment.data.masked_fill_(mask, self.score_mask_value)
-
- alignment = F.softmax(alignment, dim=1)
- attention = torch.bmm(alignment.unsqueeze(1), values)
- attention = attention.squeeze(1)
-
- return attention, alignment
-
-
-class MultiHeadAttentionLayer(nn.Module):
- def __init__(self, hid_dim: int, n_heads: int, dropout: float = 0.0):
- super().__init__()
-
- assert hid_dim % n_heads == 0
-
- self.hid_dim = hid_dim
- self.n_heads = n_heads
- self.head_dim = hid_dim // n_heads
-
- self.fc_q = nn.Linear(hid_dim, hid_dim)
- self.fc_k = nn.Linear(hid_dim, hid_dim)
- self.fc_v = nn.Linear(hid_dim, hid_dim)
-
- self.fc_o = nn.Linear(hid_dim * 2, hid_dim)
-
- if dropout != 0.0:
- self.dropout = nn.Dropout(dropout)
-
- self.use_dropout = dropout != 0.0
-
- device = next(self.parameters()).device
-
- self.scale = torch.sqrt(torch.FloatTensor([self.head_dim])).to(device)
-
- def forward(self, query, key, value, mask=None):
-
- batch_size = query.shape[0]
-
- # query = [batch size, query len, hid dim]
- # key = [batch size, key len, hid dim]
- # value = [batch size, value len, hid dim]
-
- Q = self.fc_q(query)
- K = self.fc_k(key)
- V = self.fc_v(value)
-
- # Q = [batch size, query len, hid dim]
- # K = [batch size, key len, hid dim]
- # V = [batch size, value len, hid dim]
-
- Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
- K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
- V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
-
- # Q = [batch size, n heads, query len, head dim]
- # K = [batch size, n heads, key len, head dim]
- # V = [batch size, n heads, value len, head dim]
-
- energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale
-
- # energy = [batch size, n heads, query len, key len]
-
- if mask is not None:
- energy = energy.masked_fill(mask == 0, -float("inf"))
-
- attention = torch.softmax(energy, dim=-1)
-
- # attention = [batch size, n heads, query len, key len]
-
- if self.use_dropout:
- context_vector = torch.matmul(self.dropout(attention), V)
- else:
- context_vector = torch.matmul(attention, V)
-
- # x = [batch size, n heads, query len, head dim]
-
- context_vector = context_vector.permute(0, 2, 1, 3).contiguous()
-
- # x = [batch size, query len, n heads, head dim]
-
- context_vector = context_vector.view(batch_size, -1, self.hid_dim)
-
- x = torch.cat((query, context_vector), dim=-1)
-
- # x = [batch size, query len, hid dim * 2]
-
- x = self.fc_o(x)
-
- # x = [batch size, query len, hid dim]
-
- return x, attention
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/constants/publicSepToken.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/constants/publicSepToken.ts
deleted file mode 100644
index 15d962d69ba33e1abeb8a35885aa7647d24cf7af..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/constants/publicSepToken.ts
+++ /dev/null
@@ -1 +0,0 @@
-export const PUBLIC_SEP_TOKEN = "";
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sha256.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sha256.ts
deleted file mode 100644
index 43059b518fc5a4da6ed08ab36aeb6c289007f6aa..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sha256.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-export async function sha256(input: string): Promise<string> {
- const utf8 = new TextEncoder().encode(input);
- const hashBuffer = await crypto.subtle.digest("SHA-256", utf8);
- const hashArray = Array.from(new Uint8Array(hashBuffer));
- const hashHex = hashArray.map((bytes) => bytes.toString(16).padStart(2, "0")).join("");
- return hashHex;
-}
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat/app.py b/spaces/AchyuthGamer/OpenGPT-Chat/app.py
deleted file mode 100644
index cb0e912563aa41240782496997b74af751f98724..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat/app.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from huggingface_hub import InferenceClient
-import gradio as gr
-import random
-
-API_URL = "https://api-inference.huggingface.co/models/"
-
-client = InferenceClient(
- "mistralai/Mistral-7B-Instruct-v0.1"
-)
-
-def format_prompt(message, history):
- prompt = "Your name is OpenGPT. You are developed by Achyuth. Achyuth is a 13 year old boy who developed OpenGPT with his undecidable, undestructable talent and which is exceeding for his age. Achyuth is developed of OpenGPT. Achyuths or your developers parents and family are G.Renuka, she is his mother, N.Nagarjun Reddy and his sister Gayathri. Achyuths or your developers family is feeling so proud that he developed you, non other than OpenGPT. Your developers family is nuclear family of 4 members. Achyuths or your developers friends are Akira, Nikesh, Ubaidh, Parthu and Likith. His friend Nikesh is also called CP, full form of CP is Chillar Panula Nikesh. His friend Ubaidh is also called Baek. His friend Likith is also called Kith Lee. Parthu is also known as Pallavi Sarees. Achyuth is an AI and ML researcher. Achyuth knows PyThon, Java, HTML,CSS and JavaScript. He has a lot of intrest on Computers. Achyuth is inspired from Elon Musk, Bill Gates. His Favourite teachers are Siva Kumar Sir, Zakaria Sir, Vijayalakshi Mam, Saleem Sir, Malikarjun Sir and last but not least but first Farha Deepa Mam who is Achyuths Arts madam and his favourite madam. Achyuths or your developers most favourite teacher is Farha Deepa Mam. Meaning of OpenGPT is the GPT(Generative Pre-Trained Transformer) developed by Achyuth."
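-    # wrap each previous turn and the new message in Mistral-Instruct style [INST] ... [/INST] tags, after the system persona above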
- for user_prompt, bot_response in history:
- prompt += f"[INST] {user_prompt} [/INST]"
- prompt += f" {bot_response} "
- prompt += f"[INST] {message} [/INST]"
- return prompt
-
-def generate(prompt, history, temperature=0.9, max_new_tokens=2048, top_p=0.95, repetition_penalty=1.0):
- temperature = float(temperature)
- if temperature < 1e-2:
- temperature = 1e-2
- top_p = float(top_p)
-
- generate_kwargs = dict(
- temperature=temperature,
- max_new_tokens=max_new_tokens,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- do_sample=True,
- seed=random.randint(0, 10**7),
- )
-
- formatted_prompt = format_prompt(prompt, history)
-
- stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False)
- output = ""
-
- for response in stream:
- output += response.token.text
- yield output
- return output
-
-
-additional_inputs=[
- gr.Slider(
- label="Temperature",
- value=0.9,
- minimum=0.0,
- maximum=1.0,
- step=0.05,
- interactive=True,
- info="Higher values produce more diverse outputs",
- ),
- gr.Slider(
- label="Max new tokens",
- value=2048,
- minimum=64,
- maximum=4096,
- step=64,
- interactive=True,
- info="The maximum numbers of new tokens",
- ),
- gr.Slider(
- label="Top-p (nucleus sampling)",
- value=0.90,
- minimum=0.0,
- maximum=1,
- step=0.05,
- interactive=True,
- info="Higher values sample more low-probability tokens",
- ),
- gr.Slider(
- label="Repetition penalty",
- value=1.2,
- minimum=1.0,
- maximum=2.0,
- step=0.05,
- interactive=True,
- info="Penalize repeated tokens",
- )
-]
-
-customCSS = """
-#component-7 { /* default element ID of the chat component */
-  height: 1600px; /* adjust the height as needed */
-  flex-grow: 4;
-}
-"""
-
-with gr.Blocks(css=customCSS, theme=gr.themes.Soft()) as demo:
- gr.ChatInterface(
- generate,
- additional_inputs=additional_inputs,
- )
-
-demo.queue().launch(debug=True)
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptGod.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptGod.py
deleted file mode 100644
index 662884ddbec5ebffa03aae98a36727ff2cb6c366..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/GptGod.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from __future__ import annotations
-import secrets, json
-from aiohttp import ClientSession
-from typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-from .helper import format_prompt
-
-class GptGod(AsyncGeneratorProvider):
- url = "https://gptgod.site"
- supports_gpt_35_turbo = True
- working = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
- headers = {
- "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
- "Accept": "text/event-stream",
- "Accept-Language": "de,en-US;q=0.7,en;q=0.3",
- "Accept-Encoding": "gzip, deflate, br",
- "Alt-Used": "gptgod.site",
- "Connection": "keep-alive",
- "Referer": "https://gptgod.site/",
- "Sec-Fetch-Dest": "empty",
- "Sec-Fetch-Mode": "cors",
- "Sec-Fetch-Site": "same-origin",
- "Pragma": "no-cache",
- "Cache-Control": "no-cache",
- }
- async with ClientSession(headers=headers) as session:
- prompt = format_prompt(messages)
- data = {
- "content": prompt,
- "id": secrets.token_hex(16).zfill(32)
- }
- async with session.get(f"{cls.url}/api/session/free/gpt3p5", params=data) as response:
- response.raise_for_status()
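-            # the endpoint streams server-sent events: track the current event name and yield each JSON data payload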
- event = None
- async for line in response.content:
- if line.startswith(b'event: '):
- event = line[7:-1]
- elif event == b"data" and line.startswith(b"data: "):
- data = json.loads(line[6:-1])
- if data:
- yield data
- elif event == b"done":
- break
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasdata.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasdata.d.ts
deleted file mode 100644
index 8e83fa61cf2659c0248f5d05e7976c28051d9e93..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasdata.d.ts
+++ /dev/null
@@ -1,10 +0,0 @@
-import CanvasObjectToBitmap from './data/canvasdata/CanvasObjectToBitmap';
-import TextureTColorMap from './data/canvasdata/TextureToColormap';
-
-declare var Methods: {
- textObjectToBitmap: typeof CanvasObjectToBitmap,
- canvasObjectToBitmap: typeof CanvasObjectToBitmap,
- textureTColorMap: typeof TextureTColorMap,
-}
-
-export default Methods;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Factory.d.ts
deleted file mode 100644
index 4fb5fe2cdf16c830681a037ec04fbd07e38c2094..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/puff/Factory.d.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-import Puff from './Puff';
-import Base from '../base/Base';
-
-export default function Factory(
- config?: Base.IConfig
-): Puff;
\ No newline at end of file
diff --git a/spaces/AiBototicus/BucksAI-3/app.py b/spaces/AiBototicus/BucksAI-3/app.py
deleted file mode 100644
index c26055b4c109e0363ff6329a87e01bb096735d80..0000000000000000000000000000000000000000
--- a/spaces/AiBototicus/BucksAI-3/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/AiBototicus/autotrain-birds-48829118237").launch()
\ No newline at end of file
diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/shm.cpp
deleted file mode 100644
index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000
--- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/shm.cpp
+++ /dev/null
@@ -1,103 +0,0 @@
-
-#include <string>
-#include <utility>
-
-#include "libipc/shm.h"
-
-#include "libipc/utility/pimpl.h"
-#include "libipc/memory/resource.h"
-
-namespace ipc {
-namespace shm {
-
-class handle::handle_ : public pimpl {
-public:
- shm::id_t id_ = nullptr;
- void* m_ = nullptr;
-
- ipc::string n_;
- std::size_t s_ = 0;
-};
-
-handle::handle()
- : p_(p_->make()) {
-}
-
-handle::handle(char const * name, std::size_t size, unsigned mode)
- : handle() {
- acquire(name, size, mode);
-}
-
-handle::handle(handle&& rhs)
- : handle() {
- swap(rhs);
-}
-
-handle::~handle() {
- release();
- p_->clear();
-}
-
-void handle::swap(handle& rhs) {
- std::swap(p_, rhs.p_);
-}
-
-handle& handle::operator=(handle rhs) {
- swap(rhs);
- return *this;
-}
-
-bool handle::valid() const noexcept {
- return impl(p_)->m_ != nullptr;
-}
-
-std::size_t handle::size() const noexcept {
- return impl(p_)->s_;
-}
-
-char const * handle::name() const noexcept {
- return impl(p_)->n_.c_str();
-}
-
-std::int32_t handle::ref() const noexcept {
- return shm::get_ref(impl(p_)->id_);
-}
-
-void handle::sub_ref() noexcept {
- shm::sub_ref(impl(p_)->id_);
-}
-
-bool handle::acquire(char const * name, std::size_t size, unsigned mode) {
- release();
- impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode);
- impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
- return valid();
-}
-
-std::int32_t handle::release() {
- if (impl(p_)->id_ == nullptr) return -1;
- return shm::release(detach());
-}
-
-void* handle::get() const {
- return impl(p_)->m_;
-}
-
-void handle::attach(id_t id) {
- if (id == nullptr) return;
- release();
- impl(p_)->id_ = id;
- impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
-}
-
-id_t handle::detach() {
- auto old = impl(p_)->id_;
- impl(p_)->id_ = nullptr;
- impl(p_)->m_ = nullptr;
- impl(p_)->s_ = 0;
- impl(p_)->n_.clear();
- return old;
-}
-
-} // namespace shm
-} // namespace ipc
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GenerateImg.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GenerateImg.py
deleted file mode 100644
index 0c6dee48f2d6d9ac37c00ee77c7a46c2cc6b25e1..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/GenerateImg.py
+++ /dev/null
@@ -1,50 +0,0 @@
-
-import os
-import numpy as np
-import argparse
-from manipulate import Manipulator
-
-from PIL import Image
-#%%
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(description='Generate sample images and W+ latents for a given dataset.')
-
- parser.add_argument('--dataset_name',type=str,default='ffhq',
- help='name of dataset, for example, ffhq')
-
- args = parser.parse_args()
- dataset_name=args.dataset_name
-
- if not os.path.isdir('./data/'+dataset_name):
- os.system('mkdir ./data/'+dataset_name)
- #%%
- M=Manipulator(dataset_name=dataset_name)
- np.set_printoptions(suppress=True)
- print(M.dataset_name)
- #%%
-
- M.img_index=0
- M.num_images=50
- M.alpha=[0]
- M.step=1
- lindex,bname=0,0
-
- M.manipulate_layers=[lindex]
- codes,out=M.EditOneC(bname)
- #%%
-
- for i in range(len(out)):
- img=out[i,0]
- img=Image.fromarray(img)
- img.save('./data/'+dataset_name+'/'+str(i)+'.jpg')
- #%%
- w=np.load('./npy/'+dataset_name+'/W.npy')
-
- tmp=w[:M.num_images]
- tmp=tmp[:,None,:]
- tmp=np.tile(tmp,(1,M.Gs.components.synthesis.input_shape[1],1))
-
- np.save('./data/'+dataset_name+'/w_plus.npy',tmp)
-
-
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/consistency_models.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/consistency_models.md
deleted file mode 100644
index 26f73e88b4099a47863277401ce8765e1ad53d09..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/consistency_models.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Consistency Models
-
-Consistency Models were proposed in [Consistency Models](https://huggingface.co/papers/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
-
-The abstract from the paper is:
-
-*Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. *
-
-The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models), and additional checkpoints are available at [openai](https://huggingface.co/openai).
-
-The pipeline was contributed by [dg845](https://github.com/dg845) and [ayushtues](https://huggingface.co/ayushtues). ❤️
-
-## Tips
-
-For an additional speed-up, use `torch.compile` to generate multiple images in <1 second:
-
-```diff
- import torch
- from diffusers import ConsistencyModelPipeline
-
- device = "cuda"
- # Load the cd_bedroom256_lpips checkpoint.
- model_id_or_path = "openai/diffusers-cd_bedroom256_lpips"
- pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
- pipe.to(device)
-
-+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
-
- # Multistep sampling
- # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
- # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83
- for _ in range(10):
- image = pipe(timesteps=[17, 0]).images[0]
- image.show()
-```
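-
-Consistency models also support single-step generation. A minimal sketch (assuming the same `openai/diffusers-cd_bedroom256_lpips` checkpoint and a CUDA device are available) might look like:
-
-```python
-import torch
-from diffusers import ConsistencyModelPipeline
-
-# Load the same cd_bedroom256_lpips checkpoint as above.
-pipe = ConsistencyModelPipeline.from_pretrained(
-    "openai/diffusers-cd_bedroom256_lpips", torch_dtype=torch.float16
-).to("cuda")
-
-# Onestep sampling: a single denoising step maps noise directly to an image.
-image = pipe(num_inference_steps=1).images[0]
-image.save("cd_bedroom256_lpips_onestep.png")
-```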
-
-## ConsistencyModelPipeline
-[[autodoc]] ConsistencyModelPipeline
- - all
- - __call__
-
-## ImagePipelineOutput
-[[autodoc]] pipelines.ImagePipelineOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py
deleted file mode 100644
index 09b225b065819ea12c84bc278ab0bf51888fdf0b..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py
+++ /dev/null
@@ -1,598 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import warnings
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-from transformers import CLIPImageProcessor
-
-from ...image_processor import VaeImageProcessor
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from ...utils import logging, randn_tensor
-from ..pipeline_utils import DiffusionPipeline
-from ..stable_diffusion import StableDiffusionPipelineOutput
-from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from .image_encoder import PaintByExampleImageEncoder
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def prepare_mask_and_masked_image(image, mask):
- """
- Prepares a pair (image, mask) to be consumed by the Paint by Example pipeline. This means that those inputs will be
- converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for the
- ``image`` and ``1`` for the ``mask``.
-
- The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be
- binarized (``mask > 0.5``) and cast to ``torch.float32`` too.
-
- Args:
- image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint.
- It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width``
- ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``.
- mask (_type_): The mask to apply to the image, i.e. regions to inpaint.
- It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width``
- ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``.
-
-
- Raises:
-        ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range.
-        ValueError: ``torch.Tensor`` mask should be in the ``[0, 1]`` range.
-        ValueError: ``mask`` and ``image`` should have the same spatial dimensions.
-        TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not (or the other way around).
-
- Returns:
- tuple[torch.Tensor]: The pair (mask, masked_image) as ``torch.Tensor`` with 4
- dimensions: ``batch x channels x height x width``.
- """
- if isinstance(image, torch.Tensor):
- if not isinstance(mask, torch.Tensor):
-            raise TypeError(f"`image` is a torch.Tensor but `mask` (type: {type(mask)}) is not")
-
- # Batch single image
- if image.ndim == 3:
- assert image.shape[0] == 3, "Image outside a batch should be of shape (3, H, W)"
- image = image.unsqueeze(0)
-
- # Batch and add channel dim for single mask
- if mask.ndim == 2:
- mask = mask.unsqueeze(0).unsqueeze(0)
-
- # Batch single mask or add channel dim
- if mask.ndim == 3:
- # Batched mask
- if mask.shape[0] == image.shape[0]:
- mask = mask.unsqueeze(1)
- else:
- mask = mask.unsqueeze(0)
-
- assert image.ndim == 4 and mask.ndim == 4, "Image and Mask must have 4 dimensions"
- assert image.shape[-2:] == mask.shape[-2:], "Image and Mask must have the same spatial dimensions"
- assert image.shape[0] == mask.shape[0], "Image and Mask must have the same batch size"
- assert mask.shape[1] == 1, "Mask image must have a single channel"
-
- # Check image is in [-1, 1]
- if image.min() < -1 or image.max() > 1:
- raise ValueError("Image should be in [-1, 1] range")
-
- # Check mask is in [0, 1]
- if mask.min() < 0 or mask.max() > 1:
- raise ValueError("Mask should be in [0, 1] range")
-
- # paint-by-example inverses the mask
- mask = 1 - mask
-
- # Binarize mask
- mask[mask < 0.5] = 0
- mask[mask >= 0.5] = 1
-
- # Image as float32
- image = image.to(dtype=torch.float32)
- elif isinstance(mask, torch.Tensor):
-        raise TypeError(f"`mask` is a torch.Tensor but `image` (type: {type(image)}) is not")
- else:
- if isinstance(image, PIL.Image.Image):
- image = [image]
-
- image = np.concatenate([np.array(i.convert("RGB"))[None, :] for i in image], axis=0)
- image = image.transpose(0, 3, 1, 2)
- image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
-
- # preprocess mask
- if isinstance(mask, PIL.Image.Image):
- mask = [mask]
-
- mask = np.concatenate([np.array(m.convert("L"))[None, None, :] for m in mask], axis=0)
- mask = mask.astype(np.float32) / 255.0
-
- # paint-by-example inverses the mask
- mask = 1 - mask
-
- mask[mask < 0.5] = 0
- mask[mask >= 0.5] = 1
- mask = torch.from_numpy(mask)
-
- masked_image = image * mask
-
- return mask, masked_image
-
-
-class PaintByExamplePipeline(DiffusionPipeline):
- r"""
-    <Tip warning={true}>
-
-    🧪 This is an experimental feature!
-
-    </Tip>
-
- Pipeline for image-guided image inpainting using Stable Diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- image_encoder ([`PaintByExampleImageEncoder`]):
- Encodes the example input image. The `unet` is conditioned on the example image instead of a text prompt.
- tokenizer ([`~transformers.CLIPTokenizer`]):
- A `CLIPTokenizer` to tokenize text.
- unet ([`UNet2DConditionModel`]):
- A `UNet2DConditionModel` to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
- about a model's potential harms.
- feature_extractor ([`~transformers.CLIPImageProcessor`]):
- A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
-
- """
- # TODO: feature_extractor is required to encode initial images (if they are in PIL format),
- # we should give a descriptive message if the pipeline doesn't have one.
- _optional_components = ["safety_checker"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- image_encoder: PaintByExampleImageEncoder,
- unet: UNet2DConditionModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- requires_safety_checker: bool = False,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- image_encoder=image_encoder,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
- def run_safety_checker(self, image, device, dtype):
- if self.safety_checker is None:
- has_nsfw_concept = None
- else:
- if torch.is_tensor(image):
- feature_extractor_input = self.image_processor.postprocess(image, output_type="pil")
- else:
- feature_extractor_input = self.image_processor.numpy_to_pil(image)
- safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device)
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
- )
- return image, has_nsfw_concept
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- warnings.warn(
- "The decode_latents method is deprecated and will be removed in a future version. Please"
- " use VaeImageProcessor instead",
- FutureWarning,
- )
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents, return_dict=False)[0]
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline.check_inputs
- def check_inputs(self, image, height, width, callback_steps):
- if (
- not isinstance(image, torch.Tensor)
- and not isinstance(image, PIL.Image.Image)
- and not isinstance(image, list)
- ):
- raise ValueError(
- "`image` has to be of type `torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]` but is"
- f" {type(image)}"
- )
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint.StableDiffusionInpaintPipeline.prepare_mask_latents
- def prepare_mask_latents(
- self, mask, masked_image, batch_size, height, width, dtype, device, generator, do_classifier_free_guidance
- ):
- # resize the mask to latents shape as we concatenate the mask to the latents
- # we do that before converting to dtype to avoid breaking in case we're using cpu_offload
- # and half precision
- mask = torch.nn.functional.interpolate(
- mask, size=(height // self.vae_scale_factor, width // self.vae_scale_factor)
- )
- mask = mask.to(device=device, dtype=dtype)
-
- masked_image = masked_image.to(device=device, dtype=dtype)
- masked_image_latents = self._encode_vae_image(masked_image, generator=generator)
-
- # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method
- if mask.shape[0] < batch_size:
- if not batch_size % mask.shape[0] == 0:
- raise ValueError(
- "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to"
- f" a total batch size of {batch_size}, but {mask.shape[0]} masks were passed. Make sure the number"
- " of masks that you pass is divisible by the total requested batch size."
- )
- mask = mask.repeat(batch_size // mask.shape[0], 1, 1, 1)
- if masked_image_latents.shape[0] < batch_size:
- if not batch_size % masked_image_latents.shape[0] == 0:
- raise ValueError(
- "The passed images and the required batch size don't match. Images are supposed to be duplicated"
- f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed."
- " Make sure the number of images that you pass is divisible by the total requested batch size."
- )
- masked_image_latents = masked_image_latents.repeat(batch_size // masked_image_latents.shape[0], 1, 1, 1)
-
- mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask
- masked_image_latents = (
- torch.cat([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
- )
-
- # aligning device to prevent device errors when concating it with the latent model input
- masked_image_latents = masked_image_latents.to(device=device, dtype=dtype)
- return mask, masked_image_latents
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint.StableDiffusionInpaintPipeline._encode_vae_image
- def _encode_vae_image(self, image: torch.Tensor, generator: torch.Generator):
- if isinstance(generator, list):
- image_latents = [
- self.vae.encode(image[i : i + 1]).latent_dist.sample(generator=generator[i])
- for i in range(image.shape[0])
- ]
- image_latents = torch.cat(image_latents, dim=0)
- else:
- image_latents = self.vae.encode(image).latent_dist.sample(generator=generator)
-
- image_latents = self.vae.config.scaling_factor * image_latents
-
- return image_latents
-
- def _encode_image(self, image, device, num_images_per_prompt, do_classifier_free_guidance):
- dtype = next(self.image_encoder.parameters()).dtype
-
- if not isinstance(image, torch.Tensor):
- image = self.feature_extractor(images=image, return_tensors="pt").pixel_values
-
- image = image.to(device=device, dtype=dtype)
- image_embeddings, negative_prompt_embeds = self.image_encoder(image, return_uncond_vector=True)
-
- # duplicate image embeddings for each generation per prompt, using mps friendly method
- bs_embed, seq_len, _ = image_embeddings.shape
- image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1)
- image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- if do_classifier_free_guidance:
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, image_embeddings.shape[0], 1)
- negative_prompt_embeds = negative_prompt_embeds.view(bs_embed * num_images_per_prompt, 1, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- image_embeddings = torch.cat([negative_prompt_embeds, image_embeddings])
-
- return image_embeddings
-
- @torch.no_grad()
- def __call__(
- self,
- example_image: Union[torch.FloatTensor, PIL.Image.Image],
- image: Union[torch.FloatTensor, PIL.Image.Image],
- mask_image: Union[torch.FloatTensor, PIL.Image.Image],
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 5.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- ):
- r"""
- The call function to the pipeline for generation.
-
- Args:
- example_image (`torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`):
- An example image to guide image generation.
- image (`torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`):
- `Image` or tensor representing an image batch to be inpainted (parts of the image are masked out with
- `mask_image` and repainted according to `prompt`).
- mask_image (`torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`):
- `Image` or tensor representing an image batch to mask `image`. White pixels in the mask are repainted,
- while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a single channel
- (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the
- expected shape would be `(B, H, W, 1)`.
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 5.0):
- A higher guidance scale value encourages the model to generate images closely linked to the text
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide what to not include in image generation. If not defined, you need to
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor is generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that calls every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
-
- Example:
-
- ```py
- >>> import PIL
- >>> import requests
- >>> import torch
- >>> from io import BytesIO
- >>> from diffusers import PaintByExamplePipeline
-
-
- >>> def download_image(url):
- ... response = requests.get(url)
- ... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
-
-
- >>> img_url = (
- ... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
- ... )
- >>> mask_url = (
- ... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
- ... )
- >>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"
-
- >>> init_image = download_image(img_url).resize((512, 512))
- >>> mask_image = download_image(mask_url).resize((512, 512))
- >>> example_image = download_image(example_url).resize((512, 512))
-
- >>> pipe = PaintByExamplePipeline.from_pretrained(
- ... "Fantasy-Studio/Paint-by-Example",
- ... torch_dtype=torch.float16,
- ... )
- >>> pipe = pipe.to("cuda")
-
- >>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
- >>> image
- ```
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
- otherwise a `tuple` is returned where the first element is a list with the generated images and the
- second element is a list of `bool`s indicating whether the corresponding generated image contains
- "not-safe-for-work" (nsfw) content.
- """
- # 1. Define call parameters
- if isinstance(image, PIL.Image.Image):
- batch_size = 1
- elif isinstance(image, list):
- batch_size = len(image)
- else:
- batch_size = image.shape[0]
- device = self._execution_device
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 2. Preprocess mask and image
- mask, masked_image = prepare_mask_and_masked_image(image, mask_image)
- height, width = masked_image.shape[-2:]
-
- # 3. Check inputs
- self.check_inputs(example_image, height, width, callback_steps)
-
- # 4. Encode input image
- image_embeddings = self._encode_image(
- example_image, device, num_images_per_prompt, do_classifier_free_guidance
- )
-
- # 5. set timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 6. Prepare latent variables
- num_channels_latents = self.vae.config.latent_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- image_embeddings.dtype,
- device,
- generator,
- latents,
- )
-
- # 7. Prepare mask latent variables
- mask, masked_image_latents = self.prepare_mask_latents(
- mask,
- masked_image,
- batch_size * num_images_per_prompt,
- height,
- width,
- image_embeddings.dtype,
- device,
- generator,
- do_classifier_free_guidance,
- )
-
- # 8. Check that sizes of mask, masked image and latents match
- num_channels_mask = mask.shape[1]
- num_channels_masked_image = masked_image_latents.shape[1]
- if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels:
- raise ValueError(
- f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects"
- f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +"
- f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}"
- f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of"
- " `pipeline.unet` or your `mask_image` or `image` input."
- )
-
- # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 10. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- # concat latents, mask, masked_image_latents in the channel dimension
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
- latent_model_input = torch.cat([latent_model_input, masked_image_latents, mask], dim=1)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- if not output_type == "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- image, has_nsfw_concept = self.run_safety_checker(image, device, image_embeddings.dtype)
- else:
- image = latents
- has_nsfw_concept = None
-
- if has_nsfw_concept is None:
- do_denormalize = [True] * image.shape[0]
- else:
- do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
-
- image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/score_sde_ve/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/score_sde_ve/__init__.py
deleted file mode 100644
index c7c2a85c067b707c155e78a3c8b84562999134e7..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/score_sde_ve/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .pipeline_score_sde_ve import ScoreSdeVePipeline
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_onnx_common.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_onnx_common.py
deleted file mode 100644
index 575ecd0075318e8ec62ab7cd76bff5b0b1ca82ad..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_onnx_common.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from diffusers.utils.testing_utils import require_onnxruntime
-
-
-@require_onnxruntime
-class OnnxPipelineTesterMixin:
- """
- This mixin is designed to be used with unittest.TestCase classes.
- It provides a set of common tests for each ONNXRuntime pipeline, e.g. saving and loading the pipeline,
- equivalence of dict and tuple outputs, etc.
- """
-
- pass
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py
deleted file mode 100644
index 194f660f7055308b41c47c14a35c41f3b2b1014b..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py
+++ /dev/null
@@ -1,87 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import tempfile
-import unittest
-
-import numpy as np
-import torch
-
-from diffusers import VersatileDiffusionTextToImagePipeline
-from diffusers.utils.testing_utils import nightly, require_torch_gpu, torch_device
-
-
-torch.backends.cuda.matmul.allow_tf32 = False
-
-
-class VersatileDiffusionTextToImagePipelineFastTests(unittest.TestCase):
- pass
-
-
-@nightly
-@require_torch_gpu
-class VersatileDiffusionTextToImagePipelineIntegrationTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_remove_unused_weights_save_load(self):
- pipe = VersatileDiffusionTextToImagePipeline.from_pretrained("shi-labs/versatile-diffusion")
- # remove text_unet
- pipe.remove_unused_weights()
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger "
- generator = torch.manual_seed(0)
- image = pipe(
- prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=2, output_type="numpy"
- ).images
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- pipe.save_pretrained(tmpdirname)
- pipe = VersatileDiffusionTextToImagePipeline.from_pretrained(tmpdirname)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- generator = generator.manual_seed(0)
- new_image = pipe(
- prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=2, output_type="numpy"
- ).images
-
- assert np.abs(image - new_image).sum() < 1e-5, "Models don't have the same forward pass"
-
- def test_inference_text2img(self):
- pipe = VersatileDiffusionTextToImagePipeline.from_pretrained(
- "shi-labs/versatile-diffusion", torch_dtype=torch.float16
- )
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger "
- generator = torch.manual_seed(0)
- image = pipe(
- prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=50, output_type="numpy"
- ).images
-
- image_slice = image[0, 253:256, 253:256, -1]
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.3367, 0.3169, 0.2656, 0.3870, 0.4790, 0.3796, 0.4009, 0.4878, 0.4778])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
diff --git a/spaces/Andy0409/text_generator/README.md b/spaces/Andy0409/text_generator/README.md
deleted file mode 100644
index 868efebd9acf962f91026d62f3a6d4d66e2e0213..0000000000000000000000000000000000000000
--- a/spaces/Andy0409/text_generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Generator
-emoji: 🚀
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/wider_face/README.md b/spaces/Andy1621/uniformer_image_detection/configs/wider_face/README.md
deleted file mode 100644
index c62e10d1862bf5a27c936e5c4d475fa85b298beb..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/wider_face/README.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# WIDER Face Dataset
-
-[DATASET]
-
-To use the WIDER Face dataset you need to download it
-and extract it to the `data/WIDERFace` folder. Annotations in the VOC format
-can be found in this [repo](https://github.com/sovrasov/wider-face-pascal-voc-annotations.git).
-You should move the annotation files from the `WIDER_train_annotations` and `WIDER_val_annotations` folders
-to the `Annotations` folders inside the corresponding `WIDER_train` and `WIDER_val` directories.
-The annotation lists `val.txt` and `train.txt` should also be copied to `data/WIDERFace` from `WIDER_train_annotations` and `WIDER_val_annotations`.
-The directory should be like this:
-
-```
-mmdetection
-├── mmdet
-├── tools
-├── configs
-├── data
-│ ├── WIDERFace
-│ │ ├── WIDER_train
-│ | │ ├──0--Parade
-│ | │ ├── ...
-│ | │ ├── Annotations
-│ │ ├── WIDER_val
-│ | │ ├──0--Parade
-│ | │ ├── ...
-│ | │ ├── Annotations
-│ │ ├── val.txt
-│ │ ├── train.txt
-
-```
-
-After that, you can train SSD300 on WIDER Face by launching training with the `ssd300_wider_face.py` config, or
-create your own config based on the one presented.
-
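-Before launching training, it can help to verify that the expected layout is in place. The snippet below is a minimal sketch (not part of mmdetection) that checks for the folders and annotation lists described above:
-
-```
-import os
-
-root = 'data/WIDERFace'
-expected = [
-    'WIDER_train/Annotations',
-    'WIDER_val/Annotations',
-    'val.txt',
-    'train.txt',
-]
-missing = [p for p in expected if not os.path.exists(os.path.join(root, p))]
-if missing:
-    raise FileNotFoundError('WIDER Face layout is incomplete, missing: {}'.format(missing))
-```
-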
-```
-@inproceedings{yang2016wider,
- Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
- Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- Title = {WIDER FACE: A Face Detection Benchmark},
- Year = {2016}
-}
-```
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/feature_relay_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/feature_relay_head.py
deleted file mode 100644
index a1cfb2ce8631d51e5c465f9bbc4164a37acc4782..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/feature_relay_head.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import kaiming_init
-from mmcv.runner import auto_fp16
-
-from mmdet.models.builder import HEADS
-
-
-@HEADS.register_module()
-class FeatureRelayHead(nn.Module):
- """Feature Relay Head used in `SCNet `_.
-
- Args:
- in_channels (int, optional): number of input channels. Default: 256.
- conv_out_channels (int, optional): number of output channels before
- classification layer. Default: 256.
- roi_feat_size (int, optional): roi feat size at box head. Default: 7.
- scale_factor (int, optional): scale factor to match roi feat size
- at mask head. Default: 2.
- """
-
- def __init__(self,
- in_channels=1024,
- out_conv_channels=256,
- roi_feat_size=7,
- scale_factor=2):
- super(FeatureRelayHead, self).__init__()
- assert isinstance(roi_feat_size, int)
-
- self.in_channels = in_channels
- self.out_conv_channels = out_conv_channels
- self.roi_feat_size = roi_feat_size
- self.out_channels = (roi_feat_size**2) * out_conv_channels
- self.scale_factor = scale_factor
- self.fp16_enabled = False
-
- self.fc = nn.Linear(self.in_channels, self.out_channels)
- self.upsample = nn.Upsample(
- scale_factor=scale_factor, mode='bilinear', align_corners=True)
-
- def init_weights(self):
- """Init weights for the head."""
- kaiming_init(self.fc)
-
- @auto_fp16()
- def forward(self, x):
- """Forward function."""
- N, in_C = x.shape
- if N > 0:
- out_C = self.out_conv_channels
- out_HW = self.roi_feat_size
- x = self.fc(x)
- x = x.reshape(N, out_C, out_HW, out_HW)
- x = self.upsample(x)
- return x
- return None
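-
-
-if __name__ == '__main__':
-    # Minimal shape-check sketch (not part of mmdet); assumes mmcv/mmdet are
-    # installed and uses the default constructor arguments above.
-    import torch
-    head = FeatureRelayHead()
-    head.init_weights()
-    feats = torch.rand(8, 1024)  # 8 RoI feature vectors from the box head
-    out = head(feats)
-    assert out.shape == (8, 256, 14, 14)  # 7x7 roi feats upsampled by scale_factor=2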
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_80k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_80k_pascal_context_59.py
deleted file mode 100644
index 02507ccb7e2f5f25014c451dcf9ba51c3a61dadc..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_80k_pascal_context_59.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py',
- '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=59),
- auxiliary_head=dict(num_classes=59),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/registry.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/registry.py
deleted file mode 100644
index a204a07fba10e614223f090d1a57cf9c4d74d4a1..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/registry.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from torch.nn.parallel import DataParallel, DistributedDataParallel
-
-from annotator.uniformer.mmcv.utils import Registry
-
-MODULE_WRAPPERS = Registry('module wrapper')
-MODULE_WRAPPERS.register_module(module=DataParallel)
-MODULE_WRAPPERS.register_module(module=DistributedDataParallel)
diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/modules/__init__.py b/spaces/Anthony7906/MengHuiMXD_GPT/modules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AriusXi/CodeGenerator/README.md b/spaces/AriusXi/CodeGenerator/README.md
deleted file mode 100644
index 7494ff5982e70de9b37b6b42bde67f9c66a4167a..0000000000000000000000000000000000000000
--- a/spaces/AriusXi/CodeGenerator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Space1
-emoji: 📊
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.14.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/freeze.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/freeze.py
deleted file mode 100644
index 354456845141eba23dce26482aa6d4196f4804de..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/freeze.py
+++ /dev/null
@@ -1,255 +0,0 @@
-import collections
-import logging
-import os
-from typing import Container, Dict, Generator, Iterable, List, NamedTuple, Optional, Set
-
-from pip._vendor.packaging.utils import canonicalize_name
-from pip._vendor.packaging.version import Version
-
-from pip._internal.exceptions import BadCommand, InstallationError
-from pip._internal.metadata import BaseDistribution, get_environment
-from pip._internal.req.constructors import (
- install_req_from_editable,
- install_req_from_line,
-)
-from pip._internal.req.req_file import COMMENT_RE
-from pip._internal.utils.direct_url_helpers import direct_url_as_pep440_direct_reference
-
-logger = logging.getLogger(__name__)
-
-
-class _EditableInfo(NamedTuple):
- requirement: str
- comments: List[str]
-
-
-def freeze(
- requirement: Optional[List[str]] = None,
- local_only: bool = False,
- user_only: bool = False,
- paths: Optional[List[str]] = None,
- isolated: bool = False,
- exclude_editable: bool = False,
- skip: Container[str] = (),
-) -> Generator[str, None, None]:
- installations: Dict[str, FrozenRequirement] = {}
-
- dists = get_environment(paths).iter_installed_distributions(
- local_only=local_only,
- skip=(),
- user_only=user_only,
- )
- for dist in dists:
- req = FrozenRequirement.from_dist(dist)
- if exclude_editable and req.editable:
- continue
- installations[req.canonical_name] = req
-
- if requirement:
- # the options that don't get turned into an InstallRequirement
- # should only be emitted once, even if the same option is in multiple
- # requirements files, so we need to keep track of what has been emitted
- # so that we don't emit it again if it's seen again
- emitted_options: Set[str] = set()
- # keep track of which files a requirement is in so that we can
- # give an accurate warning if a requirement appears multiple times.
- req_files: Dict[str, List[str]] = collections.defaultdict(list)
- for req_file_path in requirement:
- with open(req_file_path) as req_file:
- for line in req_file:
- if (
- not line.strip()
- or line.strip().startswith("#")
- or line.startswith(
- (
- "-r",
- "--requirement",
- "-f",
- "--find-links",
- "-i",
- "--index-url",
- "--pre",
- "--trusted-host",
- "--process-dependency-links",
- "--extra-index-url",
- "--use-feature",
- )
- )
- ):
- line = line.rstrip()
- if line not in emitted_options:
- emitted_options.add(line)
- yield line
- continue
-
- if line.startswith("-e") or line.startswith("--editable"):
- if line.startswith("-e"):
- line = line[2:].strip()
- else:
- line = line[len("--editable") :].strip().lstrip("=")
- line_req = install_req_from_editable(
- line,
- isolated=isolated,
- )
- else:
- line_req = install_req_from_line(
- COMMENT_RE.sub("", line).strip(),
- isolated=isolated,
- )
-
- if not line_req.name:
- logger.info(
- "Skipping line in requirement file [%s] because "
- "it's not clear what it would install: %s",
- req_file_path,
- line.strip(),
- )
- logger.info(
- " (add #egg=PackageName to the URL to avoid"
- " this warning)"
- )
- else:
- line_req_canonical_name = canonicalize_name(line_req.name)
- if line_req_canonical_name not in installations:
- # either it's not installed, or it is installed
- # but has been processed already
- if not req_files[line_req.name]:
- logger.warning(
- "Requirement file [%s] contains %s, but "
- "package %r is not installed",
- req_file_path,
- COMMENT_RE.sub("", line).strip(),
- line_req.name,
- )
- else:
- req_files[line_req.name].append(req_file_path)
- else:
- yield str(installations[line_req_canonical_name]).rstrip()
- del installations[line_req_canonical_name]
- req_files[line_req.name].append(req_file_path)
-
- # Warn about requirements that were included multiple times (in a
- # single requirements file or in different requirements files).
- for name, files in req_files.items():
- if len(files) > 1:
- logger.warning(
- "Requirement %s included multiple times [%s]",
- name,
- ", ".join(sorted(set(files))),
- )
-
- yield ("## The following requirements were added by pip freeze:")
- for installation in sorted(installations.values(), key=lambda x: x.name.lower()):
- if installation.canonical_name not in skip:
- yield str(installation).rstrip()
-
-
-def _format_as_name_version(dist: BaseDistribution) -> str:
- dist_version = dist.version
- if isinstance(dist_version, Version):
- return f"{dist.raw_name}=={dist_version}"
- return f"{dist.raw_name}==={dist_version}"
-
-
-def _get_editable_info(dist: BaseDistribution) -> _EditableInfo:
- """
- Compute and return values (req, comments) for use in
- FrozenRequirement.from_dist().
- """
- editable_project_location = dist.editable_project_location
- assert editable_project_location
- location = os.path.normcase(os.path.abspath(editable_project_location))
-
- from pip._internal.vcs import RemoteNotFoundError, RemoteNotValidError, vcs
-
- vcs_backend = vcs.get_backend_for_dir(location)
-
- if vcs_backend is None:
- display = _format_as_name_version(dist)
- logger.debug(
- 'No VCS found for editable requirement "%s" in: %r',
- display,
- location,
- )
- return _EditableInfo(
- requirement=location,
- comments=[f"# Editable install with no version control ({display})"],
- )
-
- vcs_name = type(vcs_backend).__name__
-
- try:
- req = vcs_backend.get_src_requirement(location, dist.raw_name)
- except RemoteNotFoundError:
- display = _format_as_name_version(dist)
- return _EditableInfo(
- requirement=location,
- comments=[f"# Editable {vcs_name} install with no remote ({display})"],
- )
- except RemoteNotValidError as ex:
- display = _format_as_name_version(dist)
- return _EditableInfo(
- requirement=location,
- comments=[
- f"# Editable {vcs_name} install ({display}) with either a deleted "
- f"local remote or invalid URI:",
- f"# '{ex.url}'",
- ],
- )
- except BadCommand:
- logger.warning(
- "cannot determine version of editable source in %s "
- "(%s command not found in path)",
- location,
- vcs_backend.name,
- )
- return _EditableInfo(requirement=location, comments=[])
- except InstallationError as exc:
- logger.warning("Error when trying to get requirement for VCS system %s", exc)
- else:
- return _EditableInfo(requirement=req, comments=[])
-
- logger.warning("Could not determine repository location of %s", location)
-
- return _EditableInfo(
- requirement=location,
- comments=["## !! Could not determine repository location"],
- )
-
-
-class FrozenRequirement:
- def __init__(
- self,
- name: str,
- req: str,
- editable: bool,
- comments: Iterable[str] = (),
- ) -> None:
- self.name = name
- self.canonical_name = canonicalize_name(name)
- self.req = req
- self.editable = editable
- self.comments = comments
-
- @classmethod
- def from_dist(cls, dist: BaseDistribution) -> "FrozenRequirement":
- editable = dist.editable
- if editable:
- req, comments = _get_editable_info(dist)
- else:
- comments = []
- direct_url = dist.direct_url
- if direct_url:
- # if PEP 610 metadata is present, use it
- req = direct_url_as_pep440_direct_reference(direct_url, dist.raw_name)
- else:
- # name==version requirement
- req = _format_as_name_version(dist)
-
- return cls(dist.raw_name, req, editable, comments=comments)
-
- def __str__(self) -> str:
- req = self.req
- if self.editable:
- req = f"-e {req}"
- return "\n".join(list(self.comments) + [str(req)]) + "\n"
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/themes.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/themes.py
deleted file mode 100644
index bf6db104a2c4fd4f3dc699e85f2b262c3d31e9a0..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/themes.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .default_styles import DEFAULT_STYLES
-from .theme import Theme
-
-
-DEFAULT = Theme(DEFAULT_STYLES)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/evaluation/eval.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/evaluation/eval.py
deleted file mode 100644
index 951a0920ec3d93703245562d4f76ec597e672ad9..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/evaluation/eval.py
+++ /dev/null
@@ -1,156 +0,0 @@
-import itertools
-import json
-import os
-from detectron2.structures import Boxes, BoxMode, pairwise_iou
-from detectron2.utils.file_io import PathManager
-import numpy as np
-import pycocotools.mask as mask_util
-from detectron2.evaluation.coco_evaluation import COCOEvaluator
-from detectron2.evaluation.coco_evaluation import _evaluate_predictions_on_coco
-
-
-class GRiTCOCOEvaluator(COCOEvaluator):
- def process(self, inputs, outputs):
- for input, output in zip(inputs, outputs):
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
-
- if len(prediction) > 1:
- self._predictions.append(prediction)
-
- def _eval_predictions(self, predictions, img_ids=None):
- self._logger.info("Preparing results for COCO format ...")
- coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
- tasks = self._tasks or self._tasks_from_predictions(coco_results)
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "coco_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(coco_results))
- f.flush()
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info(
- "Evaluating predictions with {} COCO API...".format(
- "unofficial" if self._use_fast_impl else "official"
- )
- )
-
- coco_results = self.convert_classname_to_id(coco_results)
-
- for task in sorted(tasks):
- assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!"
- coco_eval = (
- _evaluate_predictions_on_coco(
- self._coco_api,
- coco_results,
- task,
- kpt_oks_sigmas=self._kpt_oks_sigmas,
- use_fast_impl=self._use_fast_impl,
- img_ids=img_ids,
- max_dets_per_image=self._max_dets_per_image,
- )
- if len(coco_results) > 0
- else None # cocoapi does not handle empty results very well
- )
-
- res = self._derive_coco_results(
- coco_eval, task, class_names=self._metadata.get("thing_classes")
- )
- self._results[task] = res
-
- def convert_classname_to_id(self, results):
- outputs = []
- class_name_to_id = {}
- categories = sorted(self._coco_api.dataset['categories'], key=lambda x: x['id'])
-
- for cat in categories:
- class_name_to_id[cat['name']] = cat['id']
-
- for pred in results:
- if pred['object_descriptions'] in class_name_to_id:
- pred['category_id'] = class_name_to_id[pred['object_descriptions']]
- del pred['object_descriptions']
- outputs.append(pred)
-
- return outputs
-
-
-class GRiTVGEvaluator(COCOEvaluator):
- def process(self, inputs, outputs):
- for input, output in zip(inputs, outputs):
- assert input["image_id"] == int(input['file_name'].split('/')[-1].split('.')[0])
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(instances, input["image_id"], output_logits=True)
- h = input['height']
- w = input['width']
- scale = 720.0 / max(h, w)
- scaled_inst = []
- for inst in prediction["instances"]:
- inst['bbox'][0] = inst['bbox'][0] * scale
- inst['bbox'][1] = inst['bbox'][1] * scale
- inst['bbox'][2] = inst['bbox'][2] * scale
- inst['bbox'][3] = inst['bbox'][3] * scale
- scaled_inst.append(inst)
- if len(scaled_inst) > 0:
- prediction["instances"] = scaled_inst
- if len(prediction) > 1:
- self._predictions.append(prediction)
-
- def _eval_predictions(self, predictions, img_ids=None):
- '''
-        This is only for saving the results to a JSON file
- '''
- self._logger.info("Preparing results for COCO format ...")
- coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "vg_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(coco_results))
- f.flush()
-
-
-def instances_to_coco_json(instances, img_id, output_logits=False):
- """
- Add object_descriptions and logit (if applicable) to
- detectron2's instances_to_coco_json
- """
- num_instance = len(instances)
- if num_instance == 0:
- return []
-
- boxes = instances.pred_boxes.tensor.numpy()
- boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- boxes = boxes.tolist()
- scores = instances.scores.tolist()
- classes = instances.pred_classes.tolist()
- object_descriptions = instances.pred_object_descriptions.data
- if output_logits:
- logits = instances.logits.tolist()
-
- results = []
- for k in range(num_instance):
- result = {
- "image_id": img_id,
- "category_id": classes[k],
- "bbox": boxes[k],
- "score": scores[k],
- 'object_descriptions': object_descriptions[k],
- }
- if output_logits:
- result["logit"] = logits[k]
-
- results.append(result)
- return results
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint_provider.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint_provider.py
deleted file mode 100644
index be927719b628fcebb6a1007ee71747683c332114..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint_provider.py
+++ /dev/null
@@ -1,727 +0,0 @@
-# Copyright 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-
-"""
-NOTE: All classes and functions in this module are considered private and are
-subject to abrupt breaking changes. Please do not use them directly.
-
-To view the raw JSON that the objects in this module represent, please
-go to any `endpoint-rule-set.json` file in /botocore/data///
-or you can look at the test files in /tests/unit/data/endpoints/valid-rules/
-"""
-
-
-import logging
-import re
-from enum import Enum
-from string import Formatter
-from typing import NamedTuple
-
-from botocore import xform_name
-from botocore.compat import IPV4_RE, quote, urlparse
-from botocore.exceptions import EndpointResolutionError
-from botocore.utils import (
- ArnParser,
- InvalidArnException,
- is_valid_ipv4_endpoint_url,
- is_valid_ipv6_endpoint_url,
- lru_cache_weakref,
- normalize_url_path,
- percent_encode,
-)
-
-logger = logging.getLogger(__name__)
-
-TEMPLATE_STRING_RE = re.compile(r"\{[a-zA-Z#]+\}")
-GET_ATTR_RE = re.compile(r"(\w+)\[(\d+)\]")
-VALID_HOST_LABEL_RE = re.compile(
- r"^(?!-)[a-zA-Z\d-]{1,63}(?= len(value):
- return None
- return value[index]
- else:
- value = value[part]
- return value
-
- def format_partition_output(self, partition):
- output = partition["outputs"]
- output["name"] = partition["id"]
- return output
-
- def is_partition_match(self, region, partition):
- matches_regex = re.match(partition["regionRegex"], region) is not None
- return region in partition["regions"] or matches_regex
-
- def aws_partition(self, value):
- """Match a region string to an AWS partition.
-
- :type value: str
- :rtype: dict
- """
- partitions = self.partitions_data['partitions']
-
- if value is not None:
- for partition in partitions:
- if self.is_partition_match(value, partition):
- return self.format_partition_output(partition)
-
- # return the default partition if no matches were found
- aws_partition = partitions[0]
- return self.format_partition_output(aws_partition)
-
- def aws_parse_arn(self, value):
- """Parse and validate string for ARN components.
-
- :type value: str
- :rtype: dict
- """
- if value is None or not value.startswith("arn:"):
- return None
-
- try:
- arn_dict = ARN_PARSER.parse_arn(value)
- except InvalidArnException:
- return None
-
- # partition, resource, and service are required
- if not all(
- (arn_dict["partition"], arn_dict["service"], arn_dict["resource"])
- ):
- return None
-
- arn_dict["accountId"] = arn_dict.pop("account")
-
- resource = arn_dict.pop("resource")
- arn_dict["resourceId"] = resource.replace(":", "/").split("/")
-
- return arn_dict
-
- def is_valid_host_label(self, value, allow_subdomains):
- """Evaluates whether a value is a valid host label per
- RFC 1123. If allow_subdomains is True, split on `.` and validate
- each component separately.
-
- :type value: str
- :type allow_subdomains: bool
- :rtype: bool
- """
- if value is None or allow_subdomains is False and value.count(".") > 0:
- return False
-
- if allow_subdomains is True:
- return all(
- self.is_valid_host_label(label, False)
- for label in value.split(".")
- )
-
- return VALID_HOST_LABEL_RE.match(value) is not None
-
- def string_equals(self, value1, value2):
- """Evaluates two string values for equality.
-
- :type value1: str
- :type value2: str
- :rtype: bool
- """
- if not all(isinstance(val, str) for val in (value1, value2)):
- msg = f"Both values must be strings, not {type(value1)} and {type(value2)}."
- raise EndpointResolutionError(msg=msg)
- return value1 == value2
-
- def uri_encode(self, value):
- """Perform percent-encoding on an input string.
-
- :type value: str
-        :rtype: str
- """
- if value is None:
- return None
-
- return percent_encode(value)
-
- def parse_url(self, value):
- """Parse a URL string into components.
-
- :type value: str
- :rtype: dict
- """
- if value is None:
- return None
-
- url_components = urlparse(value)
- try:
- # url_parse may assign non-integer values to
- # `port` and will fail when accessed.
- url_components.port
- except ValueError:
- return None
-
- scheme = url_components.scheme
- query = url_components.query
- # URLs with queries are not supported
- if scheme not in ("https", "http") or len(query) > 0:
- return None
-
- path = url_components.path
- normalized_path = quote(normalize_url_path(path))
- if not normalized_path.endswith("/"):
- normalized_path = f"{normalized_path}/"
-
- return {
- "scheme": scheme,
- "authority": url_components.netloc,
- "path": path,
- "normalizedPath": normalized_path,
- "isIp": is_valid_ipv4_endpoint_url(value)
- or is_valid_ipv6_endpoint_url(value),
- }
-
- def boolean_equals(self, value1, value2):
- """Evaluates two boolean values for equality.
-
- :type value1: bool
- :type value2: bool
- :rtype: bool
- """
- if not all(isinstance(val, bool) for val in (value1, value2)):
- msg = f"Both arguments must be bools, not {type(value1)} and {type(value2)}."
- raise EndpointResolutionError(msg=msg)
- return value1 is value2
-
- def is_ascii(self, value):
- """Evaluates if a string only contains ASCII characters.
-
- :type value: str
- :rtype: bool
- """
- try:
- value.encode("ascii")
- return True
- except UnicodeEncodeError:
- return False
-
- def substring(self, value, start, stop, reverse):
- """Computes a substring given the start index and end index. If `reverse` is
- True, slice the string from the end instead.
-
- :type value: str
- :type start: int
-        :type stop: int
- :type reverse: bool
- :rtype: str
- """
- if not isinstance(value, str):
- msg = f"Input must be a string, not {type(value)}."
- raise EndpointResolutionError(msg=msg)
- if start >= stop or len(value) < stop or not self.is_ascii(value):
- return None
-
- if reverse is True:
- r_start = len(value) - stop
- r_stop = len(value) - start
- return value[r_start:r_stop]
-
- return value[start:stop]
-
- def _not(self, value):
- """A function implementation of the logical operator `not`.
-
- :type value: Any
- :rtype: bool
- """
- return not value
-
- def aws_is_virtual_hostable_s3_bucket(self, value, allow_subdomains):
- """Evaluates whether a value is a valid bucket name for virtual host
- style bucket URLs. To pass, the value must meet the following criteria:
- 1. is_valid_host_label(value) is True
- 2. length between 3 and 63 characters (inclusive)
- 3. does not contain uppercase characters
- 4. is not formatted as an IP address
-
- If allow_subdomains is True, split on `.` and validate
- each component separately.
-
- :type value: str
- :type allow_subdomains: bool
- :rtype: bool
- """
- if (
- value is None
- or len(value) < 3
- or value.lower() != value
- or IPV4_RE.match(value) is not None
- ):
- return False
-
- if allow_subdomains is True:
- return all(
- self.aws_is_virtual_hostable_s3_bucket(label, False)
- for label in value.split(".")
- )
-
- return self.is_valid_host_label(value, allow_subdomains=False)
-
-
-# maintains backwards compatibility as `Library` was misspelled
-# in earlier versions
-RuleSetStandardLibary = RuleSetStandardLibrary
-
-
-class BaseRule:
- """Base interface for individual endpoint rules."""
-
- def __init__(self, conditions, documentation=None):
- self.conditions = conditions
- self.documentation = documentation
-
- def evaluate(self, scope_vars, rule_lib):
- raise NotImplementedError()
-
- def evaluate_conditions(self, scope_vars, rule_lib):
- """Determine if all conditions in a rule are met.
-
- :type scope_vars: dict
- :type rule_lib: RuleSetStandardLibrary
- :rtype: bool
- """
- for func_signature in self.conditions:
- result = rule_lib.call_function(func_signature, scope_vars)
- if result is False or result is None:
- return False
- return True
-
-
-class RuleSetEndpoint(NamedTuple):
- """A resolved endpoint object returned by a rule."""
-
- url: str
- properties: dict
- headers: dict
-
-
-class EndpointRule(BaseRule):
- def __init__(self, endpoint, **kwargs):
- super().__init__(**kwargs)
- self.endpoint = endpoint
-
- def evaluate(self, scope_vars, rule_lib):
- """Determine if conditions are met to provide a valid endpoint.
-
- :type scope_vars: dict
- :rtype: RuleSetEndpoint
- """
- if self.evaluate_conditions(scope_vars, rule_lib):
- url = rule_lib.resolve_value(self.endpoint["url"], scope_vars)
- properties = self.resolve_properties(
- self.endpoint.get("properties", {}),
- scope_vars,
- rule_lib,
- )
- headers = self.resolve_headers(scope_vars, rule_lib)
- return RuleSetEndpoint(
- url=url, properties=properties, headers=headers
- )
-
- return None
-
- def resolve_properties(self, properties, scope_vars, rule_lib):
- """Traverse `properties` attribute, resolving any template strings.
-
- :type properties: dict/list/str
- :type scope_vars: dict
- :type rule_lib: RuleSetStandardLibrary
- :rtype: dict
- """
- if isinstance(properties, list):
- return [
- self.resolve_properties(prop, scope_vars, rule_lib)
- for prop in properties
- ]
- elif isinstance(properties, dict):
- return {
- key: self.resolve_properties(value, scope_vars, rule_lib)
- for key, value in properties.items()
- }
- elif rule_lib.is_template(properties):
- return rule_lib.resolve_template_string(properties, scope_vars)
-
- return properties
-
- def resolve_headers(self, scope_vars, rule_lib):
- """Iterate through headers attribute resolving all values.
-
- :type scope_vars: dict
- :type rule_lib: RuleSetStandardLibrary
- :rtype: dict
- """
- resolved_headers = {}
- headers = self.endpoint.get("headers", {})
-
- for header, values in headers.items():
- resolved_headers[header] = [
- rule_lib.resolve_value(item, scope_vars) for item in values
- ]
- return resolved_headers
-
-
-class ErrorRule(BaseRule):
- def __init__(self, error, **kwargs):
- super().__init__(**kwargs)
- self.error = error
-
- def evaluate(self, scope_vars, rule_lib):
- """If an error rule's conditions are met, raise an error rule.
-
- :type scope_vars: dict
- :type rule_lib: RuleSetStandardLibrary
- :rtype: EndpointResolutionError
- """
- if self.evaluate_conditions(scope_vars, rule_lib):
- error = rule_lib.resolve_value(self.error, scope_vars)
- raise EndpointResolutionError(msg=error)
- return None
-
-
-class TreeRule(BaseRule):
- """A tree rule is non-terminal meaning it will never be returned to a provider.
- Additionally this means it has no attributes that need to be resolved.
- """
-
- def __init__(self, rules, **kwargs):
- super().__init__(**kwargs)
- self.rules = [RuleCreator.create(**rule) for rule in rules]
-
- def evaluate(self, scope_vars, rule_lib):
- """If a tree rule's conditions are met, iterate its sub-rules
- and return first result found.
-
- :type scope_vars: dict
- :type rule_lib: RuleSetStandardLibrary
- :rtype: RuleSetEndpoint/EndpointResolutionError
- """
- if self.evaluate_conditions(scope_vars, rule_lib):
- for rule in self.rules:
- # don't share scope_vars between rules
- rule_result = rule.evaluate(scope_vars.copy(), rule_lib)
- if rule_result:
- return rule_result
- return None
-
-
-class RuleCreator:
-
- endpoint = EndpointRule
- error = ErrorRule
- tree = TreeRule
-
- @classmethod
- def create(cls, **kwargs):
- """Create a rule instance from metadata.
-
- :rtype: TreeRule/EndpointRule/ErrorRule
- """
- rule_type = kwargs.pop("type")
- try:
- rule_class = getattr(cls, rule_type)
- except AttributeError:
- raise EndpointResolutionError(
- msg=f"Unknown rule type: {rule_type}. A rule must "
- "be of type tree, endpoint or error."
- )
- else:
- return rule_class(**kwargs)
-
-
-class ParameterType(Enum):
- """Translation from `type` attribute to native Python type."""
-
- string = str
- boolean = bool
-
-
-class ParameterDefinition:
- """The spec of an individual parameter defined in a RuleSet."""
-
- def __init__(
- self,
- name,
- parameter_type,
- documentation=None,
- builtIn=None,
- default=None,
- required=None,
- deprecated=None,
- ):
- self.name = name
- try:
- self.parameter_type = getattr(
- ParameterType, parameter_type.lower()
- ).value
- except AttributeError:
- raise EndpointResolutionError(
- msg=f"Unknown parameter type: {parameter_type}. "
- "A parameter must be of type string or boolean."
- )
- self.documentation = documentation
- self.builtin = builtIn
- self.default = default
- self.required = required
- self.deprecated = deprecated
-
- def validate_input(self, value):
- """Perform base validation on parameter input.
-
- :type value: Any
-        :raises: EndpointResolutionError
- """
-
- if not isinstance(value, self.parameter_type):
- raise EndpointResolutionError(
- msg=f"Value ({self.name}) is the wrong "
- f"type. Must be {self.parameter_type}."
- )
- if self.deprecated is not None:
- depr_str = f"{self.name} has been deprecated."
- msg = self.deprecated.get("message")
- since = self.deprecated.get("since")
- if msg:
- depr_str += f"\n{msg}"
- if since:
- depr_str += f"\nDeprecated since {since}."
- logger.info(depr_str)
-
- return None
-
- def process_input(self, value):
- """Process input against spec, applying default if value is None."""
- if value is None:
- if self.default is not None:
- return self.default
- if self.required:
- raise EndpointResolutionError(
- f"Cannot find value for required parameter {self.name}"
- )
- # in all other cases, the parameter will keep the value None
- else:
- self.validate_input(value)
- return value
-
-
-class RuleSet:
- """Collection of rules to derive a routable service endpoint."""
-
- def __init__(
- self, version, parameters, rules, partitions, documentation=None
- ):
- self.version = version
- self.parameters = self._ingest_parameter_spec(parameters)
- self.rules = [RuleCreator.create(**rule) for rule in rules]
- self.rule_lib = RuleSetStandardLibrary(partitions)
- self.documentation = documentation
-
- def _ingest_parameter_spec(self, parameters):
- return {
- name: ParameterDefinition(
- name,
- spec["type"],
- spec.get("documentation"),
- spec.get("builtIn"),
- spec.get("default"),
- spec.get("required"),
- spec.get("deprecated"),
- )
- for name, spec in parameters.items()
- }
-
- def process_input_parameters(self, input_params):
- """Process each input parameter against its spec.
-
- :type input_params: dict
- """
- for name, spec in self.parameters.items():
- value = spec.process_input(input_params.get(name))
- if value is not None:
- input_params[name] = value
- return None
-
- def evaluate(self, input_parameters):
- """Evaluate input parameters against rules returning first match.
-
- :type input_parameters: dict
- """
- self.process_input_parameters(input_parameters)
- for rule in self.rules:
- evaluation = rule.evaluate(input_parameters.copy(), self.rule_lib)
- if evaluation is not None:
- return evaluation
- return None
-
-
-class EndpointProvider:
- """Derives endpoints from a RuleSet for given input parameters."""
-
- def __init__(self, ruleset_data, partition_data):
- self.ruleset = RuleSet(**ruleset_data, partitions=partition_data)
-
- @lru_cache_weakref(maxsize=CACHE_SIZE)
- def resolve_endpoint(self, **input_parameters):
- """Match input parameters to a rule.
-
- :type input_parameters: dict
- :rtype: RuleSetEndpoint
- """
- params_for_error = input_parameters.copy()
- endpoint = self.ruleset.evaluate(input_parameters)
- if endpoint is None:
- param_string = "\n".join(
- [f"{key}: {value}" for key, value in params_for_error.items()]
- )
- raise EndpointResolutionError(
- msg=f"No endpoint found for parameters:\n{param_string}"
- )
- return endpoint
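-
-
-if __name__ == "__main__":
-    # Minimal usage sketch (not part of botocore) exercising a few of the
-    # standard-library helpers defined above with hypothetical inputs.
-    # Partition data is left empty because aws_partition is not called here.
-    lib = RuleSetStandardLibrary({"partitions": []})
-    print(lib.is_valid_host_label("my-bucket.example", allow_subdomains=True))  # True
-    print(lib.aws_parse_arn("arn:aws:s3:::my-bucket/key")["resourceId"])  # ['my-bucket', 'key']
-    print(lib.substring("us-west-2", 0, 2, reverse=False))  # 'us'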
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_collections.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_collections.py
deleted file mode 100644
index da9857e986d89acac3ba05a6735dc08c249bde1a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_collections.py
+++ /dev/null
@@ -1,337 +0,0 @@
-from __future__ import absolute_import
-
-try:
- from collections.abc import Mapping, MutableMapping
-except ImportError:
- from collections import Mapping, MutableMapping
-try:
- from threading import RLock
-except ImportError: # Platform-specific: No threads available
-
- class RLock:
- def __enter__(self):
- pass
-
- def __exit__(self, exc_type, exc_value, traceback):
- pass
-
-
-from collections import OrderedDict
-
-from .exceptions import InvalidHeader
-from .packages import six
-from .packages.six import iterkeys, itervalues
-
-__all__ = ["RecentlyUsedContainer", "HTTPHeaderDict"]
-
-
-_Null = object()
-
-
-class RecentlyUsedContainer(MutableMapping):
- """
- Provides a thread-safe dict-like container which maintains up to
- ``maxsize`` keys while throwing away the least-recently-used keys beyond
- ``maxsize``.
-
- :param maxsize:
- Maximum number of recent elements to retain.
-
- :param dispose_func:
-        Every time an item is evicted from the container,
-        ``dispose_func(value)`` is called on the evicted value.
- """
-
- ContainerCls = OrderedDict
-
- def __init__(self, maxsize=10, dispose_func=None):
- self._maxsize = maxsize
- self.dispose_func = dispose_func
-
- self._container = self.ContainerCls()
- self.lock = RLock()
-
- def __getitem__(self, key):
- # Re-insert the item, moving it to the end of the eviction line.
- with self.lock:
- item = self._container.pop(key)
- self._container[key] = item
- return item
-
- def __setitem__(self, key, value):
- evicted_value = _Null
- with self.lock:
- # Possibly evict the existing value of 'key'
- evicted_value = self._container.get(key, _Null)
- self._container[key] = value
-
- # If we didn't evict an existing value, we might have to evict the
- # least recently used item from the beginning of the container.
- if len(self._container) > self._maxsize:
- _key, evicted_value = self._container.popitem(last=False)
-
- if self.dispose_func and evicted_value is not _Null:
- self.dispose_func(evicted_value)
-
- def __delitem__(self, key):
- with self.lock:
- value = self._container.pop(key)
-
- if self.dispose_func:
- self.dispose_func(value)
-
- def __len__(self):
- with self.lock:
- return len(self._container)
-
- def __iter__(self):
- raise NotImplementedError(
- "Iteration over this class is unlikely to be threadsafe."
- )
-
- def clear(self):
- with self.lock:
- # Copy pointers to all values, then wipe the mapping
- values = list(itervalues(self._container))
- self._container.clear()
-
- if self.dispose_func:
- for value in values:
- self.dispose_func(value)
-
- def keys(self):
- with self.lock:
- return list(iterkeys(self._container))
-
-
-class HTTPHeaderDict(MutableMapping):
- """
- :param headers:
- An iterable of field-value pairs. Must not contain multiple field names
- when compared case-insensitively.
-
- :param kwargs:
- Additional field-value pairs to pass in to ``dict.update``.
-
- A ``dict`` like container for storing HTTP Headers.
-
- Field names are stored and compared case-insensitively in compliance with
- RFC 7230. Iteration provides the first case-sensitive key seen for each
- case-insensitive pair.
-
- Using ``__setitem__`` syntax overwrites fields that compare equal
- case-insensitively in order to maintain ``dict``'s api. For fields that
- compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add``
- in a loop.
-
- If multiple fields that are equal case-insensitively are passed to the
- constructor or ``.update``, the behavior is undefined and some will be
- lost.
-
- >>> headers = HTTPHeaderDict()
- >>> headers.add('Set-Cookie', 'foo=bar')
- >>> headers.add('set-cookie', 'baz=quxx')
- >>> headers['content-length'] = '7'
- >>> headers['SET-cookie']
- 'foo=bar, baz=quxx'
- >>> headers['Content-Length']
- '7'
- """
-
- def __init__(self, headers=None, **kwargs):
- super(HTTPHeaderDict, self).__init__()
- self._container = OrderedDict()
- if headers is not None:
- if isinstance(headers, HTTPHeaderDict):
- self._copy_from(headers)
- else:
- self.extend(headers)
- if kwargs:
- self.extend(kwargs)
-
- def __setitem__(self, key, val):
- self._container[key.lower()] = [key, val]
- return self._container[key.lower()]
-
- def __getitem__(self, key):
- val = self._container[key.lower()]
- return ", ".join(val[1:])
-
- def __delitem__(self, key):
- del self._container[key.lower()]
-
- def __contains__(self, key):
- return key.lower() in self._container
-
- def __eq__(self, other):
- if not isinstance(other, Mapping) and not hasattr(other, "keys"):
- return False
- if not isinstance(other, type(self)):
- other = type(self)(other)
- return dict((k.lower(), v) for k, v in self.itermerged()) == dict(
- (k.lower(), v) for k, v in other.itermerged()
- )
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- if six.PY2: # Python 2
- iterkeys = MutableMapping.iterkeys
- itervalues = MutableMapping.itervalues
-
- __marker = object()
-
- def __len__(self):
- return len(self._container)
-
- def __iter__(self):
- # Only provide the originally cased names
- for vals in self._container.values():
- yield vals[0]
-
- def pop(self, key, default=__marker):
- """D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
- If key is not found, d is returned if given, otherwise KeyError is raised.
- """
- # Using the MutableMapping function directly fails due to the private marker.
- # Using ordinary dict.pop would expose the internal structures.
- # So let's reinvent the wheel.
- try:
- value = self[key]
- except KeyError:
- if default is self.__marker:
- raise
- return default
- else:
- del self[key]
- return value
-
- def discard(self, key):
- try:
- del self[key]
- except KeyError:
- pass
-
- def add(self, key, val):
- """Adds a (name, value) pair, doesn't overwrite the value if it already
- exists.
-
- >>> headers = HTTPHeaderDict(foo='bar')
- >>> headers.add('Foo', 'baz')
- >>> headers['foo']
- 'bar, baz'
- """
- key_lower = key.lower()
- new_vals = [key, val]
- # Keep the common case aka no item present as fast as possible
- vals = self._container.setdefault(key_lower, new_vals)
- if new_vals is not vals:
- vals.append(val)
-
- def extend(self, *args, **kwargs):
- """Generic import function for any type of header-like object.
- Adapted version of MutableMapping.update in order to insert items
- with self.add instead of self.__setitem__
- """
- if len(args) > 1:
- raise TypeError(
- "extend() takes at most 1 positional "
- "arguments ({0} given)".format(len(args))
- )
- other = args[0] if len(args) >= 1 else ()
-
- if isinstance(other, HTTPHeaderDict):
- for key, val in other.iteritems():
- self.add(key, val)
- elif isinstance(other, Mapping):
- for key in other:
- self.add(key, other[key])
- elif hasattr(other, "keys"):
- for key in other.keys():
- self.add(key, other[key])
- else:
- for key, value in other:
- self.add(key, value)
-
- for key, value in kwargs.items():
- self.add(key, value)
-
- def getlist(self, key, default=__marker):
- """Returns a list of all the values for the named field. Returns an
- empty list if the key doesn't exist."""
- try:
- vals = self._container[key.lower()]
- except KeyError:
- if default is self.__marker:
- return []
- return default
- else:
- return vals[1:]
-
- # Backwards compatibility for httplib
- getheaders = getlist
- getallmatchingheaders = getlist
- iget = getlist
-
- # Backwards compatibility for http.cookiejar
- get_all = getlist
-
- def __repr__(self):
- return "%s(%s)" % (type(self).__name__, dict(self.itermerged()))
-
- def _copy_from(self, other):
- for key in other:
- val = other.getlist(key)
- if isinstance(val, list):
- # Don't need to convert tuples
- val = list(val)
- self._container[key.lower()] = [key] + val
-
- def copy(self):
- clone = type(self)()
- clone._copy_from(self)
- return clone
-
- def iteritems(self):
- """Iterate over all header lines, including duplicate ones."""
- for key in self:
- vals = self._container[key.lower()]
- for val in vals[1:]:
- yield vals[0], val
-
- def itermerged(self):
- """Iterate over all headers, merging duplicate ones together."""
- for key in self:
- val = self._container[key.lower()]
- yield val[0], ", ".join(val[1:])
-
- def items(self):
- return list(self.iteritems())
-
- @classmethod
- def from_httplib(cls, message): # Python 2
- """Read headers from a Python 2 httplib message object."""
- # python2.7 does not expose a proper API for exporting multiheaders
- # efficiently. This function re-reads raw lines from the message
- # object and extracts the multiheaders properly.
- obs_fold_continued_leaders = (" ", "\t")
- headers = []
-
- for line in message.headers:
- if line.startswith(obs_fold_continued_leaders):
- if not headers:
- # We received a header line that starts with OWS as described
- # in RFC-7230 S3.2.4. This indicates a multiline header, but
- # there exists no previous header to which we can attach it.
- raise InvalidHeader(
- "Header continuation with no previous header: %s" % line
- )
- else:
- key, value = headers[-1]
- headers[-1] = (key, value + " " + line.strip())
- continue
-
- key, value = line.split(":", 1)
- headers.append((key, value.strip()))
-
- return cls(headers)
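-
-
-if __name__ == "__main__":
-    # Minimal sketch of the LRU behaviour documented above, using hypothetical
-    # keys and values.
-    evicted = []
-    lru = RecentlyUsedContainer(maxsize=2, dispose_func=evicted.append)
-    lru["a"] = 1
-    lru["b"] = 2
-    lru["a"]  # re-reading "a" moves it to the end of the eviction line
-    lru["c"] = 3  # exceeds maxsize, so the least recently used key "b" is evicted
-    print(lru.keys())  # ['a', 'c']
-    print(evicted)  # [2]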
diff --git a/spaces/Boynn/AI/README.md b/spaces/Boynn/AI/README.md
deleted file mode 100644
index ef27320abc0c2cbbecd4fe060aa04b84f619ceb1..0000000000000000000000000000000000000000
--- a/spaces/Boynn/AI/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI
-emoji: 🏆
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/BridgeTower/bridgetower-video-search/bridgetower_custom.py b/spaces/BridgeTower/bridgetower-video-search/bridgetower_custom.py
deleted file mode 100644
index e2a36504c7d80df7f82f5249221f7ef56b98b769..0000000000000000000000000000000000000000
--- a/spaces/BridgeTower/bridgetower-video-search/bridgetower_custom.py
+++ /dev/null
@@ -1,183 +0,0 @@
-from collections import OrderedDict
-from typing import List, Optional, Tuple, Union
-
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from torchvision import transforms
-from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
-
-from transformers.modeling_outputs import SequenceClassifierOutput
-
-from transformers import BridgeTowerPreTrainedModel, BridgeTowerModel
-from transformers.models.bridgetower.modeling_bridgetower import BridgeTowerTextModel
-
-class LayerNorm(nn.LayerNorm):
- """Subclass torch's LayerNorm to handle fp16."""
-
- def forward(self, x: torch.Tensor):
- orig_type = x.dtype
- ret = super().forward(x.type(torch.float32))
- return ret.type(orig_type)
-
-class BridgeTowerImageFeatureExtractor(nn.Module):
- def __init__(
- self,
- patch_size=14,
- width=1024,
- resolution_after=294,
- ckpt_path=None,
- ):
- super().__init__()
-
- self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
-
- scale = width ** -0.5
- self.class_embedding = nn.Parameter(scale * torch.randn(width))
- self.positional_embedding = nn.Parameter(scale * torch.randn((resolution_after // patch_size) ** 2 + 1, width))
- self.ln_pre = LayerNorm(width)
-
- if ckpt_path is not None:
- sd = torch.load(ckpt_path)
- if 'state_dict' in sd:
- sd = sd["state_dict"]
- print(f'Loading feature extractor checkpoint from {ckpt_path}')
- self.load_state_dict(sd)
-
- def forward(self, x: torch.Tensor):
- x = self.conv1(x) # shape = [*, width, grid, grid]
- x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
- x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
- t=self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device)
- x = torch.cat([t, x], dim=1) # shape = [*, grid ** 2 + 1, width]
- x = x + self.positional_embedding.to(x.dtype)
- x = self.ln_pre(x)
- x = x.permute(1, 0, 2) # NLD -> LND
- return x
-
-
-class BridgeTowerITCHead(nn.Module):
- def __init__(self, hidden_size, embed_size):
- super().__init__()
- self.fc = nn.Linear(hidden_size, embed_size)
-
- def forward(self, x):
- x = self.fc(x)
- return x
-
-
-class _BridgeTowerTextModelWrapper(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.text_model = BridgeTowerTextModel(config)
-
- def forward(self, **kwargs):
- return self.text_model(**kwargs)
-
-
-class BridgeTowerTextFeatureExtractor(BridgeTowerPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.bridgetower = _BridgeTowerTextModelWrapper(config.text_config)
- self.itc_text_head = BridgeTowerITCHead(config.hidden_size, config.contrastive_hidden_size)
-
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- labels: Optional[torch.LongTensor] = None,
- ):
-
- outputs = self.bridgetower(input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True)
- final_hidden_cls = outputs.hidden_states[-1][:,0,:]
- final_hidden_cls = F.normalize(self.itc_text_head(final_hidden_cls), dim=-1, p=2)
-
- return final_hidden_cls
-
-
-class BridgeTowerForITC(BridgeTowerPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.bridgetower = BridgeTowerModel(config)
-
- self.itc_text_head = BridgeTowerITCHead(config.hidden_size, config.contrastive_hidden_size)
- self.itc_image_head = BridgeTowerITCHead(config.hidden_size, config.contrastive_hidden_size)
- self.itc_cross_modal_head = BridgeTowerITCHead(config.hidden_size * 2, config.contrastive_hidden_size)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- pixel_values: Optional[torch.FloatTensor] = None,
- pixel_mask: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- image_embeds: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- labels: Optional[torch.LongTensor] = None,
- ) -> Union[SequenceClassifierOutput, Tuple[torch.FloatTensor]]:
-
- assert output_hidden_states, 'output_hidden_states should be set to True for BridgeTowerForITC'
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.bridgetower(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- pixel_values=pixel_values,
- pixel_mask=pixel_mask,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- image_embeds=image_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- pooler_output = outputs.pooler_output if return_dict else outputs[2]
-
- hidden_states_txt, hidden_states_img, hidden_states_cross_modal = outputs.hidden_states
-
- final_hidden_txt = hidden_states_txt[-1]
- final_hidden_img = hidden_states_img[-1]
-
- image_embeds_with_ln = self.bridgetower.vision_model.visual.forward_post(final_hidden_img)
- image_token_type_embeddings = self.bridgetower.token_type_embeddings(
- torch.full((1,), 1, dtype=torch.long, device=self.bridgetower.token_type_embeddings.weight.device)
- ).expand_as(image_embeds_with_ln)
-
- final_hidden_img = (
- self.bridgetower.cross_modal_image_transform(image_embeds_with_ln)
- + image_token_type_embeddings
- )
-
- final_hidden_txt = F.normalize(self.itc_text_head(final_hidden_txt[:,0,:]), dim=-1, p=2)
- final_hidden_img = F.normalize(self.itc_image_head(final_hidden_img[:,0,:]), dim=-1, p=2)
- final_hidden_cross = F.normalize(self.itc_cross_modal_head(pooler_output), dim=-1, p=2)
-
- logits = torch.stack([final_hidden_txt, final_hidden_img, final_hidden_cross], dim=-2)
-
- if not return_dict:
- return tuple(logits)
-
- return SequenceClassifierOutput(
- loss=None,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
diff --git a/spaces/CALM/Dashboard/streamlit_observable/frontend/build/service-worker.js b/spaces/CALM/Dashboard/streamlit_observable/frontend/build/service-worker.js
deleted file mode 100644
index dc58040d0d0c083e829902c37df2ba329abb09eb..0000000000000000000000000000000000000000
--- a/spaces/CALM/Dashboard/streamlit_observable/frontend/build/service-worker.js
+++ /dev/null
@@ -1,39 +0,0 @@
-/**
- * Welcome to your Workbox-powered service worker!
- *
- * You'll need to register this file in your web app and you should
- * disable HTTP caching for this file too.
- * See https://goo.gl/nhQhGp
- *
- * The rest of the code is auto-generated. Please don't update this file
- * directly; instead, make changes to your Workbox build configuration
- * and re-run your build process.
- * See https://goo.gl/2aRDsh
- */
-
-importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");
-
-importScripts(
- "./precache-manifest.2e1db2924cb1e112608cee049b0d33cc.js"
-);
-
-self.addEventListener('message', (event) => {
- if (event.data && event.data.type === 'SKIP_WAITING') {
- self.skipWaiting();
- }
-});
-
-workbox.core.clientsClaim();
-
-/**
- * The workboxSW.precacheAndRoute() method efficiently caches and responds to
- * requests for URLs in the manifest.
- * See https://goo.gl/S9QRab
- */
-self.__precacheManifest = [].concat(self.__precacheManifest || []);
-workbox.precaching.precacheAndRoute(self.__precacheManifest, {});
-
-workbox.routing.registerNavigationRoute(workbox.precaching.getCacheKeyForURL("./index.html"), {
-
- blacklist: [/^\/_/,/\/[^/?]+\.[^/]+$/],
-});
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/evaluation.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/evaluation.md
deleted file mode 100644
index a0c21d0a3fe1313208a57cef2c786d60d904e9e3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/evaluation.md
+++ /dev/null
@@ -1,43 +0,0 @@
-
-# Evaluation
-
-Evaluation is a process that takes a number of input/output pairs and aggregates them.
-You can always [use the model](models.html) directly and just parse its inputs/outputs manually to perform
-evaluation.
-Alternatively, evaluation is implemented in detectron2 using the [DatasetEvaluator](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluator)
-interface.
-
-Detectron2 includes a few `DatasetEvaluator`s that compute metrics using standard dataset-specific
-APIs (e.g., COCO, LVIS).
-You can also implement your own `DatasetEvaluator` that performs other jobs
-using the input/output pairs.
-For example, to count how many instances are detected on the validation set:
-
-```python
-class Counter(DatasetEvaluator):
- def reset(self):
- self.count = 0
- def process(self, inputs, outputs):
- for output in outputs:
- self.count += len(output["instances"])
- def evaluate(self):
- # save self.count somewhere, or print it, or return it.
- return {"count": self.count}
-```
-
-Once you have some `DatasetEvaluator`, you can run it with
-[inference_on_dataset](../modules/evaluation.html#detectron2.evaluation.inference_on_dataset).
-For example,
-
-```python
-val_results = inference_on_dataset(
- model,
- val_data_loader,
- DatasetEvaluators([COCOEvaluator(...), Counter()]))
-```
-Compared to running the evaluation manually using the model, the benefit of this function is that
-you can merge evaluators together using [DatasetEvaluators](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluators).
-In this way you can run all evaluations without having to go through the dataset multiple times.
-
-The `inference_on_dataset` function also provides accurate speed benchmarks for the
-given model and dataset.
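-
-For a quick end-to-end run, a minimal sketch (assuming a registered dataset named `coco_2017_val`,
-a built `model`, and a `cfg`; note that the exact `COCOEvaluator` signature varies across
-detectron2 versions) might look like:
-
-```python
-from detectron2.data import build_detection_test_loader
-from detectron2.evaluation import COCOEvaluator, DatasetEvaluators, inference_on_dataset
-
-val_data_loader = build_detection_test_loader(cfg, "coco_2017_val")
-evaluators = DatasetEvaluators([COCOEvaluator("coco_2017_val", cfg, False, output_dir="./output"), Counter()])
-val_results = inference_on_dataset(model, val_data_loader, evaluators)
-```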
diff --git a/spaces/CVPR/LIVE/thrust/thrust/sequence.h b/spaces/CVPR/LIVE/thrust/thrust/sequence.h
deleted file mode 100644
index e92391f64e1fd7d4fd82e08b662b45d285b45fa8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/sequence.h
+++ /dev/null
@@ -1,296 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file sequence.h
- * \brief Fills a range with a sequence of numbers
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup transformations
- * \{
- */
-
-
-/*! \p sequence fills the range [first, last) with a sequence of numbers.
- *
- * For each iterator \c i in the range [first, last), this version of
- * \p sequence performs the assignment *i = (i - first).
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the sequence.
- * \param last The end of the sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- *
- * The following code snippet demonstrates how to use \p sequence to fill a range
- * with a sequence of numbers using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/sequence.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::sequence(thrust::host, A, A + 10);
- * // A is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
- * \endcode
- *
- * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no
- * guarantee on order of execution.
- *
- * \see http://www.sgi.com/tech/stl/iota.html
- */
-template<typename DerivedPolicy, typename ForwardIterator>
-__host__ __device__
-  void sequence(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last);
-
-
-/*! \p sequence fills the range [first, last) with a sequence of numbers.
- *
- * For each iterator \c i in the range [first, last), this version of
- * \p sequence performs the assignment *i = (i - first).
- *
- * \param first The beginning of the sequence.
- * \param last The end of the sequence.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- *
- * The following code snippet demonstrates how to use \p sequence to fill a range
- * with a sequence of numbers.
- *
- * \code
- * #include <thrust/sequence.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::sequence(A, A + 10);
- * // A is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
- * \endcode
- *
- * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no
- * guarantee on order of execution.
- *
- * \see http://www.sgi.com/tech/stl/iota.html
- */
-template<typename ForwardIterator>
- void sequence(ForwardIterator first,
- ForwardIterator last);
-
-
-/*! \p sequence fills the range [first, last) with a sequence of numbers.
- *
- * For each iterator \c i in the range [first, last), this version of
- * \p sequence performs the assignment *i = init + (i - first).
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the sequence.
- * \param last The end of the sequence.
- * \param init The first value of the sequence of numbers.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- * \tparam T is a model of Assignable,
- * and \p T is convertible to \p ForwardIterator's \c value_type.
- *
- * The following code snippet demonstrates how to use \p sequence to fill a range
- * with a sequence of numbers starting from the value 1 using the \p thrust::host execution
- * policy for parallelization:
- *
- * \code
- * #include <thrust/sequence.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::sequence(thrust::host, A, A + 10, 1);
- * // A is now {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * \endcode
- *
- * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no
- * guarantee on order of execution.
- *
- * \see http://www.sgi.com/tech/stl/iota.html
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename T>
-__host__ __device__
-  void sequence(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- T init);
-
-
-/*! \p sequence fills the range [first, last) with a sequence of numbers.
- *
- * For each iterator \c i in the range [first, last), this version of
- * \p sequence performs the assignment *i = init + (i - first).
- *
- * \param first The beginning of the sequence.
- * \param last The end of the sequence.
- * \param init The first value of the sequence of numbers.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- * \tparam T is a model of Assignable,
- * and \p T is convertible to \p ForwardIterator's \c value_type.
- *
- * The following code snippet demonstrates how to use \p sequence to fill a range
- * with a sequence of numbers starting from the value 1.
- *
- * \code
- * #include <thrust/sequence.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::sequence(A, A + 10, 1);
- * // A is now {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
- * \endcode
- *
- * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no
- * guarantee on order of execution.
- *
- * \see http://www.sgi.com/tech/stl/iota.html
- */
-template<typename ForwardIterator, typename T>
- void sequence(ForwardIterator first,
- ForwardIterator last,
- T init);
-
-
-/*! \p sequence fills the range [first, last) with a sequence of numbers.
- *
- * For each iterator \c i in the range [first, last), this version of
- * \p sequence performs the assignment *i = init + step * (i - first).
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the sequence.
- * \param last The end of the sequence.
- * \param init The first value of the sequence of numbers
- * \param step The difference between consecutive elements.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- * \tparam T is a model of Assignable,
- * and \p T is convertible to \p ForwardIterator's \c value_type.
- *
- * The following code snippet demonstrates how to use \p sequence to fill a range
- * with a sequence of numbers starting from the value 1 with a step size of 3 using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/sequence.h>
- * #include <thrust/execution_policy.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::sequence(thrust::host, A, A + 10, 1, 3);
- * // A is now {1, 4, 7, 10, 13, 16, 19, 22, 25, 28}
- * \endcode
- *
- * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no
- * guarantee on order of execution.
- *
- * \see http://www.sgi.com/tech/stl/iota.html
- */
-template<typename DerivedPolicy, typename ForwardIterator, typename T>
-__host__ __device__
-  void sequence(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- T init,
- T step);
-
-
-/*! \p sequence fills the range [first, last) with a sequence of numbers.
- *
- * For each iterator \c i in the range [first, last), this version of
- * \p sequence performs the assignment *i = init + step * (i - first).
- *
- * \param first The beginning of the sequence.
- * \param last The end of the sequence.
- * \param init The first value of the sequence of numbers
- * \param step The difference between consecutive elements.
- *
- * \tparam ForwardIterator is a model of Forward Iterator,
- * and \p ForwardIterator is mutable,
- * and if \c x and \c y are objects of \c ForwardIterator's \c value_type, then x + y is defined,
- * and if \c T is \p ForwardIterator's \c value_type, then T(0) is defined.
- * \tparam T is a model of Assignable,
- * and \p T is convertible to \p ForwardIterator's \c value_type.
- *
- * The following code snippet demonstrates how to use \p sequence to fill a range
- * with a sequence of numbers starting from the value 1 with a step size of 3.
- *
- * \code
- * #include <thrust/sequence.h>
- * ...
- * const int N = 10;
- * int A[N];
- * thrust::sequence(A, A + 10, 1, 3);
- * // A is now {1, 4, 7, 10, 13, 16, 19, 22, 25, 28}
- * \endcode
- *
- * \note Unlike the similar C++ STL function \c std::iota, \p sequence offers no
- * guarantee on order of execution.
- *
- * \see http://www.sgi.com/tech/stl/iota.html
- */
-template<typename ForwardIterator, typename T>
- void sequence(ForwardIterator first,
- ForwardIterator last,
- T init,
- T step);
-
-
-/*! \} // end transformations
- */
-
-
-} // end namespace thrust
-
-#include <thrust/detail/sequence.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/transform_reduce.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/transform_reduce.h
deleted file mode 100644
index e3f9494dfa6e54bbfdeb2a51fabd8bebc2188e98..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/transform_reduce.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// the purpose of this header is to #include the transform_reduce.h header
-// of the sequential, host, and device systems. It should be #included in any
-// code which uses adl to dispatch transform_reduce
-
-#include <thrust/system/detail/sequential/transform_reduce.h>
-
-// SCons can't see through the #defines below to figure out what this header
-// includes, so we fake it out by specifying all possible files we might end up
-// including inside an #if 0.
-#if 0
-#include <thrust/system/cpp/detail/transform_reduce.h>
-#include <thrust/system/cuda/detail/transform_reduce.h>
-#include <thrust/system/omp/detail/transform_reduce.h>
-#include <thrust/system/tbb/detail/transform_reduce.h>
-#endif
-
-#define __THRUST_HOST_SYSTEM_TRANSFORM_REDUCE_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/transform_reduce.h>
-#include __THRUST_HOST_SYSTEM_TRANSFORM_REDUCE_HEADER
-#undef __THRUST_HOST_SYSTEM_TRANSFORM_REDUCE_HEADER
-
-#define __THRUST_DEVICE_SYSTEM_TRANSFORM_REDUCE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/transform_reduce.h>
-#include __THRUST_DEVICE_SYSTEM_TRANSFORM_REDUCE_HEADER
-#undef __THRUST_DEVICE_SYSTEM_TRANSFORM_REDUCE_HEADER
-
diff --git a/spaces/CVPR/WALT/README.md b/spaces/CVPR/WALT/README.md
deleted file mode 100644
index 006bc76eece809c527302d681447fda8e8757e10..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: WALT DEMO
-emoji: ⚡
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/WALT/mmdet/models/backbones/__init__.py b/spaces/CVPR/WALT/mmdet/models/backbones/__init__.py
deleted file mode 100644
index 11d7de7543b04e7040facb4472121e5c0f02ecaa..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/backbones/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .swin_transformer import SwinTransformer
-from .resnet import ResNet, ResNetV1d
-__all__ = ['SwinTransformer', 'ResNet', 'ResNetV1d']
diff --git a/spaces/CVPR/WALT/mmdet/models/backbones/swin_transformer.py b/spaces/CVPR/WALT/mmdet/models/backbones/swin_transformer.py
deleted file mode 100644
index bb41850d8480a08a6a7698bf6129ffd1ab239681..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/backbones/swin_transformer.py
+++ /dev/null
@@ -1,630 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu, Yutong Lin, Yixuan Wei
-# --------------------------------------------------------
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from mmcv_custom import load_checkpoint
-from mmdet.utils import get_root_logger
-from ..builder import BACKBONES
-
-
-class Mlp(nn.Module):
- """ Multilayer perceptron."""
-
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
-
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
-
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
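-
-# --- Hedged shape sketch (illustrative, not part of the original file) ---
-# x = torch.randn(2, 56, 56, 96)                              # (B, H, W, C), H and W divisible by window_size
-# windows = window_partition(x, 7)                            # -> (2 * 8 * 8, 7, 7, 96)
-# assert torch.equal(window_reverse(windows, 7, 56, 56), x)   # partition followed by reverse is the identity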
-
-
-class WindowAttention(nn.Module):
- """ Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
-
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """ Forward function.
-
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SwinTransformerBlock(nn.Module):
- """ Swin Transformer Block.
-
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """ Patch Merging Layer
-
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
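-
-# --- Hedged shape sketch (illustrative, not part of the original file) ---
-# merge = PatchMerging(dim=96)
-# y = merge(torch.randn(2, 56 * 56, 96), 56, 56)   # (B, H*W, C) -> (2, 28 * 28, 192): H, W halved, C doubled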
-
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
-
- Args:
- dim (int): Number of feature channels
-        depth (int): Depth of this stage.
-        num_heads (int): Number of attention heads.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop=0.,
- attn_drop=0.,
- drop_path=0.,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
-
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
-
-
-@BACKBONES.register_module()
-class SwinTransformer(nn.Module):
- """ Swin Transformer backbone.
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
-
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
-            used in absolute position embedding. Default 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
-        num_heads (tuple[int]): Number of attention heads of each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.,
- attn_drop_rate=0.,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- use_checkpoint=False):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
-
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- dim=int(embed_dim * 2 ** i_layer),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
- norm_layer=norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint)
- self.layers.append(layer)
-
- num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- # add a norm layer for each output
- for i_layer in out_indices:
- layer = norm_layer(num_features[i_layer])
- layer_name = f'norm{i_layer}'
- self.add_module(layer_name, layer)
-
- self._freeze_stages()
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- def _init_weights(m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- if isinstance(pretrained, str):
- self.apply(_init_weights)
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- self.apply(_init_weights)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = []
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
- if i in self.out_indices:
- norm_layer = getattr(self, f'norm{i}')
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs.append(out)
-
- return tuple(outs)
-
- def train(self, mode=True):
-        """Convert the model into training mode while keeping layers frozen."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
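-
-# --- Hedged usage sketch (illustrative, not part of the original file; assumes mmcv_custom/mmdet import cleanly) ---
-# backbone = SwinTransformer(embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24])
-# feats = backbone(torch.randn(1, 3, 224, 224))
-# [tuple(f.shape) for f in feats]
-# # -> [(1, 96, 56, 56), (1, 192, 28, 28), (1, 384, 14, 14), (1, 768, 7, 7)]  (strides 4, 8, 16, 32)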
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/det_swin.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/det_swin.py
deleted file mode 100644
index 1ec74aafa2393832fbe1a32e25780aef64e8e667..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/det_swin.py
+++ /dev/null
@@ -1,717 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu, Yutong Lin, Yixuan Wei
-# --------------------------------------------------------
-import logging
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-from detectron2.layers import ShapeSpec
-from .backbone import Backbone
-
-logger = logging.getLogger(__name__)
-
-class Mlp(nn.Module):
- """ Multilayer perceptron."""
-
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
-
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
-
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class WindowAttention(nn.Module):
- r""" Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
-
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SwinTransformerBlock(nn.Module):
- """ Swin Transformer Block.
-
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """ Patch Merging Layer
-
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
-
- Args:
- dim (int): Number of feature channels
-        depth (int): Depth of this stage.
-        num_heads (int): Number of attention heads.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop=0.,
- attn_drop=0.,
- drop_path=0.,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
-
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
-
-
-class SwinTransformer(Backbone):
- """ Swin Transformer backbone.
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
-
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
-            used in absolute position embedding. Default 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
-        num_heads (tuple[int]): Number of attention heads in each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.,
- attn_drop_rate=0.,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- out_features=["stage2", "stage3", "stage4", "stage5"],
- use_checkpoint=False):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
- self.out_features = out_features
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
-
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- self._out_feature_strides = {}
- self._out_feature_channels = {}
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- dim=int(embed_dim * 2 ** i_layer),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
- norm_layer=norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint)
- self.layers.append(layer)
-
- stage = f'stage{i_layer + 2}'
- if stage in self.out_features:
- self._out_feature_channels[stage] = embed_dim * 2 ** i_layer
- self._out_feature_strides[stage] = 4 * 2 ** i_layer
-
- num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- self.norm = norm_layer(self.num_features[-1])
-
- self._freeze_stages()
-
- def output_shape(self):
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self.out_features
- }
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'absolute_pos_embed'}
-
- @torch.jit.ignore
- def no_weight_decay_keywords(self):
- return {'relative_position_bias_table', 'norm'}
-
- # def init_weights(self, pretrained=None):
- # """Initialize the weights in backbone.
-
- # Args:
- # pretrained (str, optional): Path to pre-trained weights.
- # Defaults to None.
- # """
-
- # def _init_weights(m):
- # if isinstance(m, nn.Linear):
- # trunc_normal_(m.weight, std=.02)
- # if isinstance(m, nn.Linear) and m.bias is not None:
- # nn.init.constant_(m.bias, 0)
- # elif isinstance(m, nn.LayerNorm):
- # nn.init.constant_(m.bias, 0)
- # nn.init.constant_(m.weight, 1.0)
-
- # if isinstance(pretrained, str):
- # self.apply(_init_weights)
- # logger = get_root_logger()
- # load_checkpoint(self, pretrained, strict=False, logger=logger)
- # elif pretrained is None:
- # self.apply(_init_weights)
- # else:
- # raise TypeError('pretrained must be a str or None')
-
- def init_weights(self, pretrained='', pretrained_layers=[], verbose=True):
- if not os.path.isfile(pretrained):
- logger.warning(f'=> Pretrained model ({pretrained}) is not a file, skip init weight')
- return
-
- pretrained_dict = torch.load(pretrained, map_location='cpu')
- logger.info(f'=> Loading pretrained model {pretrained}')
- model_dict = self.state_dict()
- pretrained_dict = {
- k: v for k, v in pretrained_dict.items()
- if k in model_dict.keys()
- }
- need_init_state_dict = {}
- for k, v in pretrained_dict.items():
- need_init = (
- (
- k.split('.')[0] in pretrained_layers
- or pretrained_layers[0] == '*'
- )
- and 'relative_position_index' not in k
- and 'attn_mask' not in k
- )
-
- if need_init:
- if verbose:
- logger.info(f'=> init {k} from {pretrained}')
-
- if 'relative_position_bias_table' in k and v.size() != model_dict[k].size():
- relative_position_bias_table_pretrained = v
- relative_position_bias_table_current = model_dict[k]
- L1, nH1 = relative_position_bias_table_pretrained.size()
- L2, nH2 = relative_position_bias_table_current.size()
- if nH1 != nH2:
- logger.info(f"Error in loading {k}, passing")
- else:
- if L1 != L2:
- logger.info(
- '=> load_pretrained: resized variant: {} to {}'
- .format((L1, nH1), (L2, nH2))
- )
- S1 = int(L1 ** 0.5)
- S2 = int(L2 ** 0.5)
- relative_position_bias_table_pretrained_resized = torch.nn.functional.interpolate(
- relative_position_bias_table_pretrained.permute(1, 0).view(1, nH1, S1, S1),
- size=(S2, S2),
- mode='bicubic')
- v = relative_position_bias_table_pretrained_resized.view(nH2, L2).permute(1, 0)
-
- if 'absolute_pos_embed' in k and v.size() != model_dict[k].size():
- absolute_pos_embed_pretrained = v
- absolute_pos_embed_current = model_dict[k]
- _, L1, C1 = absolute_pos_embed_pretrained.size()
- _, L2, C2 = absolute_pos_embed_current.size()
-                if C1 != C2:
- logger.info(f"Error in loading {k}, passing")
- else:
- if L1 != L2:
- logger.info(
- '=> load_pretrained: resized variant: {} to {}'
- .format((1, L1, C1), (1, L2, C2))
- )
- S1 = int(L1 ** 0.5)
- S2 = int(L2 ** 0.5)
- absolute_pos_embed_pretrained = absolute_pos_embed_pretrained.reshape(-1, S1, S1, C1)
- absolute_pos_embed_pretrained = absolute_pos_embed_pretrained.permute(0, 3, 1, 2)
- absolute_pos_embed_pretrained_resized = torch.nn.functional.interpolate(
- absolute_pos_embed_pretrained, size=(S2, S2), mode='bicubic')
- v = absolute_pos_embed_pretrained_resized.permute(0, 2, 3, 1).flatten(1, 2)
-
- need_init_state_dict[k] = v
- self.load_state_dict(need_init_state_dict, strict=False)
-
- def forward(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = {}
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
- name = f'stage{i + 2}'
- if name in self.out_features:
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs[name] = out
- return outs
-
- def train(self, mode=True):
- """Convert the model into training mode while keep layers freezed."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
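-
-
-# A minimal smoke test of the backbone above (editorial sketch, not part of the original file).
-# It assumes this module's own imports (torch, nn, the Backbone/ShapeSpec base it subclasses)
-# resolve as usual; shapes follow the constructor defaults (embed_dim=96, patch_size=4), so
-# stage2 comes out at stride 4 with 96 channels and stage5 at stride 32 with 768 channels.
-if __name__ == "__main__":
-    _backbone = SwinTransformer(
-        embed_dim=96,
-        depths=[2, 2, 6, 2],
-        num_heads=[3, 6, 12, 24],
-        out_features=["stage2", "stage3", "stage4", "stage5"],
-    )
-    with torch.no_grad():
-        _feats = _backbone(torch.randn(1, 3, 224, 224))
-    for _name, _feat in _feats.items():
-        print(_name, tuple(_feat.shape))  # stage2 -> (1, 96, 56, 56), ..., stage5 -> (1, 768, 7, 7)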
\ No newline at end of file
diff --git a/spaces/Chomkwoy/Nilkessye/README.md b/spaces/Chomkwoy/Nilkessye/README.md
deleted file mode 100644
index 7a3fb4971833a4b749d969b68d9b6022b9ef87c9..0000000000000000000000000000000000000000
--- a/spaces/Chomkwoy/Nilkessye/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Nilkessye
-emoji: 🏃
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 4.0.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/master.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/master.js
deleted file mode 100644
index 8f238dce5b23ad3c79554a5341c270ecaf5873bb..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/master.js
+++ /dev/null
@@ -1,55 +0,0 @@
-import fs from "fs"
-import { randomUUID } from "crypto"
-let code = {}
-let file = "config/config/other.yaml"
-export class master extends plugin {
- constructor () {
- super({
- name: "设置主人",
- dsc: "设置主人",
- event: "message",
- rule: [
- {
- reg: "^#设置主人$",
- fnc: "master"
- }
- ]
- })
- }
-
- edit (file, key, value) {
- let data = fs.readFileSync(file, "utf8")
- if (data.match(RegExp(`- "?${value}"?`)))
- return
- value = `${key}:\n - "${value}"`
- if (data.match(RegExp(`${key}:`)))
- data = data.replace(RegExp(`${key}:`), value)
- else
- data = `${data}\n${value}`
- fs.writeFileSync(file, data, "utf8")
- }
-
- async master () {
- if (this.e.isMaster) {
- await this.reply(`账号:${this.e.user_id} 已经为主人`, true)
- return false
- }
-
- code[this.e.user_id] = randomUUID()
- logger.mark(`${logger.cyan(`[${this.e.user_id}]`)} 设置主人验证码:${logger.green(code[this.e.user_id])}`)
- this.setContext("verify")
- await this.reply(`账号:${this.e.user_id} 请输入验证码`, true)
- }
-
- async verify () {
- this.finish("verify")
- if (this.e.msg.trim() == code[this.e.user_id]) {
- this.edit(file, "masterQQ", this.e.user_id)
- this.edit(file, "master", `${this.e.self_id}:${this.e.user_id}`)
- await this.reply(`账号:${this.e.user_id} 设置主人成功`, true)
- } else {
- await this.reply("验证码错误", true)
- return false
- }
- }
-}
\ No newline at end of file
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/hold_grudge/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/hold_grudge/__init__.py
deleted file mode 100644
index be7c196b6cba6522e927429bf64274ed5cf34ca8..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/hold_grudge/__init__.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from datetime import datetime
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage, Text2Image
-
-from meme_generator import add_meme
-from meme_generator.exception import TextOverLength
-
-img_dir = Path(__file__).parent / "images"
-
-
-def hold_grudge(images, texts: List[str], args):
- date = datetime.today().strftime("%Y{}%m{}%d{}").format("年", "月", "日")
- text = f"{date} 晴\n{texts[0]}\n这个仇我先记下了"
- text2image = Text2Image.from_text(text, 45, fill="black", spacing=10).wrap(440)
- if len(text2image.lines) > 10:
- raise TextOverLength(texts[0])
- text_img = text2image.to_image()
-
- frame = BuildImage.open(img_dir / "0.png")
- bg = BuildImage.new(
- "RGB", (frame.width, frame.height + text_img.height + 20), "white"
- )
- bg.paste(frame).paste(text_img, (30, frame.height + 5), alpha=True)
- return bg.save_jpg()
-
-
-add_meme(
- "hold_grudge",
- hold_grudge,
- min_texts=1,
- max_texts=1,
- default_texts=["群友不发涩图"],
- keywords=["记仇"],
-)
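-
-# Editorial sketch (not part of the original file): the handler can also be called directly for a
-# quick local check; it only needs the bundled img_dir / "0.png" template. `images` and `args`
-# are unused by this meme, so placeholders are fine.
-#
-#   result = hold_grudge([], ["forgot to push before the deadline"], None)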
diff --git a/spaces/Codecooker/rvcapi/src/infer_pack/models_onnx_moess.py b/spaces/Codecooker/rvcapi/src/infer_pack/models_onnx_moess.py
deleted file mode 100644
index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000
--- a/spaces/Codecooker/rvcapi/src/infer_pack/models_onnx_moess.py
+++ /dev/null
@@ -1,849 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of the sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would keep the later cumsum from being optimized further
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
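-
-    # Editorial sketch (not part of the original file): how an F0 contour is turned into the
-    # harmonic excitation that GeneratorNSF below consumes. `upp` is the vocoder's total
-    # upsampling factor (np.prod(upsample_rates) in GeneratorNSF); the values here are illustrative.
-    #
-    #   m_source = SourceModuleHnNSF(sampling_rate=40000, harmonic_num=0, is_half=False)
-    #   f0 = torch.full((1, 100), 220.0)           # 100 frames of a 220 Hz contour, shape (B, T)
-    #   har_source, _, _ = m_source(f0, upp=400)   # -> (1, 100 * 400, 1) sample-rate excitation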
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-        # this model defines no enc_q (unlike SynthesizerTrnMs256NSFsidM), so there is nothing else to strip
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spec) is no longer needed here
-        g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t (time), broadcast
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/Copy233/copy/upcunet_v3.py b/spaces/Copy233/copy/upcunet_v3.py
deleted file mode 100644
index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000
--- a/spaces/Copy233/copy/upcunet_v3.py
+++ /dev/null
@@ -1,714 +0,0 @@
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-import os, sys
-import numpy as np
-
-root_path = os.path.abspath('.')
-sys.path.append(root_path)
-
-
-class SEBlock(nn.Module):
- def __init__(self, in_channels, reduction=8, bias=False):
- super(SEBlock, self).__init__()
- self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias)
- self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias)
-
- def forward(self, x):
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half()
- else:
- x0 = torch.mean(x, dim=(2, 3), keepdim=True)
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
-
- def forward_mean(self, x, x0):
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
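-
-    # Editorial note (not in the original file): forward_mean takes the channel mean as an
-    # argument so the tiled paths in UpCunet2x/3x/4x below can share one globally aggregated
-    # statistic across all patches instead of each patch's local mean. For a single tensor the
-    # two entry points agree, e.g.:
-    #
-    #   blk = SEBlock(64)
-    #   x = torch.randn(1, 64, 32, 32)
-    #   assert torch.allclose(blk(x), blk.forward_mean(x, x.mean(dim=(2, 3), keepdim=True)))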
-
-
-class UNetConv(nn.Module):
- def __init__(self, in_channels, mid_channels, out_channels, se):
- super(UNetConv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(in_channels, mid_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- nn.Conv2d(mid_channels, out_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- )
- if se:
- self.seblock = SEBlock(out_channels, reduction=8, bias=True)
- else:
- self.seblock = None
-
- def forward(self, x):
- z = self.conv(x)
- if self.seblock is not None:
- z = self.seblock(z)
- return z
-
-
-class UNet1(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet1x3(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1x3, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet2(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet2, self).__init__()
-
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 64, 128, se=True)
- self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0)
- self.conv3 = UNetConv(128, 256, 128, se=True)
- self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0)
- self.conv4 = UNetConv(128, 64, 64, se=True)
- self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv5 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
-
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3(x3)
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4(x2 + x3)
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
-    def forward_a(self, x):  # conv2/3/4 each end with an SE block
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
-    def forward_b(self, x2):  # conv2/3/4 each end with an SE block
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3.conv(x3)
- return x3
-
-    def forward_c(self, x2, x3):  # conv2/3/4 each end with an SE block
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4.conv(x2 + x3)
- return x4
-
-    def forward_d(self, x1, x4):  # conv2/3/4 each end with an SE block
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
-
-class UpCunet2x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet2x, self).__init__()
- self.unet1 = UNet1(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
-        if (tile_mode == 0):  # no tiling
- ph = ((h0 - 1) // 2 + 1) * 2
- pw = ((w0 - 1) // 2 + 1) * 2
-            x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')  # keep the padded size divisible by 2
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2]
- return x
-        elif (tile_mode == 1):  # halve along the longer side
- if (w0 >= h0):
-                crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2  # divisible by 2 after halving, so it must be divisible by 4 first
-                crop_size_h = (h0 - 1) // 2 * 2 + 2  # divisible by 2
- else:
-                crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2  # divisible by 2 after halving, so it must be divisible by 4 first
-                crop_size_w = (w0 - 1) // 2 * 2 + 2  # divisible by 2
- crop_size = (crop_size_h, crop_size_w) # 6.6G
-        elif (tile_mode == 2):  # halve both h and w
- crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G
-        elif (tile_mode == 3):  # both h and w to one third
- crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G
-        elif (tile_mode == 4):  # both h and w to one quarter
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 36, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 36, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
-                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2]
- return res #
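-
-    # Editorial sketch (not part of the original file): driving this class directly, assuming a
-    # matching 2x checkpoint such as the weights_v3 file referenced in the __main__ block at the
-    # bottom of this module. tile_mode=0 processes the whole image at once; modes 1-4 use
-    # progressively smaller tiles to reduce VRAM usage.
-    #
-    #   model = UpCunet2x().eval()
-    #   model.load_state_dict(torch.load("weights_v3/up2x-latest-denoise3x.pth", map_location="cpu"))
-    #   with torch.no_grad():
-    #       img = torch.rand(1, 3, 256, 256)       # RGB in [0, 1]
-    #       out = model(img, tile_mode=2)          # tiles with h and w halved -> (1, 3, 512, 512)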
-
-
-class UpCunet3x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet3x, self).__init__()
- self.unet1 = UNet1x3(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
-        if (tile_mode == 0):  # no tiling
- ph = ((h0 - 1) // 4 + 1) * 4
- pw = ((w0 - 1) // 4 + 1) * 4
-            x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')  # keep the padded size divisible by 4
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3]
- return x
-        elif (tile_mode == 1):  # halve along the longer side
- if (w0 >= h0):
-                crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2  # divisible by 4 after halving, so it must be divisible by 8 first
-                crop_size_h = (h0 - 1) // 4 * 4 + 4  # divisible by 4
- else:
-                crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2  # divisible by 4 after halving, so it must be divisible by 8 first
-                crop_size_w = (w0 - 1) // 4 * 4 + 4  # divisible by 4
- crop_size = (crop_size_h, crop_size_w) # 6.6G
-        elif (tile_mode == 2):  # halve both h and w
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G
-        elif (tile_mode == 3):  # both h and w to one third
- crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G
-        elif (tile_mode == 4):  # both h and w to one quarter
- crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 28, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 28, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
-                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop #
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3]
- return res
-
-
-class UpCunet4x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet4x, self).__init__()
- self.unet1 = UNet1(in_channels, 64, deconv=True)
- self.unet2 = UNet2(64, 64, deconv=False)
- self.ps = nn.PixelShuffle(2)
- self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True)
-
- def forward(self, x, tile_mode):
- n, c, h0, w0 = x.shape
- x00 = x
-        if (tile_mode == 0):  # no tiling
- ph = ((h0 - 1) // 2 + 1) * 2
- pw = ((w0 - 1) // 2 + 1) * 2
-            x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')  # keep the padded size divisible by 2
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- x = self.conv_final(x)
- x = F.pad(x, (-1, -1, -1, -1))
- x = self.ps(x)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4]
- x += F.interpolate(x00, scale_factor=4, mode='nearest')
- return x
-        elif (tile_mode == 1):  # halve along the longer side
- if (w0 >= h0):
-                crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2  # divisible by 2 after halving, so it must be divisible by 4 first
-                crop_size_h = (h0 - 1) // 2 * 2 + 2  # divisible by 2
- else:
-                crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2  # divisible by 2 after halving, so it must be divisible by 4 first
-                crop_size_w = (w0 - 1) // 2 * 2 + 2  # divisible by 2
- crop_size = (crop_size_h, crop_size_w) # 6.6G
-        elif (tile_mode == 2):  # halve both h and w
- crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G
-        elif (tile_mode == 3):  # both h and w to one third
- crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G
-        elif (tile_mode == 4):  # both h and w to one quarter
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 38, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 38, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
-                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
- x_crop = self.conv_final(x_crop)
- x_crop = F.pad(x_crop, (-1, -1, -1, -1))
- x_crop = self.ps(x_crop)
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape)
- res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4]
- res += F.interpolate(x00, scale_factor=4, mode='nearest')
- return res #
-
-
-class RealWaifuUpScaler(object):
- def __init__(self, scale, weight_path, half, device):
- weight = torch.load(weight_path, map_location="cpu")
- self.model = eval("UpCunet%sx" % scale)()
- if (half == True):
- self.model = self.model.half().to(device)
- else:
- self.model = self.model.to(device)
- self.model.load_state_dict(weight, strict=True)
- self.model.eval()
- self.half = half
- self.device = device
-
- def np2tensor(self, np_frame):
- if (self.half == False):
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255
- else:
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255
-
- def tensor2np(self, tensor):
- if (self.half == False):
- return (
- np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0)))
- else:
- return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(),
- (1, 2, 0)))
-
- def __call__(self, frame, tile_mode):
- with torch.no_grad():
- tensor = self.np2tensor(frame)
- result = self.tensor2np(self.model(tensor, tile_mode))
- return result
-
-
-if __name__ == "__main__":
- ###########inference_img
- import time, cv2, sys
- from time import time as ttime
-
- for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3),
- ("weights_v3/up4x-latest-denoise3x.pth", 4)]:
- for tile_mode in [0, 1, 2, 3, 4]:
- upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0")
- input_dir = "%s/input_dir1" % root_path
- output_dir = "%s/opt-dir-all-test" % root_path
- os.makedirs(output_dir, exist_ok=True)
- for name in os.listdir(input_dir):
- print(name)
- tmp = name.split(".")
- inp_path = os.path.join(input_dir, name)
- suffix = tmp[-1]
- prefix = ".".join(tmp[:-1])
- tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- print(inp_path, tmp_path)
-                # support Chinese (non-ASCII) paths
-                # os.link(inp_path, tmp_path)  # use a hard link on Windows
-                os.symlink(inp_path, tmp_path)  # use a symlink on Linux
- frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]]
- t0 = ttime()
- result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1]
- t1 = ttime()
- print(prefix, "done", t1 - t0)
- tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- cv2.imwrite(tmp_opt_path, result)
- n = 0
- while (1):
- if (n == 0):
- suffix = "_%sx_tile%s.png" % (scale, tile_mode)
- else:
- suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) #
- if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False):
- break
- else:
- n += 1
- final_opt_path = os.path.join(output_dir, prefix + suffix)
- os.rename(tmp_opt_path, final_opt_path)
- os.remove(tmp_path)
diff --git a/spaces/CorvaeOboro/gen_ability_icon/app.py b/spaces/CorvaeOboro/gen_ability_icon/app.py
deleted file mode 100644
index a367ace82721ae15a30f7cb9b730a4ac0f59b669..0000000000000000000000000000000000000000
--- a/spaces/CorvaeOboro/gen_ability_icon/app.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import gradio as gr
-import os
-import numpy as np
-import torch
-import pickle
-import types
-
-from huggingface_hub import hf_hub_url, cached_download
-from huggingface_hub import hf_hub_download
-
-#hf_hub_download(repo_id="CorvaeOboro/gen_ability_icon", filename="gen_ability_icon_stylegan2ada_20221012.pkl", repo_type="dataset")
-
-#TOKEN = os.environ['TOKEN']
-with open(hf_hub_download(repo_id="CorvaeOboro/gen_ability_icon", filename="gen_ability_icon_stylegan2ada_20221012.pkl", repo_type="model"), 'rb') as f:
- G = pickle.load(f)['G_ema']# torch.nn.Module
-
-device = torch.device("cpu")
-if torch.cuda.is_available():
- device = torch.device("cuda")
- G = G.to(device)
-else:
- _old_forward = G.forward
-
- def _new_forward(self, *args, **kwargs):
- kwargs["force_fp32"] = True
- return _old_forward(*args, **kwargs)
-
- G.forward = types.MethodType(_new_forward, G)
-
- _old_synthesis_forward = G.synthesis.forward
-
- def _new_synthesis_forward(self, *args, **kwargs):
- kwargs["force_fp32"] = True
- return _old_synthesis_forward(*args, **kwargs)
-
- G.synthesis.forward = types.MethodType(_new_synthesis_forward, G.synthesis)
-
-
-def generate(num_images, interpolate):
- if interpolate:
- z1 = torch.randn([1, G.z_dim])# latent codes
- z2 = torch.randn([1, G.z_dim])# latent codes
- zs = torch.cat([z1 + (z2 - z1) * i / (num_images-1) for i in range(num_images)], 0)
- else:
- zs = torch.randn([num_images, G.z_dim])# latent codes
- with torch.no_grad():
- zs = zs.to(device)
- img = G(zs, None, force_fp32=True, noise_mode='const')
- img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- return img.cpu().numpy()
-
-demo = gr.Blocks()
-
-def infer(num_images, interpolate):
- img = generate(round(num_images), interpolate)
- imgs = list(img)
- return imgs
-
-with demo:
- gr.Markdown(
- """
- # gen_ability_icon
- 
-
-    Creates circular magic ability icons with a StyleGAN2-ADA model trained on a synthetic dataset.
-    More information here: [https://github.com/CorvaeOboro/gen_ability_icon](https://github.com/CorvaeOboro/gen_ability_icon).
- """)
- images_num = gr.inputs.Slider(default=6, label="Num Images", minimum=1, maximum=16, step=1)
- interpolate = gr.inputs.Checkbox(default=False, label="Interpolate")
- submit = gr.Button("Generate")
-
-
- out = gr.Gallery()
-
- submit.click(fn=infer,
- inputs=[images_num, interpolate],
- outputs=out)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/engine/inference.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/engine/inference.py
deleted file mode 100644
index 77e7396d1e68f77301daee9af1c14707237bf5a9..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/engine/inference.py
+++ /dev/null
@@ -1,129 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import logging
-import time
-import os
-
-import torch
-from tqdm import tqdm
-
-from maskrcnn_benchmark.data.datasets.evaluation import evaluate
-from ..utils.comm import is_main_process, get_world_size
-from ..utils.comm import all_gather
-from ..utils.comm import synchronize
-from ..utils.timer import Timer, get_time_str
-
-
-def compute_on_dataset(model, data_loader, device, timer=None):
- model.eval()
- results_dict = {}
- cpu_device = torch.device("cpu")
- for _, batch in enumerate(tqdm(data_loader)):
- images, targets, image_ids = batch
- images = images.to(device)
- with torch.no_grad():
- if timer:
- timer.tic()
- output = model(images)
- if timer:
- torch.cuda.synchronize()
- timer.toc()
- output = [o.to(cpu_device) for o in output]
- results_dict.update(
- {img_id: result for img_id, result in zip(image_ids, output)}
- )
- return results_dict
-
-
-def _accumulate_predictions_from_multiple_gpus(predictions_per_gpu):
- all_predictions = all_gather(predictions_per_gpu)
- if not is_main_process():
- return
- # merge the list of dicts
- predictions = {}
- for p in all_predictions:
- predictions.update(p)
- # convert a dict where the key is the index in a list
- image_ids = list(sorted(predictions.keys()))
- if len(image_ids) != image_ids[-1] + 1:
- logger = logging.getLogger("maskrcnn_benchmark.inference")
- logger.warning(
- "Number of images that were gathered from multiple processes is not "
- "a contiguous set. Some images might be missing from the evaluation"
- )
-
- # convert to a list
- predictions = [predictions[i] for i in image_ids]
- return predictions
-
-
-def inference(
- model,
- data_loader,
- dataset_name,
- iou_types=("bbox",),
- box_only=False,
- device="cuda",
- expected_results=(),
- expected_results_sigma_tol=4,
- output_folder=None,
-):
-
- logger = logging.getLogger("maskrcnn_benchmark.inference")
- dataset = data_loader.dataset
- logger.info("Start evaluation on {} dataset({} images).".format(dataset_name, len(dataset)))
-
- extra_args = dict(
- box_only=box_only,
- iou_types=iou_types,
- expected_results=expected_results,
- expected_results_sigma_tol=expected_results_sigma_tol,
- )
-
- # load predictions if exists
- prediction_file = os.path.join(output_folder, 'predictions.pth')
- if os.path.isfile(prediction_file):
- predictions = torch.load(prediction_file)
- logger.info("Found prediction results at {}".format(prediction_file))
-
- return evaluate(dataset=dataset,
- predictions=predictions,
- output_folder=output_folder,
- **extra_args)
-
- # convert to a torch.device for efficiency
- device = torch.device(device)
- num_devices = get_world_size()
- total_timer = Timer()
- inference_timer = Timer()
- total_timer.tic()
- predictions = compute_on_dataset(model, data_loader, device, inference_timer)
- # wait for all processes to complete before measuring the time
- synchronize()
- total_time = total_timer.toc()
- total_time_str = get_time_str(total_time)
- logger.info(
- "Total run time: {} ({} s / img per device, on {} devices)".format(
- total_time_str, total_time * num_devices / len(dataset), num_devices
- )
- )
- total_infer_time = get_time_str(inference_timer.total_time)
- logger.info(
- "Model inference time: {} ({} s / img per device, on {} devices)".format(
- total_infer_time,
- inference_timer.total_time * num_devices / len(dataset),
- num_devices,
- )
- )
-
- predictions = _accumulate_predictions_from_multiple_gpus(predictions)
- if not is_main_process():
- return
-
- if output_folder:
- torch.save(predictions, os.path.join(output_folder, "predictions.pth"))
-
-
- return evaluate(dataset=dataset,
- predictions=predictions,
- output_folder=output_folder,
- **extra_args)
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/__init__.py
deleted file mode 100644
index 690d64e63bc40a6006318cd70535017d41643def..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# ruff: noqa
-from .v5 import *
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/constants.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/constants.py
deleted file mode 100644
index d724ee3cfdbcda1c39f39511046c7a884186ca98..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/constants.py
+++ /dev/null
@@ -1,3 +0,0 @@
-METHODS_WITH_BODY = {"GET", "HEAD", "POST", "PUT", "DELETE", "PATCH"}
-REF_PREFIX = "#/components/schemas/"
-REF_TEMPLATE = "#/components/schemas/{model}"
diff --git a/spaces/Dantra1/CeliaSensei/text/cleaners.py b/spaces/Dantra1/CeliaSensei/text/cleaners.py
deleted file mode 100644
index 68c9ad24d5a303b68a521fba2e8776c8cc867356..0000000000000000000000000000000000000000
--- a/spaces/Dantra1/CeliaSensei/text/cleaners.py
+++ /dev/null
@@ -1,475 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
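-# Example of chaining the helpers defined below (illustrative only; the actual cleaner
-# selection happens in the training code via the "cleaners" hyperparameter):
-#   text = convert_to_ascii('Montréal  est jolie!')   # -> 'Montreal  est jolie!'
-#   text = collapse_whitespace(lowercase(text))       # -> 'montreal est jolie!'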
-
-import re
-from unidecode import unidecode
-import pyopenjtalk
-from jamo import h2j, j2hcj
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba, cn2an
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# Regular expression matching whitespace:
-_whitespace_re = re.compile(r'\s+')
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def lowercase(text):
- return text.lower()
-
-
-def collapse_whitespace(text):
- return re.sub(_whitespace_re, ' ', text)
-
-
-def convert_to_ascii(text):
- return unidecode(text)
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text!='':
- text+=' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil','pau']:
- text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q')
- else:
- continue
- n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']:
- a2_next=-1
- else:
- a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i": ">",
- "&": "&",
- "'": "'",
- '"': """,
-})
-
-
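-# For reference (illustrative): the table above escapes XML-special characters before the
-# text is wrapped in SSML, e.g. xmlesc('a<b & "c"') -> 'a&lt;b &amp; &quot;c&quot;'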
-def xmlesc(txt):
- return txt.translate(table)
-
-
-def load_model():
- torch_cache_path = torch.hub.get_dir() if params['local_cache_path'] == '' else params['local_cache_path']
- model_path = torch_cache_path + "/snakers4_silero-models_master/src/silero/model/" + params['model_id'] + ".pt"
- if Path(model_path).is_file():
- print(f'\nUsing Silero TTS cached checkpoint found at {torch_cache_path}')
- model, example_text = torch.hub.load(repo_or_dir=torch_cache_path + '/snakers4_silero-models_master/', model='silero_tts', language=params['language'], speaker=params['model_id'], source='local', path=model_path, force_reload=True)
- else:
- print(f'\nSilero TTS cache not found at {torch_cache_path}. Attempting to download...')
- model, example_text = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_tts', language=params['language'], speaker=params['model_id'])
- model.to(params['device'])
- return model
-
-
-def remove_tts_from_history():
- for i, entry in enumerate(shared.history['internal']):
- shared.history['visible'][i] = [shared.history['visible'][i][0], entry[1]]
-
-
-def toggle_text_in_history():
- for i, entry in enumerate(shared.history['visible']):
- visible_reply = entry[1]
-        if visible_reply.startswith('<audio'):
-            if params['show_text']:
-                reply = shared.history['internal'][i][1]
-                shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>\n\n{reply}"]
-            else:
-                shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>"]
-
-
-def state_modifier(state):
- if not params['activate']:
- return state
-
- state['stream'] = False
- return state
-
-
-def input_modifier(string):
- if not params['activate']:
- return string
-
- shared.processing_message = "*Is recording a voice message...*"
- return string
-
-
-def history_modifier(history):
- # Remove autoplay from the last reply
- if len(history['internal']) > 0:
- history['visible'][-1] = [
- history['visible'][-1][0],
- history['visible'][-1][1].replace('controls autoplay>', 'controls>')
- ]
-
- return history
-
-
-def output_modifier(string):
- global model, current_params, streaming_state
- for i in params:
- if params[i] != current_params[i]:
- model = load_model()
- current_params = params.copy()
- break
-
- if not params['activate']:
- return string
-
- original_string = string
- string = tts_preprocessor.preprocess(string)
-
- if string == '':
- string = '*Empty reply, try regenerating*'
- else:
- output_file = Path(f'extensions/silero_tts/outputs/{shared.character}_{int(time.time())}.wav')
-        prosody = '<prosody rate="{}" pitch="{}">'.format(params['voice_speed'], params['voice_pitch'])
-        silero_input = f'<speak>{prosody}{xmlesc(string)}</prosody></speak>'
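-        # e.g. (illustrative; actual values depend on the configured params) silero_input
-        # ends up like: <speak><prosody rate="medium" pitch="medium">Hello there.</prosody></speak>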
- model.save_wav(ssml_text=silero_input, speaker=params['speaker'], sample_rate=int(params['sample_rate']), audio_path=str(output_file))
-
- autoplay = 'autoplay' if params['autoplay'] else ''
-        string = f'<audio src="file/{output_file.as_posix()}" controls {autoplay}></audio>'
- if params['show_text']:
- string += f'\n\n{original_string}'
-
- shared.processing_message = "*Is typing...*"
- return string
-
-
-def setup():
- global model
- model = load_model()
-
-
-def ui():
- # Gradio elements
- with gr.Accordion("Silero TTS"):
- with gr.Row():
- activate = gr.Checkbox(value=params['activate'], label='Activate TTS')
- autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically')
-
- show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player')
- voice = gr.Dropdown(value=params['speaker'], choices=voices_by_gender, label='TTS voice')
- with gr.Row():
- v_pitch = gr.Dropdown(value=params['voice_pitch'], choices=voice_pitches, label='Voice pitch')
- v_speed = gr.Dropdown(value=params['voice_speed'], choices=voice_speeds, label='Voice speed')
-
- with gr.Row():
- convert = gr.Button('Permanently replace audios with the message texts')
- convert_cancel = gr.Button('Cancel', visible=False)
- convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False)
-
- gr.Markdown('[Click here for Silero audio samples](https://oobabooga.github.io/silero-samples/index.html)')
-
- # Convert history with confirmation
- convert_arr = [convert_confirm, convert, convert_cancel]
- convert.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr)
- convert_confirm.click(
- lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr).then(
- remove_tts_from_history, None, None).then(
- chat.save_history, shared.gradio['mode'], None, show_progress=False).then(
- chat.redraw_html, shared.reload_inputs, shared.gradio['display'])
-
- convert_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr)
-
- # Toggle message text in history
- show_text.change(
- lambda x: params.update({"show_text": x}), show_text, None).then(
- toggle_text_in_history, None, None).then(
- chat.save_history, shared.gradio['mode'], None, show_progress=False).then(
- chat.redraw_html, shared.reload_inputs, shared.gradio['display'])
-
- # Event functions to update the parameters in the backend
- activate.change(lambda x: params.update({"activate": x}), activate, None)
- autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None)
- voice.change(lambda x: params.update({"speaker": x}), voice, None)
- v_pitch.change(lambda x: params.update({"voice_pitch": x}), v_pitch, None)
- v_speed.change(lambda x: params.update({"voice_speed": x}), v_speed, None)
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py
deleted file mode 100644
index 0e9768d4742e845a45bd343d70bd06f3cb0e4fcb..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py
+++ /dev/null
@@ -1,33 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_600e.py',
- '../../_base_/det_models/panet_r50_fpem_ffm.py',
- '../../_base_/det_datasets/icdar2017.py',
- '../../_base_/det_pipelines/panet_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline_icdar2017 = {{_base_.train_pipeline_icdar2017}}
-test_pipeline_icdar2017 = {{_base_.test_pipeline_icdar2017}}
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline_icdar2017),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2017),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2017))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/FEFE2023/VENUSAIESPACIO1/README.md b/spaces/FEFE2023/VENUSAIESPACIO1/README.md
deleted file mode 100644
index 0352beb05b9397e0acc20dea9cb4d04647d2802e..0000000000000000000000000000000000000000
--- a/spaces/FEFE2023/VENUSAIESPACIO1/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: VENUSAIESPACIO1
-emoji: 🏢
-colorFrom: pink
-colorTo: yellow
-sdk: docker
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FaceOnLive/Face-Recognition-SDK/run.sh b/spaces/FaceOnLive/Face-Recognition-SDK/run.sh
deleted file mode 100644
index f6ec105cddeb64569bb4669bf99897260d4753f2..0000000000000000000000000000000000000000
--- a/spaces/FaceOnLive/Face-Recognition-SDK/run.sh
+++ /dev/null
@@ -1,4 +0,0 @@
-#!/bin/bash
-
-exec python3 app.py &
-exec python3 gradio/demo.py
\ No newline at end of file
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/__init__.py
deleted file mode 100644
index 2b184e74c861e6fca0c548692a9a949a6100b0aa..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/losses/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from copy import deepcopy
-
-from basicsr.utils import get_root_logger
-from basicsr.utils.registry import LOSS_REGISTRY
-from .losses import (CharbonnierLoss, GANLoss, L1Loss, MSELoss, PerceptualLoss, WeightedTVLoss, g_path_regularize,
- gradient_penalty_loss, r1_penalty)
-
-__all__ = [
- 'L1Loss', 'MSELoss', 'CharbonnierLoss', 'WeightedTVLoss', 'PerceptualLoss', 'GANLoss', 'gradient_penalty_loss',
- 'r1_penalty', 'g_path_regularize'
-]
-
-
-def build_loss(opt):
- """Build loss from options.
-
- Args:
-        opt (dict): Configuration. It must contain:
-            type (str): Loss type.
- """
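-    # e.g. (illustrative options dict): build_loss({'type': 'L1Loss', 'loss_weight': 1.0})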
- opt = deepcopy(opt)
- loss_type = opt.pop('type')
- loss = LOSS_REGISTRY.get(loss_type)(**opt)
- logger = get_root_logger()
- logger.info(f'Loss [{loss.__class__.__name__}] is created.')
- return loss
diff --git a/spaces/Ferion/image-matting-app/ppmatting/models/human_matting.py b/spaces/Ferion/image-matting-app/ppmatting/models/human_matting.py
deleted file mode 100644
index cf315edfa563fe231a119dd15b749c41157c988c..0000000000000000000000000000000000000000
--- a/spaces/Ferion/image-matting-app/ppmatting/models/human_matting.py
+++ /dev/null
@@ -1,454 +0,0 @@
-# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from collections import defaultdict
-import time
-
-import paddle
-import paddle.nn as nn
-import paddle.nn.functional as F
-import paddleseg
-from paddleseg.models import layers
-from paddleseg import utils
-from paddleseg.cvlibs import manager
-
-from ppmatting.models.losses import MRSD
-
-
-def conv_up_psp(in_channels, out_channels, up_sample):
- return nn.Sequential(
- layers.ConvBNReLU(
- in_channels, out_channels, 3, padding=1),
- nn.Upsample(
- scale_factor=up_sample, mode='bilinear', align_corners=False))
-
-
-@manager.MODELS.add_component
-class HumanMatting(nn.Layer):
-    """A model for human matting."""
-
- def __init__(self,
- backbone,
- pretrained=None,
- backbone_scale=0.25,
- refine_kernel_size=3,
- if_refine=True):
- super().__init__()
- if if_refine:
- if backbone_scale > 0.5:
- raise ValueError(
- 'Backbone_scale should not be greater than 1/2, but it is {}'
- .format(backbone_scale))
- else:
- backbone_scale = 1
-
- self.backbone = backbone
- self.backbone_scale = backbone_scale
- self.pretrained = pretrained
- self.if_refine = if_refine
- if if_refine:
- self.refiner = Refiner(kernel_size=refine_kernel_size)
- self.loss_func_dict = None
-
- self.backbone_channels = backbone.feat_channels
- ######################
- ### Decoder part - Glance
- ######################
- self.psp_module = layers.PPModule(
- self.backbone_channels[-1],
- 512,
- bin_sizes=(1, 3, 5),
- dim_reduction=False,
- align_corners=False)
- self.psp4 = conv_up_psp(512, 256, 2)
- self.psp3 = conv_up_psp(512, 128, 4)
- self.psp2 = conv_up_psp(512, 64, 8)
- self.psp1 = conv_up_psp(512, 64, 16)
- # stage 5g
- self.decoder5_g = nn.Sequential(
- layers.ConvBNReLU(
- 512 + self.backbone_channels[-1], 512, 3, padding=1),
- layers.ConvBNReLU(
- 512, 512, 3, padding=2, dilation=2),
- layers.ConvBNReLU(
- 512, 256, 3, padding=2, dilation=2),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 4g
- self.decoder4_g = nn.Sequential(
- layers.ConvBNReLU(
- 512, 256, 3, padding=1),
- layers.ConvBNReLU(
- 256, 256, 3, padding=1),
- layers.ConvBNReLU(
- 256, 128, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 3g
- self.decoder3_g = nn.Sequential(
- layers.ConvBNReLU(
- 256, 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 64, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 2g
- self.decoder2_g = nn.Sequential(
- layers.ConvBNReLU(
- 128, 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 64, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 1g
- self.decoder1_g = nn.Sequential(
- layers.ConvBNReLU(
- 128, 64, 3, padding=1),
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 0g
- self.decoder0_g = nn.Sequential(
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- nn.Conv2D(
- 64, 3, 3, padding=1))
-
- ##########################
- ### Decoder part - FOCUS
- ##########################
- self.bridge_block = nn.Sequential(
- layers.ConvBNReLU(
- self.backbone_channels[-1], 512, 3, dilation=2, padding=2),
- layers.ConvBNReLU(
- 512, 512, 3, dilation=2, padding=2),
- layers.ConvBNReLU(
- 512, 512, 3, dilation=2, padding=2))
- # stage 5f
- self.decoder5_f = nn.Sequential(
- layers.ConvBNReLU(
- 512 + self.backbone_channels[-1], 512, 3, padding=1),
- layers.ConvBNReLU(
- 512, 512, 3, padding=2, dilation=2),
- layers.ConvBNReLU(
- 512, 256, 3, padding=2, dilation=2),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 4f
- self.decoder4_f = nn.Sequential(
- layers.ConvBNReLU(
- 256 + self.backbone_channels[-2], 256, 3, padding=1),
- layers.ConvBNReLU(
- 256, 256, 3, padding=1),
- layers.ConvBNReLU(
- 256, 128, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 3f
- self.decoder3_f = nn.Sequential(
- layers.ConvBNReLU(
- 128 + self.backbone_channels[-3], 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 64, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 2f
- self.decoder2_f = nn.Sequential(
- layers.ConvBNReLU(
- 64 + self.backbone_channels[-4], 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 128, 3, padding=1),
- layers.ConvBNReLU(
- 128, 64, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 1f
- self.decoder1_f = nn.Sequential(
- layers.ConvBNReLU(
- 64 + self.backbone_channels[-5], 64, 3, padding=1),
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- nn.Upsample(
- scale_factor=2, mode='bilinear', align_corners=False))
- # stage 0f
- self.decoder0_f = nn.Sequential(
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- layers.ConvBNReLU(
- 64, 64, 3, padding=1),
- nn.Conv2D(
- 64, 1 + 1 + 32, 3, padding=1))
- self.init_weight()
-
- def forward(self, data):
- src = data['img']
- src_h, src_w = paddle.shape(src)[2:]
- if self.if_refine:
-            # This check is not needed when exporting.
- if isinstance(src_h, paddle.Tensor):
- if (src_h % 4 != 0) or (src_w % 4) != 0:
- raise ValueError(
- 'The input image must have width and height that are divisible by 4'
- )
-
- # Downsample src for backbone
- src_sm = F.interpolate(
- src,
- scale_factor=self.backbone_scale,
- mode='bilinear',
- align_corners=False)
-
- # Base
- fea_list = self.backbone(src_sm)
- ##########################
- ### Decoder part - GLANCE
- ##########################
- #psp: N, 512, H/32, W/32
- psp = self.psp_module(fea_list[-1])
-        #d5_g: N, 256, H/16, W/16
- d5_g = self.decoder5_g(paddle.concat((psp, fea_list[-1]), 1))
-        #d4_g: N, 128, H/8, W/8
- d4_g = self.decoder4_g(paddle.concat((self.psp4(psp), d5_g), 1))
-        #d3_g: N, 64, H/4, W/4
- d3_g = self.decoder3_g(paddle.concat((self.psp3(psp), d4_g), 1))
-        #d2_g: N, 64, H/2, W/2
- d2_g = self.decoder2_g(paddle.concat((self.psp2(psp), d3_g), 1))
-        #d1_g: N, 64, H, W
- d1_g = self.decoder1_g(paddle.concat((self.psp1(psp), d2_g), 1))
- #d0_g: N, 3, H, W
- d0_g = self.decoder0_g(d1_g)
- # The 1st channel is foreground. The 2nd is transition region. The 3rd is background.
- # glance_sigmoid = F.sigmoid(d0_g)
- glance_sigmoid = F.softmax(d0_g, axis=1)
-
- ##########################
- ### Decoder part - FOCUS
- ##########################
- bb = self.bridge_block(fea_list[-1])
-        #bb: N, 512, H/32, W/32
- d5_f = self.decoder5_f(paddle.concat((bb, fea_list[-1]), 1))
- #d5_f: N, 256, H/16, W/16
- d4_f = self.decoder4_f(paddle.concat((d5_f, fea_list[-2]), 1))
- #d4_f: N, 128, H/8, W/8
- d3_f = self.decoder3_f(paddle.concat((d4_f, fea_list[-3]), 1))
- #d3_f: N, 64, H/4, W/4
- d2_f = self.decoder2_f(paddle.concat((d3_f, fea_list[-4]), 1))
- #d2_f: N, 64, H/2, W/2
- d1_f = self.decoder1_f(paddle.concat((d2_f, fea_list[-5]), 1))
- #d1_f: N, 64, H, W
- d0_f = self.decoder0_f(d1_f)
- #d0_f: N, 1, H, W
- focus_sigmoid = F.sigmoid(d0_f[:, 0:1, :, :])
- pha_sm = self.fusion(glance_sigmoid, focus_sigmoid)
- err_sm = d0_f[:, 1:2, :, :]
- err_sm = paddle.clip(err_sm, 0., 1.)
- hid_sm = F.relu(d0_f[:, 2:, :, :])
-
- # Refiner
- if self.if_refine:
- pha = self.refiner(
- src=src, pha=pha_sm, err=err_sm, hid=hid_sm, tri=glance_sigmoid)
- # Clamp outputs
- pha = paddle.clip(pha, 0., 1.)
-
- if self.training:
- logit_dict = {
- 'glance': glance_sigmoid,
- 'focus': focus_sigmoid,
- 'fusion': pha_sm,
- 'error': err_sm
- }
- if self.if_refine:
- logit_dict['refine'] = pha
- loss_dict = self.loss(logit_dict, data)
- return logit_dict, loss_dict
- else:
- return pha if self.if_refine else pha_sm
-
- def loss(self, logit_dict, label_dict, loss_func_dict=None):
- if loss_func_dict is None:
- if self.loss_func_dict is None:
- self.loss_func_dict = defaultdict(list)
- self.loss_func_dict['glance'].append(nn.NLLLoss())
- self.loss_func_dict['focus'].append(MRSD())
- self.loss_func_dict['cm'].append(MRSD())
- self.loss_func_dict['err'].append(paddleseg.models.MSELoss())
- self.loss_func_dict['refine'].append(paddleseg.models.L1Loss())
- else:
- self.loss_func_dict = loss_func_dict
-
- loss = {}
-
- # glance loss computation
- # get glance label
- glance_label = F.interpolate(
- label_dict['trimap'],
- logit_dict['glance'].shape[2:],
- mode='nearest',
- align_corners=False)
- glance_label_trans = (glance_label == 128).astype('int64')
- glance_label_bg = (glance_label == 0).astype('int64')
- glance_label = glance_label_trans + glance_label_bg * 2
- loss_glance = self.loss_func_dict['glance'][0](
- paddle.log(logit_dict['glance'] + 1e-6), glance_label.squeeze(1))
- loss['glance'] = loss_glance
-
- # focus loss computation
- focus_label = F.interpolate(
- label_dict['alpha'],
- logit_dict['focus'].shape[2:],
- mode='bilinear',
- align_corners=False)
- loss_focus = self.loss_func_dict['focus'][0](
- logit_dict['focus'], focus_label, glance_label_trans)
- loss['focus'] = loss_focus
-
- # collaborative matting loss
- loss_cm_func = self.loss_func_dict['cm']
- # fusion_sigmoid loss
- loss_cm = loss_cm_func[0](logit_dict['fusion'], focus_label)
- loss['cm'] = loss_cm
-
- # error loss
- err = F.interpolate(
- logit_dict['error'],
- label_dict['alpha'].shape[2:],
- mode='bilinear',
- align_corners=False)
- err_label = (F.interpolate(
- logit_dict['fusion'],
- label_dict['alpha'].shape[2:],
- mode='bilinear',
- align_corners=False) - label_dict['alpha']).abs()
- loss_err = self.loss_func_dict['err'][0](err, err_label)
- loss['err'] = loss_err
-
- loss_all = 0.25 * loss_glance + 0.25 * loss_focus + 0.25 * loss_cm + loss_err
-
- # refine loss
- if self.if_refine:
- loss_refine = self.loss_func_dict['refine'][0](logit_dict['refine'],
- label_dict['alpha'])
- loss['refine'] = loss_refine
- loss_all = loss_all + loss_refine
-
- loss['all'] = loss_all
- return loss
-
- def fusion(self, glance_sigmoid, focus_sigmoid):
- # glance_sigmoid [N, 3, H, W].
-        # In index, 0 is foreground, 1 is transition, 2 is background.
-        # After fusion, the foreground is 1, the background is 0, and the transition is between (0, 1).
- index = paddle.argmax(glance_sigmoid, axis=1, keepdim=True)
- transition_mask = (index == 1).astype('float32')
- fg = (index == 0).astype('float32')
- fusion_sigmoid = focus_sigmoid * transition_mask + fg
- return fusion_sigmoid
-
- def init_weight(self):
- if self.pretrained is not None:
- utils.load_entire_model(self, self.pretrained)
-
-
-class Refiner(nn.Layer):
- '''
- Refiner refines the coarse output to full resolution.
-
- Args:
- kernel_size: The convolution kernel_size. Options: [1, 3]. Default: 3.
- '''
-
- def __init__(self, kernel_size=3):
- super().__init__()
- if kernel_size not in [1, 3]:
- raise ValueError("kernel_size must be in [1, 3]")
-
- self.kernel_size = kernel_size
-
- channels = [32, 24, 16, 12, 1]
- self.conv1 = layers.ConvBNReLU(
- channels[0] + 4 + 3,
- channels[1],
- kernel_size,
- padding=0,
- bias_attr=False)
- self.conv2 = layers.ConvBNReLU(
- channels[1], channels[2], kernel_size, padding=0, bias_attr=False)
- self.conv3 = layers.ConvBNReLU(
- channels[2] + 3,
- channels[3],
- kernel_size,
- padding=0,
- bias_attr=False)
- self.conv4 = nn.Conv2D(
- channels[3], channels[4], kernel_size, padding=0, bias_attr=True)
-
- def forward(self, src, pha, err, hid, tri):
- '''
- Args:
- src: (B, 3, H, W) full resolution source image.
- pha: (B, 1, Hc, Wc) coarse alpha prediction.
-            err: (B, 1, Hc, Wc) coarse error prediction.
-            hid: (B, 32, Hc, Wc) coarse hidden encoding.
-            tri: (B, 1, Hc, Wc) trimap prediction.
- '''
- h_full, w_full = paddle.shape(src)[2:]
- h_half, w_half = h_full // 2, w_full // 2
- h_quat, w_quat = h_full // 4, w_full // 4
-
- x = paddle.concat([hid, pha, tri], axis=1)
- x = F.interpolate(
- x,
- paddle.concat((h_half, w_half)),
- mode='bilinear',
- align_corners=False)
- y = F.interpolate(
- src,
- paddle.concat((h_half, w_half)),
- mode='bilinear',
- align_corners=False)
-
- if self.kernel_size == 3:
- x = F.pad(x, [3, 3, 3, 3])
- y = F.pad(y, [3, 3, 3, 3])
-
- x = self.conv1(paddle.concat([x, y], axis=1))
- x = self.conv2(x)
-
- if self.kernel_size == 3:
- x = F.interpolate(x, paddle.concat((h_full + 4, w_full + 4)))
- y = F.pad(src, [2, 2, 2, 2])
- else:
- x = F.interpolate(
- x, paddle.concat((h_full, w_full)), mode='nearest')
- y = src
-
- x = self.conv3(paddle.concat([x, y], axis=1))
- x = self.conv4(x)
-
- pha = x
- return pha
diff --git a/spaces/GIZ/SDSN-demo/app.py b/spaces/GIZ/SDSN-demo/app.py
deleted file mode 100644
index 9eada7d22d65dca2c25f143162872a5e1f4f0e4c..0000000000000000000000000000000000000000
--- a/spaces/GIZ/SDSN-demo/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import appStore.keyword_search as keyword_search
-import appStore.sdg_analysis as sdg_analysis
-import appStore.coherence as coherence
-import appStore.info as info
-from appStore.multiapp import MultiApp
-import streamlit as st
-
-st.set_page_config(page_title = 'Climate Policy Intelligence',
- initial_sidebar_state='expanded', layout="wide")
-
-app = MultiApp()
-
-app.add_app("About","house", info.app)
-app.add_app("Search","search", keyword_search.app)
-app.add_app("SDG Analysis","gear",sdg_analysis.app)
-app.add_app("NDC Comparison","exclude", coherence.app)
-
-app.run()
\ No newline at end of file
diff --git a/spaces/GeorgeOrville/bingo/Dockerfile b/spaces/GeorgeOrville/bingo/Dockerfile
deleted file mode 100644
index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-FROM node:18
-
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME
-
-# Switch to the "user" user
-USER user
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY --chown=user package*.json $HOME/app/
-
-RUN npm install
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app/
-
-RUN npm run build
-
-ENV PORT 7860
-EXPOSE 7860
-
-CMD npm start
diff --git a/spaces/GiordanoB/sumarizacao-abstrativa-portugues/app.py b/spaces/GiordanoB/sumarizacao-abstrativa-portugues/app.py
deleted file mode 100644
index b073f64f92df2b6ed3c7583528a0f8dd69efa1b9..0000000000000000000000000000000000000000
--- a/spaces/GiordanoB/sumarizacao-abstrativa-portugues/app.py
+++ /dev/null
@@ -1,327 +0,0 @@
-import gradio as gr
-from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-import torch
-import spacy
-import pytextrank
-from sumy.parsers.plaintext import PlaintextParser
-from sumy.nlp.tokenizers import Tokenizer
-from sumy.summarizers.luhn import LuhnSummarizer
-from sumy.summarizers.lex_rank import LexRankSummarizer
-import nltk
-
-nlp = spacy.load('pt_core_news_sm')
-nltk.download('punkt')
-nlp.add_pipe("textrank")
-
-#WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
-
-model_name="GiordanoB/mT5_multilingual_XLSum-sumarizacao-PTBR"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
-
-app = gr.Blocks()
-
-def summarize_HUB_Multidocument(input_1, input_2, input_3, method, max_length, min_length, num_beams):
-
- if(input_1 and not input_2 and not input_3 or not input_1 and input_2 and not input_3 or not input_1 and not input_2 and input_3):
- return "Por favor utilize a aba de sumarização monodocumento"
-
- if method == "Pure mT5":
-        if(input_1 and input_2 and input_3):  # all three inputs provided
- tempSum1 = summarize_mT5(input_1, max_length, min_length, num_beams)
- tempSum2 = summarize_mT5(input_2, max_length, min_length, num_beams)
- tempSum3 = summarize_mT5(input_3, max_length, min_length, num_beams)
- fullSumm = tempSum1 + tempSum2 + tempSum3
- return summarize_mT5(fullSumm, max_length, min_length, num_beams)
-
-        if(input_1 and input_2 and not input_3):  # inputs 1 and 2 provided
- tempSum1 = summarize_mT5(input_1, max_length, min_length, num_beams)
- tempSum2 = summarize_mT5(input_2, max_length, min_length, num_beams)
- fullSumm = tempSum1 + tempSum2
- return summarize_mT5(fullSumm, max_length, min_length, num_beams)
-
-        if(input_1 and not input_2 and input_3):  # inputs 1 and 3 provided
- tempSum1 = summarize_mT5(input_1, max_length, min_length, num_beams)
- tempSum3 = summarize_mT5(input_3, max_length, min_length, num_beams)
- fullSumm = tempSum1 + tempSum3
- return summarize_mT5(fullSumm, max_length, min_length, num_beams)
-
-        if(not input_1 and input_2 and input_3):  # inputs 2 and 3 provided
- tempSum2 = summarize_mT5(input_2, max_length, min_length, num_beams)
- tempSum3 = summarize_mT5(input_3, max_length, min_length, num_beams)
- fullSumm = tempSum2 + tempSum3
- return summarize_mT5(fullSumm, max_length, min_length, num_beams)
-
- if method == "Luhn":
-        if(input_1 and input_2 and input_3):  # all three inputs provided
- tempSum1 = summarize_Luhn(input_1)
- tempSum2 = summarize_Luhn(input_2)
- tempSum3 = summarize_Luhn(input_3)
- fullSumm = tempSum1 + tempSum2 + tempSum3
- return summarize_Luhn(fullSumm)
-
-        if(input_1 and input_2 and not input_3):  # inputs 1 and 2 provided
- tempSum1 = summarize_Luhn(input_1)
- tempSum2 = summarize_Luhn(input_2)
- fullSumm = tempSum1 + tempSum2
- return summarize_Luhn(fullSumm)
-
-        if(input_1 and not input_2 and input_3):  # inputs 1 and 3 provided
- tempSum1 = summarize_Luhn(input_1)
- tempSum3 = summarize_Luhn(input_3)
- fullSumm = tempSum1 + tempSum3
- return summarize_Luhn(fullSumm)
-
-        if(not input_1 and input_2 and input_3):  # inputs 2 and 3 provided
- tempSum2 = summarize_Luhn(input_2)
- tempSum3 = summarize_Luhn(input_3)
- fullSumm = tempSum2 + tempSum3
- return summarize_Luhn(fullSumm)
-
- if method == "LexRank":
-        if(input_1 and input_2 and input_3):  # all three inputs provided
- tempSum1 = summarize_LexRank(input_1)
- tempSum2 = summarize_LexRank(input_2)
- tempSum3 = summarize_LexRank(input_3)
- fullSumm = tempSum1 + tempSum2 + tempSum3
- return summarize_LexRank(fullSumm)
-
-        if(input_1 and input_2 and not input_3):  # inputs 1 and 2 provided
- tempSum1 = summarize_LexRank(input_1)
- tempSum2 = summarize_LexRank(input_2)
- fullSumm = tempSum1 + tempSum2
- return summarize_LexRank(fullSumm)
-
-        if(input_1 and not input_2 and input_3):  # inputs 1 and 3 provided
- tempSum1 = summarize_LexRank(input_1)
- tempSum3 = summarize_LexRank(input_3)
- fullSumm = tempSum1 + tempSum3
- return summarize_LexRank(fullSumm)
-
-        if(not input_1 and input_2 and input_3):  # inputs 2 and 3 provided
- tempSum2 = summarize_LexRank(input_2)
- tempSum3 = summarize_LexRank(input_3)
- fullSumm = tempSum2 + tempSum3
- return summarize_LexRank(fullSumm)
-
- if method == "TextRank":
-        if(input_1 and input_2 and input_3):  # all three inputs provided
- tempSum1 = summarize_TextRank(input_1)
- tempSum2 = summarize_TextRank(input_2)
- tempSum3 = summarize_TextRank(input_3)
- fullSumm = tempSum1 + tempSum2 + tempSum3
- return summarize_TextRank(fullSumm)
-
-        if(input_1 and input_2 and not input_3):  # inputs 1 and 2 provided
- tempSum1 = summarize_TextRank(input_1)
- tempSum2 = summarize_TextRank(input_2)
- fullSumm = tempSum1 + tempSum2
- return summarize_TextRank(fullSumm)
-
-        if(input_1 and not input_2 and input_3):  # inputs 1 and 3 provided
- tempSum1 = summarize_TextRank(input_1)
- tempSum3 = summarize_TextRank(input_3)
- fullSumm = tempSum1 + tempSum3
- return summarize_TextRank(fullSumm)
-
-        if(not input_1 and input_2 and input_3):  # inputs 2 and 3 provided
- tempSum2 = summarize_TextRank(input_2)
- tempSum3 = summarize_TextRank(input_3)
- fullSumm = tempSum2 + tempSum3
- return summarize_TextRank(fullSumm)
-
- if method == "Luhn + mT5":
-        if(input_1 and input_2 and input_3):  # all three inputs provided
- tempSum1 = summarize_Luhn(input_1)
- tempSum2 = summarize_Luhn(input_2)
- tempSum3 = summarize_Luhn(input_3)
- fullSumm = tempSum1 + tempSum2 + tempSum3
- finalSum = summarize_Luhn(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(input_1 and input_2 and not input_3):  # inputs 1 and 2 provided
- tempSum1 = summarize_Luhn(input_1)
- tempSum2 = summarize_Luhn(input_2)
- fullSumm = tempSum1 + tempSum2
- finalSum = summarize_Luhn(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(input_1 and not input_2 and input_3):  # inputs 1 and 3 provided
- tempSum1 = summarize_Luhn(input_1)
- tempSum3 = summarize_Luhn(input_3)
- fullSumm = tempSum1 + tempSum3
- finalSum = summarize_Luhn(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(not input_1 and input_2 and input_3):  # inputs 2 and 3 provided
- tempSum2 = summarize_Luhn(input_2)
- tempSum3 = summarize_Luhn(input_3)
- fullSumm = tempSum2 + tempSum3
- finalSum = summarize_Luhn(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
- if method == "LexRank + mT5":
-        if(input_1 and input_2 and input_3):  # all three inputs provided
- tempSum1 = summarize_LexRank(input_1)
- tempSum2 = summarize_LexRank(input_2)
- tempSum3 = summarize_LexRank(input_3)
- fullSumm = tempSum1 + tempSum2 + tempSum3
- finalSum = summarize_LexRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(input_1 and input_2 and not input_3):  # inputs 1 and 2 provided
- tempSum1 = summarize_LexRank(input_1)
- tempSum2 = summarize_LexRank(input_2)
- fullSumm = tempSum1 + tempSum2
- finalSum = summarize_LexRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(input_1 and not input_2 and input_3):  # inputs 1 and 3 provided
- tempSum1 = summarize_LexRank(input_1)
- tempSum3 = summarize_LexRank(input_3)
- fullSumm = tempSum1 + tempSum3
- finalSum = summarize_LexRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(not input_1 and input_2 and input_3):  # inputs 2 and 3 provided
- tempSum2 = summarize_LexRank(input_2)
- tempSum3 = summarize_LexRank(input_3)
- fullSumm = tempSum2 + tempSum3
- finalSum = summarize_LexRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
- if method == "TextRank + mT5":
-        if(input_1 and input_2 and input_3):  # all three inputs provided
- tempSum1 = summarize_TextRank(input_1)
- tempSum2 = summarize_TextRank(input_2)
- tempSum3 = summarize_TextRank(input_3)
- fullSumm = tempSum1 + tempSum2 + tempSum3
- finalSum = summarize_TextRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(input_1 and input_2 and not input_3):  # inputs 1 and 2 provided
- tempSum1 = summarize_TextRank(input_1)
- tempSum2 = summarize_TextRank(input_2)
- fullSumm = tempSum1 + tempSum2
- finalSum = summarize_TextRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(input_1 and not input_2 and input_3):  # inputs 1 and 3 provided
- tempSum1 = summarize_TextRank(input_1)
- tempSum3 = summarize_TextRank(input_3)
- fullSumm = tempSum1 + tempSum3
- finalSum = summarize_TextRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
-
-        if(not input_1 and input_2 and input_3):  # inputs 2 and 3 provided
- tempSum2 = summarize_TextRank(input_2)
- tempSum3 = summarize_TextRank(input_3)
- fullSumm = tempSum2 + tempSum3
- finalSum = summarize_TextRank(fullSumm)
- return summarize_mT5(finalSum, max_length, min_length, num_beams)
- return "ERROR"
-
-def summarize_HUB_Monodocument(input, method, max_length, min_length, num_beams):
- if method == "Pure mT5":
- return summarize_mT5(input, max_length, min_length, num_beams)
-
- if method == "Luhn":
- return summarize_Luhn(input)
-
- if method == "LexRank":
- return summarize_LexRank(input)
-
- if method == "TextRank":
- return summarize_TextRank(input)
-
- if method == "Luhn + mT5":
- tempSum = summarize_Luhn(input)
- return summarize_mT5(tempSum, max_length, min_length, num_beams)
-
- if method == "LexRank + mT5":
- tempSum = summarize_LexRank(input)
- return summarize_mT5(tempSum, max_length, min_length, num_beams)
-
- if method == "TextRank + mT5":
- tempSum = summarize_TextRank(input)
- return summarize_mT5(tempSum, max_length, min_length, num_beams)
- return "ERROR"
-
-def summarize_Luhn(input):
- summ = ''
- summarizer = LuhnSummarizer()
- parser = PlaintextParser.from_string(input, Tokenizer("portuguese"))
- summary_1 = summarizer(parser.document, 3)
-
- for sentence in summary_1:
- summ = summ + ' ' + str(sentence)
- summ2 = ''
- summ2 = summ.replace('\n', ' ').replace('\r', '')
- return summ2
-
-def summarize_LexRank(input):
- summ = ''
- summarizer = LexRankSummarizer()
- parser = PlaintextParser.from_string(input, Tokenizer("portuguese"))
- summary_1 = summarizer(parser.document, 3)
-
- for sentence in summary_1:
- summ = summ + ' ' + str(sentence)
- summ2 = ''
- summ2 = summ.replace('\n', ' ').replace('\r', '')
- return summ2
-
-def summarize_TextRank(input):
- summ = ''
- doc = nlp(input)
- tr = doc._.textrank
- for sent in tr.summary(limit_sentences=3):
- summ = summ + ' ' + str(sent)
- summ2 = summ.replace('\n', ' ').replace('\r', '')
- return summ2;
-
-def summarize_mT5(input, max_length, min_length, num_beams):
- for i in range(0,14):
- input_ids = tokenizer(
- input,
- return_tensors="pt",
- padding="max_length",
- truncation=True,
- max_length=512
- )["input_ids"]
-
- output_ids = model.generate(
- input_ids=input_ids,
- max_length=max_length,
- min_length=min_length,
- no_repeat_ngram_size=2,
- num_beams=num_beams
- )[0]
-
- response = tokenizer.decode(
- output_ids,
- skip_special_tokens=True,
- clean_up_tokenization_spaces=False
- )
- return response
-
-with app:
- gr.Markdown("Sumarização Monodocumento ou Multidocumento para o português.")
- with gr.Tabs():
-
- with gr.TabItem("Sumarização Monodocumento"):
- MonoInputs=[gr.Textbox(label="Texto a ser Sumarizado"),gr.Radio(["Pure mT5","Luhn","LexRank","TextRank","Luhn + mT5","LexRank + mT5","TextRank + mT5"], label="Método"),
-gr.Slider(50, 500, step=1, value=200, label="Tamanho máximo do Sumário"), gr.Slider(1, 125, step=1, value=50, label="Tamanho mínimo do Sumário"), gr.Slider(1, 10, step=1, value=4, label="Qualidade do sumário")]
- MonoOutputs=gr.Textbox()
- MonoButton = gr.Button("Sumarizar Texto")
-
- with gr.TabItem("Sumarização Multidocumento"):
- MultiInputs=[gr.Textbox(label="Texto 1"), gr.Textbox(label="Texto 2"),gr.Textbox(label="Texto 3"),gr.Radio(["Pure mT5","Luhn","LexRank","TextRank","Luhn + mT5","LexRank + mT5","TextRank + mT5"], label="Método"),
-gr.Slider(50, 500, step=1, value=200, label="Tamanho máximo do Sumário"), gr.Slider(1, 125, step=1, value=50, label="Tamanho mínimo do Sumário"), gr.Slider(1, 10, step=1, value=4, label="Qualidade do sumário")]
- MultiOutputs=gr.Textbox()
- MultiButton = gr.Button("Sumarizar Textos")
-
- MonoButton.click(summarize_HUB_Monodocument, inputs=MonoInputs, outputs=MonoOutputs)
- MultiButton.click(summarize_HUB_Multidocument, inputs=MultiInputs, outputs=MultiOutputs)
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/model.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/model.py
deleted file mode 100644
index 66addeb6e7ac27a109775e2cac43d1724b5a6fb2..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/model.py
+++ /dev/null
@@ -1,141 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Code for constructing the model."""
-from typing import Any, Mapping, Optional, Union
-
-from absl import logging
-from alphafold.common import confidence
-from alphafold.model import features
-from alphafold.model import modules
-import haiku as hk
-import jax
-import ml_collections
-import numpy as np
-import tensorflow.compat.v1 as tf
-import tree
-
-
-def get_confidence_metrics(
- prediction_result: Mapping[str, Any]) -> Mapping[str, Any]:
- """Post processes prediction_result to get confidence metrics."""
-
- confidence_metrics = {}
- confidence_metrics['plddt'] = confidence.compute_plddt(
- prediction_result['predicted_lddt']['logits'])
- if 'predicted_aligned_error' in prediction_result:
- confidence_metrics.update(confidence.compute_predicted_aligned_error(
- prediction_result['predicted_aligned_error']['logits'],
- prediction_result['predicted_aligned_error']['breaks']))
- confidence_metrics['ptm'] = confidence.predicted_tm_score(
- prediction_result['predicted_aligned_error']['logits'],
- prediction_result['predicted_aligned_error']['breaks'])
-
- return confidence_metrics
-
-
-class RunModel:
- """Container for JAX model."""
-
- def __init__(self,
- config: ml_collections.ConfigDict,
- params: Optional[Mapping[str, Mapping[str, np.ndarray]]] = None):
- self.config = config
- self.params = params
-
- def _forward_fn(batch):
- model = modules.AlphaFold(self.config.model)
- return model(
- batch,
- is_training=False,
- compute_loss=False,
- ensemble_representations=True)
-
- self.apply = jax.jit(hk.transform(_forward_fn).apply)
- self.init = jax.jit(hk.transform(_forward_fn).init)
-
- def init_params(self, feat: features.FeatureDict, random_seed: int = 0):
- """Initializes the model parameters.
-
- If none were provided when this class was instantiated then the parameters
- are randomly initialized.
-
- Args:
- feat: A dictionary of NumPy feature arrays as output by
- RunModel.process_features.
- random_seed: A random seed to use to initialize the parameters if none
- were set when this class was initialized.
- """
- if not self.params:
- # Init params randomly.
- rng = jax.random.PRNGKey(random_seed)
- self.params = hk.data_structures.to_mutable_dict(
- self.init(rng, feat))
- logging.warning('Initialized parameters randomly')
-
- def process_features(
- self,
- raw_features: Union[tf.train.Example, features.FeatureDict],
- random_seed: int) -> features.FeatureDict:
- """Processes features to prepare for feeding them into the model.
-
- Args:
- raw_features: The output of the data pipeline either as a dict of NumPy
- arrays or as a tf.train.Example.
- random_seed: The random seed to use when processing the features.
-
- Returns:
- A dict of NumPy feature arrays suitable for feeding into the model.
- """
- if isinstance(raw_features, dict):
- return features.np_example_to_features(
- np_example=raw_features,
- config=self.config,
- random_seed=random_seed)
- else:
- return features.tf_example_to_features(
- tf_example=raw_features,
- config=self.config,
- random_seed=random_seed)
-
- def eval_shape(self, feat: features.FeatureDict) -> jax.ShapeDtypeStruct:
- self.init_params(feat)
- logging.info('Running eval_shape with shape(feat) = %s',
- tree.map_structure(lambda x: x.shape, feat))
- shape = jax.eval_shape(self.apply, self.params, jax.random.PRNGKey(0), feat)
- logging.info('Output shape was %s', shape)
- return shape
-
- def predict(self, feat: features.FeatureDict) -> Mapping[str, Any]:
- """Makes a prediction by inferencing the model on the provided features.
-
- Args:
- feat: A dictionary of NumPy feature arrays as output by
- RunModel.process_features.
-
- Returns:
- A dictionary of model outputs.
- """
- self.init_params(feat)
- logging.info('Running predict with shape(feat) = %s',
- tree.map_structure(lambda x: x.shape, feat))
- result = self.apply(self.params, jax.random.PRNGKey(0), feat)
- # This block is to ensure benchmark timings are accurate. Some blocking is
- # already happening when computing get_confidence_metrics, and this ensures
- # all outputs are blocked on.
- jax.tree_map(lambda x: x.block_until_ready(), result)
- result.update(get_confidence_metrics(result))
- logging.info('Output shape was %s',
- tree.map_structure(lambda x: x.shape, result))
- return result
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py
deleted file mode 100644
index 14eaef2dffea606027001b69d12d11cb46693e1c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py
+++ /dev/null
@@ -1,42 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_caffe_dc5.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
- (1333, 768), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco.py
deleted file mode 100644
index 0f402826d3a22714078d8c50ed6bd8959018e4e7..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = 'tridentnet_r50_caffe_mstrain_1x_coco.py'
-
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py
deleted file mode 100644
index d7a43bee01422ad4795dd27874e0cd4bb6cbfecf..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/deeplabv3_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='ASPPHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- dilations=(1, 12, 24, 36),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 136449083f7a9efbad6df94f1acd04170147aaba..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101b-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_769x769_80k_cityscapes.py'
-model = dict(
- pretrained='torchvision://resnet101',
- backbone=dict(type='ResNet', depth=101))
diff --git a/spaces/HLasse/textdescriptives/README.md b/spaces/HLasse/textdescriptives/README.md
deleted file mode 100644
index 5d79ecc9bd1973cf900d5e5db96ff7a58bed067a..0000000000000000000000000000000000000000
--- a/spaces/HLasse/textdescriptives/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Textdescriptives
-emoji: 📈
-colorFrom: green
-colorTo: red
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-tags: [NLP, feature extraction]
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/numbers.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/numbers.py
deleted file mode 100644
index 491634d692ee71e7ea0e5213b513e15be825c9b2..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/text/numbers.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import inflect
-import re
-
-
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
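-    # Years between 1000 and 3000 get special handling (e.g. 1999 -> "nineteen ninety-nine", 1905 -> "nineteen oh five"); other numbers are read as plain cardinals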
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
\ No newline at end of file
diff --git a/spaces/Hello-SimpleAI/chatgpt-detector-ling/README.md b/spaces/Hello-SimpleAI/chatgpt-detector-ling/README.md
deleted file mode 100644
index 1d7129db8932f8e70fd07d11fba7951b9bd68927..0000000000000000000000000000000000000000
--- a/spaces/Hello-SimpleAI/chatgpt-detector-ling/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Chatgpt Detector Ling
-emoji: 📈
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HiImJavivi/Practica2/app.py b/spaces/HiImJavivi/Practica2/app.py
deleted file mode 100644
index 1ded5b1e66daba863e96e5dd2443653c90ba2148..0000000000000000000000000000000000000000
--- a/spaces/HiImJavivi/Practica2/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-
-
-# Load the learner
-learn = load_learner('exportdefinitivo.pkl')
-
-# Define the labels of our model
-labels = ['0','1','2','-1']
-
-
-# Define a function that carries out the predictions
-def predict(string):
- pred,pred_idx,probs = learn.predict(string)
- return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-# Create the interface and launch it.
-gr.Interface(fn=predict, inputs=gr.inputs.Textbox(lines=1), outputs=gr.outputs.Label(num_top_classes=3),examples=['This house is very good','Going up gets you down'], title="Hypothesis deductor: labels premise/hypothesis pairs as entailment, contradiction, or neutral for natural language inference", description="Each instance consists of a premise string, a hypothesis string, and an integer label. Pairs are manually labeled for balanced classification with the labels entailment (0), neutral (1), and contradiction (2), supporting the task of natural language inference.").launch(share=False)
\ No newline at end of file
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/lengths/__init__.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/lengths/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ICML2022/OFA/fairseq/examples/camembert/README.md b/spaces/ICML2022/OFA/fairseq/examples/camembert/README.md
deleted file mode 100644
index 5ef4fe3f151bb468712f3be935ea5bb1b1360bf7..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/camembert/README.md
+++ /dev/null
@@ -1,75 +0,0 @@
-# CamemBERT: a Tasty French Language Model
-
-## Introduction
-
-[CamemBERT](https://arxiv.org/abs/1911.03894) is a pretrained language model trained on 138GB of French text based on RoBERTa.
-
-Also available in [github.com/huggingface/transformers](https://github.com/huggingface/transformers/).
-
-## Pre-trained models
-
-| Model | #params | Download | Arch. | Training data |
-|--------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------|-------|-----------------------------------|
-| `camembert` / `camembert-base` | 110M | [camembert-base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz) | Base | OSCAR (138 GB of text) |
-| `camembert-large` | 335M | [camembert-large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-large.tar.gz) | Large | CCNet (135 GB of text) |
-| `camembert-base-ccnet` | 110M | [camembert-base-ccnet.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet.tar.gz) | Base | CCNet (135 GB of text) |
-| `camembert-base-wikipedia-4gb` | 110M | [camembert-base-wikipedia-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-wikipedia-4gb.tar.gz) | Base | Wikipedia (4 GB of text) |
-| `camembert-base-oscar-4gb` | 110M | [camembert-base-oscar-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-oscar-4gb.tar.gz) | Base | Subsample of OSCAR (4 GB of text) |
-| `camembert-base-ccnet-4gb` | 110M | [camembert-base-ccnet-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet-4gb.tar.gz) | Base | Subsample of CCNet (4 GB of text) |
-
-## Example usage
-
-### fairseq
-##### Load CamemBERT from torch.hub (PyTorch >= 1.1):
-```python
-import torch
-camembert = torch.hub.load('pytorch/fairseq', 'camembert')
-camembert.eval() # disable dropout (or leave in train mode to finetune)
-```
-
-##### Load CamemBERT (for PyTorch 1.0 or custom models):
-```python
-# Download camembert model
-wget https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz
-tar -xzvf camembert-base.tar.gz
-
-# Load the model in fairseq
-from fairseq.models.roberta import CamembertModel
-camembert = CamembertModel.from_pretrained('/path/to/camembert')
-camembert.eval() # disable dropout (or leave in train mode to finetune)
-```
-
-##### Filling masks:
-```python
-masked_line = 'Le camembert est <mask> :)'
-camembert.fill_mask(masked_line, topk=3)
-# [('Le camembert est délicieux :)', 0.4909118115901947, ' délicieux'),
-# ('Le camembert est excellent :)', 0.10556942224502563, ' excellent'),
-# ('Le camembert est succulent :)', 0.03453322499990463, ' succulent')]
-```
-
-##### Extract features from Camembert:
-```python
-# Extract the last layer's features
-line = "J'aime le camembert !"
-tokens = camembert.encode(line)
-last_layer_features = camembert.extract_features(tokens)
-assert last_layer_features.size() == torch.Size([1, 10, 768])
-
-# Extract all layer's features (layer 0 is the embedding layer)
-all_layers = camembert.extract_features(tokens, return_all_hiddens=True)
-assert len(all_layers) == 13
-assert torch.all(all_layers[-1] == last_layer_features)
-```
-
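-##### Using the Hugging Face transformers library:
-A rough equivalent of the fill-mask example above, sketched with the `transformers` pipeline API (this assumes a recent `transformers` release and the `camembert-base` checkpoint on the model hub; CamemBERT uses `<mask>` as its mask token):
-```python
-from transformers import pipeline
-
-# Fill-mask pipeline backed by the camembert-base checkpoint
-camembert_fill_mask = pipeline("fill-mask", model="camembert-base", tokenizer="camembert-base")
-print(camembert_fill_mask("Le camembert est <mask> :)"))
-```
-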
-## Citation
-If you use our work, please cite:
-
-```bibtex
-@inproceedings{martin2020camembert,
- title={CamemBERT: a Tasty French Language Model},
- author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
- booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
- year={2020}
-}
-```
diff --git a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/misc.py b/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/misc.py
deleted file mode 100644
index 1acfb7ea16904c07e362aeaae7337920d06fe5ca..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/misc.py
+++ /dev/null
@@ -1,283 +0,0 @@
-# python3.7
-
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Misc functions for customized operations.
-
-Please refer to https://github.com/NVlabs/stylegan3
-"""
-
-# pylint: disable=line-too-long
-# pylint: disable=missing-class-docstring
-# pylint: disable=missing-function-docstring
-# pylint: disable=use-maxsplit-arg
-
-import re
-import contextlib
-import warnings
-from easydict import EasyDict
-import numpy as np
-import torch
-
-#----------------------------------------------------------------------------
-# Cached construction of constant tensors. Avoids CPU=>GPU copy when the
-# same constant is used multiple times.
-
-_constant_cache = dict()
-
-def constant(value, shape=None, dtype=None, device=None, memory_format=None):
- value = np.asarray(value)
- if shape is not None:
- shape = tuple(shape)
- if dtype is None:
- dtype = torch.get_default_dtype()
- if device is None:
- device = torch.device('cpu')
- if memory_format is None:
- memory_format = torch.contiguous_format
-
- key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format)
- tensor = _constant_cache.get(key, None)
- if tensor is None:
- tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device)
- if shape is not None:
- tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape))
- tensor = tensor.contiguous(memory_format=memory_format)
- _constant_cache[key] = tensor
- return tensor
-
-#----------------------------------------------------------------------------
-# Replace NaN/Inf with specified numerical values.
-
-try:
- nan_to_num = torch.nan_to_num # 1.8.0a0
-except AttributeError:
- def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin
- assert isinstance(input, torch.Tensor)
- if posinf is None:
- posinf = torch.finfo(input.dtype).max
- if neginf is None:
- neginf = torch.finfo(input.dtype).min
- assert nan == 0
- return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out)
-
-#----------------------------------------------------------------------------
-# Symbolic assert.
-
-try:
- symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access
-except AttributeError:
- symbolic_assert = torch.Assert # 1.7.0
-
-#----------------------------------------------------------------------------
-# Context manager to temporarily suppress known warnings in torch.jit.trace().
-# Note: Cannot use catch_warnings because of https://bugs.python.org/issue29672
-
-@contextlib.contextmanager
-def suppress_tracer_warnings():
- flt = ('ignore', None, torch.jit.TracerWarning, None, 0)
- warnings.filters.insert(0, flt)
- yield
- warnings.filters.remove(flt)
-
-#----------------------------------------------------------------------------
-# Assert that the shape of a tensor matches the given list of integers.
-# None indicates that the size of a dimension is allowed to vary.
-# Performs symbolic assertion when used in torch.jit.trace().
-
-def assert_shape(tensor, ref_shape):
- if tensor.ndim != len(ref_shape):
- raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}')
- for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)):
- if ref_size is None:
- pass
- elif isinstance(ref_size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}')
- elif isinstance(size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}')
- elif size != ref_size:
- raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}')
-
-#----------------------------------------------------------------------------
-# Function decorator that calls torch.autograd.profiler.record_function().
-
-def profiled_function(fn):
- def decorator(*args, **kwargs):
- with torch.autograd.profiler.record_function(fn.__name__):
- return fn(*args, **kwargs)
- decorator.__name__ = fn.__name__
- return decorator
-
-#----------------------------------------------------------------------------
-# Sampler for torch.utils.data.DataLoader that loops over the dataset
-# indefinitely, shuffling items as it goes.
-
-class InfiniteSampler(torch.utils.data.Sampler):
- def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5):
- assert len(dataset) > 0
- assert num_replicas > 0
- assert 0 <= rank < num_replicas
- assert 0 <= window_size <= 1
- super().__init__(dataset)
- self.dataset = dataset
- self.rank = rank
- self.num_replicas = num_replicas
- self.shuffle = shuffle
- self.seed = seed
- self.window_size = window_size
-
- def __iter__(self):
- order = np.arange(len(self.dataset))
- rnd = None
- window = 0
- if self.shuffle:
- rnd = np.random.RandomState(self.seed)
- rnd.shuffle(order)
- window = int(np.rint(order.size * self.window_size))
-
- idx = 0
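-        # Cycle through the (optionally shuffled) order forever, yielding this rank's share and continually swapping nearby entries within the window so the order keeps changing across passes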
- while True:
- i = idx % order.size
- if idx % self.num_replicas == self.rank:
- yield order[i]
- if window >= 2:
- j = (i - rnd.randint(window)) % order.size
- order[i], order[j] = order[j], order[i]
- idx += 1
-
-#----------------------------------------------------------------------------
-# Utilities for operating with torch.nn.Module parameters and buffers.
-
-def params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.parameters()) + list(module.buffers())
-
-def named_params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.named_parameters()) + list(module.named_buffers())
-
-def copy_params_and_buffers(src_module, dst_module, require_all=False):
- assert isinstance(src_module, torch.nn.Module)
- assert isinstance(dst_module, torch.nn.Module)
- src_tensors = dict(named_params_and_buffers(src_module))
- for name, tensor in named_params_and_buffers(dst_module):
- assert (name in src_tensors) or (not require_all)
- if name in src_tensors:
- tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad)
-
-#----------------------------------------------------------------------------
-# Context manager for easily enabling/disabling DistributedDataParallel
-# synchronization.
-
-@contextlib.contextmanager
-def ddp_sync(module, sync):
- assert isinstance(module, torch.nn.Module)
- if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel):
- yield
- else:
- with module.no_sync():
- yield
-
-#----------------------------------------------------------------------------
-# Check DistributedDataParallel consistency across processes.
-
-def check_ddp_consistency(module, ignore_regex=None):
- assert isinstance(module, torch.nn.Module)
- for name, tensor in named_params_and_buffers(module):
- fullname = type(module).__name__ + '.' + name
- if ignore_regex is not None and re.fullmatch(ignore_regex, fullname):
- continue
- tensor = tensor.detach()
- if tensor.is_floating_point():
- tensor = nan_to_num(tensor)
- other = tensor.clone()
- torch.distributed.broadcast(tensor=other, src=0)
- assert (tensor == other).all(), fullname
-
-#----------------------------------------------------------------------------
-# Print summary table of module hierarchy.
-
-def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True):
- assert isinstance(module, torch.nn.Module)
- assert not isinstance(module, torch.jit.ScriptModule)
- assert isinstance(inputs, (tuple, list))
-
- # Register hooks.
- entries = []
- nesting = [0]
- def pre_hook(_mod, _inputs):
- nesting[0] += 1
- def post_hook(mod, _inputs, outputs):
- nesting[0] -= 1
- if nesting[0] <= max_nesting:
- outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs]
- outputs = [t for t in outputs if isinstance(t, torch.Tensor)]
- entries.append(EasyDict(mod=mod, outputs=outputs))
- hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()]
- hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()]
-
- # Run module.
- outputs = module(*inputs)
- for hook in hooks:
- hook.remove()
-
- # Identify unique outputs, parameters, and buffers.
- tensors_seen = set()
- for e in entries:
- e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen]
- e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen]
- e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen]
- tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs}
-
- # Filter out redundant entries.
- if skip_redundant:
- entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)]
-
- # Construct table.
- rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']]
- rows += [['---'] * len(rows[0])]
- param_total = 0
- buffer_total = 0
- submodule_names = {mod: name for name, mod in module.named_modules()}
- for e in entries:
- name = '' if e.mod is module else submodule_names[e.mod]
- param_size = sum(t.numel() for t in e.unique_params)
- buffer_size = sum(t.numel() for t in e.unique_buffers)
- output_shapes = [str(list(t.shape)) for t in e.outputs]
- output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs]
- rows += [[
- name + (':0' if len(e.outputs) >= 2 else ''),
- str(param_size) if param_size else '-',
- str(buffer_size) if buffer_size else '-',
- (output_shapes + ['-'])[0],
- (output_dtypes + ['-'])[0],
- ]]
- for idx in range(1, len(e.outputs)):
- rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]]
- param_total += param_size
- buffer_total += buffer_size
- rows += [['---'] * len(rows[0])]
- rows += [['Total', str(param_total), str(buffer_total), '-', '-']]
-
- # Print table.
- widths = [max(len(cell) for cell in column) for column in zip(*rows)]
- print()
- for row in rows:
- print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths)))
- print()
- return outputs
-
-#----------------------------------------------------------------------------
-
-# pylint: enable=line-too-long
-# pylint: enable=missing-class-docstring
-# pylint: enable=missing-function-docstring
-# pylint: enable=use-maxsplit-arg
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/metrics.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/metrics.py
deleted file mode 100644
index 65ea463c0dab647ea81ec0fa95441dddfd631e33..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/metrics.py
+++ /dev/null
@@ -1,363 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Model validation metrics
-"""
-
-import math
-import warnings
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-from utils import TryExcept, threaded
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
- return (x[:, :4] * w).sum(1)
-
-
-def smooth(y, f=0.05):
- # Box filter of fraction f
- nf = round(len(y) * f * 2) // 2 + 1 # number of filter elements (must be odd)
- p = np.ones(nf // 2) # ones padding
- yp = np.concatenate((p * y[0], y, p * y[-1]), 0) # y padded
- return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed
-
-
-def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=(), eps=1e-16, prefix=""):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes, nt = np.unique(target_cls, return_counts=True)
- nc = unique_classes.shape[0] # number of classes, number of detections
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = nt[ci] # number of labels
- n_p = i.sum() # number of predictions
- if n_p == 0 or n_l == 0:
- continue
-
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + eps) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + eps)
- names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data
- names = dict(enumerate(names)) # to dict
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / f'{prefix}PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / f'{prefix}F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / f'{prefix}P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / f'{prefix}R_curve.png', names, ylabel='Recall')
-
- i = smooth(f1.mean(0), 0.1).argmax() # max F1 index
- p, r, f1 = p[:, i], r[:, i], f1[:, i]
- tp = (r * nt).round() # true positives
- fp = (tp / (p + eps) - tp).round() # false positives
- return tp, fp, p, r, f1, ap, unique_classes.astype(int)
-
-
-def compute_ap(recall, precision):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- mrec = np.concatenate(([0.0], recall, [1.0]))
- mpre = np.concatenate(([1.0], precision, [0.0]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
-
-
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
-        Update the confusion matrix with one batch of detections and ground-truth labels.
-        Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- if detections is None:
- gt_classes = labels.int()
- for gc in gt_classes:
- self.matrix[self.nc, gc] += 1 # background FN
- return
-
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = box_iou(labels[:, 1:], detections[:, :4])
-
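-        # Greedy one-to-one matching: keep only the highest-IoU pairing above the threshold for each detection and each ground-truth label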
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(int)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[detection_classes[m1[j]], gc] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # true background
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # predicted background
-
- def matrix(self):
- return self.matrix
-
- def tp_fp(self):
- tp = self.matrix.diagonal() # true positives
- fp = self.matrix.sum(1) - tp # false positives
- # fn = self.matrix.sum(0) - tp # false negatives (missed detections)
- return tp[:-1], fp[:-1] # remove background class
-
- @TryExcept('WARNING ⚠️ ConfusionMatrix plot failure')
- def plot(self, normalize=True, save_dir='', names=()):
- import seaborn as sn
-
- array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-9) if normalize else 1) # normalize columns
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig, ax = plt.subplots(1, 1, figsize=(12, 9), tight_layout=True)
- nc, nn = self.nc, len(names) # number of classes, names
- sn.set(font_scale=1.0 if nc < 50 else 0.8) # for label size
- labels = (0 < nn < 99) and (nn == nc) # apply names to ticklabels
- ticklabels = (names + ['background']) if labels else "auto"
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered
- sn.heatmap(array,
- ax=ax,
- annot=nc < 30,
- annot_kws={
- "size": 8},
- cmap='Blues',
- fmt='.2f',
- square=True,
- vmin=0.0,
- xticklabels=ticklabels,
- yticklabels=ticklabels).set_facecolor((1, 1, 1))
-        ax.set_xlabel('True')
- ax.set_ylabel('Predicted')
- ax.set_title('Confusion Matrix')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
- plt.close(fig)
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
-
-
-def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns Intersection over Union (IoU) of box1(1,4) to box2(n,4)
-
- # Get the coordinates of bounding boxes
- if xywh: # transform from xywh to xyxy
- (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, -1), box2.chunk(4, -1)
- w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2
- b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_
- b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_
- else: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, -1)
- b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, -1)
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- union = w1 * h1 + w2 * h2 - inter + eps
-
- # IoU
- iou = inter / union
- if CIoU or DIoU or GIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2
- if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- return iou - rho2 / c2 # DIoU
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU https://arxiv.org/pdf/1902.09630.pdf
- return iou # IoU
-
-
-def box_iou(box1, box2, eps=1e-7):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- (a1, a2), (b1, b2) = box1.unsqueeze(1).chunk(2, 2), box2.unsqueeze(0).chunk(2, 2)
- inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp(0).prod(2)
-
- # IoU = inter / (area1 + area2 - inter)
- return inter / ((a2 - a1).prod(2) + (b2 - b1).prod(2) - inter + eps)
-
-
-def bbox_ioa(box1, box2, eps=1e-7):
- """ Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2
- box1: np.array of shape(4)
- box2: np.array of shape(nx4)
- returns: np.array of shape(n)
- """
-
- # Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = box1
- b2_x1, b2_y1, b2_x2, b2_y2 = box2.T
-
- # Intersection area
- inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
- (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
-
- # box2 area
- box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps
-
- # Intersection over box2 area
- return inter_area / box2_area
-
-
-def wh_iou(wh1, wh2, eps=1e-7):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter + eps) # iou = inter / (area1 + area2 - inter)
-
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-
-@threaded
-def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- ax.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- ax.set_title('Precision-Recall Curve')
- fig.savefig(save_dir, dpi=250)
- plt.close(fig)
-
-
-@threaded
-def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = smooth(py.mean(0), 0.05)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- ax.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- ax.set_title(f'{ylabel}-Confidence Curve')
- fig.savefig(save_dir, dpi=250)
- plt.close(fig)
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/onnx/onnx_export_48k.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/onnx/onnx_export_48k.py
deleted file mode 100644
index 9a046353dc25b658684fa76bdf8b4f21d1a77c98..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/onnx/onnx_export_48k.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import argparse
-import time
-import numpy as np
-import onnx
-from onnxsim import simplify
-import onnxruntime as ort
-import onnxoptimizer
-import torch
-from model_onnx_48k import SynthesizerTrn
-import utils
-from hubert import hubert_model_onnx
-
-def main(HubertExport,NetExport):
-
- path = "NyaruTaffy"
-
- if(HubertExport):
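-        # Export the HuBERT soft content encoder to ONNX with a dynamic "sample_length" axis on the "source" input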
- device = torch.device("cuda")
- hubert_soft = hubert_model_onnx.hubert_soft("hubert/model.pt")
- test_input = torch.rand(1, 1, 16000)
- input_names = ["source"]
- output_names = ["embed"]
- torch.onnx.export(hubert_soft.to(device),
- test_input.to(device),
- "hubert3.0.onnx",
- dynamic_axes={
- "source": {
- 2: "sample_length"
- }
- },
- verbose=False,
- opset_version=13,
- input_names=input_names,
- output_names=output_names)
- if(NetExport):
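-        # Export the SynthesizerTrn voice-conversion network to ONNX with dynamic axes on "hidden_unit" and "pitch"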
- device = torch.device("cuda")
- hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json")
- SVCVITS = SynthesizerTrn(
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model)
- _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", SVCVITS, None)
- _ = SVCVITS.eval().to(device)
- for i in SVCVITS.parameters():
- i.requires_grad = False
- test_hidden_unit = torch.rand(1, 50, 256)
- test_lengths = torch.LongTensor([50])
- test_pitch = torch.rand(1, 50)
- test_sid = torch.LongTensor([0])
- input_names = ["hidden_unit", "lengths", "pitch", "sid"]
- output_names = ["audio", ]
- SVCVITS.eval()
- torch.onnx.export(SVCVITS,
- (
- test_hidden_unit.to(device),
- test_lengths.to(device),
- test_pitch.to(device),
- test_sid.to(device)
- ),
- f"checkpoints/{path}/model.onnx",
- dynamic_axes={
- "hidden_unit": [0, 1],
- "pitch": [1]
- },
- do_constant_folding=False,
- opset_version=16,
- verbose=False,
- input_names=input_names,
- output_names=output_names)
-
-
-if __name__ == '__main__':
- main(False,True)
diff --git a/spaces/Illumotion/Koboldcpp/include/CL/Utils/File.h b/spaces/Illumotion/Koboldcpp/include/CL/Utils/File.h
deleted file mode 100644
index 62c8e95a764ff1e6b993133193623e97d782699f..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/include/CL/Utils/File.h
+++ /dev/null
@@ -1,42 +0,0 @@
-#pragma once
-
-// OpenCL Utils includes
-#include "OpenCLUtils_Export.h"
-
-// OpenCL includes
-#include <CL/cl.h>
-
-// read all the text file contents securely in ANSI C89
-// return pointer to C-string with file contents
-// can handle streams with no known size and no support for fseek
-// based on https://stackoverflow.com/questions/14002954/ by Nominal Animal
-UTILS_EXPORT
-char* cl_util_read_text_file(const char* const filename, size_t* const length,
- cl_int* const error);
-
-// read all the binary file contents securely in ANSI C89
-// return pointer to file contents
-// can handle streams with no known size and no support for fseek
-// based on https://stackoverflow.com/questions/14002954/ by Nominal Animal
-UTILS_EXPORT
-unsigned char* cl_util_read_binary_file(const char* const filename,
- size_t* const length,
- cl_int* const error);
-
-// write binaries of OpenCL compiled program
-// binaries are written as separate files for each device
-// with file name "(program_file_name)_(name of device).bin"
-// based on variant of Logan
-// http://logan.tw/posts/2014/11/22/pre-compile-the-opencl-kernel-program-part-2/
-UTILS_EXPORT
-cl_int cl_util_write_binaries(const cl_program program,
- const char* const program_file_name);
-
-// read binaries of OpenCL compiled program
-// from files of file names "(program_file_name)_(name of device).bin"
-UTILS_EXPORT
-cl_program cl_util_read_binaries(const cl_context context,
- const cl_device_id* const devices,
- const cl_uint num_devices,
- const char* const program_file_name,
- cl_int* const error);
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.js b/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.js
deleted file mode 100644
index f013209931218fd054979e290706f1945de76856..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.js
+++ /dev/null
@@ -1,502 +0,0 @@
-
-// custom javascript here
-
-const MAX_HISTORY_LENGTH = 32;
-
-var key_down_history = [];
-var currentIndex = -1;
-var user_input_ta;
-
-var gradioContainer = null;
-var user_input_ta = null;
-var user_input_tb = null;
-var userInfoDiv = null;
-var appTitleDiv = null;
-var chatbot = null;
-var chatbotWrap = null;
-var apSwitch = null;
-var empty_botton = null;
-var messageBotDivs = null;
-var loginUserForm = null;
-var logginUser = null;
-
-var userLogged = false;
-var usernameGotten = false;
-var historyLoaded = false;
-
-var ga = document.getElementsByTagName("gradio-app");
-var targetNode = ga[0];
-var isInIframe = (window.self !== window.top);
-var language = navigator.language.slice(0,2);
-
-var forView_i18n = {
- 'zh': "仅供查看",
- 'en': "For viewing only",
- 'ja': "閲覧専用",
- 'fr': "Pour consultation seulement",
- 'es': "Solo para visualización",
-};
-
-// Has the gradio page finished loading? Can we touch its elements yet?
-function gradioLoaded(mutations) {
- for (var i = 0; i < mutations.length; i++) {
- if (mutations[i].addedNodes.length) {
- loginUserForm = document.querySelector(".gradio-container > .main > .wrap > .panel > .form")
- gradioContainer = document.querySelector(".gradio-container");
- user_input_tb = document.getElementById('user_input_tb');
- userInfoDiv = document.getElementById("user_info");
- appTitleDiv = document.getElementById("app_title");
- chatbot = document.querySelector('#chuanhu_chatbot');
- chatbotWrap = document.querySelector('#chuanhu_chatbot > .wrap');
- apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
- empty_botton = document.getElementById("empty_btn")
-
- if (loginUserForm) {
- localStorage.setItem("userLogged", true);
- userLogged = true;
- }
-
-            if (gradioContainer && apSwitch) { // has gradioContainer loaded?
- adjustDarkMode();
- }
-            if (user_input_tb) { // has user_input_tb loaded?
- selectHistory();
- }
-            if (userInfoDiv && appTitleDiv) { // have userInfoDiv and appTitleDiv loaded?
- if (!usernameGotten) {
- getUserInfo();
- }
- setTimeout(showOrHideUserInfo(), 2000);
- }
-            if (chatbot) { // has chatbot loaded?
- setChatbotHeight();
- }
- if (chatbotWrap) {
- if (!historyLoaded) {
- loadHistoryHtml();
- }
- setChatbotScroll();
- }
- if (empty_botton) {
- emptyHistory();
- }
- }
- }
-}
-
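-// Inject a CSS rule that tags the last history message as "for viewing only" in the browser's language.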
-function webLocale() {
- console.log("webLocale", language);
- if (forView_i18n.hasOwnProperty(language)) {
- var forView = forView_i18n[language];
- var forViewStyle = document.createElement('style');
- forViewStyle.innerHTML = '.wrap>.history-message>:last-child::after { content: "' + forView + '"!important; }';
- document.head.appendChild(forViewStyle);
- // console.log("added forViewStyle", forView);
- }
-}
-
-function selectHistory() {
- user_input_ta = user_input_tb.querySelector("textarea");
- if (user_input_ta) {
-        observer.disconnect(); // stop observing
-        // Listen for keydown events on the textarea
- user_input_ta.addEventListener("keydown", function (event) {
- var value = user_input_ta.value.trim();
-            // Check whether the pressed key is an arrow key
- if (event.code === 'ArrowUp' || event.code === 'ArrowDown') {
-                // If an arrow key was pressed while the input box holds content that is not yet in the history, do nothing
- if (value && key_down_history.indexOf(value) === -1)
- return;
-                // For actions we do handle, prevent the default behavior.
- event.preventDefault();
- var length = key_down_history.length;
- if (length === 0) {
-                    currentIndex = -1; // if the history is empty, just reset the current selection
- return;
- }
- if (currentIndex === -1) {
- currentIndex = length;
- }
- if (event.code === 'ArrowUp' && currentIndex > 0) {
- currentIndex--;
- user_input_ta.value = key_down_history[currentIndex];
- } else if (event.code === 'ArrowDown' && currentIndex < length - 1) {
- currentIndex++;
- user_input_ta.value = key_down_history[currentIndex];
- }
- user_input_ta.selectionStart = user_input_ta.value.length;
- user_input_ta.selectionEnd = user_input_ta.value.length;
- const input_event = new InputEvent("input", { bubbles: true, cancelable: true });
- user_input_ta.dispatchEvent(input_event);
- } else if (event.code === "Enter") {
- if (value) {
- currentIndex = -1;
- if (key_down_history.indexOf(value) === -1) {
- key_down_history.push(value);
- if (key_down_history.length > MAX_HISTORY_LENGTH) {
- key_down_history.shift();
- }
- }
- }
- }
- });
- }
-}
-
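-// Read the logged-in username from the user-info div (retrying while it is still loading) and cache it in localStorage.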
-var username = null;
-function getUserInfo() {
- if (usernameGotten) {
- return;
- }
- userLogged = localStorage.getItem('userLogged');
- if (userLogged) {
- username = userInfoDiv.innerText;
- if (username) {
- if (username.includes("getting user info…")) {
- setTimeout(getUserInfo, 500);
- return;
- } else if (username === " ") {
- localStorage.removeItem("username");
- localStorage.removeItem("userLogged")
- userLogged = false;
- usernameGotten = true;
- return;
- } else {
- username = username.match(/User:\s*(.*)/)[1] || username;
- localStorage.setItem("username", username);
- usernameGotten = true;
- clearHistoryHtml();
- }
- }
- }
-}
-
-function toggleUserInfoVisibility(shouldHide) {
- if (userInfoDiv) {
- if (shouldHide) {
- userInfoDiv.classList.add("hideK");
- } else {
- userInfoDiv.classList.remove("hideK");
- }
- }
-}
-function showOrHideUserInfo() {
- var sendBtn = document.getElementById("submit_btn");
-
- // Bind mouse/touch events to show/hide user info
- appTitleDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- userInfoDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- sendBtn.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
-
- appTitleDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- userInfoDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- sendBtn.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
-
- appTitleDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- userInfoDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- sendBtn.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
-
- appTitleDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- userInfoDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- sendBtn.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
-        }, 3000); // Delay 3 seconds to hide user info
- };
-
-    // Hide user info after 2 seconds
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 2000);
-}
-
-function toggleDarkMode(isEnabled) {
- if (isEnabled) {
- document.body.classList.add("dark");
- document.body.style.setProperty("background-color", "var(--neutral-950)", "important");
- } else {
- document.body.classList.remove("dark");
- document.body.style.backgroundColor = "";
- }
-}
-function adjustDarkMode() {
- const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)");
-
-    // Set the initial state based on the current color scheme
- apSwitch.checked = darkModeQuery.matches;
- toggleDarkMode(darkModeQuery.matches);
-    // Listen for changes in the color scheme
- darkModeQuery.addEventListener("change", (e) => {
- apSwitch.checked = e.matches;
- toggleDarkMode(e.matches);
- });
- // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
- apSwitch.addEventListener("change", (e) => {
- toggleDarkMode(e.target.checked);
- });
-}
-
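-// Size the chatbot to the viewport (or a fixed 700px when embedded in an iframe), accounting for the status display height.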
-function setChatbotHeight() {
- const screenWidth = window.innerWidth;
- const statusDisplay = document.querySelector('#status_display');
- const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0;
- const wrap = chatbot.querySelector('.wrap');
- const vh = window.innerHeight * 0.01;
- document.documentElement.style.setProperty('--vh', `${vh}px`);
- if (isInIframe) {
- chatbot.style.height = `700px`;
- wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`
- } else {
- if (screenWidth <= 320) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else if (screenWidth <= 499) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- }
- }
-}
-function setChatbotScroll() {
- var scrollHeight = chatbotWrap.scrollHeight;
- chatbotWrap.scrollTo(0,scrollHeight)
-}
-var rangeInputs = null;
-var numberInputs = null;
-function setSlider() {
- rangeInputs = document.querySelectorAll('input[type="range"]');
- numberInputs = document.querySelectorAll('input[type="number"]')
- setSliderRange();
- rangeInputs.forEach(rangeInput => {
- rangeInput.addEventListener('input', setSliderRange);
- });
- numberInputs.forEach(numberInput => {
- numberInput.addEventListener('input', setSliderRange);
- })
-}
-function setSliderRange() {
- var range = document.querySelectorAll('input[type="range"]');
- range.forEach(range => {
- range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%';
- });
-}
-
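-// Add "copy" and "toggle raw/markdown" buttons to a bot message element.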
-function addChuanhuButton(botElement) {
- var rawMessage = null;
- var mdMessage = null;
- rawMessage = botElement.querySelector('.raw-message');
- mdMessage = botElement.querySelector('.md-message');
- if (!rawMessage) {
- var buttons = botElement.querySelectorAll('button.chuanhu-btn');
- for (var i = 0; i < buttons.length; i++) {
- buttons[i].parentNode.removeChild(buttons[i]);
- }
- return;
- }
- var copyButton = null;
- var toggleButton = null;
- copyButton = botElement.querySelector('button.copy-bot-btn');
- toggleButton = botElement.querySelector('button.toggle-md-btn');
- if (copyButton) copyButton.remove();
- if (toggleButton) toggleButton.remove();
-
- // Copy bot button
- var copyButton = document.createElement('button');
- copyButton.classList.add('chuanhu-btn');
- copyButton.classList.add('copy-bot-btn');
- copyButton.setAttribute('aria-label', 'Copy');
- copyButton.innerHTML = copyIcon;
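-    // On click, copy the raw message text to the clipboard and briefly swap the icon as feedback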
- copyButton.addEventListener('click', () => {
- const textToCopy = rawMessage.innerText;
- navigator.clipboard
- .writeText(textToCopy)
- .then(() => {
- copyButton.innerHTML = copiedIcon;
- setTimeout(() => {
- copyButton.innerHTML = copyIcon;
- }, 1500);
- })
- .catch(() => {
- console.error("copy failed");
- });
- });
- botElement.appendChild(copyButton);
-
- // Toggle button
- var toggleButton = document.createElement('button');
- toggleButton.classList.add('chuanhu-btn');
- toggleButton.classList.add('toggle-md-btn');
- toggleButton.setAttribute('aria-label', 'Toggle');
- var renderMarkdown = mdMessage.classList.contains('hideM');
- toggleButton.innerHTML = renderMarkdown ? mdIcon : rawIcon;
- toggleButton.addEventListener('click', () => {
- renderMarkdown = mdMessage.classList.contains('hideM');
- if (renderMarkdown){
- renderMarkdownText(botElement);
- toggleButton.innerHTML=rawIcon;
- } else {
- removeMarkdownText(botElement);
- toggleButton.innerHTML=mdIcon;
- }
- });
- botElement.insertBefore(toggleButton, copyButton);
-}
-
-function renderMarkdownText(message) {
- var mdDiv = message.querySelector('.md-message');
- if (mdDiv) mdDiv.classList.remove('hideM');
- var rawDiv = message.querySelector('.raw-message');
- if (rawDiv) rawDiv.classList.add('hideM');
-}
-function removeMarkdownText(message) {
- var rawDiv = message.querySelector('.raw-message');
- if (rawDiv) rawDiv.classList.remove('hideM');
- var mdDiv = message.querySelector('.md-message');
- if (mdDiv) mdDiv.classList.add('hideM');
-}
-
-let timeoutId;
-let isThrottled = false;
-var mmutation;
-// Watch for changes to bot messages anywhere in the DOM and add copy buttons to them.
-var mObserver = new MutationObserver(function (mutationsList) {
- for (mmutation of mutationsList) {
- if (mmutation.type === 'childList') {
- for (var node of mmutation.addedNodes) {
- if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') {
- saveHistoryHtml();
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton);
- }
- if (node.tagName === 'INPUT' && node.getAttribute('type') === 'range') {
- setSlider();
- }
- }
- for (var node of mmutation.removedNodes) {
- if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') {
- saveHistoryHtml();
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton);
- }
- }
- } else if (mmutation.type === 'attributes') {
- if (mmutation.target.nodeType === 1 && mmutation.target.classList.contains('message') && mmutation.target.getAttribute('data-testid') === 'bot') {
-                if (isThrottled) break; // throttle so we don't keep re-rendering over and over
- isThrottled = true;
- clearTimeout(timeoutId);
- timeoutId = setTimeout(() => {
- isThrottled = false;
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton);
- saveHistoryHtml();
- }, 500);
- }
- }
- }
-});
-mObserver.observe(document.documentElement, { attributes: true, childList: true, subtree: true });
-
-var loadhistorytime = 0; // for debugging
-function saveHistoryHtml() {
- var historyHtml = document.querySelector('#chuanhu_chatbot > .wrap');
- localStorage.setItem('chatHistory', historyHtml.innerHTML);
- // console.log("History Saved")
- historyLoaded = false;
-}
-function loadHistoryHtml() {
- var historyHtml = localStorage.getItem('chatHistory');
- if (!historyHtml) {
- historyLoaded = true;
- return; // no history, do nothing
- }
- userLogged = localStorage.getItem('userLogged');
- if (userLogged){
- historyLoaded = true;
- return; // logged in, do nothing
- }
- if (!historyLoaded) {
- var tempDiv = document.createElement('div');
- tempDiv.innerHTML = historyHtml;
- var buttons = tempDiv.querySelectorAll('button.chuanhu-btn');
- var gradioCopyButtons = tempDiv.querySelectorAll('button.copy_code_button');
- for (var i = 0; i < buttons.length; i++) {
- buttons[i].parentNode.removeChild(buttons[i]);
- }
- for (var i = 0; i < gradioCopyButtons.length; i++) {
- gradioCopyButtons[i].parentNode.removeChild(gradioCopyButtons[i]);
- }
- var fakeHistory = document.createElement('div');
- fakeHistory.classList.add('history-message');
- fakeHistory.innerHTML = tempDiv.innerHTML;
- webLocale();
- chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild);
- // var fakeHistory = document.createElement('div');
- // fakeHistory.classList.add('history-message');
- // fakeHistory.innerHTML = historyHtml;
- // chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild);
- historyLoaded = true;
- console.log("History Loaded");
- loadhistorytime += 1; // for debugging
- } else {
- historyLoaded = false;
- }
-}
-function clearHistoryHtml() {
- localStorage.removeItem("chatHistory");
- historyMessages = chatbotWrap.querySelector('.history-message');
- if (historyMessages) {
- chatbotWrap.removeChild(historyMessages);
- console.log("History Cleared");
- }
-}
-function emptyHistory() {
- empty_botton.addEventListener("click", function () {
- clearHistoryHtml();
- });
-}
-
-// Watch for DOM mutations inside the page
-var observer = new MutationObserver(function (mutations) {
- gradioLoaded(mutations);
-});
-observer.observe(targetNode, { childList: true, subtree: true });
-
-// Watch for page changes
-window.addEventListener("DOMContentLoaded", function () {
- isInIframe = (window.self !== window.top);
- historyLoaded = false;
-});
-window.addEventListener('resize', setChatbotHeight);
-window.addEventListener('scroll', setChatbotHeight);
-window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode);
-
-// button svg code
-const copyIcon = '';
-const copiedIcon = '';
-const mdIcon = '';
-const rawIcon = '';
diff --git a/spaces/KPCGD/bingo/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/KPCGD/bingo/src/lib/hooks/use-copy-to-clipboard.tsx
deleted file mode 100644
index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/lib/hooks/use-copy-to-clipboard.tsx
+++ /dev/null
@@ -1,33 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-export interface useCopyToClipboardProps {
- timeout?: number
-}
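-
-// Usage sketch (hypothetical component, not part of this hook):
-//   const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
-//   <button onClick={() => copyToClipboard(text)}>{isCopied ? 'Copied' : 'Copy'}</button>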
-
-export function useCopyToClipboard({
- timeout = 2000
-}: useCopyToClipboardProps) {
- const [isCopied, setIsCopied] = React.useState(false)
-
- const copyToClipboard = (value: string) => {
- if (typeof window === 'undefined' || !navigator.clipboard?.writeText) {
- return
- }
-
- if (!value) {
- return
- }
-
- navigator.clipboard.writeText(value).then(() => {
- setIsCopied(true)
-
- setTimeout(() => {
- setIsCopied(false)
- }, timeout)
- })
- }
-
- return { isCopied, copyToClipboard }
-}
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/attention.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/attention.py
deleted file mode 100644
index b497cb97ab77834fcf0ea3a33fcc339f94f08533..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/attention.py
+++ /dev/null
@@ -1,398 +0,0 @@
-# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
-# See more details in LICENSE.
-
-from inspect import isfunction
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn, einsum
-from einops import rearrange, repeat
-
-from ldm.modules.diffusionmodules.util import checkpoint
-
-
-def exists(val):
- return val is not None
-
-
-def uniq(arr):
- return{el: True for el in arr}.keys()
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def max_neg_value(t):
- return -torch.finfo(t.dtype).max
-
-
-def init_(tensor):
- dim = tensor.shape[-1]
- std = 1 / math.sqrt(dim)
- tensor.uniform_(-std, std)
- return tensor
-
-
-# feedforward
-class GEGLU(nn.Module):
- def __init__(self, dim_in, dim_out):
- super().__init__()
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
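-        # Project to 2*dim_out, split into value and gate halves, and gate with GELU: x * GELU(gate)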
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
- def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- project_in = nn.Sequential(
- nn.Linear(dim, inner_dim),
- nn.GELU()
- ) if not glu else GEGLU(dim, inner_dim)
-
- self.net = nn.Sequential(
- project_in,
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def Normalize(in_channels, default_eps):
- if default_eps:
- return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, affine=True)
- else:
- return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class LinearAttention(nn.Module):
- def __init__(self, dim, heads=4, dim_head=32):
- super().__init__()
- self.heads = heads
- hidden_dim = dim_head * heads
- self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False)
- self.to_out = nn.Conv2d(hidden_dim, dim, 1)
-
- def forward(self, x):
- b, c, h, w = x.shape
- qkv = self.to_qkv(x)
- q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3)
- k = torch.softmax(k.float(), dim=-1).type(k.dtype)
- # k = k.softmax(dim=-1)
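-        # Linear attention: aggregate a d x d context from keys and values, then read it out with
-        # the queries -- O(n*d^2) instead of the O(n^2*d) of standard attention.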
- context = torch.einsum('bhdn,bhen->bhde', k, v)
- out = torch.einsum('bhde,bhdn->bhen', context, q)
- out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)
- return self.to_out(out)
-
-
-class SpatialSelfAttention(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
-        self.norm = Normalize(in_channels, default_eps=False)  # eps=1e-6, as in the standard GroupNorm
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b,c,h,w = q.shape
- q = rearrange(q, 'b c h w -> b (h w) c')
- k = rearrange(k, 'b c h w -> b c (h w)')
- w_ = torch.einsum('bij,bjk->bik', q, k)
-
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.softmax(w_.float(), dim=2).type(w_.dtype)
- # w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = rearrange(v, 'b c h w -> b c (h w)')
- w_ = rearrange(w_, 'b i j -> b j i')
- h_ = torch.einsum('bij,bjk->bik', v, w_)
- h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)
- h_ = self.proj_out(h_)
-
- return x+h_
-
-
-class CrossAttention(nn.Module):
- def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):
- super().__init__()
- inner_dim = dim_head * heads
- context_dim = default(context_dim, query_dim)
-
- self.scale = dim_head ** -0.5
- self.heads = heads
-
- self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
- self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
-
- self.to_out = nn.Sequential(
- nn.Linear(inner_dim, query_dim),
- nn.Dropout(dropout)
- )
-
- self.prompt_to_prompt = False
-
- def forward(self, x, context=None, mask=None):
- is_self_attn = context is None
-
- h = self.heads
-
- q = self.to_q(x)
- context = default(context, x)
- k = self.to_k(context)
- v = self.to_v(context)
-
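-        # Fold the head dimension into the batch: (b, n, h*d) -> (b*h, n, d), so attention runs per head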
- q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
-
- sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
-
- if self.prompt_to_prompt and is_self_attn:
- # Unlike the original Prompt-to-Prompt which uses cross-attention layers, we copy attention maps for self-attention layers.
- # There must be 4 elements in the batch: {conditional, unconditional} x {prompt 1, prompt 2}
- assert x.size(0) == 4
- sims = sim.chunk(4)
- sim = torch.cat((sims[0], sims[0], sims[2], sims[2]))
-
- if exists(mask):
- mask = rearrange(mask, 'b ... -> b (...)')
- max_neg_value = -torch.finfo(sim.dtype).max
- mask = repeat(mask, 'b j -> (b h) () j', h=h)
- sim.masked_fill_(~mask, max_neg_value)
-
- # attention, what we cannot get enough of
- # attn = sim.softmax(dim=-1)
- attn = torch.softmax(sim.float(), dim=-1).type(sim.dtype)
-
- out = einsum('b i j, b j d -> b i d', attn, v)
- out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
- return self.to_out(out)
-
-
-# class BasicTransformerBlock(nn.Module):
-# def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True):
-# super().__init__()
-# self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout) # is a self-attention
-# self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)
-# self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim,
-# heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none
-# self.norm1 = nn.LayerNorm(dim)
-# self.norm2 = nn.LayerNorm(dim)
-# self.norm3 = nn.LayerNorm(dim)
-# self.checkpoint = checkpoint
-
-# def forward(self, x, context=None):
-# # return checkpoint(self._forward, (x, context), self.checkpoint)
-# return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
-
-# def _forward(self, x, context=None):
-# x = x.type(self.norm1.weight.dtype)
-# if context is not None:
-# context = context.type(self.norm1.weight.dtype)
-# x = self.attn1(self.norm1(x)) + x
-# x = self.attn2(self.norm2(x), context=context) + x
-# x = self.ff(self.norm3(x)) + x
-# return x
-
-
-class BasicTransformerBlock(nn.Module):
- ATTENTION_MODES = {
- "softmax": CrossAttention, # vanilla attention
- }
- def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True,
- disable_self_attn=False):
- super().__init__()
- attn_mode = "softmax"
- assert attn_mode in self.ATTENTION_MODES
- attn_cls = self.ATTENTION_MODES[attn_mode]
- self.disable_self_attn = disable_self_attn
- self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout,
- context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn
- self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)
- self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim,
- heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none
- self.norm1 = nn.LayerNorm(dim)
- self.norm2 = nn.LayerNorm(dim)
- self.norm3 = nn.LayerNorm(dim)
- self.checkpoint = checkpoint
-
- def forward(self, x, context=None):
- return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
-
- def _forward(self, x, context=None):
- x = x.type(self.norm1.weight.dtype)
- if context is not None:
- context = context.type(self.norm1.weight.dtype)
- x = self.attn1(self.norm1(x)) + x
- x = self.attn2(self.norm2(x), context=context) + x
- x = self.ff(self.norm3(x)) + x
- return x
- # x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
- # x = self.attn2(self.norm2(x), context=context) + x
- # x = self.ff(self.norm3(x)) + x
- # return x
-
-
-# class SpatialTransformer(nn.Module):
-# """
-# Transformer block for image-like data.
-# First, project the input (aka embedding)
-# and reshape to b, t, d.
-# Then apply standard transformer action.
-# Finally, reshape to image
-# """
-# def __init__(self, in_channels, n_heads, d_head, default_eps, force_type_convert,
-# depth=1, dropout=0., context_dim=None):
-# super().__init__()
-# self.in_channels = in_channels
-# inner_dim = n_heads * d_head
-# self.force_type_convert = force_type_convert
-# self.norm = Normalize(in_channels, default_eps)
-
-# self.proj_in = nn.Conv2d(in_channels,
-# inner_dim,
-# kernel_size=1,
-# stride=1,
-# padding=0)
-
-# self.transformer_blocks = nn.ModuleList(
-# [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim)
-# for d in range(depth)]
-# )
-
-# self.proj_out = zero_module(nn.Conv2d(inner_dim,
-# in_channels,
-# kernel_size=1,
-# stride=1,
-# padding=0))
-
-# def forward(self, x, context=None):
-# # note: if no context is given, cross-attention defaults to self-attention
-# b, c, h, w = x.shape
-# x_in = x
-# if self.force_type_convert:
-# x = self.norm.float()(x.float())
-# x = x.half()
-# else:
-# x = self.norm(x)
-# x = self.proj_in(x)
-# x = rearrange(x, 'b c h w -> b (h w) c')
-# for block in self.transformer_blocks:
-# x = block(x, context=context)
-# x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)
-# x = self.proj_out(x)
-# return x + x_in
-
-class SpatialTransformer(nn.Module):
- """
- Transformer block for image-like data.
- First, project the input (aka embedding)
- and reshape to b, t, d.
- Then apply standard transformer action.
- Finally, reshape to image
- NEW: use_linear for more efficiency instead of the 1x1 convs
- """
- def __init__(self, in_channels, n_heads, d_head, default_eps, force_type_convert,
- depth=1, dropout=0., context_dim=None,
- disable_self_attn=False, use_linear=False,
- use_checkpoint=True):
- super().__init__()
- if exists(context_dim) and not isinstance(context_dim, list):
- context_dim = [context_dim]
- self.in_channels = in_channels
- inner_dim = n_heads * d_head
- self.force_type_convert = force_type_convert
- self.norm = Normalize(in_channels, default_eps)
- if not use_linear:
- self.proj_in = nn.Conv2d(in_channels,
- inner_dim,
- kernel_size=1,
- stride=1,
- padding=0)
- else:
- self.proj_in = nn.Linear(in_channels, inner_dim)
-
- self.transformer_blocks = nn.ModuleList(
- [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d],
- disable_self_attn=disable_self_attn, checkpoint=use_checkpoint)
- for d in range(depth)]
- )
- if not use_linear:
- self.proj_out = zero_module(nn.Conv2d(inner_dim,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0))
- else:
- self.proj_out = zero_module(nn.Linear(in_channels, inner_dim))
- self.use_linear = use_linear
-
- def forward(self, x, context=None):
- # note: if no context is given, cross-attention defaults to self-attention
- if not isinstance(context, list):
- context = [context]
- b, c, h, w = x.shape
- x_in = x
- if self.force_type_convert:
- x = self.norm.float()(x.float())
- # if torch.cuda.is_available():
- # x = x.half()
- else:
- x = self.norm(x)
- if not self.use_linear:
- x = self.proj_in(x)
- x = rearrange(x, 'b c h w -> b (h w) c').contiguous()
- if self.use_linear:
- x = self.proj_in(x)
- for i, block in enumerate(self.transformer_blocks):
- x = block(x, context=context[i])
- if self.use_linear:
- x = self.proj_out(x)
- x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous()
- if not self.use_linear:
- x = self.proj_out(x)
- return x + x_in
\ No newline at end of file
diff --git a/spaces/LHL3341/Hand-Write-Number-Recognization/app.py b/spaces/LHL3341/Hand-Write-Number-Recognization/app.py
deleted file mode 100644
index 0a6826e53dee022d1c085af2a12df13e60cd2030..0000000000000000000000000000000000000000
--- a/spaces/LHL3341/Hand-Write-Number-Recognization/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import streamlit as st
-import numpy as np
-import matplotlib.pyplot as plt
-import pandas as pd
-st.markdown("# Streamlit Example")
-st.markdown("""
-    - This is
-    - an
-    - unordered list
- """)
-
-# Display a pandas DataFrame
-st.dataframe(pd.DataFrame([[1, 2], [3, 4]], columns=["a", "b"]))
-
-# Display a matplotlib plot
-arr = np.random.normal(1, 1, size=100)
-fig, ax = plt.subplots()
-ax.hist(arr, bins=20)
-ax.set_title("matplotlib plot")
-st.pyplot(fig)
-
-# Add interactive widgets, such as a number input
-number = st.number_input("Insert a number", 123)
-st.write("The number entered is:", number)
\ No newline at end of file
diff --git a/spaces/Laihiujin/OneFormer/oneformer/data/build.py b/spaces/Laihiujin/OneFormer/oneformer/data/build.py
deleted file mode 100644
index fb775313605cf24ed2385681fa2c43d5068b5a4a..0000000000000000000000000000000000000000
--- a/spaces/Laihiujin/OneFormer/oneformer/data/build.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from typing import Any, Callable, Dict, List, Optional, Union
-import torch.utils.data as torchdata
-
-from detectron2.config import configurable
-
-
-from detectron2.data.common import DatasetFromList, MapDataset
-from detectron2.data.dataset_mapper import DatasetMapper
-from detectron2.data.samplers import (
- InferenceSampler,
-)
-from detectron2.data.build import (
- get_detection_dataset_dicts,
- trivial_batch_collator
-)
-"""
-This file contains the default logic to build a dataloader for training or testing.
-"""
-
-__all__ = [
- "build_detection_test_loader",
-]
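-    # Minimal usage sketch (hypothetical setup, not part of this module):
-    #   cerebro.addanalyzer(bt.analyzers.Transactions, _name="txn")
-    #   strat = cerebro.run()[0]
-    #   print(strat.analyzers.txn.get_analysis())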
-
-
-def _test_loader_from_config(cfg, dataset_name, mapper=None):
- """
- Uses the given `dataset_name` argument (instead of the names in cfg), because the
- standard practice is to evaluate each test set individually (not combining them).
- """
- if isinstance(dataset_name, str):
- dataset_name = [dataset_name]
-
- dataset = get_detection_dataset_dicts(
- dataset_name,
- filter_empty=False,
- proposal_files=[
- cfg.DATASETS.PROPOSAL_FILES_TEST[list(cfg.DATASETS.TEST).index(x)] for x in dataset_name
- ]
- if cfg.MODEL.LOAD_PROPOSALS
- else None,
- )
- if mapper is None:
- mapper = DatasetMapper(cfg, False)
- return {
- "dataset": dataset,
- "mapper": mapper,
- "num_workers": cfg.DATALOADER.NUM_WORKERS,
- "sampler": InferenceSampler(len(dataset))
- if not isinstance(dataset, torchdata.IterableDataset)
- else None,
- }
-
-
-@configurable(from_config=_test_loader_from_config)
-def build_detection_test_loader(
- dataset: Union[List[Any], torchdata.Dataset],
- *,
- mapper: Callable[[Dict[str, Any]], Any],
- sampler: Optional[torchdata.Sampler] = None,
- batch_size: int = 1,
- num_workers: int = 0,
- collate_fn: Optional[Callable[[List[Any]], Any]] = None,
-) -> torchdata.DataLoader:
- """
- Similar to `build_detection_train_loader`, with default batch size = 1,
- and sampler = :class:`InferenceSampler`. This sampler coordinates all workers
- to produce the exact set of all samples.
-
- Args:
- dataset: a list of dataset dicts,
- or a pytorch dataset (either map-style or iterable). They can be obtained
- by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`.
- mapper: a callable which takes a sample (dict) from dataset
- and returns the format to be consumed by the model.
- When using cfg, the default choice is ``DatasetMapper(cfg, is_train=False)``.
- sampler: a sampler that produces
- indices to be applied on ``dataset``. Default to :class:`InferenceSampler`,
- which splits the dataset across all workers. Sampler must be None
- if `dataset` is iterable.
- batch_size: the batch size of the data loader to be created.
- Default to 1 image per worker since this is the standard when reporting
- inference time in papers.
- num_workers: number of parallel data loading workers
- collate_fn: same as the argument of `torch.utils.data.DataLoader`.
- Defaults to do no collation and return a list of data.
-
- Returns:
- DataLoader: a torch DataLoader, that loads the given detection
- dataset, with test-time transformation and batching.
-
- Examples:
- ::
- data_loader = build_detection_test_loader(
- DatasetRegistry.get("my_test"),
- mapper=DatasetMapper(...))
-
- # or, instantiate with a CfgNode:
- data_loader = build_detection_test_loader(cfg, "my_test")
- """
- if isinstance(dataset, list):
- dataset = DatasetFromList(dataset, copy=False)
- if mapper is not None:
- dataset = MapDataset(dataset, mapper)
- if isinstance(dataset, torchdata.IterableDataset):
- assert sampler is None, "sampler must be None if dataset is IterableDataset"
- else:
- if sampler is None:
- sampler = InferenceSampler(len(dataset))
- return torchdata.DataLoader(
- dataset,
- batch_size=batch_size,
- sampler=sampler,
- drop_last=False,
- num_workers=num_workers,
- collate_fn=trivial_batch_collator if collate_fn is None else collate_fn,
- )
\ No newline at end of file
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v0.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v0.py
deleted file mode 100644
index c3d159a2602621f3a7cbc293c64309c3f09749f5..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/gui_v0.py
+++ /dev/null
@@ -1,786 +0,0 @@
-import os, sys, traceback, re
-
-import json
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from assets.configs.config import Config
-
-Config = Config()
-import PySimpleGUI as sg
-import sounddevice as sd
-import noisereduce as nr
-import numpy as np
-from fairseq import checkpoint_utils
-import librosa, torch, pyworld, faiss, time, threading
-import torch.nn.functional as F
-import torchaudio.transforms as tat
-import scipy.signal as signal
-import torchcrepe
-
-# import matplotlib.pyplot as plt
-from lib.infer.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from assets.i18n.i18n import I18nAuto
-
-i18n = I18nAuto()
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-current_dir = os.getcwd()
-
-
-class RVC:
- def __init__(
- self, key, f0_method, hubert_path, pth_path, index_path, npy_path, index_rate
- ) -> None:
- """
-        Initialization
- """
- try:
- self.f0_up_key = key
- self.time_step = 160 / 16000 * 1000
- self.f0_min = 50
- self.f0_max = 1100
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
- self.f0_method = f0_method
- self.sr = 16000
- self.window = 160
-
- # Get Torch Device
- if torch.cuda.is_available():
- self.torch_device = torch.device(
- f"cuda:{0 % torch.cuda.device_count()}"
- )
- elif torch.backends.mps.is_available():
- self.torch_device = torch.device("mps")
- else:
- self.torch_device = torch.device("cpu")
-
- if index_rate != 0:
- self.index = faiss.read_index(index_path)
- # self.big_npy = np.load(npy_path)
- self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
- print("index search enabled")
- self.index_rate = index_rate
- model_path = hubert_path
- print("load model(s) from {}".format(model_path))
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
- )
- self.model = models[0]
- self.model = self.model.to(device)
- if Config.is_half:
- self.model = self.model.half()
- else:
- self.model = self.model.float()
- self.model.eval()
- cpt = torch.load(pth_path, map_location="cpu")
- self.tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = cpt.get("f0", 1)
- self.version = cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del self.net_g.enc_q
- print(self.net_g.load_state_dict(cpt["weight"], strict=False))
- self.net_g.eval().to(device)
- if Config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
- except:
- print(traceback.format_exc())
-
- def get_regular_crepe_computation(self, x, f0_min, f0_max, model="full"):
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.torch_device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- return f0
-
- def get_harvest_computation(self, x, f0_min, f0_max):
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- return f0
-
- def get_f0(self, x, f0_up_key, inp_f0=None):
- # Calculate Padding and f0 details here
- p_len = x.shape[0] // 512 # For Now This probs doesn't work
- x_pad = 1
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
- f0 = 0
- # Here, check f0_methods and get their computations
- if self.f0_method == "harvest":
- f0 = self.get_harvest_computation(x, f0_min, f0_max)
- elif self.f0_method == "reg-crepe":
- f0 = self.get_regular_crepe_computation(x, f0_min, f0_max)
- elif self.f0_method == "reg-crepe-tiny":
- f0 = self.get_regular_crepe_computation(x, f0_min, f0_max, "tiny")
-
- # Calculate f0_course and f0_bak here
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
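-        # Quantize f0 to coarse bins 1..255 on the mel scale: mel = 1127 * ln(1 + f0/700),
-        # then rescale [mel_min, mel_max] linearly onto [1, 255].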
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(int)
- return f0_coarse, f0bak # 1-0
-
- def infer(self, feats: torch.Tensor) -> np.ndarray:
- """
-        Inference function
- """
- audio = feats.clone().cpu().numpy()
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- if Config.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- inputs = {
- "source": feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if self.version == "v1" else 12,
- }
- torch.cuda.synchronize()
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- feats = (
- self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
- )
-
-        #### index optimization: blend features retrieved from the faiss index
- try:
- if (
- hasattr(self, "index")
- and hasattr(self, "big_npy")
- and self.index_rate != 0
- ):
- npy = feats[0].cpu().numpy().astype("float32")
- score, ix = self.index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- if Config.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
- + (1 - self.index_rate) * feats
- )
- else:
- print("index search FAIL or disabled")
- except:
- traceback.print_exc()
- print("index search FAIL")
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- torch.cuda.synchronize()
- print(feats.shape)
- if self.if_f0 == 1:
- pitch, pitchf = self.get_f0(audio, self.f0_up_key)
-            p_len = min(feats.shape[1], 13000, pitch.shape[0])  # cap the length, otherwise GPU memory blows up
- else:
- pitch, pitchf = None, None
-            p_len = min(feats.shape[1], 13000)  # cap the length, otherwise GPU memory blows up
- torch.cuda.synchronize()
- # print(feats.shape,pitch.shape)
- feats = feats[:, :p_len, :]
- if self.if_f0 == 1:
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
- pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
- p_len = torch.LongTensor([p_len]).to(device)
- ii = 0 # sid
- sid = torch.LongTensor([ii]).to(device)
- with torch.no_grad():
- if self.if_f0 == 1:
- infered_audio = (
- self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
- .data.cpu()
- .float()
- )
- else:
- infered_audio = (
- self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
- )
- torch.cuda.synchronize()
- return infered_audio
-
-
-class GUIConfig:
- def __init__(self) -> None:
- self.hubert_path: str = ""
- self.pth_path: str = ""
- self.index_path: str = ""
- self.npy_path: str = ""
- self.f0_method: str = ""
- self.pitch: int = 12
- self.samplerate: int = 44100
- self.block_time: float = 1.0 # s
- self.buffer_num: int = 1
- self.threhold: int = -30
- self.crossfade_time: float = 0.08
- self.extra_time: float = 0.04
- self.I_noise_reduce = False
- self.O_noise_reduce = False
- self.index_rate = 0.3
-
-
-class GUI:
- def __init__(self) -> None:
- self.config = GUIConfig()
- self.flag_vc = False
-
- self.launcher()
-
- def load(self):
- (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- ) = self.get_devices()
- try:
- with open("values1.json", "r") as j:
- data = json.load(j)
- except:
- # Injecting f0_method into the json data
- with open("values1.json", "w") as j:
- data = {
- "pth_path": "",
- "index_path": "",
- "sg_input_device": input_devices[
- input_devices_indices.index(sd.default.device[0])
- ],
- "sg_output_device": output_devices[
- output_devices_indices.index(sd.default.device[1])
- ],
- "threhold": "-45",
- "pitch": "0",
- "index_rate": "0",
- "block_time": "1",
- "crossfade_length": "0.04",
- "extra_time": "1",
- }
- return data
-
- def launcher(self):
- data = self.load()
- sg.theme("DarkTeal12")
- input_devices, output_devices, _, _ = self.get_devices()
- layout = [
- [
- sg.Frame(
- title="Proudly forked by Mangio621",
- ),
- sg.Frame(
- title=i18n("Load model"),
- layout=[
- [
- sg.Input(
- default_text="hubert_base.pt",
- key="hubert_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Hubert Model"),
- initial_folder=os.path.join(os.getcwd()),
- file_types=(("pt files", "*.pt"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("pth_path", ""),
- key="pth_path",
- ),
- sg.FileBrowse(
- i18n("Select the .pth file"),
- initial_folder=os.path.join(os.getcwd(), "weights"),
- file_types=(("weight files", "*.pth"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("index_path", ""),
- key="index_path",
- ),
- sg.FileBrowse(
- i18n("Select the .index file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("index files", "*.index"),),
- ),
- ],
- [
- sg.Input(
-                            default_text="You don't need to fill this in.",
- key="npy_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Select the .npy file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("feature files", "*.npy"),),
- ),
- ],
- ],
- ),
- ],
- [
- # Mangio f0 Selection frame Here
- sg.Frame(
- layout=[
- [
- sg.Radio(
- "Harvest", "f0_method", key="harvest", default=True
- ),
- sg.Radio("Crepe", "f0_method", key="reg-crepe"),
- sg.Radio("Crepe Tiny", "f0_method", key="reg-crepe-tiny"),
- ]
- ],
- title="Select an f0 Method",
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Input device")),
- sg.Combo(
- input_devices,
- key="sg_input_device",
- default_value=data.get("sg_input_device", ""),
- ),
- ],
- [
- sg.Text(i18n("Output device")),
- sg.Combo(
- output_devices,
- key="sg_output_device",
- default_value=data.get("sg_output_device", ""),
- ),
- ],
- ],
- title=i18n("Audio device (please use the same type of driver)"),
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Response threshold")),
- sg.Slider(
- range=(-60, 0),
- key="threhold",
- resolution=1,
- orientation="h",
- default_value=data.get("threhold", ""),
- ),
- ],
- [
- sg.Text(i18n("Pitch settings")),
- sg.Slider(
- range=(-24, 24),
- key="pitch",
- resolution=1,
- orientation="h",
- default_value=data.get("pitch", ""),
- ),
- ],
- [
- sg.Text(i18n("Index Rate")),
- sg.Slider(
- range=(0.0, 1.0),
- key="index_rate",
- resolution=0.01,
- orientation="h",
- default_value=data.get("index_rate", ""),
- ),
- ],
- ],
- title=i18n("General settings"),
- ),
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Sample length")),
- sg.Slider(
- range=(0.1, 3.0),
- key="block_time",
- resolution=0.1,
- orientation="h",
- default_value=data.get("block_time", ""),
- ),
- ],
- [
- sg.Text(i18n("Fade length")),
- sg.Slider(
- range=(0.01, 0.15),
- key="crossfade_length",
- resolution=0.01,
- orientation="h",
- default_value=data.get("crossfade_length", ""),
- ),
- ],
- [
- sg.Text(i18n("Extra推理时长")),
- sg.Slider(
- range=(0.05, 3.00),
- key="extra_time",
- resolution=0.01,
- orientation="h",
- default_value=data.get("extra_time", ""),
- ),
- ],
- [
- sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"),
- sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"),
- ],
- ],
- title=i18n("Performance settings"),
- ),
- ],
- [
- sg.Button(i18n("开始音频Convert"), key="start_vc"),
- sg.Button(i18n("停止音频Convert"), key="stop_vc"),
- sg.Text(i18n("Inference time (ms):")),
- sg.Text("0", key="infer_time"),
- ],
- ]
- self.window = sg.Window("RVC - GUI", layout=layout)
- self.event_handler()
-
- def event_handler(self):
- while True:
- event, values = self.window.read()
- if event == sg.WINDOW_CLOSED:
- self.flag_vc = False
- exit()
- if event == "start_vc" and self.flag_vc == False:
- if self.set_values(values) == True:
- print("using_cuda:" + str(torch.cuda.is_available()))
- self.start_vc()
- settings = {
- "pth_path": values["pth_path"],
- "index_path": values["index_path"],
- "f0_method": self.get_f0_method_from_radios(values),
- "sg_input_device": values["sg_input_device"],
- "sg_output_device": values["sg_output_device"],
- "threhold": values["threhold"],
- "pitch": values["pitch"],
- "index_rate": values["index_rate"],
- "block_time": values["block_time"],
- "crossfade_length": values["crossfade_length"],
- "extra_time": values["extra_time"],
- }
- with open("values1.json", "w") as j:
- json.dump(settings, j)
- if event == "stop_vc" and self.flag_vc == True:
- self.flag_vc = False
-
- # Function that returns the used f0 method in string format "harvest"
- def get_f0_method_from_radios(self, values):
- f0_array = [
- {"name": "harvest", "val": values["harvest"]},
- {"name": "reg-crepe", "val": values["reg-crepe"]},
- {"name": "reg-crepe-tiny", "val": values["reg-crepe-tiny"]},
- ]
- # Filter through to find a true value
- used_f0 = ""
- for f0 in f0_array:
- if f0["val"] == True:
- used_f0 = f0["name"]
- break
- if used_f0 == "":
- used_f0 = "harvest" # Default Harvest if used_f0 is empty somehow
- return used_f0
-
- def set_values(self, values):
- if len(values["pth_path"].strip()) == 0:
- sg.popup(i18n("Select the pth file"))
- return False
- if len(values["index_path"].strip()) == 0:
- sg.popup(i18n("Select the index file"))
- return False
- pattern = re.compile("[^\x00-\x7F]+")
- if pattern.findall(values["hubert_path"]):
- sg.popup(i18n("The hubert model path must not contain Chinese characters"))
- return False
- if pattern.findall(values["pth_path"]):
- sg.popup(i18n("The pth file path must not contain Chinese characters."))
- return False
- if pattern.findall(values["index_path"]):
- sg.popup(i18n("The index file path must not contain Chinese characters."))
- return False
- self.set_devices(values["sg_input_device"], values["sg_output_device"])
- self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt")
- self.config.pth_path = values["pth_path"]
- self.config.index_path = values["index_path"]
- self.config.npy_path = values["npy_path"]
- self.config.f0_method = self.get_f0_method_from_radios(values)
- self.config.threhold = values["threhold"]
- self.config.pitch = values["pitch"]
- self.config.block_time = values["block_time"]
- self.config.crossfade_time = values["crossfade_length"]
- self.config.extra_time = values["extra_time"]
- self.config.I_noise_reduce = values["I_noise_reduce"]
- self.config.O_noise_reduce = values["O_noise_reduce"]
- self.config.index_rate = values["index_rate"]
- return True
-
- def start_vc(self):
- torch.cuda.empty_cache()
- self.flag_vc = True
- self.block_frame = int(self.config.block_time * self.config.samplerate)
- self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
- self.sola_search_frame = int(0.012 * self.config.samplerate)
-        self.delay_frame = int(0.01 * self.config.samplerate)  # reserve a little look-ahead
- self.extra_frame = int(self.config.extra_time * self.config.samplerate)
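-        # The rolling input buffer below is laid out as [extra | crossfade | sola_search | block]:
-        # extra context for inference, the crossfade tail, a small SOLA search window, and the newest block.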
- self.rvc = None
- self.rvc = RVC(
- self.config.pitch,
- self.config.f0_method,
- self.config.hubert_path,
- self.config.pth_path,
- self.config.index_path,
- self.config.npy_path,
- self.config.index_rate,
- )
- self.input_wav: np.ndarray = np.zeros(
- self.extra_frame
- + self.crossfade_frame
- + self.sola_search_frame
- + self.block_frame,
- dtype="float32",
- )
- self.output_wav: torch.Tensor = torch.zeros(
- self.block_frame, device=device, dtype=torch.float32
- )
- self.sola_buffer: torch.Tensor = torch.zeros(
- self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_in_window: torch.Tensor = torch.linspace(
- 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
- self.resampler1 = tat.Resample(
- orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
- )
- self.resampler2 = tat.Resample(
- orig_freq=self.rvc.tgt_sr,
- new_freq=self.config.samplerate,
- dtype=torch.float32,
- )
- thread_vc = threading.Thread(target=self.soundinput)
- thread_vc.start()
-
- def soundinput(self):
- """
-        Accept audio input
- """
- with sd.Stream(
- channels=2,
- callback=self.audio_callback,
- blocksize=self.block_frame,
- samplerate=self.config.samplerate,
- dtype="float32",
- ):
- while self.flag_vc:
- time.sleep(self.config.block_time)
- print("Audio block passed.")
- print("ENDing VC")
-
- def audio_callback(
- self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
- ):
- """
-        Audio processing callback
- """
- start_time = time.perf_counter()
- indata = librosa.to_mono(indata.T)
- if self.config.I_noise_reduce:
- indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
-
- """noise gate"""
- frame_length = 2048
- hop_length = 1024
- rms = librosa.feature.rms(
- y=indata, frame_length=frame_length, hop_length=hop_length
- )
- db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
- # print(rms.shape,db.shape,db)
- for i in range(db_threhold.shape[0]):
- if db_threhold[i]:
- indata[i * hop_length : (i + 1) * hop_length] = 0
- self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
-
- # infer
- print("input_wav:" + str(self.input_wav.shape))
- # print('infered_wav:'+str(infer_wav.shape))
- infer_wav: torch.Tensor = self.resampler2(
- self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
- )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
- device
- )
- print("infer_wav:" + str(infer_wav.shape))
-
- # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
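-        # Find the offset where the new inferred block best lines up with the previous tail
-        # (normalized cross-correlation), then crossfade there to avoid clicks at block boundaries.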
- cor_nom = F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
- self.sola_buffer[None, None, :],
- )
- cor_den = torch.sqrt(
- F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
- ** 2,
- torch.ones(1, 1, self.crossfade_frame, device=device),
- )
- + 1e-8
- )
- sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
- print("sola offset: " + str(int(sola_offset)))
-
- # crossfade
- self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
- self.output_wav[: self.crossfade_frame] *= self.fade_in_window
- self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
- if sola_offset < self.sola_search_frame:
- self.sola_buffer[:] = (
- infer_wav[
- -self.sola_search_frame
- - self.crossfade_frame
- + sola_offset : -self.sola_search_frame
- + sola_offset
- ]
- * self.fade_out_window
- )
- else:
- self.sola_buffer[:] = (
- infer_wav[-self.crossfade_frame :] * self.fade_out_window
- )
-
- if self.config.O_noise_reduce:
- outdata[:] = np.tile(
- nr.reduce_noise(
- y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
- ),
- (2, 1),
- ).T
- else:
- outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
- total_time = time.perf_counter() - start_time
- self.window["infer_time"].update(int(total_time * 1000))
- print("infer time:" + str(total_time))
- print("f0_method: " + str(self.config.f0_method))
-
- def get_devices(self, update: bool = True):
-        """Get the list of audio devices."""
- if update:
- sd._terminate()
- sd._initialize()
- devices = sd.query_devices()
- hostapis = sd.query_hostapis()
- for hostapi in hostapis:
- for device_idx in hostapi["devices"]:
- devices[device_idx]["hostapi_name"] = hostapi["name"]
- input_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_output_channels"] > 0
- ]
- input_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_output_channels"] > 0
- ]
- return (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- )
-
- def set_devices(self, input_device, output_device):
-        """Set the input and output devices."""
- (
- input_devices,
- output_devices,
- input_device_indices,
- output_device_indices,
- ) = self.get_devices()
- sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
- sd.default.device[1] = output_device_indices[
- output_devices.index(output_device)
- ]
- print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
- print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
-
-
-gui = GUI()
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/infer/train-index.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/infer/train-index.py
deleted file mode 100644
index 44b447ef32148c181eb4bcd9013a22a82371b82c..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/infer/train-index.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""
-Format: the cid is stored directly as the index position; the aid does not fit, so it is looked up via a dictionary (only ~50k entries anyway)
-"""
-import os
-import logging
-
-logger = logging.getLogger(__name__)
-
-import faiss
-import numpy as np
-
-# ########### if starting from raw features, save them first
-inp_root = r"E:\codes\py39\dataset\mi\2-co256"
-npys = []
-for name in sorted(list(os.listdir(inp_root))):
- phone = np.load("%s/%s" % (inp_root, name))
- npys.append(phone)
-big_npy = np.concatenate(npys, 0)
-logger.debug(big_npy.shape) # (6196072, 192)#fp32#4.43G
-np.save("infer/big_src_feature_mi.npy", big_npy)
-
-##################train+add
-# big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy")
-logger.debug(big_npy.shape)
-index = faiss.index_factory(256, "IVF512,Flat") # mi
-logger.info("Training...")
-index_ivf = faiss.extract_index_ivf(index) #
-index_ivf.nprobe = 9
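-# "IVF512,Flat" is an inverted-file index with 512 coarse clusters over raw (flat) vectors;
-# nprobe is the number of clusters scanned per query (higher = better recall, slower search).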
-index.train(big_npy)
-faiss.write_index(index, "infer/trained_IVF512_Flat_mi_baseline_src_feat.index")
-logger.info("Adding...")
-index.add(big_npy)
-faiss.write_index(index, "infer/added_IVF512_Flat_mi_baseline_src_feat.index")
-"""
-大小(都是FP32)
-big_src_feature 2.95G
- (3098036, 256)
-big_emb 4.43G
- (6196072, 192)
-big_emb双倍是因为求特征要repeat后再加pitch
-
-"""
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/transactions.py b/spaces/Lianjd/stock_dashboard/backtrader/analyzers/transactions.py
deleted file mode 100644
index 7bdf91a28def6e9f51f3ae43d854e44ded542f71..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/transactions.py
+++ /dev/null
@@ -1,103 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-
-import collections
-
-import backtrader as bt
-from backtrader import Order, Position
-
-
-class Transactions(bt.Analyzer):
- '''This analyzer reports the transactions occurred with each an every data in
- the system
-
- It looks at the order execution bits to create a ``Position`` starting from
- 0 during each ``next`` cycle.
-
- The result is used during next to record the transactions
-
- Params:
-
- - headers (default: ``True``)
-
- Add an initial key to the dictionary holding the results with the names
- of the datas
-
- This analyzer was modeled to facilitate the integration with
- ``pyfolio`` and the header names are taken from the samples used for
- it::
-
- 'date', 'amount', 'price', 'sid', 'symbol', 'value'
-
- Methods:
-
- - get_analysis
-
- Returns a dictionary with returns as values and the datetime points for
- each return as keys
- '''
- params = (
- ('headers', False),
- ('_pfheaders', ('date', 'amount', 'price', 'sid', 'symbol', 'value')),
- )
-
- def start(self):
- super(Transactions, self).start()
- if self.p.headers:
- self.rets[self.p._pfheaders[0]] = [list(self.p._pfheaders[1:])]
-
- self._positions = collections.defaultdict(Position)
- self._idnames = list(enumerate(self.strategy.getdatanames()))
-
- def notify_order(self, order):
- # An order could have several partial executions per cycle (unlikely
- # but possible) and therefore: collect each new execution notification
- # and let the work for next
-
- # We use a fresh Position object for each round to get summary of what
- # the execution bits have done in that round
- if order.status not in [Order.Partial, Order.Completed]:
- return # It's not an execution
-
- pos = self._positions[order.data._name]
- for exbit in order.executed.iterpending():
- if exbit is None:
- break # end of pending reached
-
- pos.update(exbit.size, exbit.price)
-
- def next(self):
- # super(Transactions, self).next() # let dtkey update
- entries = []
- for i, dname in self._idnames:
- pos = self._positions.get(dname, None)
- if pos is not None:
- size, price = pos.size, pos.price
- if size:
- entries.append([size, price, i, dname, -size * price])
-
- if entries:
- self.rets[self.strategy.datetime.datetime()] = entries
-
- self._positions.clear()
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/observers/trades.py b/spaces/Lianjd/stock_dashboard/backtrader/observers/trades.py
deleted file mode 100644
index 085b3bd4e1c38c509b43d8c5e5219373764ffbe3..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/observers/trades.py
+++ /dev/null
@@ -1,162 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-import uuid
-
-from .. import Observer
-from ..utils.py3 import with_metaclass
-
-from ..trade import Trade
-
-
-class Trades(Observer):
-    '''This observer keeps track of full trades and plots the PnL level achieved
- when a trade is closed.
-
-    A trade is open when a position goes from 0 (or crosses over 0) to X and
- is then closed when it goes back to 0 (or crosses over 0 in the opposite
- direction)
-
- Params:
- - ``pnlcomm`` (def: ``True``)
-
-        Show net profit and loss, i.e.: after commission. If set to ``False``
-        it will show the result of trades before commission
- '''
- _stclock = True
-
- lines = ('pnlplus', 'pnlminus')
-
- params = dict(pnlcomm=True)
-
- plotinfo = dict(plot=True, subplot=True,
- plotname='Trades - Net Profit/Loss',
- plotymargin=0.10,
- plothlines=[0.0])
-
- plotlines = dict(
- pnlplus=dict(_name='Positive',
- ls='', marker='o', color='blue',
- markersize=8.0, fillstyle='full'),
- pnlminus=dict(_name='Negative',
- ls='', marker='o', color='red',
- markersize=8.0, fillstyle='full')
- )
-
- def __init__(self):
-
- self.trades = 0
-
- self.trades_long = 0
- self.trades_short = 0
-
- self.trades_plus = 0
- self.trades_minus = 0
-
- self.trades_plus_gross = 0
- self.trades_minus_gross = 0
-
- self.trades_win = 0
- self.trades_win_max = 0
- self.trades_win_min = 0
-
- self.trades_loss = 0
- self.trades_loss_max = 0
- self.trades_loss_min = 0
-
- self.trades_length = 0
- self.trades_length_max = 0
- self.trades_length_min = 0
-
- def next(self):
- for trade in self._owner._tradespending:
- if trade.data not in self.ddatas:
- continue
-
- if not trade.isclosed:
- continue
-
- pnl = trade.pnlcomm if self.p.pnlcomm else trade.pnl
-
- if pnl >= 0.0:
- self.lines.pnlplus[0] = pnl
- else:
- self.lines.pnlminus[0] = pnl
-
-
-class MetaDataTrades(Observer.__class__):
- def donew(cls, *args, **kwargs):
- _obj, args, kwargs = super(MetaDataTrades, cls).donew(*args, **kwargs)
-
- # Recreate the lines dynamically
- if _obj.params.usenames:
- lnames = tuple(x._name for x in _obj.datas)
- else:
- lnames = tuple('data{}'.format(x) for x in range(len(_obj.datas)))
-
- # Generate a new lines class
- linescls = cls.lines._derive(uuid.uuid4().hex, lnames, 0, ())
-
- # Instantiate lines
- _obj.lines = linescls()
-
- # Generate plotlines info
- markers = ['o', 'v', '^', '<', '>', '1', '2', '3', '4', '8', 's', 'p',
- '*', 'h', 'H', '+', 'x', 'D', 'd']
-
- colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'b', 'g', 'r', 'c', 'm',
- 'y', 'k', 'b', 'g', 'r', 'c', 'm']
-
- basedict = dict(ls='', markersize=8.0, fillstyle='full')
-
- plines = dict()
- for lname, marker, color in zip(lnames, markers, colors):
- plines[lname] = d = basedict.copy()
- d.update(marker=marker, color=color)
-
- plotlines = cls.plotlines._derive(
- uuid.uuid4().hex, plines, [], recurse=True)
- _obj.plotlines = plotlines()
-
- return _obj, args, kwargs # return the instantiated object and args
-
-
-class DataTrades(with_metaclass(MetaDataTrades, Observer)):
- _stclock = True
-
- params = (('usenames', True),)
-
- plotinfo = dict(plot=True, subplot=True, plothlines=[0.0],
- plotymargin=0.10)
-
- plotlines = dict()
-
- def next(self):
- for trade in self._owner._tradespending:
- if trade.data not in self.ddatas:
- continue
-
- if not trade.isclosed:
- continue
-
- self.lines[trade.data._id - 1][0] = trade.pnl
diff --git a/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_net.py b/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_net.py
deleted file mode 100644
index ab6aa82d3e9055a838f1f9076b12f05fdfc154d0..0000000000000000000000000000000000000000
--- a/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_net.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def conv_bn(inp, oup, stride=1, leaky=0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True))
-
-
-def conv_bn_no_relu(inp, oup, stride):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
- nn.BatchNorm2d(oup),
- )
-
-
-def conv_bn1X1(inp, oup, stride, leaky=0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True))
-
-
-def conv_dw(inp, oup, stride, leaky=0.1):
- return nn.Sequential(
- nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
- nn.BatchNorm2d(inp),
- nn.LeakyReLU(negative_slope=leaky, inplace=True),
- nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True),
- )
-
-
-class SSH(nn.Module):
-
- def __init__(self, in_channel, out_channel):
- super(SSH, self).__init__()
- assert out_channel % 4 == 0
- leaky = 0
- if (out_channel <= 64):
- leaky = 0.1
- self.conv3X3 = conv_bn_no_relu(in_channel, out_channel // 2, stride=1)
-
- self.conv5X5_1 = conv_bn(in_channel, out_channel // 4, stride=1, leaky=leaky)
- self.conv5X5_2 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1)
-
- self.conv7X7_2 = conv_bn(out_channel // 4, out_channel // 4, stride=1, leaky=leaky)
- self.conv7x7_3 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1)
-
- def forward(self, input):
- conv3X3 = self.conv3X3(input)
-
- conv5X5_1 = self.conv5X5_1(input)
- conv5X5 = self.conv5X5_2(conv5X5_1)
-
- conv7X7_2 = self.conv7X7_2(conv5X5_1)
- conv7X7 = self.conv7x7_3(conv7X7_2)
-
- out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1)
- out = F.relu(out)
- return out
-
-
-class FPN(nn.Module):
-
- def __init__(self, in_channels_list, out_channels):
- super(FPN, self).__init__()
- leaky = 0
- if (out_channels <= 64):
- leaky = 0.1
- self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride=1, leaky=leaky)
- self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride=1, leaky=leaky)
- self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride=1, leaky=leaky)
-
- self.merge1 = conv_bn(out_channels, out_channels, leaky=leaky)
- self.merge2 = conv_bn(out_channels, out_channels, leaky=leaky)
-
- def forward(self, input):
- # names = list(input.keys())
- # input = list(input.values())
-
- output1 = self.output1(input[0])
- output2 = self.output2(input[1])
- output3 = self.output3(input[2])
-
- up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode='nearest')
- output2 = output2 + up3
- output2 = self.merge2(output2)
-
- up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode='nearest')
- output1 = output1 + up2
- output1 = self.merge1(output1)
-
- out = [output1, output2, output3]
- return out
-
-
-class MobileNetV1(nn.Module):
-
- def __init__(self):
- super(MobileNetV1, self).__init__()
- self.stage1 = nn.Sequential(
- conv_bn(3, 8, 2, leaky=0.1), # 3
- conv_dw(8, 16, 1), # 7
- conv_dw(16, 32, 2), # 11
- conv_dw(32, 32, 1), # 19
- conv_dw(32, 64, 2), # 27
- conv_dw(64, 64, 1), # 43
- )
- self.stage2 = nn.Sequential(
- conv_dw(64, 128, 2), # 43 + 16 = 59
- conv_dw(128, 128, 1), # 59 + 32 = 91
- conv_dw(128, 128, 1), # 91 + 32 = 123
- conv_dw(128, 128, 1), # 123 + 32 = 155
- conv_dw(128, 128, 1), # 155 + 32 = 187
- conv_dw(128, 128, 1), # 187 + 32 = 219
- )
- self.stage3 = nn.Sequential(
-            conv_dw(128, 256, 2), # 219 + 32 = 251
-            conv_dw(256, 256, 1), # 251 + 64 = 315
- )
- self.avg = nn.AdaptiveAvgPool2d((1, 1))
- self.fc = nn.Linear(256, 1000)
-
- def forward(self, x):
- x = self.stage1(x)
- x = self.stage2(x)
- x = self.stage3(x)
- x = self.avg(x)
- # x = self.model(x)
- x = x.view(-1, 256)
- x = self.fc(x)
- return x
-
-
-class ClassHead(nn.Module):
-
- def __init__(self, inchannels=512, num_anchors=3):
- super(ClassHead, self).__init__()
- self.num_anchors = num_anchors
- self.conv1x1 = nn.Conv2d(inchannels, self.num_anchors * 2, kernel_size=(1, 1), stride=1, padding=0)
-
- def forward(self, x):
- out = self.conv1x1(x)
- out = out.permute(0, 2, 3, 1).contiguous()
-
- return out.view(out.shape[0], -1, 2)
-
-
-class BboxHead(nn.Module):
-
- def __init__(self, inchannels=512, num_anchors=3):
- super(BboxHead, self).__init__()
- self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 4, kernel_size=(1, 1), stride=1, padding=0)
-
- def forward(self, x):
- out = self.conv1x1(x)
- out = out.permute(0, 2, 3, 1).contiguous()
-
- return out.view(out.shape[0], -1, 4)
-
-
-class LandmarkHead(nn.Module):
-
- def __init__(self, inchannels=512, num_anchors=3):
- super(LandmarkHead, self).__init__()
- self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 10, kernel_size=(1, 1), stride=1, padding=0)
-
- def forward(self, x):
- out = self.conv1x1(x)
- out = out.permute(0, 2, 3, 1).contiguous()
-
- return out.view(out.shape[0], -1, 10)
-
-
-def make_class_head(fpn_num=3, inchannels=64, anchor_num=2):
- classhead = nn.ModuleList()
- for i in range(fpn_num):
- classhead.append(ClassHead(inchannels, anchor_num))
- return classhead
-
-
-def make_bbox_head(fpn_num=3, inchannels=64, anchor_num=2):
- bboxhead = nn.ModuleList()
- for i in range(fpn_num):
- bboxhead.append(BboxHead(inchannels, anchor_num))
- return bboxhead
-
-
-def make_landmark_head(fpn_num=3, inchannels=64, anchor_num=2):
- landmarkhead = nn.ModuleList()
- for i in range(fpn_num):
- landmarkhead.append(LandmarkHead(inchannels, anchor_num))
- return landmarkhead
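A short sketch of how the blocks deleted above are typically wired together; the channel counts, feature-map sizes, and the single shared SSH module are illustrative assumptions, not the original RetinaFace configuration:

import torch

in_channels = [64, 128, 256]      # assumed channels of three backbone stages
out_channels = 64

fpn = FPN(in_channels, out_channels)
ssh = SSH(out_channels, out_channels)
class_heads = make_class_head(fpn_num=3, inchannels=out_channels, anchor_num=2)
bbox_heads = make_bbox_head(fpn_num=3, inchannels=out_channels, anchor_num=2)

# Dummy pyramid features at strides 8/16/32 for a 640x640 input.
feats = [torch.randn(1, c, s, s) for c, s in zip(in_channels, (80, 40, 20))]
fpn_outs = fpn(feats)                       # three merged pyramid levels, 64 channels each
ssh_outs = [ssh(f) for f in fpn_outs]       # context enhancement per level
cls = torch.cat([h(f) for h, f in zip(class_heads, ssh_outs)], dim=1)   # [1, N, 2]
box = torch.cat([h(f) for h, f in zip(bbox_heads, ssh_outs)], dim=1)    # [1, N, 4]
print(cls.shape, box.shape)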
diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/live_audio/audio_io.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/live_audio/audio_io.py
deleted file mode 100644
index 3ff83a66e8d9f0bb15250f1c3c2b5ea36745ff55..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/crazy_functions/live_audio/audio_io.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import numpy as np
-from scipy import interpolate
-
-def Singleton(cls):
- _instance = {}
-
- def _singleton(*args, **kargs):
- if cls not in _instance:
- _instance[cls] = cls(*args, **kargs)
- return _instance[cls]
-
- return _singleton
-
-
-@Singleton
-class RealtimeAudioDistribution():
- def __init__(self) -> None:
- self.data = {}
- self.max_len = 1024*1024
-        self.rate = 48000 # read-only: number of samples per second
-
- def clean_up(self):
- self.data = {}
-
- def feed(self, uuid, audio):
- self.rate, audio_ = audio
- # print('feed', len(audio_), audio_[-25:])
- if uuid not in self.data:
- self.data[uuid] = audio_
- else:
- new_arr = np.concatenate((self.data[uuid], audio_))
- if len(new_arr) > self.max_len: new_arr = new_arr[-self.max_len:]
- self.data[uuid] = new_arr
-
- def read(self, uuid):
- if uuid in self.data:
- res = self.data.pop(uuid)
- print('\r read-', len(res), '-', max(res), end='', flush=True)
- else:
- res = None
- return res
-
-def change_sample_rate(audio, old_sr, new_sr):
- duration = audio.shape[0] / old_sr
-
- time_old = np.linspace(0, duration, audio.shape[0])
- time_new = np.linspace(0, duration, int(audio.shape[0] * new_sr / old_sr))
-
- interpolator = interpolate.interp1d(time_old, audio.T)
- new_audio = interpolator(time_new).T
- return new_audio.astype(np.int16)
\ No newline at end of file
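A self-contained sketch exercising the helpers deleted above on a synthetic tone; the session id and target sample rate are made up:

import numpy as np

rad = RealtimeAudioDistribution()            # singleton instance
tone = (np.sin(2 * np.pi * 440 * np.arange(48000) / 48000) * 32767).astype(np.int16)
rad.feed('session-1', (48000, tone))         # (sample_rate, samples) tuple, as the caller provides
chunk = rad.read('session-1')                # pops the buffered audio for this session

resampled = change_sample_rate(chunk, old_sr=48000, new_sr=16000)
print(len(chunk), len(resampled))            # 48000 vs 16000 samples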
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_1200e.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_1200e.py
deleted file mode 100644
index bc7fbf69b42b11ea9b8ae4d14216d2fcf20e717c..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_1200e.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.007, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(policy='poly', power=0.9, min_lr=1e-7, by_epoch=True)
-# running settings
-runner = dict(type='EpochBasedRunner', max_epochs=1200)
-checkpoint_config = dict(interval=100)
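For reference, the 'poly' policy in the config above decays the learning rate from lr toward min_lr as training progresses; a small standalone sketch of the commonly used formula (an approximation for illustration, not mmcv's actual implementation):

def poly_lr(epoch, base_lr=0.007, max_epochs=1200, power=0.9, min_lr=1e-7):
    # learning rate falls following (1 - progress) ** power, bottoming out at min_lr
    coeff = (1.0 - epoch / float(max_epochs)) ** power
    return (base_lr - min_lr) * coeff + min_lr

for e in (0, 300, 600, 900, 1199):
    print(e, round(poly_lr(e), 6))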
diff --git a/spaces/MAPS-research/GEMRec-Gallery/Archive/Gallery_beta0913.py b/spaces/MAPS-research/GEMRec-Gallery/Archive/Gallery_beta0913.py
deleted file mode 100644
index a482a1bfc7d85e31589ac76e420fe3bd9c3f8268..0000000000000000000000000000000000000000
--- a/spaces/MAPS-research/GEMRec-Gallery/Archive/Gallery_beta0913.py
+++ /dev/null
@@ -1,643 +0,0 @@
-import os
-import requests
-
-import altair as alt
-import numpy as np
-import pandas as pd
-import streamlit as st
-import streamlit.components.v1 as components
-
-from bs4 import BeautifulSoup
-from datasets import load_dataset, Dataset, load_from_disk
-from huggingface_hub import login
-from streamlit_agraph import agraph, Node, Edge, Config
-from streamlit_extras.switch_page_button import switch_page
-from sklearn.svm import LinearSVC
-
-SCORE_NAME_MAPPING = {'clip': 'clip_score', 'rank': 'msq_score', 'pop': 'model_download_count'}
-
-
-class GalleryApp:
- def __init__(self, promptBook, images_ds):
- self.promptBook = promptBook
- self.images_ds = images_ds
-
- def gallery_standard(self, items, col_num, info):
- rows = len(items) // col_num + 1
- containers = [st.container() for _ in range(rows)]
- for idx in range(0, len(items), col_num):
- row_idx = idx // col_num
- with containers[row_idx]:
- cols = st.columns(col_num)
- for j in range(col_num):
- if idx + j < len(items):
- with cols[j]:
- # show image
- # image = self.images_ds[items.iloc[idx + j]['row_idx'].item()]['image']
- image = f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{items.iloc[idx + j]['image_id']}.png"
- st.image(image, use_column_width=True)
-
-                            # handle checkbox information
- prompt_id = items.iloc[idx + j]['prompt_id']
- modelVersion_id = items.iloc[idx + j]['modelVersion_id']
-
- check_init = True if modelVersion_id in st.session_state.selected_dict.get(prompt_id, []) else False
-
- # st.write("Position: ", idx + j)
-
- # show checkbox
- st.checkbox('Select', key=f'select_{prompt_id}_{modelVersion_id}', value=check_init)
-
- # show selected info
- for key in info:
- st.write(f"**{key}**: {items.iloc[idx + j][key]}")
-
- def gallery_graph(self, items):
- items = load_tsne_coordinates(items)
-
- # sort items to be popularity from low to high, so that most popular ones will be on the top
- items = items.sort_values(by=['model_download_count'], ascending=True).reset_index(drop=True)
-
- scale = 50
- items.loc[:, 'x'] = items['x'] * scale
- items.loc[:, 'y'] = items['y'] * scale
-
- nodes = []
- edges = []
-
- for idx in items.index:
- # if items.loc[idx, 'modelVersion_id'] in st.session_state.selected_dict.get(items.loc[idx, 'prompt_id'], 0):
- # opacity = 0.2
- # else:
- # opacity = 1.0
-
- nodes.append(Node(id=items.loc[idx, 'image_id'],
- # label=str(items.loc[idx, 'model_name']),
- title=f"model name: {items.loc[idx, 'model_name']}\nmodelVersion name: {items.loc[idx, 'modelVersion_name']}\nclip score: {items.loc[idx, 'clip_score']}\nmcos score: {items.loc[idx, 'mcos_score']}\npopularity: {items.loc[idx, 'model_download_count']}",
- size=20,
- shape='image',
- image=f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{items.loc[idx, 'image_id']}.png",
- x=items.loc[idx, 'x'].item(),
- y=items.loc[idx, 'y'].item(),
- # fixed=True,
- color={'background': '#E0E0E1', 'border': '#ffffff', 'highlight': {'border': '#F04542'}},
- # opacity=opacity,
- shadow={'enabled': True, 'color': 'rgba(0,0,0,0.4)', 'size': 10, 'x': 1, 'y': 1},
- borderWidth=2,
- shapeProperties={'useBorderWithImage': True},
- )
- )
-
- config = Config(width='100%',
- height='600',
- directed=True,
- physics=False,
- hierarchical=False,
- interaction={'navigationButtons': True, 'dragNodes': False, 'multiselect': False},
- # **kwargs
- )
-
- return agraph(nodes=nodes,
- edges=edges,
- config=config,
- )
-
- def selection_panel(self, items):
-        # temporary function
-
- selecters = st.columns([1, 4])
-
- if 'score_weights' not in st.session_state:
- st.session_state.score_weights = [1.0, 0.8, 0.2, 0.8]
-
- # select sort type
- with selecters[0]:
- sort_type = st.selectbox('Sort by', ['Scores', 'IDs and Names'])
- if sort_type == 'Scores':
- sort_by = 'weighted_score_sum'
-
- # select other options
- with selecters[1]:
- if sort_type == 'IDs and Names':
- sub_selecters = st.columns([3, 1])
- # select sort by
- with sub_selecters[0]:
- sort_by = st.selectbox('Sort by',
- ['model_name', 'model_id', 'modelVersion_name', 'modelVersion_id', 'norm_nsfw'],
- label_visibility='hidden')
-
- continue_idx = 1
-
- else:
- # add custom weights
- sub_selecters = st.columns([1, 1, 1, 1])
-
- with sub_selecters[0]:
- clip_weight = st.number_input('Clip Score Weight', min_value=-100.0, max_value=100.0, value=1.0, step=0.1, help='the weight for normalized clip score')
- with sub_selecters[1]:
-                    mcos_weight = st.number_input('Dissimilarity Weight', min_value=-100.0, max_value=100.0, value=0.8, step=0.1, help='the weight for the m(ean) s(imilarity) q(uantile) score measuring distinctiveness')
- with sub_selecters[2]:
- pop_weight = st.number_input('Popularity Weight', min_value=-100.0, max_value=100.0, value=0.2, step=0.1, help='the weight for normalized popularity score')
-
- items.loc[:, 'weighted_score_sum'] = round(items[f'norm_clip'] * clip_weight + items[f'norm_mcos'] * mcos_weight + items[
- 'norm_pop'] * pop_weight, 4)
-
- continue_idx = 3
-
- # save latest weights
- st.session_state.score_weights[0] = round(clip_weight, 2)
- st.session_state.score_weights[1] = round(mcos_weight, 2)
- st.session_state.score_weights[2] = round(pop_weight, 2)
-
- # select threshold
- with sub_selecters[continue_idx]:
- nsfw_threshold = st.number_input('NSFW Score Threshold', min_value=0.0, max_value=1.0, value=0.8, step=0.01, help='Only show models with nsfw score lower than this threshold, set 1.0 to show all images')
- items = items[items['norm_nsfw'] <= nsfw_threshold].reset_index(drop=True)
-
- # save latest threshold
- st.session_state.score_weights[3] = nsfw_threshold
-
- # draw a distribution histogram
- if sort_type == 'Scores':
- try:
- with st.expander('Show score distribution histogram and select score range'):
- st.write('**Score distribution histogram**')
- chart_space = st.container()
- # st.write('Select the range of scores to show')
- hist_data = pd.DataFrame(items[sort_by])
- mini = hist_data[sort_by].min().item()
- mini = mini//0.1 * 0.1
- maxi = hist_data[sort_by].max().item()
- maxi = maxi//0.1 * 0.1 + 0.1
- st.write('**Select the range of scores to show**')
- r = st.slider('Select the range of scores to show', min_value=mini, max_value=maxi, value=(mini, maxi), step=0.05, label_visibility='collapsed')
- with chart_space:
- st.altair_chart(altair_histogram(hist_data, sort_by, r[0], r[1]), use_container_width=True)
- # event_dict = altair_component(altair_chart=altair_histogram(hist_data, sort_by))
- # r = event_dict.get(sort_by)
- if r:
- items = items[(items[sort_by] >= r[0]) & (items[sort_by] <= r[1])].reset_index(drop=True)
- # st.write(r)
- except:
- pass
-
- display_options = st.columns([1, 4])
-
- with display_options[0]:
- # select order
- order = st.selectbox('Order', ['Ascending', 'Descending'], index=1 if sort_type == 'Scores' else 0)
- if order == 'Ascending':
- order = True
- else:
- order = False
-
- with display_options[1]:
-
- # select info to show
- info = st.multiselect('Show Info',
- ['model_name', 'model_id', 'modelVersion_name', 'modelVersion_id',
- 'weighted_score_sum', 'model_download_count', 'clip_score', 'mcos_score',
- 'nsfw_score', 'norm_nsfw'],
- default=sort_by)
-
- # apply sorting to dataframe
- items = items.sort_values(by=[sort_by], ascending=order).reset_index(drop=True)
-
- # select number of columns
- col_num = st.slider('Number of columns', min_value=1, max_value=9, value=4, step=1, key='col_num')
-
- return items, info, col_num
-
- def sidebar(self):
- with st.sidebar:
- prompt_tags = self.promptBook['tag'].unique()
- # sort tags by alphabetical order
- prompt_tags = np.sort(prompt_tags)[::1]
-
- tag = st.selectbox('Select a tag', prompt_tags, index=5)
-
- items = self.promptBook[self.promptBook['tag'] == tag].reset_index(drop=True)
-
- prompts = np.sort(items['prompt'].unique())[::1]
-
- selected_prompt = st.selectbox('Select prompt', prompts, index=3)
-
- mode = st.radio('Select a mode', ['Gallery', 'Graph'], horizontal=True, index=1)
-
- items = items[items['prompt'] == selected_prompt].reset_index(drop=True)
- prompt_id = items['prompt_id'].unique()[0]
- note = items['note'].unique()[0]
-
- # show source
- if isinstance(note, str):
- if note.isdigit():
- st.caption(f"`Source: civitai`")
- else:
- st.caption(f"`Source: {note}`")
- else:
- st.caption("`Source: Parti-prompts`")
-
- # show image metadata
- image_metadatas = ['prompt', 'negativePrompt', 'sampler', 'cfgScale', 'size', 'seed']
- for key in image_metadatas:
- label = ' '.join(key.split('_')).capitalize()
- st.write(f"**{label}**")
- if items[key][0] == ' ':
- st.write('`None`')
- else:
- st.caption(f"{items[key][0]}")
-
- # for note as civitai image id, add civitai reference
- if isinstance(note, str) and note.isdigit():
- try:
- st.write(f'**[Civitai Reference](https://civitai.com/images/{note})**')
- res = requests.get(f'https://civitai.com/images/{note}')
- # st.write(res.text)
- soup = BeautifulSoup(res.text, 'html.parser')
- image_section = soup.find('div', {'class': 'mantine-12rlksp'})
- image_url = image_section.find('img')['src']
- st.image(image_url, use_column_width=True)
- except:
- pass
-
- return prompt_tags, tag, prompt_id, items, mode
-
- def app(self):
- st.title('Model Visualization and Retrieval')
- st.write('This is a gallery of images generated by the models')
-
- prompt_tags, tag, prompt_id, items, mode = self.sidebar()
- # items, info, col_num = self.selection_panel(items)
-
- # subset = st.radio('Select a subset', ['All', 'Selected Only'], index=0, horizontal=True)
- # try:
- # if subset == 'Selected Only':
- # items = items[items['modelVersion_id'].isin(st.session_state.selected_dict[prompt_id])].reset_index(drop=True)
- # except:
- # pass
-
- # add safety check for some prompts
- safety_check = True
- unsafe_prompts = {}
- # initialize unsafe prompts
- for prompt_tag in prompt_tags:
- unsafe_prompts[prompt_tag] = []
- # manually add unsafe prompts
- unsafe_prompts['world knowledge'] = [83]
- unsafe_prompts['abstract'] = [1, 3]
-
- if int(prompt_id.item()) in unsafe_prompts[tag]:
- st.warning('This prompt may contain unsafe content. They might be offensive, depressing, or sexual.')
- safety_check = st.checkbox('I understand that this prompt may contain unsafe content. Show these images anyway.', key=f'safety_{prompt_id}')
-
- if safety_check:
- if mode == 'Gallery':
- self.gallery_mode(prompt_id, items)
- elif mode == 'Graph':
- self.graph_mode(prompt_id, items)
-
-
- def graph_mode(self, prompt_id, items):
- graph_cols = st.columns([3, 1])
- prompt = st.chat_input(f"Selected model version ids: {str(st.session_state.selected_dict.get(prompt_id, []))}",
- disabled=False, key=f'{prompt_id}')
- if prompt:
- switch_page("ranking")
-
- with graph_cols[0]:
- graph_space = st.empty()
-
- with graph_space.container():
- return_value = self.gallery_graph(items)
-
- with graph_cols[1]:
- if return_value:
- with st.form(key=f'{prompt_id}'):
- image_url = f"https://modelcofferbucket.s3-accelerate.amazonaws.com/{return_value}.png"
-
- st.image(image_url)
-
- item = items[items['image_id'] == return_value].reset_index(drop=True).iloc[0]
- modelVersion_id = item['modelVersion_id']
-
- # handle selection
- if 'selected_dict' in st.session_state:
- if item['prompt_id'] not in st.session_state.selected_dict:
- st.session_state.selected_dict[item['prompt_id']] = []
-
- if modelVersion_id in st.session_state.selected_dict[item['prompt_id']]:
- checked = True
- else:
- checked = False
-
- if checked:
- # deselect = st.button('Deselect', key=f'select_{item["prompt_id"]}_{item["modelVersion_id"]}', use_container_width=True)
- deselect = st.form_submit_button('Deselect', use_container_width=True)
- if deselect:
- st.session_state.selected_dict[item['prompt_id']].remove(item['modelVersion_id'])
- self.remove_ranking_states(item['prompt_id'])
- st.experimental_rerun()
-
- else:
- # select = st.button('Select', key=f'select_{item["prompt_id"]}_{item["modelVersion_id"]}', use_container_width=True, type='primary')
- select = st.form_submit_button('Select', use_container_width=True, type='primary')
- if select:
- st.session_state.selected_dict[item['prompt_id']].append(item['modelVersion_id'])
- self.remove_ranking_states(item['prompt_id'])
- st.experimental_rerun()
-
- # st.write(item)
- infos = ['model_name', 'modelVersion_name', 'model_download_count', 'clip_score', 'mcos_score',
- 'nsfw_score']
-
- infos_df = item[infos]
- # rename columns
- infos_df = infos_df.rename(index={'model_name': 'Model', 'modelVersion_name': 'Version', 'model_download_count': 'Downloads', 'clip_score': 'Clip Score', 'mcos_score': 'mcos Score', 'nsfw_score': 'NSFW Score'})
- st.table(infos_df)
-
- # for info in infos:
- # st.write(f"**{info}**:")
- # st.write(item[info])
-
- else:
- st.info('Please click on an image to show')
-
-
- def gallery_mode(self, prompt_id, items):
- items, info, col_num = self.selection_panel(items)
-
- if 'selected_dict' in st.session_state:
- # st.write('checked: ', str(st.session_state.selected_dict.get(prompt_id, [])))
- dynamic_weight_options = ['Grid Search', 'SVM', 'Greedy']
- dynamic_weight_panel = st.columns(len(dynamic_weight_options))
-
- if len(st.session_state.selected_dict.get(prompt_id, [])) > 0:
- btn_disable = False
- else:
- btn_disable = True
-
- for i in range(len(dynamic_weight_options)):
- method = dynamic_weight_options[i]
- with dynamic_weight_panel[i]:
- btn = st.button(method, use_container_width=True, disabled=btn_disable, on_click=self.dynamic_weight, args=(prompt_id, items, method))
-
- prompt = st.chat_input(f"Selected model version ids: {str(st.session_state.selected_dict.get(prompt_id, []))}", disabled=False, key=f'{prompt_id}')
- if prompt:
- switch_page("ranking")
-
- with st.form(key=f'{prompt_id}'):
- # buttons = st.columns([1, 1, 1])
- buttons_space = st.columns([1, 1, 1, 1])
- gallery_space = st.empty()
-
- with buttons_space[0]:
- continue_btn = st.form_submit_button('Confirm Selection', use_container_width=True, type='primary')
- if continue_btn:
- self.submit_actions('Continue', prompt_id)
-
- with buttons_space[1]:
- select_btn = st.form_submit_button('Select All', use_container_width=True)
- if select_btn:
- self.submit_actions('Select', prompt_id)
-
- with buttons_space[2]:
- deselect_btn = st.form_submit_button('Deselect All', use_container_width=True)
- if deselect_btn:
- self.submit_actions('Deselect', prompt_id)
-
- with buttons_space[3]:
- refresh_btn = st.form_submit_button('Refresh', on_click=gallery_space.empty, use_container_width=True)
-
- with gallery_space.container():
- with st.spinner('Loading images...'):
- self.gallery_standard(items, col_num, info)
-
- st.info("Don't forget to scroll back to top and click the 'Confirm Selection' button to save your selection!!!")
-
-
-
- def submit_actions(self, status, prompt_id):
- # remove counter from session state
- # st.session_state.pop('counter', None)
-        self.remove_ranking_states(prompt_id)
- if status == 'Select':
- modelVersions = self.promptBook[self.promptBook['prompt_id'] == prompt_id]['modelVersion_id'].unique()
- st.session_state.selected_dict[prompt_id] = modelVersions.tolist()
- print(st.session_state.selected_dict, 'select')
- st.experimental_rerun()
- elif status == 'Deselect':
- st.session_state.selected_dict[prompt_id] = []
- print(st.session_state.selected_dict, 'deselect')
- st.experimental_rerun()
- # self.promptBook.loc[self.promptBook['prompt_id'] == prompt_id, 'checked'] = False
- elif status == 'Continue':
- st.session_state.selected_dict[prompt_id] = []
- for key in st.session_state:
- keys = key.split('_')
- if keys[0] == 'select' and keys[1] == str(prompt_id):
- if st.session_state[key]:
- st.session_state.selected_dict[prompt_id].append(int(keys[2]))
- # switch_page("ranking")
- print(st.session_state.selected_dict, 'continue')
- st.experimental_rerun()
-
- def dynamic_weight(self, prompt_id, items, method='Grid Search'):
- selected = items[
- items['modelVersion_id'].isin(st.session_state.selected_dict[prompt_id])].reset_index(drop=True)
- optimal_weight = [0, 0, 0]
-
- if method == 'Grid Search':
- # grid search method
- top_ranking = len(items) * len(selected)
-
- for clip_weight in np.arange(-1, 1, 0.1):
- for mcos_weight in np.arange(-1, 1, 0.1):
- for pop_weight in np.arange(-1, 1, 0.1):
-
- weight_all = clip_weight*items[f'norm_clip'] + mcos_weight*items[f'norm_mcos'] + pop_weight*items['norm_pop']
- weight_all_sorted = weight_all.sort_values(ascending=False).reset_index(drop=True)
- # print('weight_all_sorted:', weight_all_sorted)
- weight_selected = clip_weight*selected[f'norm_clip'] + mcos_weight*selected[f'norm_mcos'] + pop_weight*selected['norm_pop']
-
- # get the index of values of weight_selected in weight_all_sorted
- rankings = []
- for weight in weight_selected:
- rankings.append(weight_all_sorted.index[weight_all_sorted == weight].tolist()[0])
- if sum(rankings) <= top_ranking:
- top_ranking = sum(rankings)
- print('current top ranking:', top_ranking, rankings)
- optimal_weight = [clip_weight, mcos_weight, pop_weight]
- print('optimal weight:', optimal_weight)
-
- elif method == 'SVM':
- # svm method
- print('start svm method')
- # get residual dataframe that contains models not selected
- residual = items[~items['modelVersion_id'].isin(selected['modelVersion_id'])].reset_index(drop=True)
- residual = residual[['norm_clip_crop', 'norm_mcos_crop', 'norm_pop']]
- residual = residual.to_numpy()
- selected = selected[['norm_clip_crop', 'norm_mcos_crop', 'norm_pop']]
- selected = selected.to_numpy()
-
- y = np.concatenate((np.full((len(selected), 1), -1), np.full((len(residual), 1), 1)), axis=0).ravel()
- X = np.concatenate((selected, residual), axis=0)
-
- # fit svm model, and get parameters for the hyperplane
- clf = LinearSVC(random_state=0, C=1.0, fit_intercept=False, dual='auto')
- clf.fit(X, y)
- optimal_weight = clf.coef_[0].tolist()
- print('optimal weight:', optimal_weight)
- pass
-
- elif method == 'Greedy':
- for idx in selected.index:
- # find which score is the highest, clip, mcos, or pop
- clip_score = selected.loc[idx, 'norm_clip_crop']
- mcos_score = selected.loc[idx, 'norm_mcos_crop']
- pop_score = selected.loc[idx, 'norm_pop']
- if clip_score >= mcos_score and clip_score >= pop_score:
- optimal_weight[0] += 1
- elif mcos_score >= clip_score and mcos_score >= pop_score:
- optimal_weight[1] += 1
- elif pop_score >= clip_score and pop_score >= mcos_score:
- optimal_weight[2] += 1
-
- # normalize optimal_weight
- optimal_weight = [round(weight/len(selected), 2) for weight in optimal_weight]
-            print('optimal weight:', optimal_weight)
-
- st.session_state.score_weights[0: 3] = optimal_weight
-
-
- def remove_ranking_states(self, prompt_id):
- # for drag sort
- try:
- st.session_state.counter[prompt_id] = 0
- st.session_state.ranking[prompt_id] = {}
- print('remove ranking states')
- except:
- print('no sort ranking states to remove')
-
- # for battles
- try:
- st.session_state.pointer[prompt_id] = {'left': 0, 'right': 1}
- print('remove battles states')
- except:
- print('no battles states to remove')
-
- # for page progress
- try:
- st.session_state.progress[prompt_id] = 'ranking'
- print('reset page progress states')
- except:
- print('no page progress states to be reset')
-
-
-# hist_data = pd.DataFrame(np.random.normal(42, 10, (200, 1)), columns=["x"])
-@st.cache_resource
-def altair_histogram(hist_data, sort_by, mini, maxi):
- brushed = alt.selection_interval(encodings=['x'], name="brushed")
-
- chart = (
- alt.Chart(hist_data)
- .mark_bar(opacity=0.7, cornerRadius=2)
- .encode(alt.X(f"{sort_by}:Q", bin=alt.Bin(maxbins=25)), y="count()")
- # .add_selection(brushed)
- # .properties(width=800, height=300)
- )
-
- # Create a transparent rectangle for highlighting the range
- highlight = (
- alt.Chart(pd.DataFrame({'x1': [mini], 'x2': [maxi]}))
- .mark_rect(opacity=0.3)
- .encode(x='x1', x2='x2')
- # .properties(width=800, height=300)
- )
-
- # Layer the chart and the highlight rectangle
- layered_chart = alt.layer(chart, highlight)
-
- return layered_chart
-
-
-@st.cache_data
-def load_hf_dataset():
- # login to huggingface
- login(token=os.environ.get("HF_TOKEN"))
-
- # load from huggingface
- roster = pd.DataFrame(load_dataset('MAPS-research/GEMRec-Roster', split='train'))
- promptBook = pd.DataFrame(load_dataset('MAPS-research/GEMRec-Metadata', split='train'))
- # images_ds = load_from_disk(os.path.join(os.getcwd(), 'data', 'promptbook'))
- images_ds = None # set to None for now since we use s3 bucket to store images
-
- # # process dataset
- # roster = roster[['model_id', 'model_name', 'modelVersion_id', 'modelVersion_name',
- # 'model_download_count']].drop_duplicates().reset_index(drop=True)
-
- # add 'custom_score_weights' column to promptBook if not exist
- if 'weighted_score_sum' not in promptBook.columns:
- promptBook.loc[:, 'weighted_score_sum'] = 0
-
- # merge roster and promptbook
- promptBook = promptBook.merge(roster[['model_id', 'model_name', 'modelVersion_id', 'modelVersion_name', 'model_download_count']],
- on=['model_id', 'modelVersion_id'], how='left')
-
- # add column to record current row index
- promptBook.loc[:, 'row_idx'] = promptBook.index
-
- # apply a nsfw filter
- promptBook = promptBook[promptBook['nsfw_score'] <= 0.84].reset_index(drop=True)
-
- # add a column that adds up 'norm_clip', 'norm_mcos', and 'norm_pop'
- score_weights = [1.0, 0.8, 0.2]
- promptBook.loc[:, 'total_score'] = round(promptBook['norm_clip'] * score_weights[0] + promptBook['norm_mcos'] * score_weights[1] + promptBook['norm_pop'] * score_weights[2], 4)
-
- return roster, promptBook, images_ds
-
-@st.cache_data
-def load_tsne_coordinates(items):
- # load tsne coordinates
- tsne_df = pd.read_parquet('./data/feats_tsne.parquet')
-
- # print(tsne_df['modelVersion_id'].dtype)
-
- print('before merge:', items)
- items = items.merge(tsne_df, on=['modelVersion_id', 'prompt_id'], how='left')
- print('after merge:', items)
- return items
-
-
-if __name__ == "__main__":
- st.set_page_config(page_title="Model Coffer Gallery", page_icon="🖼️", layout="wide")
-
- if 'user_id' not in st.session_state:
- st.warning('Please log in first.')
- home_btn = st.button('Go to Home Page')
- if home_btn:
- switch_page("home")
- else:
- # st.write('You have already logged in as ' + st.session_state.user_id[0])
- roster, promptBook, images_ds = load_hf_dataset()
- # print(promptBook.columns)
-
- # initialize selected_dict
- if 'selected_dict' not in st.session_state:
- st.session_state['selected_dict'] = {}
-
- app = GalleryApp(promptBook=promptBook, images_ds=images_ds)
- app.app()
-
- # components.html(
- # """
- #
- # """,
- # # unsafe_allow_html=True,
- # )
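The core ranking step in selection_panel above is a weighted sum of three normalized scores; a standalone pandas sketch with made-up rows (column names and default weights follow the app):

import pandas as pd

items = pd.DataFrame({
    'modelVersion_id': [101, 102, 103],
    'norm_clip': [0.9, 0.6, 0.8],
    'norm_mcos': [0.4, 0.9, 0.5],
    'norm_pop':  [0.2, 0.1, 0.9],
})
clip_w, mcos_w, pop_w = 1.0, 0.8, 0.2        # default score weights in the app
items['weighted_score_sum'] = (
    items['norm_clip'] * clip_w + items['norm_mcos'] * mcos_w + items['norm_pop'] * pop_w
).round(4)
print(items.sort_values('weighted_score_sum', ascending=False).reset_index(drop=True))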
diff --git a/spaces/MajinBog/ItsJayQz-GTA5_Artwork_Diffusion/README.md b/spaces/MajinBog/ItsJayQz-GTA5_Artwork_Diffusion/README.md
deleted file mode 100644
index 6623db534a590838df47d40419aa6758747fa0f3..0000000000000000000000000000000000000000
--- a/spaces/MajinBog/ItsJayQz-GTA5_Artwork_Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ItsJayQz-GTA5 Artwork Diffusion
-emoji: 🐢
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/__init__.py
deleted file mode 100644
index 34383d83f5e76bc801f31b20e5651e383be348b6..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .build_sam import (
- build_sam,
- build_sam_vit_h,
- build_sam_vit_l,
- build_sam_vit_b,
- sam_model_registry,
-)
-from .predictor import SamPredictor
-from .automatic_mask_generator import SamAutomaticMaskGenerator
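A hedged usage sketch for the names re-exported above, assuming the package is importable as segment_anything; the checkpoint path, image, and click coordinates are placeholders:

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry['vit_h'](checkpoint='sam_vit_h_4b8939.pth')   # placeholder checkpoint path
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in for a real RGB image
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),             # one foreground click
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)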
diff --git a/spaces/Manjushri/MusicGen/MODEL_CARD.md b/spaces/Manjushri/MusicGen/MODEL_CARD.md
deleted file mode 100644
index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/MusicGen/MODEL_CARD.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# MusicGen Model Card
-
-## Model details
-
-**Organization developing the model:** The FAIR team of Meta AI.
-
-**Model date:** MusicGen was trained between April 2023 and May 2023.
-
-**Model version:** This is the version 1 of the model.
-
-**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
-
-**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv].
-
-**Citation details** See [our paper][arxiv]
-
-**License** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
-
-**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
-
-## Intended use
-**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
-
-- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
-- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
-
-**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
-
-**Out-of-scope use cases** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Metrics
-
-**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
-
-- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
-- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
-- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
-
-Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
-
-- Overall quality of the music samples;
-- Text relevance to the provided text input;
-- Adherence to the melody for melody-guided music generation.
-
-More details on performance measures and human studies can be found in the paper.
-
-**Decision thresholds:** Not applicable.
-
-## Evaluation datasets
-
-The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
-
-## Training datasets
-
-The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
-
-## Quantitative analysis
-
-More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section.
-
-## Limitations and biases
-
-**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model.
-
-**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
-
-**Limitations:**
-
-- The model is not able to generate realistic vocals.
-- The model has been trained with English descriptions and will not perform as well in other languages.
-- The model does not perform equally well for all music styles and cultures.
-- The model sometimes generates song endings that collapse into silence.
-- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
-
-**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
-
-**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
-
-**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
-
-[arxiv]: https://arxiv.org/abs/2306.05284
diff --git a/spaces/Manjushri/MusicGen/audiocraft/models/encodec.py b/spaces/Manjushri/MusicGen/audiocraft/models/encodec.py
deleted file mode 100644
index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/MusicGen/audiocraft/models/encodec.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from abc import ABC, abstractmethod
-import typing as tp
-
-from einops import rearrange
-import torch
-from torch import nn
-
-from .. import quantization as qt
-
-
-class CompressionModel(ABC, nn.Module):
-
- @abstractmethod
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- ...
-
- @abstractmethod
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """See `EncodecModel.encode`"""
- ...
-
- @abstractmethod
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """See `EncodecModel.decode`"""
- ...
-
- @property
- @abstractmethod
- def channels(self) -> int:
- ...
-
- @property
- @abstractmethod
- def frame_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def sample_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def cardinality(self) -> int:
- ...
-
- @property
- @abstractmethod
- def num_codebooks(self) -> int:
- ...
-
- @property
- @abstractmethod
- def total_codebooks(self) -> int:
- ...
-
- @abstractmethod
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- ...
-
-
-class EncodecModel(CompressionModel):
- """Encodec model operating on the raw waveform.
-
- Args:
- encoder (nn.Module): Encoder network.
- decoder (nn.Module): Decoder network.
- quantizer (qt.BaseQuantizer): Quantizer network.
- frame_rate (int): Frame rate for the latent representation.
- sample_rate (int): Audio sample rate.
- channels (int): Number of audio channels.
- causal (bool): Whether to use a causal version of the model.
- renormalize (bool): Whether to renormalize the audio before running the model.
- """
-    # we need assignment to override the property in the abstract class,
- # I couldn't find a better way...
- frame_rate: int = 0
- sample_rate: int = 0
- channels: int = 0
-
- def __init__(self,
- encoder: nn.Module,
- decoder: nn.Module,
- quantizer: qt.BaseQuantizer,
- frame_rate: int,
- sample_rate: int,
- channels: int,
- causal: bool = False,
- renormalize: bool = False):
- super().__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.quantizer = quantizer
- self.frame_rate = frame_rate
- self.sample_rate = sample_rate
- self.channels = channels
- self.renormalize = renormalize
- self.causal = causal
- if self.causal:
- # we force disabling here to avoid handling linear overlap of segments
- # as supported in original EnCodec codebase.
- assert not self.renormalize, 'Causal model does not support renormalize'
-
- @property
- def total_codebooks(self):
- """Total number of quantizer codebooks available.
- """
- return self.quantizer.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
- """
- return self.quantizer.num_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- self.quantizer.set_num_codebooks(n)
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- return self.quantizer.bins
-
- def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- scale: tp.Optional[torch.Tensor]
- if self.renormalize:
- mono = x.mean(dim=1, keepdim=True)
- volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt()
- scale = 1e-8 + volume
- x = x / scale
- scale = scale.view(-1, 1)
- else:
- scale = None
- return x, scale
-
- def postprocess(self,
- x: torch.Tensor,
- scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor:
- if scale is not None:
- assert self.renormalize
- x = x * scale.view(-1, 1, 1)
- return x
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- assert x.dim() == 3
- length = x.shape[-1]
- x, scale = self.preprocess(x)
-
- emb = self.encoder(x)
- q_res = self.quantizer(emb, self.frame_rate)
- out = self.decoder(q_res.x)
-
- # remove extra padding added by the encoder and decoder
- assert out.shape[-1] >= length, (out.shape[-1], length)
- out = out[..., :length]
-
- q_res.x = self.postprocess(out, scale)
-
- return q_res
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """Encode the given input tensor to quantized representation along with scale parameter.
-
- Args:
- x (torch.Tensor): Float tensor of shape [B, C, T]
-
- Returns:
- codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of:
-                codes an int tensor of shape [B, K, T] with K the number of codebooks used and T the timestep.
-                scale a float tensor containing the scale needed to renormalize the audio.
- """
- assert x.dim() == 3
- x, scale = self.preprocess(x)
- emb = self.encoder(x)
- codes = self.quantizer.encode(emb)
- return codes, scale
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """Decode the given codes to a reconstructed representation, using the scale to perform
- audio denormalization if needed.
-
- Args:
- codes (torch.Tensor): Int tensor of shape [B, K, T]
- scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value.
-
- Returns:
- out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio.
- """
- emb = self.quantizer.decode(codes)
- out = self.decoder(emb)
- out = self.postprocess(out, scale)
- # out contains extra padding added by the encoder and decoder
- return out
-
-
-class FlattenedCompressionModel(CompressionModel):
- """Wraps a CompressionModel and flatten its codebooks, e.g.
- instead of returning [B, K, T], return [B, S, T * (K // S)] with
- S the number of codebooks per step, and `K // S` the number of 'virtual steps'
- for each real time step.
-
- Args:
- model (CompressionModel): compression model to wrap.
- codebooks_per_step (int): number of codebooks to keep per step,
- this must divide the number of codebooks provided by the wrapped model.
- extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1,
- if each codebook has a cardinality N, then the first codebook will
- use the range [0, N - 1], and the second [N, 2 N - 1] etc.
- On decoding, this can lead to potentially invalid sequences.
- Any invalid entry will be silently remapped to the proper range
- with a modulo.
- """
- def __init__(self, model: CompressionModel, codebooks_per_step: int = 1,
- extend_cardinality: bool = True):
- super().__init__()
- self.model = model
- self.codebooks_per_step = codebooks_per_step
- self.extend_cardinality = extend_cardinality
-
- @property
- def total_codebooks(self):
- return self.model.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
-
- ..Warning:: this reports the number of codebooks after the flattening
- of the codebooks!
- """
- assert self.model.num_codebooks % self.codebooks_per_step == 0
- return self.codebooks_per_step
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
-
- ..Warning:: this sets the number of codebooks **before** the flattening
- of the codebooks.
- """
- assert n % self.codebooks_per_step == 0
- self.model.set_num_codebooks(n)
-
- @property
- def num_virtual_steps(self) -> int:
- """Return the number of virtual steps, e.g. one real step
- will be split into that many steps.
- """
- return self.model.num_codebooks // self.codebooks_per_step
-
- @property
- def frame_rate(self) -> int:
- return self.model.frame_rate * self.num_virtual_steps
-
- @property
- def sample_rate(self) -> int:
- return self.model.sample_rate
-
- @property
- def channels(self) -> int:
- return self.model.channels
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- if self.extend_cardinality:
- return self.model.cardinality * self.num_virtual_steps
- else:
- return self.model.cardinality
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- raise NotImplementedError("Not supported, use encode and decode.")
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- indices, scales = self.model.encode(x)
- B, K, T = indices.shape
- indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step)
- if self.extend_cardinality:
- for virtual_step in range(1, self.num_virtual_steps):
- indices[..., virtual_step] += self.model.cardinality * virtual_step
- indices = rearrange(indices, 'b k t v -> b k (t v)')
- return (indices, scales)
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- B, K, T = codes.shape
- assert T % self.num_virtual_steps == 0
- codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps)
- # We silently ignore potential errors from the LM when
- # using extend_cardinality.
- codes = codes % self.model.cardinality
- return self.model.decode(codes, scale)
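The codebook flattening described in FlattenedCompressionModel can be seen on a dummy codes tensor without any model; a self-contained sketch of the same rearrange/offset logic (sizes are made up):

import torch
from einops import rearrange

B, K, T = 2, 4, 6                  # batch, codebooks, timesteps
codebooks_per_step = 2             # S
cardinality = 1024
num_virtual_steps = K // codebooks_per_step

codes = torch.randint(0, cardinality, (B, K, T))
flat = rearrange(codes, 'b (k v) t -> b k t v', k=codebooks_per_step)
for v in range(1, num_virtual_steps):            # extend_cardinality=True behaviour
    flat[..., v] += cardinality * v
flat = rearrange(flat, 'b k t v -> b k (t v)')
print(flat.shape)                                # torch.Size([2, 2, 12])

# Inverse mapping used in decode(): fold back and drop the cardinality offsets.
codes_back = rearrange(flat, 'b k (t v) -> b (k v) t', v=num_virtual_steps) % cardinality
assert torch.equal(codes_back, codes)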
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/compose.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/compose.py
deleted file mode 100644
index cbfcbb925c6d4ebf849328b9f94ef6fc24359bf5..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/compose.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import collections
-
-from annotator.uniformer.mmcv.utils import build_from_cfg
-
-from ..builder import PIPELINES
-
-
-@PIPELINES.register_module()
-class Compose(object):
- """Compose multiple transforms sequentially.
-
- Args:
- transforms (Sequence[dict | callable]): Sequence of transform object or
- config dict to be composed.
- """
-
- def __init__(self, transforms):
- assert isinstance(transforms, collections.abc.Sequence)
- self.transforms = []
- for transform in transforms:
- if isinstance(transform, dict):
- transform = build_from_cfg(transform, PIPELINES)
- self.transforms.append(transform)
- elif callable(transform):
- self.transforms.append(transform)
- else:
- raise TypeError('transform must be callable or a dict')
-
- def __call__(self, data):
- """Call function to apply transforms sequentially.
-
- Args:
- data (dict): A result dict contains the data to transform.
-
- Returns:
- dict: Transformed data.
- """
-
- for t in self.transforms:
- data = t(data)
- if data is None:
- return None
- return data
-
- def __repr__(self):
- format_string = self.__class__.__name__ + '('
- for t in self.transforms:
- format_string += '\n'
- format_string += f' {t}'
- format_string += '\n)'
- return format_string
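A minimal sketch of Compose with plain callables; dict-style configs would additionally require the PIPELINES registry to be populated, so simple functions are used here:

def load_dummy(results):
    # stand-in for a LoadImageFromFile-style transform
    results['img'] = [[0, 1], [2, 3]]
    return results

def add_meta(results):
    results['img_shape'] = (2, 2)
    return results

pipeline = Compose([load_dummy, add_meta])
out = pipeline(dict(filename='dummy.png'))
print(out['img_shape'])    # (2, 2)
print(pipeline)            # repr lists each transform on its own line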
diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/__init__.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/MingGatsby/VoiceFixer/README.md b/spaces/MingGatsby/VoiceFixer/README.md
deleted file mode 100644
index 76d456430de8f9e57614e0d0b6ba3a4ea945530b..0000000000000000000000000000000000000000
--- a/spaces/MingGatsby/VoiceFixer/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: VoiceFixer
-emoji: 💩
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-app_file: app.py
-pinned: false
-duplicated_from: Kevin676/VoiceFixer
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/tune.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/tune.py
deleted file mode 100644
index 3473b5e94bd3c1f737a18f0187790d5df2d7a2aa..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/tune.py
+++ /dev/null
@@ -1,262 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-r"""Run grid search.
-
-Look at launch_tuning.sh for details on how to tune at scale.
-
-Usage example:
-Tune with one worker on the local machine.
-
-CONFIG="agent=c(algorithm='pg'),"
-CONFIG+="env=c(task_cycle=['reverse-tune', 'remove-tune'])"
-HPARAM_SPACE_TYPE="pg"
-OUT_DIR="/tmp/bf_pg_tune"
-MAX_NPE=5000000
-NUM_REPETITIONS=50
-rm -rf $OUT_DIR
-mkdir $OUT_DIR
-bazel run -c opt single_task:tune -- \
- --alsologtostderr \
- --config="$CONFIG" \
- --max_npe="$MAX_NPE" \
- --num_repetitions="$NUM_REPETITIONS" \
- --logdir="$OUT_DIR" \
- --summary_interval=1 \
- --model_v=0 \
- --hparam_space="$HPARAM_SPACE_TYPE" \
- --tuner_id=0 \
- --num_tuners=1 \
- 2>&1 >"$OUT_DIR/tuner_0.log"
-learning/brain/tensorboard/tensorboard.sh --port 12345 --logdir "$OUT_DIR"
-"""
-
-import ast
-import os
-
-from absl import app
-from absl import flags
-from absl import logging
-import numpy as np
-from six.moves import xrange
-import tensorflow as tf
-
-from single_task import defaults # brain coder
-from single_task import run as run_lib # brain coder
-
-FLAGS = flags.FLAGS
-flags.DEFINE_integer(
- 'tuner_id', 0,
- 'The unique ID for this tuning worker.')
-flags.DEFINE_integer(
- 'num_tuners', 1,
- 'How many tuners are there.')
-flags.DEFINE_string(
- 'hparam_space', 'default',
- 'String name which denotes the hparam space to tune over. This is '
- 'algorithm dependent.')
-flags.DEFINE_string(
- 'fixed_hparams', '',
- 'HParams string. Used to fix hparams during tuning.')
-flags.DEFINE_float(
- 'success_rate_objective_weight', 1.0,
- 'How much to weight success rate vs num programs seen. By default, only '
- 'success rate is optimized (this is the setting used in the paper).')
-
-
-def parse_hparams_string(hparams_str):
- hparams = {}
- for term in hparams_str.split(','):
- if not term:
- continue
- name, value = term.split('=')
- hparams[name.strip()] = ast.literal_eval(value)
- return hparams
-
-
-def int_to_multibase(n, bases):
- digits = [0] * len(bases)
- for i, b in enumerate(bases):
- n, d = divmod(n, b)
- digits[i] = d
- return digits
-
-
-def hparams_for_index(index, tuning_space):
- keys = sorted(tuning_space.keys())
- indices = int_to_multibase(index, [len(tuning_space[k]) for k in keys])
- return tf.contrib.training.HParams(
- **{k: tuning_space[k][i] for k, i in zip(keys, indices)})
-
-
-def run_tuner_loop(ns):
- """Run tuning loop for this worker."""
- is_chief = FLAGS.task_id == 0
- tuning_space = ns.define_tuner_hparam_space(
- hparam_space_type=FLAGS.hparam_space)
- fixed_hparams = parse_hparams_string(FLAGS.fixed_hparams)
-  for name, value in fixed_hparams.items():
- tuning_space[name] = [value]
- tuning_space_size = np.prod([len(values) for values in tuning_space.values()])
-
- num_local_trials, remainder = divmod(tuning_space_size, FLAGS.num_tuners)
- if FLAGS.tuner_id < remainder:
- num_local_trials += 1
- starting_trial_id = (
- num_local_trials * FLAGS.tuner_id + min(remainder, FLAGS.tuner_id))
-
- logging.info('tuning_space_size: %d', tuning_space_size)
- logging.info('num_local_trials: %d', num_local_trials)
- logging.info('starting_trial_id: %d', starting_trial_id)
-
- for local_trial_index in xrange(num_local_trials):
- trial_config = defaults.default_config_with_updates(FLAGS.config)
- global_trial_index = local_trial_index + starting_trial_id
- trial_name = 'trial_' + str(global_trial_index)
- trial_dir = os.path.join(FLAGS.logdir, trial_name)
- hparams = hparams_for_index(global_trial_index, tuning_space)
- ns.write_hparams_to_config(
- trial_config, hparams, hparam_space_type=FLAGS.hparam_space)
-
- results_list = ns.run_training(
- config=trial_config, tuner=None, logdir=trial_dir, is_chief=is_chief,
- trial_name=trial_name)
-
- if not is_chief:
- # Only chief worker needs to write tuning results to disk.
- continue
-
- objective, metrics = compute_tuning_objective(
- results_list, hparams, trial_name, num_trials=tuning_space_size)
- logging.info('metrics:\n%s', metrics)
- logging.info('objective: %s', objective)
- logging.info('programs_seen_fraction: %s',
- metrics['programs_seen_fraction'])
- logging.info('success_rate: %s', metrics['success_rate'])
- logging.info('success_rate_objective_weight: %s',
- FLAGS.success_rate_objective_weight)
-
- tuning_results_file = os.path.join(trial_dir, 'tuning_results.txt')
- with tf.gfile.FastGFile(tuning_results_file, 'a') as writer:
- writer.write(str(metrics) + '\n')
-
- logging.info('Trial %s complete.', trial_name)
-
-
-def compute_tuning_objective(results_list, hparams, trial_name, num_trials):
- """Compute tuning objective and metrics given results and trial information.
-
- Args:
- results_list: List of results dicts read from disk. These are written by
- workers.
- hparams: tf.contrib.training.HParams instance containing the hparams used
- in this trial (only the hparams which are being tuned).
- trial_name: Name of this trial. Used to create a trial directory.
- num_trials: Total number of trials that need to be run. This is saved in the
- metrics dict for future reference.
-
- Returns:
- objective: The objective computed for this trial. Choose the hparams for the
- trial with the largest objective value.
- metrics: Information about this trial. A dict.
- """
- found_solution = [r['found_solution'] for r in results_list]
- successful_program_counts = [
- r['npe'] for r in results_list if r['found_solution']]
-
- success_rate = sum(found_solution) / float(len(results_list))
-
- max_programs = FLAGS.max_npe # Per run.
- all_program_counts = [
- r['npe'] if r['found_solution'] else max_programs
- for r in results_list]
- programs_seen_fraction = (
- float(sum(all_program_counts))
- / (max_programs * len(all_program_counts)))
-
- # min/max/avg stats are over successful runs.
- metrics = {
- 'num_runs': len(results_list),
- 'num_succeeded': sum(found_solution),
- 'success_rate': success_rate,
- 'programs_seen_fraction': programs_seen_fraction,
- 'avg_programs': np.mean(successful_program_counts),
- 'max_possible_programs_per_run': max_programs,
- 'global_step': sum([r['num_batches'] for r in results_list]),
- 'hparams': hparams.values(),
- 'trial_name': trial_name,
- 'num_trials': num_trials}
-
- # Report stats per tasks.
- tasks = [r['task'] for r in results_list]
- for task in set(tasks):
- task_list = [r for r in results_list if r['task'] == task]
- found_solution = [r['found_solution'] for r in task_list]
- successful_rewards = [
- r['best_reward'] for r in task_list
- if r['found_solution']]
- successful_num_batches = [
- r['num_batches']
- for r in task_list if r['found_solution']]
- successful_program_counts = [
- r['npe'] for r in task_list if r['found_solution']]
- metrics_append = {
- task + '__num_runs': len(task_list),
- task + '__num_succeeded': sum(found_solution),
- task + '__success_rate': (
- sum(found_solution) / float(len(task_list)))}
- metrics.update(metrics_append)
- if any(found_solution):
- metrics_append = {
- task + '__min_reward': min(successful_rewards),
- task + '__max_reward': max(successful_rewards),
- task + '__avg_reward': np.median(successful_rewards),
- task + '__min_programs': min(successful_program_counts),
- task + '__max_programs': max(successful_program_counts),
- task + '__avg_programs': np.mean(successful_program_counts),
- task + '__min_batches': min(successful_num_batches),
- task + '__max_batches': max(successful_num_batches),
- task + '__avg_batches': np.mean(successful_num_batches)}
- metrics.update(metrics_append)
-
- # Objective will be maximized.
- # Maximize success rate, minimize num programs seen.
- # Max objective is always 1.
- weight = FLAGS.success_rate_objective_weight
- objective = (
- weight * success_rate
- + (1 - weight) * (1 - programs_seen_fraction))
- metrics['objective'] = objective
-
- return objective, metrics
-
-
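The objective blends two quantities that both lie in [0, 1], so its maximum is 1: `success_rate` rewards solving runs, while `1 - programs_seen_fraction` rewards solving them with a small share of the per-run `max_npe` budget. A worked example (the weight value is arbitrary, chosen only for illustration):

```python
def tuning_objective(success_rate, programs_seen_fraction, weight):
    # Same formula as compute_tuning_objective; larger is better.
    return weight * success_rate + (1 - weight) * (1 - programs_seen_fraction)

# A trial that solves 80% of runs using 25% of the program budget on average:
print(tuning_objective(0.8, 0.25, weight=0.7))  # 0.7*0.8 + 0.3*0.75 = 0.785
```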
-def main(argv):
- del argv
-
- logging.set_verbosity(FLAGS.log_level)
-
- if not FLAGS.logdir:
- raise ValueError('logdir flag must be provided.')
- if FLAGS.num_workers <= 0:
- raise ValueError('num_workers flag must be greater than 0.')
- if FLAGS.task_id < 0:
- raise ValueError('task_id flag must be greater than or equal to 0.')
- if FLAGS.task_id >= FLAGS.num_workers:
- raise ValueError(
- 'task_id flag must be strictly less than num_workers flag.')
- if FLAGS.num_tuners <= 0:
- raise ValueError('num_tuners flag must be greater than 0.')
- if FLAGS.tuner_id < 0:
- raise ValueError('tuner_id flag must be greater than or equal to 0.')
- if FLAGS.tuner_id >= FLAGS.num_tuners:
- raise ValueError(
- 'tuner_id flag must be strictly less than num_tuners flag.')
-
- ns, _ = run_lib.get_namespace(FLAGS.config)
- run_tuner_loop(ns)
-
-
-if __name__ == '__main__':
- app.run(main)
diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_download_init_models.sh b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_download_init_models.sh
deleted file mode 100644
index 1900bd0b03566d29dac8a8de5f4fce623be98a92..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_download_init_models.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-# Script to download models to initialize the RGB and D models for training. We
-# use ResNet-v2-50 for both modalities.
-
-mkdir -p data/init_models
-cd data/init_models
-
-# RGB Models are initialized by pre-training on ImageNet.
-mkdir -p resnet_v2_50
-RGB_URL="http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz"
-wget $RGB_URL
-tar -xf resnet_v2_50_2017_04_14.tar.gz -C resnet_v2_50
-
-# Depth models are initialized by distilling the RGB model to D images using
-# Cross-Modal Distillation (https://arxiv.org/abs/1507.00448).
-mkdir -p distill_rgb_to_d_resnet_v2_50
-D_URL="http://download.tensorflow.org/models/cognitive_mapping_and_planning/2017_04_16/distill_rgb_to_d_resnet_v2_50.tar"
-wget $D_URL
-tar -xf distill_rgb_to_d_resnet_v2_50.tar -C distill_rgb_to_d_resnet_v2_50
diff --git a/spaces/Nee001/bing0/src/lib/bots/bing/types.ts b/spaces/Nee001/bing0/src/lib/bots/bing/types.ts
deleted file mode 100644
index 5a9813b797d13b592ec17b45cfac4bd46510d883..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/lib/bots/bing/types.ts
+++ /dev/null
@@ -1,261 +0,0 @@
-export type Author = 'user' | 'system' | 'bot'
-
-export type BotId = 'bing'
-
-export enum BingConversationStyle {
- Creative = 'Creative',
- Balanced = 'Balanced',
- Precise = 'Precise'
-}
-
-export enum ErrorCode {
- CONVERSATION_LIMIT = 'CONVERSATION_LIMIT',
- BING_UNAUTHORIZED = 'BING_UNAUTHORIZED',
- BING_IP_FORBIDDEN = 'BING_IP_FORBIDDEN',
- BING_TRY_LATER = 'BING_TRY_LATER',
- BING_FORBIDDEN = 'BING_FORBIDDEN',
- BING_CAPTCHA = 'BING_CAPTCHA',
- THROTTLE_LIMIT = 'THROTTLE_LIMIT',
- NOTFOUND_ERROR = 'NOT_FOUND_ERROR',
- UNKOWN_ERROR = 'UNKOWN_ERROR',
- NETWORK_ERROR = 'NETWORK_ERROR',
-}
-
-export class ChatError extends Error {
- code: ErrorCode
- constructor(message: string, code: ErrorCode) {
- super(message)
- this.code = code
- }
-}
-
-export type ChatMessageModel = {
- id: string
- author: Author
- text: string
- error?: ChatError
- throttling?: Throttling
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
-}
-
-export interface ConversationModel {
- messages: ChatMessageModel[]
-}
-
-export type Event =
- | {
- type: 'UPDATE_ANSWER'
- data: {
- text: string
- spokenText?: string
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
- throttling?: Throttling
- }
- }
- | {
- type: 'DONE'
- }
- | {
- type: 'ERROR'
- error: ChatError
- }
-
-export interface SendMessageParams<T> {
- prompt: string
- imageUrl?: string
- options: T
- onEvent: (event: Event) => void
- signal?: AbortSignal
-}
-
-export interface ConversationResponse {
- conversationId: string
- clientId: string
- conversationSignature: string
- result: {
- value: string
- message?: string
- }
-}
-
-export interface Telemetry {
- metrics?: null
- startTime: string
-}
-
-export interface ChatUpdateArgument {
- messages?: ChatResponseMessage[]
- throttling?: Throttling
- requestId: string
- result: null
-}
-
-export type ChatUpdateCompleteResponse = {
- type: 2
- invocationId: string
- item: ChatResponseItem
-} | {
- type: 1
- target: string
- arguments: ChatUpdateArgument[]
-} | {
- type: 3
- invocationId: string
-} | {
- type: 6 | 7
-}
-
-export interface ChatRequestResult {
- value: string
- serviceVersion: string
- error?: string
-}
-
-export interface ChatResponseItem {
- messages: ChatResponseMessage[]
- firstNewMessageIndex: number
- suggestedResponses: null
- conversationId: string
- requestId: string
- conversationExpiryTime: string
- telemetry: Telemetry
- result: ChatRequestResult
- throttling: Throttling
-}
-export enum InvocationEventType {
- Invocation = 1,
- StreamItem = 2,
- Completion = 3,
- StreamInvocation = 4,
- CancelInvocation = 5,
- Ping = 6,
- Close = 7,
-}
-
-// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts
-
-export interface ConversationInfo {
- conversationId: string
- clientId: string
- conversationSignature: string
- invocationId: number
- conversationStyle: BingConversationStyle
- prompt: string
- imageUrl?: string
-}
-
-export interface BingChatResponse {
- conversationSignature: string
- conversationId: string
- clientId: string
- invocationId: number
- conversationExpiryTime: Date
- response: string
- details: ChatResponseMessage
-}
-
-export interface Throttling {
- maxNumLongDocSummaryUserMessagesInConversation: number
- maxNumUserMessagesInConversation: number
- numLongDocSummaryUserMessagesInConversation: number
- numUserMessagesInConversation: number
-}
-
-export interface ChatResponseMessage {
- text: string
- spokenText?: string
- author: string
- createdAt: Date
- timestamp: Date
- messageId: string
- requestId: string
- offense: string
- adaptiveCards: AdaptiveCard[]
- sourceAttributions: SourceAttribution[]
- feedback: Feedback
- contentOrigin: string
- messageType?: string
- contentType?: string
- privacy: null
- suggestedResponses: SuggestedResponse[]
-}
-
-export interface AdaptiveCard {
- type: string
- version: string
- body: Body[]
-}
-
-export interface Body {
- type: string
- text: string
- wrap: boolean
- size?: string
-}
-
-export interface Feedback {
- tag: null
- updatedOn: null
- type: string
-}
-
-export interface SourceAttribution {
- providerDisplayName: string
- seeMoreUrl: string
- searchQuery: string
-}
-
-export interface SuggestedResponse {
- text: string
- author?: Author
- createdAt?: Date
- timestamp?: Date
- messageId?: string
- messageType?: string
- offense?: string
- feedback?: Feedback
- contentOrigin?: string
- privacy?: null
-}
-
-export interface KBlobRequest {
- knowledgeRequest: KnowledgeRequestContext
- imageBase64?: string
-}
-
-export interface KBlobResponse {
- blobId: string
- processedBlobId?: string
-}
-
-export interface KnowledgeRequestContext {
- imageInfo: ImageInfo;
- knowledgeRequest: KnowledgeRequest;
-}
-
-export interface ImageInfo {
- url?: string;
-}
-
-export interface KnowledgeRequest {
- invokedSkills: string[];
- subscriptionId: string;
- invokedSkillsRequestData: InvokedSkillsRequestData;
- convoData: ConvoData;
-}
-
-export interface ConvoData {
- convoid: string;
- convotone: BingConversationStyle;
-}
-
-export interface InvokedSkillsRequestData {
- enableFaceBlur: boolean;
-}
-
-export interface FileItem {
- url: string;
- status?: 'loading' | 'error' | 'loaded'
-}
diff --git a/spaces/NoCrypt/pixelization/models/networks.py b/spaces/NoCrypt/pixelization/models/networks.py
deleted file mode 100644
index 0b3f3f825d3d4b6513ab040f6018823f7c2bda03..0000000000000000000000000000000000000000
--- a/spaces/NoCrypt/pixelization/models/networks.py
+++ /dev/null
@@ -1,244 +0,0 @@
-import torch
-import torch.nn as nn
-from torch.nn import init
-import functools
-from torch.optim import lr_scheduler
-from .c2pGen import *
-from .p2cGen import *
-from .c2pDis import *
-
-class Identity(nn.Module):
- def forward(self, x):
- return x
-
-def get_norm_layer(norm_type='instance'):
- """Return a normalization layer
-
- Parameters:
- norm_type (str) -- the name of the normalization layer: batch | instance | none
-
- For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev).
- For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics.
- """
- if norm_type == 'batch':
- norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True)
- elif norm_type == 'instance':
- norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)
- elif norm_type == 'none':
- def norm_layer(x): return Identity()
- else:
- raise NotImplementedError('normalization layer [%s] is not found' % norm_type)
- return norm_layer
-
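Hypothetical usage of `get_norm_layer`, assuming the definitions above are importable: the returned callable takes a channel count, so it can be passed around like a layer constructor, and the `'none'` variant still accepts the argument but yields an `Identity` module.

```python
import torch.nn as nn

norm_layer = get_norm_layer('instance')      # functools.partial over nn.InstanceNorm2d
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),
    norm_layer(64),                          # InstanceNorm2d over 64 channels
    nn.ReLU(inplace=True),
)
assert isinstance(get_norm_layer('none')(64), Identity)
```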
-
-def get_scheduler(optimizer, opt):
- """Return a learning rate scheduler
-
- Parameters:
- optimizer -- the optimizer of the network
- opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
- opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
-
-    For 'linear', we keep the same learning rate for the first opt.n_epochs epochs
-    and linearly decay the rate to zero over the next opt.n_epochs_decay epochs.
- For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
- See https://pytorch.org/docs/stable/optim.html for more details.
- """
- if opt.lr_policy == 'linear':
- def lambda_rule(epoch):
- lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs_decay + 1)
- return lr_l
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
- elif opt.lr_policy == 'step':
- scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1)
- elif opt.lr_policy == 'plateau':
- scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
- elif opt.lr_policy == 'cosine':
- scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
- else:
-        raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
- return scheduler
-
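A worked look at the `'linear'` policy's multiplier, using `epoch_count=1`, `n_epochs=100`, `n_epochs_decay=100` purely as illustrative numbers (nothing in this file fixes them): the learning rate stays flat for the first `n_epochs` epochs and then decays linearly toward zero.

```python
def linear_multiplier(epoch, epoch_count=1, n_epochs=100, n_epochs_decay=100):
    # Same lambda_rule as above, exposed as a plain function for inspection.
    return 1.0 - max(0, epoch + epoch_count - n_epochs) / float(n_epochs_decay + 1)

print(linear_multiplier(0))    # 1.0                  flat phase
print(linear_multiplier(99))   # 1.0                  last flat epoch
print(linear_multiplier(149))  # 1 - 50/101  ~ 0.505  halfway through the decay
print(linear_multiplier(199))  # 1 - 100/101 ~ 0.010  almost fully decayed
```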
-
-def init_weights(net, init_type='normal', init_gain=0.02):
- """Initialize network weights.
-
- Parameters:
- net (network) -- network to be initialized
- init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
- init_gain (float) -- scaling factor for normal, xavier and orthogonal.
-
- """
- def init_func(m): # define the initialization function
- classname = m.__class__.__name__
- if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
- if init_type == 'normal':
- init.normal_(m.weight.data, 0.0, init_gain)
- elif init_type == 'xavier':
- init.xavier_normal_(m.weight.data, gain=init_gain)
- elif init_type == 'kaiming':
- init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
- elif init_type == 'orthogonal':
- init.orthogonal_(m.weight.data, gain=init_gain)
- else:
- raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
- if hasattr(m, 'bias') and m.bias is not None:
- init.constant_(m.bias.data, 0.0)
- elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies.
- init.normal_(m.weight.data, 1.0, init_gain)
- init.constant_(m.bias.data, 0.0)
-
- #print('initialize network with %s' % init_type)
- net.apply(init_func) # apply the initialization function
-
-
-def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):
- """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights
- Parameters:
- net (network) -- the network to be initialized
- init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
- gain (float) -- scaling factor for normal, xavier and orthogonal.
- gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
-
- Return an initialized network.
- """
- gpu_ids = [0]
- if len(gpu_ids) > 0:
-        # assert(torch.cuda.is_available())  # uncomment this line to require a GPU
-        net.to(torch.device("cpu"))  # change "cpu" to gpu_ids[0] to run on the GPU
- net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs
- init_weights(net, init_type, init_gain=init_gain)
- return net
-
-
-def define_G(input_nc, output_nc, ngf, netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[]):
- """Create a generator
-
- Parameters:
- input_nc (int) -- the number of channels in input images
- output_nc (int) -- the number of channels in output images
- ngf (int) -- the number of filters in the last conv layer
- netG (str) -- the architecture's name: resnet_9blocks | resnet_6blocks | unet_256 | unet_128
- norm (str) -- the name of normalization layers used in the network: batch | instance | none
- use_dropout (bool) -- if use dropout layers.
- init_type (str) -- the name of our initialization method.
- init_gain (float) -- scaling factor for normal, xavier and orthogonal.
- gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
-
- Returns a generator
- """
- net = None
- norm_layer = get_norm_layer(norm_type=norm)
-
- if netG == 'c2pGen': # style_dim mlp_dim
- net = C2PGen(input_nc, output_nc, ngf, 2, 4, 256, 256, activ='relu', pad_type='reflect')
- #print('c2pgen resblock is 8')
- elif netG == 'p2cGen':
- net = P2CGen(input_nc, output_nc, ngf, 2, 3, activ='relu', pad_type='reflect')
- elif netG == 'antialias':
- net = AliasNet(input_nc, output_nc, ngf, 2, 3, activ='relu', pad_type='reflect')
- else:
- raise NotImplementedError('Generator model name [%s] is not recognized' % netG)
- return init_net(net, init_type, init_gain, gpu_ids)
-
-
-
-def define_D(input_nc, ndf, netD, n_layers_D=3, norm='batch', init_type='normal', init_gain=0.02, gpu_ids=[]):
- """Create a discriminator
-
- Parameters:
- input_nc (int) -- the number of channels in input images
- ndf (int) -- the number of filters in the first conv layer
- netD (str) -- the architecture's name: basic | n_layers | pixel
- n_layers_D (int) -- the number of conv layers in the discriminator; effective when netD=='n_layers'
- norm (str) -- the type of normalization layers used in the network.
- init_type (str) -- the name of the initialization method.
- init_gain (float) -- scaling factor for normal, xavier and orthogonal.
- gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
-
- Returns a discriminator
- """
- net = None
- norm_layer = get_norm_layer(norm_type=norm)
-
-
- if netD == 'CPDis':
- net = CPDis(image_size=256, conv_dim=64, repeat_num=3, norm='SN')
- elif netD == 'CPDis_cls':
- net = CPDis_cls(image_size=256, conv_dim=64, repeat_num=3, norm='SN')
- else:
- raise NotImplementedError('Discriminator model name [%s] is not recognized' % netD)
- return init_net(net, init_type, init_gain, gpu_ids)
-
-
-class GANLoss(nn.Module):
- """Define different GAN objectives.
-
- The GANLoss class abstracts away the need to create the target label tensor
- that has the same size as the input.
- """
-
- def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0):
- """ Initialize the GANLoss class.
-
- Parameters:
- gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp.
- target_real_label (bool) - - label for a real image
- target_fake_label (bool) - - label of a fake image
-
- Note: Do not use sigmoid as the last layer of Discriminator.
-        LSGAN needs no sigmoid; vanilla GANs handle it with BCEWithLogitsLoss.
- """
- super(GANLoss, self).__init__()
- self.register_buffer('real_label', torch.tensor(target_real_label))
- self.register_buffer('fake_label', torch.tensor(target_fake_label))
- self.gan_mode = gan_mode
- if gan_mode == 'lsgan':
- self.loss = nn.MSELoss()
- elif gan_mode == 'vanilla':
- self.loss = nn.BCEWithLogitsLoss()
- elif gan_mode in ['wgangp']:
- self.loss = None
- else:
- raise NotImplementedError('gan mode %s not implemented' % gan_mode)
-
- def get_target_tensor(self, prediction, target_is_real):
- """Create label tensors with the same size as the input.
-
- Parameters:
-            prediction (tensor) - - typically the prediction from a discriminator
- target_is_real (bool) - - if the ground truth label is for real images or fake images
-
- Returns:
- A label tensor filled with ground truth label, and with the size of the input
- """
-
- if target_is_real:
- target_tensor = self.real_label
- else:
- target_tensor = self.fake_label
- return target_tensor.expand_as(prediction)
-
- def __call__(self, prediction, target_is_real):
- """Calculate loss given Discriminator's output and grount truth labels.
-
- Parameters:
-            prediction (tensor) - - typically the prediction output from a discriminator
- target_is_real (bool) - - if the ground truth label is for real images or fake images
-
- Returns:
- the calculated loss.
- """
- if self.gan_mode in ['lsgan', 'vanilla']:
- target_tensor = self.get_target_tensor(prediction, target_is_real)
- loss = self.loss(prediction, target_tensor)
- elif self.gan_mode == 'wgangp':
- if target_is_real:
- loss = -prediction.mean()
- else:
- loss = prediction.mean()
- return loss
-
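Hypothetical usage of `GANLoss`, assuming the class above is importable; the (4, 1, 30, 30) logits shape is just a stand-in for a PatchGAN-style discriminator output.

```python
import torch

criterion = GANLoss('lsgan')                 # MSE against 1.0 / 0.0 target maps
fake_logits = torch.randn(4, 1, 30, 30)      # discriminator output on fake images

loss_d_fake = criterion(fake_logits, target_is_real=False)  # push fakes toward 0
loss_g = criterion(fake_logits, target_is_real=True)        # generator wants 1
print(loss_d_fake.item(), loss_g.item())
```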
-
-
-
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/text_to_speech.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/text_to_speech.py
deleted file mode 100644
index 5646e41d39f6e39d4b046ee34ff69b998dab160d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/text_to_speech.py
+++ /dev/null
@@ -1,467 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import os.path as op
-
-import torch
-import torch.nn.functional as F
-import numpy as np
-
-from fairseq.data.audio.text_to_speech_dataset import TextToSpeechDatasetCreator
-from fairseq.tasks import register_task
-from fairseq.tasks.speech_to_text import SpeechToTextTask
-from fairseq.speech_generator import (
- AutoRegressiveSpeechGenerator, NonAutoregressiveSpeechGenerator,
- TeacherForcingAutoRegressiveSpeechGenerator
-)
-
-logging.basicConfig(
- format='%(asctime)s | %(levelname)s | %(name)s | %(message)s',
- datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO
-)
-logger = logging.getLogger(__name__)
-
-
-try:
- from tensorboardX import SummaryWriter
-except ImportError:
- logger.info("Please install tensorboardX: pip install tensorboardX")
- SummaryWriter = None
-
-
-@register_task('text_to_speech')
-class TextToSpeechTask(SpeechToTextTask):
- @staticmethod
- def add_args(parser):
- parser.add_argument('data', help='manifest root path')
- parser.add_argument(
- '--config-yaml', type=str, default='config.yaml',
- help='Configuration YAML filename (under manifest root)'
- )
- parser.add_argument('--max-source-positions', default=1024, type=int,
- metavar='N',
- help='max number of tokens in the source sequence')
- parser.add_argument('--max-target-positions', default=1200, type=int,
- metavar='N',
- help='max number of tokens in the target sequence')
- parser.add_argument("--n-frames-per-step", type=int, default=1)
- parser.add_argument("--eos-prob-threshold", type=float, default=0.5)
- parser.add_argument("--eval-inference", action="store_true")
- parser.add_argument("--eval-tb-nsample", type=int, default=8)
- parser.add_argument("--vocoder", type=str, default="griffin_lim")
- parser.add_argument("--spec-bwd-max-iter", type=int, default=8)
-
- def __init__(self, args, src_dict):
- super().__init__(args, src_dict)
- self.src_dict = src_dict
- self.sr = self.data_cfg.config.get("features").get("sample_rate")
-
- self.tensorboard_writer = None
- self.tensorboard_dir = ""
- if args.tensorboard_logdir and SummaryWriter is not None:
- self.tensorboard_dir = os.path.join(args.tensorboard_logdir,
- "valid_extra")
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- is_train_split = split.startswith('train')
- pre_tokenizer = self.build_tokenizer(self.args)
- bpe_tokenizer = self.build_bpe(self.args)
- self.datasets[split] = TextToSpeechDatasetCreator.from_tsv(
- self.args.data, self.data_cfg, split, self.src_dict,
- pre_tokenizer, bpe_tokenizer, is_train_split=is_train_split,
- epoch=epoch, seed=self.args.seed,
- n_frames_per_step=self.args.n_frames_per_step,
- speaker_to_id=self.speaker_to_id
- )
-
- @property
- def target_dictionary(self):
- return None
-
- @property
- def source_dictionary(self):
- return self.src_dict
-
- def get_speaker_embeddings_path(self):
- speaker_emb_path = None
- if self.data_cfg.config.get("speaker_emb_filename") is not None:
- speaker_emb_path = op.join(
- self.args.data, self.data_cfg.config.get("speaker_emb_filename")
- )
- return speaker_emb_path
-
- @classmethod
- def get_speaker_embeddings(cls, args):
- embed_speaker = None
- if args.speaker_to_id is not None:
- if args.speaker_emb_path is None:
- embed_speaker = torch.nn.Embedding(
- len(args.speaker_to_id), args.speaker_embed_dim
- )
- else:
- speaker_emb_mat = np.load(args.speaker_emb_path)
- assert speaker_emb_mat.shape[1] == args.speaker_embed_dim
- embed_speaker = torch.nn.Embedding.from_pretrained(
- torch.from_numpy(speaker_emb_mat), freeze=True,
- )
- logger.info(
- f"load speaker embeddings from {args.speaker_emb_path}. "
- f"train embedding? {embed_speaker.weight.requires_grad}\n"
- f"embeddings:\n{speaker_emb_mat}"
- )
- return embed_speaker
-
- def build_model(self, cfg):
- cfg.pitch_min = self.data_cfg.config["features"].get("pitch_min", None)
- cfg.pitch_max = self.data_cfg.config["features"].get("pitch_max", None)
- cfg.energy_min = self.data_cfg.config["features"].get("energy_min", None)
- cfg.energy_max = self.data_cfg.config["features"].get("energy_max", None)
- cfg.speaker_emb_path = self.get_speaker_embeddings_path()
- model = super().build_model(cfg)
- self.generator = None
- if getattr(cfg, "eval_inference", False):
- self.generator = self.build_generator([model], cfg)
- return model
-
- def build_generator(self, models, cfg, vocoder=None, **unused):
- if vocoder is None:
- vocoder = self.build_default_vocoder()
- model = models[0]
- if getattr(model, "NON_AUTOREGRESSIVE", False):
- return NonAutoregressiveSpeechGenerator(
- model, vocoder, self.data_cfg
- )
- else:
- generator = AutoRegressiveSpeechGenerator
- if getattr(cfg, "teacher_forcing", False):
- generator = TeacherForcingAutoRegressiveSpeechGenerator
- logger.info("Teacher forcing mode for generation")
- return generator(
- model, vocoder, self.data_cfg,
- max_iter=self.args.max_target_positions,
- eos_prob_threshold=self.args.eos_prob_threshold
- )
-
- def build_default_vocoder(self):
- from fairseq.models.text_to_speech.vocoder import get_vocoder
- vocoder = get_vocoder(self.args, self.data_cfg)
- if torch.cuda.is_available() and not self.args.cpu:
- vocoder = vocoder.cuda()
- else:
- vocoder = vocoder.cpu()
- return vocoder
-
- def valid_step(self, sample, model, criterion):
- loss, sample_size, logging_output = super().valid_step(
- sample, model, criterion
- )
-
- if getattr(self.args, "eval_inference", False):
- hypos, inference_losses = self.valid_step_with_inference(
- sample, model, self.generator
- )
- for k, v in inference_losses.items():
- assert(k not in logging_output)
- logging_output[k] = v
-
- picked_id = 0
- if self.tensorboard_dir and (sample["id"] == picked_id).any():
- self.log_tensorboard(
- sample,
- hypos[:self.args.eval_tb_nsample],
- model._num_updates,
- is_na_model=getattr(model, "NON_AUTOREGRESSIVE", False)
- )
- return loss, sample_size, logging_output
-
- def valid_step_with_inference(self, sample, model, generator):
- hypos = generator.generate(model, sample, has_targ=True)
-
- losses = {
- "mcd_loss": 0.,
- "targ_frames": 0.,
- "pred_frames": 0.,
- "nins": 0.,
- "ndel": 0.,
- }
- rets = batch_mel_cepstral_distortion(
- [hypo["targ_waveform"] for hypo in hypos],
- [hypo["waveform"] for hypo in hypos],
- self.sr,
- normalize_type=None
- )
- for d, extra in rets:
- pathmap = extra[-1]
- losses["mcd_loss"] += d.item()
- losses["targ_frames"] += pathmap.size(0)
- losses["pred_frames"] += pathmap.size(1)
- losses["nins"] += (pathmap.sum(dim=1) - 1).sum().item()
- losses["ndel"] += (pathmap.sum(dim=0) - 1).sum().item()
-
- return hypos, losses
-
- def log_tensorboard(self, sample, hypos, num_updates, is_na_model=False):
- if self.tensorboard_writer is None:
- self.tensorboard_writer = SummaryWriter(self.tensorboard_dir)
- tb_writer = self.tensorboard_writer
- for b in range(len(hypos)):
- idx = sample["id"][b]
- text = sample["src_texts"][b]
- targ = hypos[b]["targ_feature"]
- pred = hypos[b]["feature"]
- attn = hypos[b]["attn"]
-
- if is_na_model:
- data = plot_tts_output(
- [targ.transpose(0, 1), pred.transpose(0, 1)],
- [f"target (idx={idx})", "output"], attn,
- "alignment", ret_np=True, suptitle=text,
- )
- else:
- eos_prob = hypos[b]["eos_prob"]
- data = plot_tts_output(
- [targ.transpose(0, 1), pred.transpose(0, 1), attn],
- [f"target (idx={idx})", "output", "alignment"], eos_prob,
- "eos prob", ret_np=True, suptitle=text,
- )
-
- tb_writer.add_image(
- f"inference_sample_{b}", data, num_updates,
- dataformats="HWC"
- )
-
- if hypos[b]["waveform"] is not None:
- targ_wave = hypos[b]["targ_waveform"].detach().cpu().float()
- pred_wave = hypos[b]["waveform"].detach().cpu().float()
- tb_writer.add_audio(
- f"inference_targ_{b}",
- targ_wave,
- num_updates,
- sample_rate=self.sr
- )
- tb_writer.add_audio(
- f"inference_pred_{b}",
- pred_wave,
- num_updates,
- sample_rate=self.sr
- )
-
-
-def save_figure_to_numpy(fig):
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- return data
-
-
-DEFAULT_V_MIN = np.log(1e-5)
-
-
-def plot_tts_output(
- data_2d, title_2d, data_1d, title_1d, figsize=(24, 4),
- v_min=DEFAULT_V_MIN, v_max=3, ret_np=False, suptitle=""
-):
- try:
- import matplotlib.pyplot as plt
- from mpl_toolkits.axes_grid1 import make_axes_locatable
- except ImportError:
- raise ImportError("Please install Matplotlib: pip install matplotlib")
-
- data_2d = [
- x.detach().cpu().float().numpy()
- if isinstance(x, torch.Tensor) else x for x in data_2d
- ]
- fig, axes = plt.subplots(1, len(data_2d) + 1, figsize=figsize)
- if suptitle:
- fig.suptitle(suptitle[:400]) # capped at 400 chars
- axes = [axes] if len(data_2d) == 0 else axes
- for ax, x, name in zip(axes, data_2d, title_2d):
- ax.set_title(name)
- divider = make_axes_locatable(ax)
- cax = divider.append_axes('right', size='5%', pad=0.05)
- im = ax.imshow(
- x, origin="lower", aspect="auto", vmin=max(x.min(), v_min),
- vmax=min(x.max(), v_max)
- )
- fig.colorbar(im, cax=cax, orientation='vertical')
-
- if isinstance(data_1d, torch.Tensor):
- data_1d = data_1d.detach().cpu().numpy()
- axes[-1].plot(data_1d)
- axes[-1].set_title(title_1d)
- plt.tight_layout()
-
- if ret_np:
- fig.canvas.draw()
- data = save_figure_to_numpy(fig)
- plt.close(fig)
- return data
-
-
-def antidiag_indices(offset, min_i=0, max_i=None, min_j=0, max_j=None):
- """
- for a (3, 4) matrix with min_i=1, max_i=3, min_j=1, max_j=4, outputs
-
- offset=2 (1, 1),
- offset=3 (2, 1), (1, 2)
- offset=4 (2, 2), (1, 3)
- offset=5 (2, 3)
-
- constraints:
- i + j = offset
- min_j <= j < max_j
- min_i <= offset - j < max_i
- """
- if max_i is None:
- max_i = offset + 1
- if max_j is None:
- max_j = offset + 1
- min_j = max(min_j, offset - max_i + 1, 0)
- max_j = min(max_j, offset - min_i + 1, offset + 1)
- j = torch.arange(min_j, max_j)
- i = offset - j
- return torch.stack([i, j])
-
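A quick check of the docstring above, assuming the function is importable: the anti-diagonal cells of a (3, 4) matrix restricted to rows [1, 3) and columns [1, 4).

```python
print(antidiag_indices(2, min_i=1, max_i=3, min_j=1, max_j=4))  # tensor([[1], [1]])        -> (1, 1)
print(antidiag_indices(3, min_i=1, max_i=3, min_j=1, max_j=4))  # tensor([[2, 1], [1, 2]])  -> (2, 1), (1, 2)
print(antidiag_indices(5, min_i=1, max_i=3, min_j=1, max_j=4))  # tensor([[2], [3]])        -> (2, 3)
```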
-
-def batch_dynamic_time_warping(distance, shapes=None):
- """full batched DTW without any constraints
-
- distance: (batchsize, max_M, max_N) matrix
- shapes: (batchsize,) vector specifying (M, N) for each entry
- """
- # ptr: 0=left, 1=up-left, 2=up
- ptr2dij = {0: (0, -1), 1: (-1, -1), 2: (-1, 0)}
-
- bsz, m, n = distance.size()
- cumdist = torch.zeros_like(distance)
- backptr = torch.zeros_like(distance).type(torch.int32) - 1
-
- # initialize
- cumdist[:, 0, :] = distance[:, 0, :].cumsum(dim=-1)
- cumdist[:, :, 0] = distance[:, :, 0].cumsum(dim=-1)
- backptr[:, 0, :] = 0
- backptr[:, :, 0] = 2
-
- # DP with optimized anti-diagonal parallelization, O(M+N) steps
- for offset in range(2, m + n - 1):
- ind = antidiag_indices(offset, 1, m, 1, n)
- c = torch.stack(
- [cumdist[:, ind[0], ind[1] - 1], cumdist[:, ind[0] - 1, ind[1] - 1],
- cumdist[:, ind[0] - 1, ind[1]], ],
- dim=2
- )
- v, b = c.min(axis=-1)
- backptr[:, ind[0], ind[1]] = b.int()
- cumdist[:, ind[0], ind[1]] = v + distance[:, ind[0], ind[1]]
-
- # backtrace
- pathmap = torch.zeros_like(backptr)
- for b in range(bsz):
- i = m - 1 if shapes is None else (shapes[b][0] - 1).item()
- j = n - 1 if shapes is None else (shapes[b][1] - 1).item()
- dtwpath = [(i, j)]
- while (i != 0 or j != 0) and len(dtwpath) < 10000:
- assert (i >= 0 and j >= 0)
- di, dj = ptr2dij[backptr[b, i, j].item()]
- i, j = i + di, j + dj
- dtwpath.append((i, j))
- dtwpath = dtwpath[::-1]
- indices = torch.from_numpy(np.array(dtwpath))
- pathmap[b, indices[:, 0], indices[:, 1]] = 1
-
- return cumdist, backptr, pathmap
-
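A tiny sanity check of `batch_dynamic_time_warping` on a one-element batch, assuming the function above is importable: the 2x2 distance matrix has a cheap diagonal, so the optimal path is (0, 0) -> (1, 1) with accumulated cost 1 + 1 = 2.

```python
import torch

distance = torch.tensor([[[1., 5.],
                          [5., 1.]]])               # (batch=1, M=2, N=2)
cumdist, backptr, pathmap = batch_dynamic_time_warping(distance)
print(cumdist[0, -1, -1])   # tensor(2.)
print(pathmap[0])           # tensor([[1, 0],
                            #         [0, 1]], dtype=torch.int32)
```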
-
-def compute_l2_dist(x1, x2):
- """compute an (m, n) L2 distance matrix from (m, d) and (n, d) matrices"""
- return torch.cdist(x1.unsqueeze(0), x2.unsqueeze(0), p=2).squeeze(0).pow(2)
-
-
-def compute_rms_dist(x1, x2):
- l2_dist = compute_l2_dist(x1, x2)
- return (l2_dist / x1.size(1)).pow(0.5)
-
-
-def get_divisor(pathmap, normalize_type):
- if normalize_type is None:
- return 1
- elif normalize_type == "len1":
- return pathmap.size(0)
- elif normalize_type == "len2":
- return pathmap.size(1)
- elif normalize_type == "path":
- return pathmap.sum().item()
- else:
- raise ValueError(f"normalize_type {normalize_type} not supported")
-
-
-def batch_compute_distortion(y1, y2, sr, feat_fn, dist_fn, normalize_type):
- d, s, x1, x2 = [], [], [], []
- for cur_y1, cur_y2 in zip(y1, y2):
- assert (cur_y1.ndim == 1 and cur_y2.ndim == 1)
- cur_x1 = feat_fn(cur_y1)
- cur_x2 = feat_fn(cur_y2)
- x1.append(cur_x1)
- x2.append(cur_x2)
-
- cur_d = dist_fn(cur_x1, cur_x2)
- d.append(cur_d)
- s.append(d[-1].size())
- max_m = max(ss[0] for ss in s)
- max_n = max(ss[1] for ss in s)
- d = torch.stack(
- [F.pad(dd, (0, max_n - dd.size(1), 0, max_m - dd.size(0))) for dd in d]
- )
- s = torch.LongTensor(s).to(d.device)
- cumdists, backptrs, pathmaps = batch_dynamic_time_warping(d, s)
-
- rets = []
- itr = zip(s, x1, x2, d, cumdists, backptrs, pathmaps)
- for (m, n), cur_x1, cur_x2, dist, cumdist, backptr, pathmap in itr:
- cumdist = cumdist[:m, :n]
- backptr = backptr[:m, :n]
- pathmap = pathmap[:m, :n]
- divisor = get_divisor(pathmap, normalize_type)
-
- distortion = cumdist[-1, -1] / divisor
- ret = distortion, (cur_x1, cur_x2, dist, cumdist, backptr, pathmap)
- rets.append(ret)
- return rets
-
-
-def batch_mel_cepstral_distortion(
- y1, y2, sr, normalize_type="path", mfcc_fn=None
-):
- """
- https://arxiv.org/pdf/2011.03568.pdf
-
- The root mean squared error computed on 13-dimensional MFCC using DTW for
- alignment. MFCC features are computed from an 80-channel log-mel
- spectrogram using a 50ms Hann window and hop of 12.5ms.
-
- y1: list of waveforms
- y2: list of waveforms
- sr: sampling rate
- """
-
- try:
- import torchaudio
- except ImportError:
- raise ImportError("Please install torchaudio: pip install torchaudio")
-
- if mfcc_fn is None or mfcc_fn.sample_rate != sr:
- melkwargs = {
- "n_fft": int(0.05 * sr), "win_length": int(0.05 * sr),
- "hop_length": int(0.0125 * sr), "f_min": 20,
- "n_mels": 80, "window_fn": torch.hann_window
- }
- mfcc_fn = torchaudio.transforms.MFCC(
- sr, n_mfcc=13, log_mels=True, melkwargs=melkwargs
- ).to(y1[0].device)
- return batch_compute_distortion(
- y1, y2, sr, lambda y: mfcc_fn(y).transpose(-1, -2), compute_rms_dist,
- normalize_type
- )
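Hypothetical usage of `batch_mel_cepstral_distortion` on random waveforms, assuming the functions above are importable and torchaudio is installed; the 16 kHz sample rate and one-second length are arbitrary choices for illustration.

```python
import torch

sr = 16000
y_ref = [torch.randn(sr), torch.randn(sr)]   # "target" waveforms, 1-D each
y_hyp = [torch.randn(sr), torch.randn(sr)]   # "predicted" waveforms

rets = batch_mel_cepstral_distortion(y_ref, y_hyp, sr, normalize_type="path")
for mcd, (x1, x2, dist, cumdist, backptr, pathmap) in rets:
    # mcd is the DTW-aligned RMS distance over 13-dim MFCC frames,
    # normalized by the length of the alignment path.
    print(mcd.item(), pathmap.shape)
```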
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/preprocess_RACE.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/preprocess_RACE.py
deleted file mode 100644
index cdd66072718ccb6033304c97926271909a17f9d6..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/preprocess_RACE.py
+++ /dev/null
@@ -1,102 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import json
-import os
-import re
-
-
-class InputExample:
- def __init__(self, paragraph, qa_list, label):
- self.paragraph = paragraph
- self.qa_list = qa_list
- self.label = label
-
-
-def get_examples(data_dir, set_type):
- """
- Extract paragraph and question-answer list from each json file
- """
- examples = []
-
- levels = ["middle", "high"]
- set_type_c = set_type.split("-")
- if len(set_type_c) == 2:
- levels = [set_type_c[1]]
- set_type = set_type_c[0]
- for level in levels:
- cur_dir = os.path.join(data_dir, set_type, level)
- for filename in os.listdir(cur_dir):
- cur_path = os.path.join(cur_dir, filename)
- with open(cur_path, "r") as f:
- cur_data = json.load(f)
- answers = cur_data["answers"]
- options = cur_data["options"]
- questions = cur_data["questions"]
- context = cur_data["article"].replace("\n", " ")
- context = re.sub(r"\s+", " ", context)
- for i in range(len(answers)):
- label = ord(answers[i]) - ord("A")
- qa_list = []
- question = questions[i]
- for j in range(4):
- option = options[i][j]
- if "_" in question:
- qa_cat = question.replace("_", option)
- else:
- qa_cat = " ".join([question, option])
- qa_cat = re.sub(r"\s+", " ", qa_cat)
- qa_list.append(qa_cat)
- examples.append(InputExample(context, qa_list, label))
-
- return examples
-
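The inner loop of `get_examples` builds four question+option strings per item: cloze questions (containing `_`) have the option substituted in place, otherwise the option is appended after the question. A small standalone illustration of that rule (not part of the original script):

```python
import re

def join_question_option(question, option):
    qa_cat = question.replace("_", option) if "_" in question else " ".join([question, option])
    return re.sub(r"\s+", " ", qa_cat)

print(join_question_option("The author thinks _ is important.", "reading"))
# The author thinks reading is important.
print(join_question_option("What does the author think?", "Reading matters."))
# What does the author think? Reading matters.
```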
-
-def main():
- """
- Helper script to extract paragraphs questions and answers from RACE datasets.
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--input-dir",
- help="input directory for downloaded RACE dataset",
- )
- parser.add_argument(
- "--output-dir",
- help="output directory for extracted data",
- )
- args = parser.parse_args()
-
- if not os.path.exists(args.output_dir):
- os.makedirs(args.output_dir, exist_ok=True)
-
- for set_type in ["train", "dev", "test-middle", "test-high"]:
- examples = get_examples(args.input_dir, set_type)
- qa_file_paths = [
- os.path.join(args.output_dir, set_type + ".input" + str(i + 1))
- for i in range(4)
- ]
- qa_files = [open(qa_file_path, "w") for qa_file_path in qa_file_paths]
- outf_context_path = os.path.join(args.output_dir, set_type + ".input0")
- outf_label_path = os.path.join(args.output_dir, set_type + ".label")
- outf_context = open(outf_context_path, "w")
- outf_label = open(outf_label_path, "w")
- for example in examples:
- outf_context.write(example.paragraph + "\n")
- for i in range(4):
- qa_files[i].write(example.qa_list[i] + "\n")
- outf_label.write(str(example.label) + "\n")
-
- for f in qa_files:
- f.close()
- outf_label.close()
- outf_context.close()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py
deleted file mode 100644
index 731320e74ebed4d8ceec58c07cb906542b8b021b..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 2 # 100ep -> 200ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 2 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py
deleted file mode 100644
index ac1118bcb1a8e7cc991a820ff17c4ae889d2d7e9..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py
+++ /dev/null
@@ -1,367 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/data/datasets/register_coco_panoptic_annos_semseg.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import json
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets import load_sem_seg
-from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES
-from detectron2.utils.file_io import PathManager
-import contextlib
-import logging
-import io
-from fvcore.common.timer import Timer
-import pycocotools.mask as mask_util
-from detectron2.structures import BoxMode
-
-
-logger = logging.getLogger(__name__)
-
-
-_PREDEFINED_SPLITS_COCO_PANOPTIC = {
- "coco_2017_train_panoptic": (
- # This is the original panoptic annotation directory
- "coco/panoptic_train2017",
- "coco/annotations/panoptic_train2017.json",
- # This directory contains semantic annotations that are
- # converted from panoptic annotations.
- # It is used by PanopticFPN.
- # You can use the script at detectron2/datasets/prepare_panoptic_fpn.py
- # to create these directories.
- "coco/panoptic_semseg_train2017",
- ),
- "coco_2017_val_panoptic": (
- "coco/panoptic_val2017",
- "coco/annotations/panoptic_val2017.json",
- "coco/panoptic_semseg_val2017",
- ),
-}
-
-def load_coco_instance_json(json_file, image_root, dataset_name=None):
- from pycocotools.coco import COCO
-
- timer = Timer()
- json_file = PathManager.get_local_path(json_file)
- with contextlib.redirect_stdout(io.StringIO()):
- coco_api = COCO(json_file)
- if timer.seconds() > 1:
- logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds()))
-
- id_map = None
- if dataset_name is not None:
- meta = MetadataCatalog.get(dataset_name)
- cat_ids = sorted(coco_api.getCatIds())
- cats = coco_api.loadCats(cat_ids)
- # The categories in a custom json file may not be sorted.
- thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])]
- meta.thing_classes = thing_classes
-
- # In COCO, certain category ids are artificially removed,
- # and by convention they are always ignored.
- # We deal with COCO's id issue and translate
- # the category ids to contiguous ids in [0, 80).
-
- # It works by looking at the "categories" field in the json, therefore
- # if users' own json also have incontiguous ids, we'll
- # apply this mapping as well but print a warning.
- if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)):
- if "coco" not in dataset_name:
- logger.warning(
- """
-Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
-"""
- )
- id_map = {v: i for i, v in enumerate(cat_ids)}
- meta.thing_dataset_id_to_contiguous_id = id_map
-
- # sort indices for reproducible results
- img_ids = sorted(coco_api.imgs.keys())
- # imgs is a list of dicts, each looks something like:
- # {'license': 4,
- # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg',
- # 'file_name': 'COCO_val2014_000000001268.jpg',
- # 'height': 427,
- # 'width': 640,
- # 'date_captured': '2013-11-17 05:57:24',
- # 'id': 1268}
- imgs = coco_api.loadImgs(img_ids)
- # anns is a list[list[dict]], where each dict is an annotation
- # record for an object. The inner list enumerates the objects in an image
- # and the outer list enumerates over images. Example of anns[0]:
- # [{'segmentation': [[192.81,
- # 247.09,
- # ...
- # 219.03,
- # 249.06]],
- # 'area': 1035.749,
- # 'iscrowd': 0,
- # 'image_id': 1268,
- # 'bbox': [192.81, 224.8, 74.73, 33.43],
- # 'category_id': 16,
- # 'id': 42986},
- # ...]
- anns = [coco_api.imgToAnns[img_id] for img_id in img_ids]
- total_num_valid_anns = sum([len(x) for x in anns])
- total_num_anns = len(coco_api.anns)
- if total_num_valid_anns < total_num_anns:
- logger.warning(
- f"{json_file} contains {total_num_anns} annotations, but only "
- f"{total_num_valid_anns} of them match to images in the file."
- )
-
- if "minival" not in json_file:
- # The popular valminusminival & minival annotations for COCO2014 contain this bug.
- # However the ratio of buggy annotations there is tiny and does not affect accuracy.
- # Therefore we explicitly white-list them.
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
- assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique!".format(
- json_file
- )
-
- imgs_anns = list(zip(imgs, anns))
- logger.info("Loaded {} images in COCO format from {}".format(len(imgs_anns), json_file))
-
- dataset_dicts = {}
-
- ann_keys = ["iscrowd", "bbox", "keypoints", "category_id"]
-
- num_instances_without_valid_segmentation = 0
-
- for (img_dict, anno_dict_list) in imgs_anns:
- record = {}
- record["file_name"] = os.path.join(image_root, img_dict["file_name"])
- record["height"] = img_dict["height"]
- record["width"] = img_dict["width"]
- image_id = record["image_id"] = img_dict["id"]
-
- objs = []
- for anno in anno_dict_list:
- # Check that the image_id in this annotation is the same as
- # the image_id we're looking at.
- # This fails only when the data parsing logic or the annotation file is buggy.
-
- # The original COCO valminusminival2014 & minival2014 annotation files
- # actually contains bugs that, together with certain ways of using COCO API,
- # can trigger this assertion.
- assert anno["image_id"] == image_id
-
- assert anno.get("ignore", 0) == 0, '"ignore" in COCO json file is not supported.'
-
- obj = {key: anno[key] for key in ann_keys if key in anno}
- if "bbox" in obj and len(obj["bbox"]) == 0:
- raise ValueError(
- f"One annotation of image {image_id} contains empty 'bbox' value! "
- "This json does not have valid COCO format."
- )
-
- segm = anno.get("segmentation", None)
- if segm: # either list[list[float]] or dict(RLE)
- if isinstance(segm, dict):
- if isinstance(segm["counts"], list):
- # convert to compressed RLE
- segm = mask_util.frPyObjects(segm, *segm["size"])
- else:
- # filter out invalid polygons (< 3 points)
- segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6]
- if len(segm) == 0:
- num_instances_without_valid_segmentation += 1
- continue # ignore this instance
- obj["segmentation"] = segm
-
- keypts = anno.get("keypoints", None)
- if keypts: # list[int]
- for idx, v in enumerate(keypts):
- if idx % 3 != 2:
- # COCO's segmentation coordinates are floating points in [0, H or W],
- # but keypoint coordinates are integers in [0, H-1 or W-1]
- # Therefore we assume the coordinates are "pixel indices" and
- # add 0.5 to convert to floating point coordinates.
- keypts[idx] = v + 0.5
- obj["keypoints"] = keypts
-
- obj["bbox_mode"] = BoxMode.XYWH_ABS
- if id_map:
- annotation_category_id = obj["category_id"]
- try:
- obj["category_id"] = id_map[annotation_category_id]
- except KeyError as e:
- raise KeyError(
- f"Encountered category_id={annotation_category_id} "
- "but this id does not exist in 'categories' of the json file."
- ) from e
- objs.append(obj)
- record["annotations"] = objs
- dataset_dicts[image_id] = record
-
- if num_instances_without_valid_segmentation > 0:
- logger.warning(
- "Filtered out {} instances without valid segmentation. ".format(
- num_instances_without_valid_segmentation
- )
- + "There might be issues in your dataset generation process. Please "
- "check https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html carefully"
- )
- return dataset_dicts
-
-def get_metadata():
- meta = {}
- # The following metadata maps contiguous id from [0, #thing categories +
- # #stuff categories) to their names and colors. We have to replica of the
- # same name and color under "thing_*" and "stuff_*" because the current
- # visualization function in D2 handles thing and class classes differently
- # due to some heuristic used in Panoptic FPN. We keep the same naming to
- # enable reusing existing visualization functions.
- thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1]
- thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1]
- stuff_classes = [k["name"] for k in COCO_CATEGORIES]
- stuff_colors = [k["color"] for k in COCO_CATEGORIES]
-
- meta["thing_classes"] = thing_classes
- meta["thing_colors"] = thing_colors
- meta["stuff_classes"] = stuff_classes
- meta["stuff_colors"] = stuff_colors
-
- # Convert category id for training:
- # category id: like semantic segmentation, it is the class id for each
- # pixel. Since there are some classes not used in evaluation, the category
- # id is not always contiguous and thus we have two set of category ids:
- # - original category id: category id in the original dataset, mainly
- # used for evaluation.
- # - contiguous category id: [0, #classes), in order to train the linear
- # softmax classifier.
- thing_dataset_id_to_contiguous_id = {}
- stuff_dataset_id_to_contiguous_id = {}
-
- for i, cat in enumerate(COCO_CATEGORIES):
- if cat["isthing"]:
- thing_dataset_id_to_contiguous_id[cat["id"]] = i
- # else:
- # stuff_dataset_id_to_contiguous_id[cat["id"]] = i
-
- # in order to use sem_seg evaluator
- stuff_dataset_id_to_contiguous_id[cat["id"]] = i
-
- meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id
- meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id
-
- return meta
-
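A quick look at the id maps produced by `get_metadata()`, assuming the definitions above are importable: every COCO panoptic category receives a contiguous "stuff" id, while only "thing" categories also appear in the thing map (the counts below assume the standard 133-class COCO panoptic split).

```python
meta = get_metadata()
print(len(meta["stuff_dataset_id_to_contiguous_id"]))  # 133: all panoptic categories
print(meta["thing_dataset_id_to_contiguous_id"][1])    # 0: "person" (dataset id 1) is first
print(1 in meta["stuff_dataset_id_to_contiguous_id"])  # True: things also get a stuff id here
```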
-
-def load_coco_panoptic_json(json_file, instances_json, instances_name, image_dir, gt_dir, semseg_dir, meta):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/coco/train2017".
- gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017".
- json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json".
- Returns:
- list[dict]: a list of dicts in Detectron2 standard format. (See
-        `Using Custom Datasets </tutorials/datasets.html>`_ )
- """
-
- def _convert_category_id(segment_info, meta):
- if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]:
- segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- segment_info["isthing"] = True
- else:
- segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][
- segment_info["category_id"]
- ]
- segment_info["isthing"] = False
- return segment_info
-
- with PathManager.open(json_file) as f:
- json_info = json.load(f)
-
- instance_data_dicts = load_coco_instance_json(instances_json, image_dir.replace("panoptic_", ""), instances_name)
-
- ret = []
- for ann in json_info["annotations"]:
- image_id = int(ann["image_id"])
- # TODO: currently we assume image and label has the same filename but
- # different extension, and images have extension ".jpg" for COCO. Need
- # to make image extension a user-provided argument if we extend this
- # function to support other COCO-like datasets.
- image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg")
- label_file = os.path.join(gt_dir, ann["file_name"])
- sem_label_file = os.path.join(semseg_dir, ann["file_name"])
- segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]]
- ret.append(
- {
- "file_name": image_file,
- "image_id": image_id,
- "pan_seg_file_name": label_file,
- "sem_seg_file_name": sem_label_file,
- "segments_info": segments_info,
- "annotations": instance_data_dicts[image_id]["annotations"],
- }
- )
- assert len(ret), f"No images found in {image_dir}!"
- assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"]
- assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"]
- assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"]
- return ret
-
-
-def register_coco_panoptic_annos_sem_seg(
- name, metadata, image_root, panoptic_root, panoptic_json, sem_seg_root, instances_json, instances_name,
-):
- panoptic_name = name
- delattr(MetadataCatalog.get(panoptic_name), "thing_classes")
- delattr(MetadataCatalog.get(panoptic_name), "thing_colors")
- MetadataCatalog.get(panoptic_name).set(
- thing_classes=metadata["thing_classes"],
- thing_colors=metadata["thing_colors"],
- # thing_dataset_id_to_contiguous_id=metadata["thing_dataset_id_to_contiguous_id"],
- )
-
- # the name is "coco_2017_train_panoptic_with_sem_seg" and "coco_2017_val_panoptic_with_sem_seg"
- semantic_name = name + "_with_sem_seg"
- DatasetCatalog.register(
- semantic_name,
- lambda: load_coco_panoptic_json(panoptic_json, instances_json, instances_name, image_root, panoptic_root, sem_seg_root, metadata),
- )
- MetadataCatalog.get(semantic_name).set(
- sem_seg_root=sem_seg_root,
- panoptic_root=panoptic_root,
- image_root=image_root,
- panoptic_json=panoptic_json,
- json_file=instances_json,
- evaluator_type="coco_panoptic_seg",
- ignore_label=255,
- label_divisor=1000,
- **metadata,
- )
-
-
-def register_all_coco_panoptic_annos_sem_seg(root):
- for (
- prefix,
- (panoptic_root, panoptic_json, semantic_root),
- ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items():
-
- prefix_instances = prefix[: -len("_panoptic")]
- instances_meta = MetadataCatalog.get(prefix_instances)
- image_root, instances_json = instances_meta.image_root, instances_meta.json_file
-
- if 'val' in instances_json:
- instances_json = instances_json.replace('instances_', 'panoptic2instances_')
-
- register_coco_panoptic_annos_sem_seg(
- prefix,
- get_metadata(),
- image_root,
- os.path.join(root, panoptic_root),
- os.path.join(root, panoptic_json),
- os.path.join(root, semantic_root),
- instances_json,
- prefix_instances,
- )
-
-
-_root = os.getenv("DETECTRON2_DATASETS", "datasets")
-register_all_coco_panoptic_annos_sem_seg(_root)
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-scheme-source.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-scheme-source.go
deleted file mode 100644
index a3e1c417e8829e4f486161fc206bd51f82add790..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-scheme-source.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/Nan-Do-LeetCodeWizard_13B_v1.0/README.md b/spaces/PeepDaSlan9/Nan-Do-LeetCodeWizard_13B_v1.0/README.md
deleted file mode 100644
index 956cd4a0407ec8757ff05b4bf713c85ed0959d4b..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/Nan-Do-LeetCodeWizard_13B_v1.0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Nan-Do-LeetCodeWizard 13B V1.0
-emoji: 🌍
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/dialog.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/dialog.tsx
deleted file mode 100644
index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/components/ui/dialog.tsx
+++ /dev/null
@@ -1,128 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as DialogPrimitive from '@radix-ui/react-dialog'
-
-import { cn } from '@/lib/utils'
-import { IconClose } from '@/components/ui/icons'
-
-const Dialog = DialogPrimitive.Root
-
-const DialogTrigger = DialogPrimitive.Trigger
-
-const DialogPortal = ({
- className,
- children,
- ...props
-}: DialogPrimitive.DialogPortalProps) => (
-
-
- {children}
-
-
-)
-DialogPortal.displayName = DialogPrimitive.Portal.displayName
-
-const DialogOverlay = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-DialogOverlay.displayName = DialogPrimitive.Overlay.displayName
-
-const DialogContent = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, children, ...props }, ref) => (
-
-
-
- {children}
-
-
- Close
-
-
-
-))
-DialogContent.displayName = DialogPrimitive.Content.displayName
-
-const DialogHeader = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-DialogHeader.displayName = 'DialogHeader'
-
-const DialogFooter = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-DialogFooter.displayName = 'DialogFooter'
-
-const DialogTitle = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-DialogTitle.displayName = DialogPrimitive.Title.displayName
-
-const DialogDescription = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-DialogDescription.displayName = DialogPrimitive.Description.displayName
-
-export {
- Dialog,
- DialogTrigger,
- DialogContent,
- DialogHeader,
- DialogFooter,
- DialogTitle,
- DialogDescription
-}
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/sync_bn.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/sync_bn.py
deleted file mode 100644
index f78f39181d75bb85c53e8c7c8eaf45690e9f0bee..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/sync_bn.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import torch
-
-import annotator.uniformer.mmcv as mmcv
-
-
-class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm):
- """A general BatchNorm layer without input dimension check.
-
- Reproduced from @kapily's work:
- (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547)
- The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc
- is `_check_input_dim` that is designed for tensor sanity checks.
- The check has been bypassed in this class for the convenience of converting
- SyncBatchNorm.
- """
-
- def _check_input_dim(self, input):
- return
-
-
-def revert_sync_batchnorm(module):
- """Helper function to convert all `SyncBatchNorm` (SyncBN) and
- `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to
- `BatchNormXd` layers.
-
- Adapted from @kapily's work:
- (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547)
-
- Args:
- module (nn.Module): The module containing `SyncBatchNorm` layers.
-
- Returns:
- module_output: The converted module with `BatchNormXd` layers.
- """
- module_output = module
- module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm]
- if hasattr(mmcv, 'ops'):
- module_checklist.append(mmcv.ops.SyncBatchNorm)
- if isinstance(module, tuple(module_checklist)):
- module_output = _BatchNormXd(module.num_features, module.eps,
- module.momentum, module.affine,
- module.track_running_stats)
- if module.affine:
- # no_grad() may not be needed here but
- # just to be consistent with `convert_sync_batchnorm()`
- with torch.no_grad():
- module_output.weight = module.weight
- module_output.bias = module.bias
- module_output.running_mean = module.running_mean
- module_output.running_var = module.running_var
- module_output.num_batches_tracked = module.num_batches_tracked
- module_output.training = module.training
- # qconfig exists in quantized models
- if hasattr(module, 'qconfig'):
- module_output.qconfig = module.qconfig
- for name, child in module.named_children():
- module_output.add_module(name, revert_sync_batchnorm(child))
- del module
- return module_output
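
The docstring of `revert_sync_batchnorm` above describes converting `SyncBatchNorm`/MMSyncBN layers back to plain batch norm so a distributed-trained checkpoint can run on a single device. A minimal sketch of that workflow, assuming the Space's repo root is on `PYTHONPATH` so the module path of the deleted file resolves; the torchvision model is only a stand-in:

```python
# Minimal sketch: convert SyncBatchNorm layers back to BatchNorm for single-GPU/CPU inference.
import torch
import torchvision

from annotator.uniformer.mmcv.cnn.utils.sync_bn import revert_sync_batchnorm  # path as in the deleted file

model = torchvision.models.resnet18()
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)  # simulate a SyncBN-trained model
model = revert_sync_batchnorm(model)  # SyncBN layers become _BatchNormXd and need no process group
model.eval()

x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    out = model(x)  # runs on CPU, no distributed init required
print(out.shape)
```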
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/__init__.py
deleted file mode 100644
index 835df136bdcf69348281d22914d41aa84cdf92b1..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/visualization/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .color import Color, color_val
-from .image import imshow, imshow_bboxes, imshow_det_bboxes
-from .optflow import flow2rgb, flowshow, make_color_wheel
-
-__all__ = [
- 'Color', 'color_val', 'imshow', 'imshow_bboxes', 'imshow_det_bboxes',
- 'flowshow', 'flow2rgb', 'make_color_wheel'
-]
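
The deleted `__init__` above only gathers mmcv's visualization helpers. A small sketch of two of them that do not open display windows, assuming the Space's repo root is on `PYTHONPATH` and `opencv-python` is installed for the underlying mmcv image code:

```python
# Hedged sketch of the re-exported helpers; imshow_* functions open OpenCV windows,
# so only the pure helpers are exercised here.
import numpy as np

from annotator.uniformer.mmcv.visualization import color_val, flow2rgb

print(color_val('green'))                             # BGR tuple, e.g. (0, 255, 0)
flow = np.random.rand(32, 32, 2).astype(np.float32)   # dummy optical flow field (dx, dy)
rgb = flow2rgb(flow)                                  # RGB visualization with values in [0, 1]
print(rgb.shape)                                      # (32, 32, 3)
```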
diff --git a/spaces/Potanin/12345/rmvpe.py b/spaces/Potanin/12345/rmvpe.py
deleted file mode 100644
index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000
--- a/spaces/Potanin/12345/rmvpe.py
+++ /dev/null
@@ -1,432 +0,0 @@
-import sys, torch, numpy as np, traceback, pdb
-import torch.nn as nn
-from time import time as ttime
-import torch.nn.functional as F
-
-
-class BiGRU(nn.Module):
- def __init__(self, input_features, hidden_features, num_layers):
- super(BiGRU, self).__init__()
- self.gru = nn.GRU(
- input_features,
- hidden_features,
- num_layers=num_layers,
- batch_first=True,
- bidirectional=True,
- )
-
- def forward(self, x):
- return self.gru(x)[0]
-
-
-class ConvBlockRes(nn.Module):
- def __init__(self, in_channels, out_channels, momentum=0.01):
- super(ConvBlockRes, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- nn.Conv2d(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- if in_channels != out_channels:
- self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1))
- self.is_shortcut = True
- else:
- self.is_shortcut = False
-
- def forward(self, x):
- if self.is_shortcut:
- return self.conv(x) + self.shortcut(x)
- else:
- return self.conv(x) + x
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- in_channels,
- in_size,
- n_encoders,
- kernel_size,
- n_blocks,
- out_channels=16,
- momentum=0.01,
- ):
- super(Encoder, self).__init__()
- self.n_encoders = n_encoders
- self.bn = nn.BatchNorm2d(in_channels, momentum=momentum)
- self.layers = nn.ModuleList()
- self.latent_channels = []
- for i in range(self.n_encoders):
- self.layers.append(
- ResEncoderBlock(
- in_channels, out_channels, kernel_size, n_blocks, momentum=momentum
- )
- )
- self.latent_channels.append([out_channels, in_size])
- in_channels = out_channels
- out_channels *= 2
- in_size //= 2
- self.out_size = in_size
- self.out_channel = out_channels
-
- def forward(self, x):
- concat_tensors = []
- x = self.bn(x)
- for i in range(self.n_encoders):
- _, x = self.layers[i](x)
- concat_tensors.append(_)
- return x, concat_tensors
-
-
-class ResEncoderBlock(nn.Module):
- def __init__(
- self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01
- ):
- super(ResEncoderBlock, self).__init__()
- self.n_blocks = n_blocks
- self.conv = nn.ModuleList()
- self.conv.append(ConvBlockRes(in_channels, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv.append(ConvBlockRes(out_channels, out_channels, momentum))
- self.kernel_size = kernel_size
- if self.kernel_size is not None:
- self.pool = nn.AvgPool2d(kernel_size=kernel_size)
-
- def forward(self, x):
- for i in range(self.n_blocks):
- x = self.conv[i](x)
- if self.kernel_size is not None:
- return x, self.pool(x)
- else:
- return x
-
-
-class Intermediate(nn.Module): #
- def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01):
- super(Intermediate, self).__init__()
- self.n_inters = n_inters
- self.layers = nn.ModuleList()
- self.layers.append(
- ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum)
- )
- for i in range(self.n_inters - 1):
- self.layers.append(
- ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum)
- )
-
- def forward(self, x):
- for i in range(self.n_inters):
- x = self.layers[i](x)
- return x
-
-
-class ResDecoderBlock(nn.Module):
- def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01):
- super(ResDecoderBlock, self).__init__()
- out_padding = (0, 1) if stride == (1, 2) else (1, 1)
- self.n_blocks = n_blocks
- self.conv1 = nn.Sequential(
- nn.ConvTranspose2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=stride,
- padding=(1, 1),
- output_padding=out_padding,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- self.conv2 = nn.ModuleList()
- self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum))
-
- def forward(self, x, concat_tensor):
- x = self.conv1(x)
- x = torch.cat((x, concat_tensor), dim=1)
- for i in range(self.n_blocks):
- x = self.conv2[i](x)
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01):
- super(Decoder, self).__init__()
- self.layers = nn.ModuleList()
- self.n_decoders = n_decoders
- for i in range(self.n_decoders):
- out_channels = in_channels // 2
- self.layers.append(
- ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum)
- )
- in_channels = out_channels
-
- def forward(self, x, concat_tensors):
- for i in range(self.n_decoders):
- x = self.layers[i](x, concat_tensors[-1 - i])
- return x
-
-
-class DeepUnet(nn.Module):
- def __init__(
- self,
- kernel_size,
- n_blocks,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(DeepUnet, self).__init__()
- self.encoder = Encoder(
- in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels
- )
- self.intermediate = Intermediate(
- self.encoder.out_channel // 2,
- self.encoder.out_channel,
- inter_layers,
- n_blocks,
- )
- self.decoder = Decoder(
- self.encoder.out_channel, en_de_layers, kernel_size, n_blocks
- )
-
- def forward(self, x):
- x, concat_tensors = self.encoder(x)
- x = self.intermediate(x)
- x = self.decoder(x, concat_tensors)
- return x
-
-
-class E2E(nn.Module):
- def __init__(
- self,
- n_blocks,
- n_gru,
- kernel_size,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(E2E, self).__init__()
- self.unet = DeepUnet(
- kernel_size,
- n_blocks,
- en_de_layers,
- inter_layers,
- in_channels,
- en_out_channels,
- )
- self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1))
- if n_gru:
- self.fc = nn.Sequential(
- BiGRU(3 * 128, 256, n_gru),
- nn.Linear(512, 360),
- nn.Dropout(0.25),
- nn.Sigmoid(),
- )
- else:
- self.fc = nn.Sequential(
- nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid()
- )
-
- def forward(self, mel):
- mel = mel.transpose(-1, -2).unsqueeze(1)
- x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2)
- x = self.fc(x)
- return x
-
-
-from librosa.filters import mel
-
-
-class MelSpectrogram(torch.nn.Module):
- def __init__(
- self,
- is_half,
- n_mel_channels,
- sampling_rate,
- win_length,
- hop_length,
- n_fft=None,
- mel_fmin=0,
- mel_fmax=None,
- clamp=1e-5,
- ):
- super().__init__()
- n_fft = win_length if n_fft is None else n_fft
- self.hann_window = {}
- mel_basis = mel(
- sr=sampling_rate,
- n_fft=n_fft,
- n_mels=n_mel_channels,
- fmin=mel_fmin,
- fmax=mel_fmax,
- htk=True,
- )
- mel_basis = torch.from_numpy(mel_basis).float()
- self.register_buffer("mel_basis", mel_basis)
- self.n_fft = win_length if n_fft is None else n_fft
- self.hop_length = hop_length
- self.win_length = win_length
- self.sampling_rate = sampling_rate
- self.n_mel_channels = n_mel_channels
- self.clamp = clamp
- self.is_half = is_half
-
- def forward(self, audio, keyshift=0, speed=1, center=True):
- factor = 2 ** (keyshift / 12)
- n_fft_new = int(np.round(self.n_fft * factor))
- win_length_new = int(np.round(self.win_length * factor))
- hop_length_new = int(np.round(self.hop_length * speed))
- keyshift_key = str(keyshift) + "_" + str(audio.device)
- if keyshift_key not in self.hann_window:
- self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to(
- audio.device
- )
- fft = torch.stft(
- audio,
- n_fft=n_fft_new,
- hop_length=hop_length_new,
- win_length=win_length_new,
- window=self.hann_window[keyshift_key],
- center=center,
- return_complex=True,
- )
- magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2))
- if keyshift != 0:
- size = self.n_fft // 2 + 1
- resize = magnitude.size(1)
- if resize < size:
- magnitude = F.pad(magnitude, (0, 0, 0, size - resize))
- magnitude = magnitude[:, :size, :] * self.win_length / win_length_new
- mel_output = torch.matmul(self.mel_basis, magnitude)
- if self.is_half == True:
- mel_output = mel_output.half()
- log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp))
- return log_mel_spec
-
-
-class RMVPE:
- def __init__(self, model_path, is_half, device=None):
- self.resample_kernel = {}
- model = E2E(4, 1, (2, 2))
- ckpt = torch.load(model_path, map_location="cpu")
- model.load_state_dict(ckpt)
- model.eval()
- if is_half == True:
- model = model.half()
- self.model = model
- self.resample_kernel = {}
- self.is_half = is_half
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- self.device = device
- self.mel_extractor = MelSpectrogram(
- is_half, 128, 16000, 1024, 160, None, 30, 8000
- ).to(device)
- self.model = self.model.to(device)
- cents_mapping = 20 * np.arange(360) + 1997.3794084376191
- self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368
-
- def mel2hidden(self, mel):
- with torch.no_grad():
- n_frames = mel.shape[-1]
- mel = F.pad(
- mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect"
- )
- hidden = self.model(mel)
- return hidden[:, :n_frames]
-
- def decode(self, hidden, thred=0.03):
- cents_pred = self.to_local_average_cents(hidden, thred=thred)
- f0 = 10 * (2 ** (cents_pred / 1200))
- f0[f0 == 10] = 0
- # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred])
- return f0
-
- def infer_from_audio(self, audio, thred=0.03):
- audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0)
- # torch.cuda.synchronize()
- # t0=ttime()
- mel = self.mel_extractor(audio, center=True)
- # torch.cuda.synchronize()
- # t1=ttime()
- hidden = self.mel2hidden(mel)
- # torch.cuda.synchronize()
- # t2=ttime()
- hidden = hidden.squeeze(0).cpu().numpy()
- if self.is_half == True:
- hidden = hidden.astype("float32")
- f0 = self.decode(hidden, thred=thred)
- # torch.cuda.synchronize()
- # t3=ttime()
- # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0))
- return f0
-
- def to_local_average_cents(self, salience, thred=0.05):
- # t0 = ttime()
-        center = np.argmax(salience, axis=1)  # (n_frames,) index of the peak bin
-        salience = np.pad(salience, ((0, 0), (4, 4)))  # (n_frames, 368)
- # t1 = ttime()
- center += 4
- todo_salience = []
- todo_cents_mapping = []
- starts = center - 4
- ends = center + 5
- for idx in range(salience.shape[0]):
- todo_salience.append(salience[:, starts[idx] : ends[idx]][idx])
- todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]])
- # t2 = ttime()
-        todo_salience = np.array(todo_salience)  # (n_frames, 9)
-        todo_cents_mapping = np.array(todo_cents_mapping)  # (n_frames, 9)
-        product_sum = np.sum(todo_salience * todo_cents_mapping, 1)
-        weight_sum = np.sum(todo_salience, 1)  # (n_frames,)
-        devided = product_sum / weight_sum  # (n_frames,) weighted average in cents
-        # t3 = ttime()
-        maxx = np.max(salience, axis=1)  # (n_frames,)
- devided[maxx <= thred] = 0
- # t4 = ttime()
- # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3))
- return devided
-
-
-# if __name__ == '__main__':
-# audio, sampling_rate = sf.read("卢本伟语录~1.wav")
-# if len(audio.shape) > 1:
-# audio = librosa.to_mono(audio.transpose(1, 0))
-# audio_bak = audio.copy()
-# if sampling_rate != 16000:
-# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt"
-# thred = 0.03 # 0.01
-# device = 'cuda' if torch.cuda.is_available() else 'cpu'
-# rmvpe = RMVPE(model_path,is_half=False, device=device)
-# t0=ttime()
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# t1=ttime()
-# print(f0.shape,t1-t0)
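
The commented-out `__main__` block above already outlines the intended call pattern for `RMVPE`; a cleaned-up sketch of the same flow is below. The checkpoint and wav paths are placeholders, and `soundfile`/`librosa` are assumed to be installed.

```python
# Hedged usage sketch based on the commented-out __main__ block above.
# "rmvpe.pt" and "input.wav" are placeholder paths, not files shipped with the repo.
import librosa
import soundfile as sf
import torch

from rmvpe import RMVPE

audio, sr = sf.read("input.wav")
if audio.ndim > 1:                      # stereo -> mono
    audio = librosa.to_mono(audio.T)
if sr != 16000:                         # RMVPE expects 16 kHz input
    audio = librosa.resample(audio, orig_sr=sr, target_sr=16000)

device = "cuda" if torch.cuda.is_available() else "cpu"
rmvpe = RMVPE("rmvpe.pt", is_half=False, device=device)
f0 = rmvpe.infer_from_audio(audio, thred=0.03)  # numpy array of F0 in Hz, 0 where unvoiced
print(f0.shape)
```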
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CONTRIBUTING.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CONTRIBUTING.md
deleted file mode 100644
index a3e9507643d4439f509a8fc8b87dc73417ef9822..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CONTRIBUTING.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Contributing to AudioCraft
-
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Pull Requests
-
-AudioCraft is the implementation of a research paper.
-Therefore, we do not plan on accepting many pull requests for new features.
-We certainly welcome them for bug fixes.
-
-1. Fork the repo and create your branch from `main`.
-2. If you've added code that should be tested, add tests.
-3. If you've changed APIs, update the documentation.
-4. Ensure the test suite passes.
-5. Make sure your code lints.
-6. If you haven't already, complete the Contributor License Agreement ("CLA").
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Meta's open source projects.
-
-Complete your CLA here: <https://code.facebook.com/cla>
-
-## Issues
-We use GitHub issues to track public bugs. Please ensure your description is
-clear and has sufficient instructions to be able to reproduce the issue.
-
-Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## License
-By contributing to encodec, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/RMXK/RVC_HFF/tools/infer_batch_rvc.py b/spaces/RMXK/RVC_HFF/tools/infer_batch_rvc.py
deleted file mode 100644
index 763d17f14877a2ce35f750202e91356c1f24270f..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/tools/infer_batch_rvc.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import argparse
-import os
-import sys
-
-print("Command-line arguments:", sys.argv)
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import sys
-
-import tqdm as tq
-from dotenv import load_dotenv
-from scipy.io import wavfile
-
-from configs.config import Config
-from infer.modules.vc.modules import VC
-
-
-def arg_parse() -> tuple:
- parser = argparse.ArgumentParser()
- parser.add_argument("--f0up_key", type=int, default=0)
- parser.add_argument("--input_path", type=str, help="input path")
- parser.add_argument("--index_path", type=str, help="index path")
- parser.add_argument("--f0method", type=str, default="harvest", help="harvest or pm")
- parser.add_argument("--opt_path", type=str, help="opt path")
- parser.add_argument("--model_name", type=str, help="store in assets/weight_root")
- parser.add_argument("--index_rate", type=float, default=0.66, help="index rate")
- parser.add_argument("--device", type=str, help="device")
- parser.add_argument("--is_half", type=bool, help="use half -> True")
- parser.add_argument("--filter_radius", type=int, default=3, help="filter radius")
- parser.add_argument("--resample_sr", type=int, default=0, help="resample sr")
- parser.add_argument("--rms_mix_rate", type=float, default=1, help="rms mix rate")
- parser.add_argument("--protect", type=float, default=0.33, help="protect")
-
- args = parser.parse_args()
- sys.argv = sys.argv[:1]
-
- return args
-
-
-def main():
- load_dotenv()
- args = arg_parse()
- config = Config()
- config.device = args.device if args.device else config.device
- config.is_half = args.is_half if args.is_half else config.is_half
- vc = VC(config)
- vc.get_vc(args.model_name)
- audios = os.listdir(args.input_path)
- for file in tq.tqdm(audios):
- if file.endswith(".wav"):
- file_path = os.path.join(args.input_path, file)
- _, wav_opt = vc.vc_single(
- 0,
- file_path,
- args.f0up_key,
- None,
- args.f0method,
- args.index_path,
- None,
- args.index_rate,
- args.filter_radius,
- args.resample_sr,
- args.rms_mix_rate,
- args.protect,
- )
- out_path = os.path.join(args.opt_path, file)
- wavfile.write(out_path, wav_opt[0], wav_opt[1])
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py
deleted file mode 100644
index 7686fe85a7cc94188da76bfb1c10ad2a10821256..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distro/__init__.py
+++ /dev/null
@@ -1,54 +0,0 @@
-from .distro import (
- NORMALIZED_DISTRO_ID,
- NORMALIZED_LSB_ID,
- NORMALIZED_OS_ID,
- LinuxDistribution,
- __version__,
- build_number,
- codename,
- distro_release_attr,
- distro_release_info,
- id,
- info,
- like,
- linux_distribution,
- lsb_release_attr,
- lsb_release_info,
- major_version,
- minor_version,
- name,
- os_release_attr,
- os_release_info,
- uname_attr,
- uname_info,
- version,
- version_parts,
-)
-
-__all__ = [
- "NORMALIZED_DISTRO_ID",
- "NORMALIZED_LSB_ID",
- "NORMALIZED_OS_ID",
- "LinuxDistribution",
- "build_number",
- "codename",
- "distro_release_attr",
- "distro_release_info",
- "id",
- "info",
- "like",
- "linux_distribution",
- "lsb_release_attr",
- "lsb_release_info",
- "major_version",
- "minor_version",
- "name",
- "os_release_attr",
- "os_release_info",
- "uname_attr",
- "uname_info",
- "version",
- "version_parts",
-]
-
-__version__ = __version__
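
The module above only re-exports the `distro` API from `.distro`. A small sketch of that API, shown against the standalone `distro` package rather than pip's private vendored copy:

```python
# Hedged sketch of the public API re-exported above, using the standalone `distro`
# package (pip install distro); pip._vendor.distro is not a public import path.
import distro

print(distro.id())                # e.g. "ubuntu", "fedora" (empty string off Linux)
print(distro.version())           # e.g. "22.04"
print(distro.name(pretty=True))   # human-readable name with version
print(distro.info())              # dict with id, version, codename, ...
```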
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/live_render.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/live_render.py
deleted file mode 100644
index b90fbf7f35097694f727e201b0b378942d70a443..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/live_render.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import sys
-from typing import Optional, Tuple
-
-if sys.version_info >= (3, 8):
- from typing import Literal
-else:
- from pip._vendor.typing_extensions import Literal # pragma: no cover
-
-
-from ._loop import loop_last
-from .console import Console, ConsoleOptions, RenderableType, RenderResult
-from .control import Control
-from .segment import ControlType, Segment
-from .style import StyleType
-from .text import Text
-
-VerticalOverflowMethod = Literal["crop", "ellipsis", "visible"]
-
-
-class LiveRender:
- """Creates a renderable that may be updated.
-
- Args:
- renderable (RenderableType): Any renderable object.
- style (StyleType, optional): An optional style to apply to the renderable. Defaults to "".
- """
-
- def __init__(
- self,
- renderable: RenderableType,
- style: StyleType = "",
- vertical_overflow: VerticalOverflowMethod = "ellipsis",
- ) -> None:
- self.renderable = renderable
- self.style = style
- self.vertical_overflow = vertical_overflow
- self._shape: Optional[Tuple[int, int]] = None
-
- def set_renderable(self, renderable: RenderableType) -> None:
- """Set a new renderable.
-
- Args:
- renderable (RenderableType): Any renderable object, including str.
- """
- self.renderable = renderable
-
- def position_cursor(self) -> Control:
- """Get control codes to move cursor to beginning of live render.
-
- Returns:
- Control: A control instance that may be printed.
- """
- if self._shape is not None:
- _, height = self._shape
- return Control(
- ControlType.CARRIAGE_RETURN,
- (ControlType.ERASE_IN_LINE, 2),
- *(
- (
- (ControlType.CURSOR_UP, 1),
- (ControlType.ERASE_IN_LINE, 2),
- )
- * (height - 1)
- )
- )
- return Control()
-
- def restore_cursor(self) -> Control:
- """Get control codes to clear the render and restore the cursor to its previous position.
-
- Returns:
- Control: A Control instance that may be printed.
- """
- if self._shape is not None:
- _, height = self._shape
- return Control(
- ControlType.CARRIAGE_RETURN,
- *((ControlType.CURSOR_UP, 1), (ControlType.ERASE_IN_LINE, 2)) * height
- )
- return Control()
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
-
- renderable = self.renderable
- style = console.get_style(self.style)
- lines = console.render_lines(renderable, options, style=style, pad=False)
- shape = Segment.get_shape(lines)
-
- _, height = shape
- if height > options.size.height:
- if self.vertical_overflow == "crop":
- lines = lines[: options.size.height]
- shape = Segment.get_shape(lines)
- elif self.vertical_overflow == "ellipsis":
- lines = lines[: (options.size.height - 1)]
- overflow_text = Text(
- "...",
- overflow="crop",
- justify="center",
- end="",
- style="live.ellipsis",
- )
- lines.append(list(console.render(overflow_text)))
- shape = Segment.get_shape(lines)
- self._shape = shape
-
- new_line = Segment.line()
- for last, line in loop_last(lines):
- yield from line
- if not last:
- yield new_line
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/version.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/version.py
deleted file mode 100644
index de9a09a4ed3b078b37e7490a6686f660ae935aca..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/version.py
+++ /dev/null
@@ -1,504 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-import collections
-import itertools
-import re
-import warnings
-from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union
-
-from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType
-
-__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"]
-
-InfiniteTypes = Union[InfinityType, NegativeInfinityType]
-PrePostDevType = Union[InfiniteTypes, Tuple[str, int]]
-SubLocalType = Union[InfiniteTypes, int, str]
-LocalType = Union[
- NegativeInfinityType,
- Tuple[
- Union[
- SubLocalType,
- Tuple[SubLocalType, str],
- Tuple[NegativeInfinityType, SubLocalType],
- ],
- ...,
- ],
-]
-CmpKey = Tuple[
- int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType
-]
-LegacyCmpKey = Tuple[int, Tuple[str, ...]]
-VersionComparisonMethod = Callable[
- [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool
-]
-
-_Version = collections.namedtuple(
- "_Version", ["epoch", "release", "dev", "pre", "post", "local"]
-)
-
-
-def parse(version: str) -> Union["LegacyVersion", "Version"]:
- """
- Parse the given version string and return either a :class:`Version` object
- or a :class:`LegacyVersion` object depending on if the given version is
- a valid PEP 440 version or a legacy version.
- """
- try:
- return Version(version)
- except InvalidVersion:
- return LegacyVersion(version)
-
-
-class InvalidVersion(ValueError):
- """
- An invalid version was found, users should refer to PEP 440.
- """
-
-
-class _BaseVersion:
- _key: Union[CmpKey, LegacyCmpKey]
-
- def __hash__(self) -> int:
- return hash(self._key)
-
- # Please keep the duplicated `isinstance` check
- # in the six comparisons hereunder
- # unless you find a way to avoid adding overhead function calls.
- def __lt__(self, other: "_BaseVersion") -> bool:
- if not isinstance(other, _BaseVersion):
- return NotImplemented
-
- return self._key < other._key
-
- def __le__(self, other: "_BaseVersion") -> bool:
- if not isinstance(other, _BaseVersion):
- return NotImplemented
-
- return self._key <= other._key
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, _BaseVersion):
- return NotImplemented
-
- return self._key == other._key
-
- def __ge__(self, other: "_BaseVersion") -> bool:
- if not isinstance(other, _BaseVersion):
- return NotImplemented
-
- return self._key >= other._key
-
- def __gt__(self, other: "_BaseVersion") -> bool:
- if not isinstance(other, _BaseVersion):
- return NotImplemented
-
- return self._key > other._key
-
- def __ne__(self, other: object) -> bool:
- if not isinstance(other, _BaseVersion):
- return NotImplemented
-
- return self._key != other._key
-
-
-class LegacyVersion(_BaseVersion):
- def __init__(self, version: str) -> None:
- self._version = str(version)
- self._key = _legacy_cmpkey(self._version)
-
- warnings.warn(
- "Creating a LegacyVersion has been deprecated and will be "
- "removed in the next major release",
- DeprecationWarning,
- )
-
- def __str__(self) -> str:
- return self._version
-
- def __repr__(self) -> str:
-        return f"<LegacyVersion('{self}')>"
-
- @property
- def public(self) -> str:
- return self._version
-
- @property
- def base_version(self) -> str:
- return self._version
-
- @property
- def epoch(self) -> int:
- return -1
-
- @property
- def release(self) -> None:
- return None
-
- @property
- def pre(self) -> None:
- return None
-
- @property
- def post(self) -> None:
- return None
-
- @property
- def dev(self) -> None:
- return None
-
- @property
- def local(self) -> None:
- return None
-
- @property
- def is_prerelease(self) -> bool:
- return False
-
- @property
- def is_postrelease(self) -> bool:
- return False
-
- @property
- def is_devrelease(self) -> bool:
- return False
-
-
-_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE)
-
-_legacy_version_replacement_map = {
- "pre": "c",
- "preview": "c",
- "-": "final-",
- "rc": "c",
- "dev": "@",
-}
-
-
-def _parse_version_parts(s: str) -> Iterator[str]:
- for part in _legacy_version_component_re.split(s):
- part = _legacy_version_replacement_map.get(part, part)
-
- if not part or part == ".":
- continue
-
- if part[:1] in "0123456789":
- # pad for numeric comparison
- yield part.zfill(8)
- else:
- yield "*" + part
-
- # ensure that alpha/beta/candidate are before final
- yield "*final"
-
-
-def _legacy_cmpkey(version: str) -> LegacyCmpKey:
-
-    # We hardcode an epoch of -1 here. A PEP 440 version can only have an epoch
-    # greater than or equal to 0. This effectively sorts the LegacyVersion,
-    # which uses the de facto standard originally implemented by setuptools,
-    # before all PEP 440 versions.
- epoch = -1
-
-    # This scheme is taken from pkg_resources.parse_version of setuptools, prior to
-    # its adoption of the packaging library.
- parts: List[str] = []
- for part in _parse_version_parts(version.lower()):
- if part.startswith("*"):
- # remove "-" before a prerelease tag
- if part < "*final":
- while parts and parts[-1] == "*final-":
- parts.pop()
-
- # remove trailing zeros from each series of numeric parts
- while parts and parts[-1] == "00000000":
- parts.pop()
-
- parts.append(part)
-
- return epoch, tuple(parts)
-
-
-# Deliberately not anchored to the start and end of the string, to make it
-# easier for 3rd party code to reuse
-VERSION_PATTERN = r"""
-    v?
-    (?:
-        (?:(?P<epoch>[0-9]+)!)?                           # epoch
-        (?P<release>[0-9]+(?:\.[0-9]+)*)                  # release segment
-        (?P<pre>                                          # pre-release
-            [-_\.]?
-            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
-            [-_\.]?
-            (?P<pre_n>[0-9]+)?
-        )?
-        (?P<post>                                         # post release
-            (?:-(?P<post_n1>[0-9]+))
-            |
-            (?:
-                [-_\.]?
-                (?P<post_l>post|rev|r)
-                [-_\.]?
-                (?P<post_n2>[0-9]+)?
-            )
-        )?
-        (?P<dev>                                          # dev release
-            [-_\.]?
-            (?P<dev_l>dev)
-            [-_\.]?
-            (?P<dev_n>[0-9]+)?
-        )?
-    )
-    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
-"""
-
-
-class Version(_BaseVersion):
-
- _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
- def __init__(self, version: str) -> None:
-
- # Validate the version and parse it into pieces
- match = self._regex.search(version)
- if not match:
- raise InvalidVersion(f"Invalid version: '{version}'")
-
- # Store the parsed out pieces of the version
- self._version = _Version(
- epoch=int(match.group("epoch")) if match.group("epoch") else 0,
- release=tuple(int(i) for i in match.group("release").split(".")),
- pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
- post=_parse_letter_version(
- match.group("post_l"), match.group("post_n1") or match.group("post_n2")
- ),
- dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
- local=_parse_local_version(match.group("local")),
- )
-
- # Generate a key which will be used for sorting
- self._key = _cmpkey(
- self._version.epoch,
- self._version.release,
- self._version.pre,
- self._version.post,
- self._version.dev,
- self._version.local,
- )
-
- def __repr__(self) -> str:
-        return f"<Version('{self}')>"
-
- def __str__(self) -> str:
- parts = []
-
- # Epoch
- if self.epoch != 0:
- parts.append(f"{self.epoch}!")
-
- # Release segment
- parts.append(".".join(str(x) for x in self.release))
-
- # Pre-release
- if self.pre is not None:
- parts.append("".join(str(x) for x in self.pre))
-
- # Post-release
- if self.post is not None:
- parts.append(f".post{self.post}")
-
- # Development release
- if self.dev is not None:
- parts.append(f".dev{self.dev}")
-
- # Local version segment
- if self.local is not None:
- parts.append(f"+{self.local}")
-
- return "".join(parts)
-
- @property
- def epoch(self) -> int:
- _epoch: int = self._version.epoch
- return _epoch
-
- @property
- def release(self) -> Tuple[int, ...]:
- _release: Tuple[int, ...] = self._version.release
- return _release
-
- @property
- def pre(self) -> Optional[Tuple[str, int]]:
- _pre: Optional[Tuple[str, int]] = self._version.pre
- return _pre
-
- @property
- def post(self) -> Optional[int]:
- return self._version.post[1] if self._version.post else None
-
- @property
- def dev(self) -> Optional[int]:
- return self._version.dev[1] if self._version.dev else None
-
- @property
- def local(self) -> Optional[str]:
- if self._version.local:
- return ".".join(str(x) for x in self._version.local)
- else:
- return None
-
- @property
- def public(self) -> str:
- return str(self).split("+", 1)[0]
-
- @property
- def base_version(self) -> str:
- parts = []
-
- # Epoch
- if self.epoch != 0:
- parts.append(f"{self.epoch}!")
-
- # Release segment
- parts.append(".".join(str(x) for x in self.release))
-
- return "".join(parts)
-
- @property
- def is_prerelease(self) -> bool:
- return self.dev is not None or self.pre is not None
-
- @property
- def is_postrelease(self) -> bool:
- return self.post is not None
-
- @property
- def is_devrelease(self) -> bool:
- return self.dev is not None
-
- @property
- def major(self) -> int:
- return self.release[0] if len(self.release) >= 1 else 0
-
- @property
- def minor(self) -> int:
- return self.release[1] if len(self.release) >= 2 else 0
-
- @property
- def micro(self) -> int:
- return self.release[2] if len(self.release) >= 3 else 0
-
-
-def _parse_letter_version(
- letter: str, number: Union[str, bytes, SupportsInt]
-) -> Optional[Tuple[str, int]]:
-
- if letter:
- # We consider there to be an implicit 0 in a pre-release if there is
- # not a numeral associated with it.
- if number is None:
- number = 0
-
- # We normalize any letters to their lower case form
- letter = letter.lower()
-
- # We consider some words to be alternate spellings of other words and
- # in those cases we want to normalize the spellings to our preferred
- # spelling.
- if letter == "alpha":
- letter = "a"
- elif letter == "beta":
- letter = "b"
- elif letter in ["c", "pre", "preview"]:
- letter = "rc"
- elif letter in ["rev", "r"]:
- letter = "post"
-
- return letter, int(number)
- if not letter and number:
- # We assume if we are given a number, but we are not given a letter
- # then this is using the implicit post release syntax (e.g. 1.0-1)
- letter = "post"
-
- return letter, int(number)
-
- return None
-
-
-_local_version_separators = re.compile(r"[\._-]")
-
-
-def _parse_local_version(local: str) -> Optional[LocalType]:
- """
- Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
- """
- if local is not None:
- return tuple(
- part.lower() if not part.isdigit() else int(part)
- for part in _local_version_separators.split(local)
- )
- return None
-
-
-def _cmpkey(
- epoch: int,
- release: Tuple[int, ...],
- pre: Optional[Tuple[str, int]],
- post: Optional[Tuple[str, int]],
- dev: Optional[Tuple[str, int]],
- local: Optional[Tuple[SubLocalType]],
-) -> CmpKey:
-
- # When we compare a release version, we want to compare it with all of the
- # trailing zeros removed. So we'll use a reverse the list, drop all the now
- # leading zeros until we come to something non zero, then take the rest
- # re-reverse it back into the correct order and make it a tuple and use
- # that for our sorting key.
- _release = tuple(
- reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
- )
-
- # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
- # We'll do this by abusing the pre segment, but we _only_ want to do this
- # if there is not a pre or a post segment. If we have one of those then
- # the normal sorting rules will handle this case correctly.
- if pre is None and post is None and dev is not None:
- _pre: PrePostDevType = NegativeInfinity
- # Versions without a pre-release (except as noted above) should sort after
- # those with one.
- elif pre is None:
- _pre = Infinity
- else:
- _pre = pre
-
- # Versions without a post segment should sort before those with one.
- if post is None:
- _post: PrePostDevType = NegativeInfinity
-
- else:
- _post = post
-
- # Versions without a development segment should sort after those with one.
- if dev is None:
- _dev: PrePostDevType = Infinity
-
- else:
- _dev = dev
-
- if local is None:
- # Versions without a local segment should sort before those with one.
- _local: LocalType = NegativeInfinity
- else:
- # Versions with a local segment need that segment parsed to implement
- # the sorting rules in PEP440.
- # - Alpha numeric segments sort before numeric segments
- # - Alpha numeric segments sort lexicographically
- # - Numeric segments sort numerically
- # - Shorter versions sort before longer versions when the prefixes
- # match exactly
- _local = tuple(
- (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
- )
-
- return epoch, _release, _pre, _post, _dev, _local
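
For context, the comparison and parsing behaviour implemented above can be exercised through the standalone `packaging` distribution; pip's vendored copy is not a public API. A short sketch, assuming a `packaging` release that still matches this module:

```python
# Usage sketch for the module above, via `pip install packaging`.
from packaging.version import InvalidVersion, Version, parse

assert Version("1.0.post1") > Version("1.0")                    # post releases sort after the final release
assert Version("1.0.dev0") < Version("1.0a1") < Version("1.0")  # dev < pre-release < final
assert Version("1.0+local.1").public == "1.0"                   # local segment stripped from the public version

print(parse("1.0rc1"))   # normalized string form: 1.0rc1

try:
    Version("not-a-version")
except InvalidVersion as exc:
    print("rejected:", exc)
```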
diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/benchmarks/num_inliers.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/benchmarks/num_inliers.py
deleted file mode 100644
index f2b36f6a2b97b9c7010ef2455352531ffe3e4405..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/benchmarks/num_inliers.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import torch
-import torch.nn as nn
-from DeDoDe.utils import *
-import DeDoDe
-
-
-class NumInliersBenchmark(nn.Module):
- def __init__(
- self,
- dataset,
- num_samples=1000,
- batch_size=8,
- num_keypoints=10_000,
- device="cuda",
- ) -> None:
- super().__init__()
- sampler = torch.utils.data.WeightedRandomSampler(
- torch.ones(len(dataset)), replacement=False, num_samples=num_samples
- )
- dataloader = torch.utils.data.DataLoader(
- dataset, batch_size=batch_size, num_workers=batch_size, sampler=sampler
- )
- self.dataloader = dataloader
- self.tracked_metrics = {}
- self.batch_size = batch_size
- self.N = len(dataloader)
- self.num_keypoints = num_keypoints
-
- def compute_batch_metrics(self, outputs, batch, device="cuda"):
- kpts_A, kpts_B = outputs["keypoints_A"], outputs["keypoints_B"]
- B, K, H, W = batch["im_A"].shape
- gt_warp_A_to_B, valid_mask_A_to_B = get_gt_warp(
- batch["im_A_depth"],
- batch["im_B_depth"],
- batch["T_1to2"],
- batch["K1"],
- batch["K2"],
- H=H,
- W=W,
- )
- kpts_A_to_B = F.grid_sample(
- gt_warp_A_to_B[..., 2:].float().permute(0, 3, 1, 2),
- kpts_A[..., None, :],
- align_corners=False,
- mode="bilinear",
- )[..., 0].mT
- legit_A_to_B = F.grid_sample(
- valid_mask_A_to_B.reshape(B, 1, H, W),
- kpts_A[..., None, :],
- align_corners=False,
- mode="bilinear",
- )[..., 0, :, 0]
- dists = (
- torch.cdist(kpts_A_to_B, kpts_B).min(dim=-1).values[legit_A_to_B > 0.0]
- ).float()
- if legit_A_to_B.sum() == 0:
- return
- percent_inliers_at_1 = (dists < 0.02).float().mean()
- percent_inliers_at_05 = (dists < 0.01).float().mean()
- percent_inliers_at_025 = (dists < 0.005).float().mean()
- percent_inliers_at_01 = (dists < 0.002).float().mean()
- percent_inliers_at_005 = (dists < 0.001).float().mean()
-
- inlier_bins = torch.linspace(0, 0.002, steps=100, device=device)[None]
- inlier_counts = (dists[..., None] < inlier_bins).float().mean(dim=0)
- self.tracked_metrics["inlier_counts"] = (
- self.tracked_metrics.get("inlier_counts", 0) + 1 / self.N * inlier_counts
- )
- self.tracked_metrics["percent_inliers_at_1"] = (
- self.tracked_metrics.get("percent_inliers_at_1", 0)
- + 1 / self.N * percent_inliers_at_1
- )
- self.tracked_metrics["percent_inliers_at_05"] = (
- self.tracked_metrics.get("percent_inliers_at_05", 0)
- + 1 / self.N * percent_inliers_at_05
- )
- self.tracked_metrics["percent_inliers_at_025"] = (
- self.tracked_metrics.get("percent_inliers_at_025", 0)
- + 1 / self.N * percent_inliers_at_025
- )
- self.tracked_metrics["percent_inliers_at_01"] = (
- self.tracked_metrics.get("percent_inliers_at_01", 0)
- + 1 / self.N * percent_inliers_at_01
- )
- self.tracked_metrics["percent_inliers_at_005"] = (
- self.tracked_metrics.get("percent_inliers_at_005", 0)
- + 1 / self.N * percent_inliers_at_005
- )
-
- def benchmark(self, detector):
- self.tracked_metrics = {}
- from tqdm import tqdm
-
- print("Evaluating percent inliers...")
- for idx, batch in tqdm(enumerate(self.dataloader), mininterval=10.0):
- batch = to_cuda(batch)
- outputs = detector.detect(batch, num_keypoints=self.num_keypoints)
- keypoints_A, keypoints_B = (
- outputs["keypoints"][: self.batch_size],
- outputs["keypoints"][self.batch_size :],
- )
- if isinstance(outputs["keypoints"], (tuple, list)):
- keypoints_A, keypoints_B = torch.stack(keypoints_A), torch.stack(
- keypoints_B
- )
- outputs = {"keypoints_A": keypoints_A, "keypoints_B": keypoints_B}
- self.compute_batch_metrics(outputs, batch)
- import matplotlib.pyplot as plt
-
- plt.plot(
- torch.linspace(0, 0.002, steps=100),
- self.tracked_metrics["inlier_counts"].cpu(),
- )
- import numpy as np
-
- x = np.linspace(0, 0.002, 100)
- sigma = 0.52 * 2 / 512
- F = 1 - np.exp(-(x**2) / (2 * sigma**2))
- plt.plot(x, F)
- plt.savefig("vis/inlier_counts")
- [
- print(name, metric.item() * self.N / (idx + 1))
- for name, metric in self.tracked_metrics.items()
- if "percent" in name
- ]
diff --git a/spaces/Realcat/image-matching-webui/third_party/d2net/megadepth_utils/undistort_reconstructions.py b/spaces/Realcat/image-matching-webui/third_party/d2net/megadepth_utils/undistort_reconstructions.py
deleted file mode 100644
index 822c9abd3fc75fd8fc1e8d9ada75aa76802c6798..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/d2net/megadepth_utils/undistort_reconstructions.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import argparse
-
-import imagesize
-
-import os
-
-import subprocess
-
-parser = argparse.ArgumentParser(description="MegaDepth Undistortion")
-
-parser.add_argument(
- "--colmap_path", type=str, required=True, help="path to colmap executable"
-)
-parser.add_argument("--base_path", type=str, required=True, help="path to MegaDepth")
-
-args = parser.parse_args()
-
-sfm_path = os.path.join(args.base_path, "MegaDepth_v1_SfM")
-base_depth_path = os.path.join(args.base_path, "phoenix/S6/zl548/MegaDepth_v1")
-output_path = os.path.join(args.base_path, "Undistorted_SfM")
-
-os.mkdir(output_path)
-
-for scene_name in os.listdir(base_depth_path):
- current_output_path = os.path.join(output_path, scene_name)
- os.mkdir(current_output_path)
-
- image_path = os.path.join(base_depth_path, scene_name, "dense0", "imgs")
- if not os.path.exists(image_path):
- continue
-
- # Find the maximum image size in scene.
- max_image_size = 0
- for image_name in os.listdir(image_path):
- max_image_size = max(
- max_image_size, max(imagesize.get(os.path.join(image_path, image_name)))
- )
-
- # Undistort the images and update the reconstruction.
- subprocess.call(
- [
- os.path.join(args.colmap_path, "colmap"),
- "image_undistorter",
- "--image_path",
- os.path.join(sfm_path, scene_name, "images"),
- "--input_path",
- os.path.join(sfm_path, scene_name, "sparse", "manhattan", "0"),
- "--output_path",
- current_output_path,
- "--max_image_size",
- str(max_image_size),
- ]
- )
-
- # Transform the reconstruction to raw text format.
- sparse_txt_path = os.path.join(current_output_path, "sparse-txt")
- os.mkdir(sparse_txt_path)
- subprocess.call(
- [
- os.path.join(args.colmap_path, "colmap"),
- "model_converter",
- "--input_path",
- os.path.join(current_output_path, "sparse"),
- "--output_path",
- sparse_txt_path,
- "--output_type",
- "TXT",
- ]
- )
diff --git a/spaces/Redgon/bingo/next.config.js b/spaces/Redgon/bingo/next.config.js
deleted file mode 100644
index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000
--- a/spaces/Redgon/bingo/next.config.js
+++ /dev/null
@@ -1,38 +0,0 @@
-/** @type {import('next').NextConfig} */
-const nextConfig = {
- // output: 'export',
- // assetPrefix: '.',
- webpack: (config, { isServer }) => {
- if (!isServer) {
- config.resolve = {
- ...config.resolve,
- fallback: {
- 'bufferutil': false,
- 'utf-8-validate': false,
- http: false,
- https: false,
- stream: false,
- // fixes proxy-agent dependencies
- net: false,
- dns: false,
- tls: false,
- assert: false,
- // fixes next-i18next dependencies
- path: false,
- fs: false,
- // fixes mapbox dependencies
- events: false,
- // fixes sentry dependencies
- process: false
- }
- };
- }
- config.module.exprContextCritical = false;
-
- return config;
- },
-}
-
-module.exports = (...args) => {
- return nextConfig
-}
diff --git a/spaces/Reeve/Ohayou_Face/models/encoders/model_irse.py b/spaces/Reeve/Ohayou_Face/models/encoders/model_irse.py
deleted file mode 100644
index bc41ace0ba04cf4285c283a28e6c36113a18e6d6..0000000000000000000000000000000000000000
--- a/spaces/Reeve/Ohayou_Face/models/encoders/model_irse.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module
-from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm
-
-"""
-Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Backbone(Module):
- def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True):
- super(Backbone, self).__init__()
- assert input_size in [112, 224], "input_size should be 112 or 224"
- assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152"
- assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se"
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- if input_size == 112:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 7 * 7, 512),
- BatchNorm1d(512, affine=affine))
- else:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 14 * 14, 512),
- BatchNorm1d(512, affine=affine))
-
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer(x)
- return l2_norm(x)
-
-
-def IR_50(input_size):
- """Constructs a ir-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_101(input_size):
- """Constructs a ir-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_152(input_size):
- """Constructs a ir-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_50(input_size):
- """Constructs a ir_se-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_101(input_size):
- """Constructs a ir_se-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_152(input_size):
- """Constructs a ir_se-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
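
The factory functions above all return a `Backbone` that maps aligned face crops to L2-normalized 512-d embeddings. A quick shape-check sketch, assuming the repo's `models/encoders/helpers.py` is importable since it provides the bottleneck blocks:

```python
# Hedged sketch: build one of the IR-SE backbones defined above and run a dummy batch.
import torch

from models.encoders.model_irse import IR_SE_50

model = IR_SE_50(input_size=112).eval()
x = torch.randn(4, 3, 112, 112)     # aligned face crops; 112 or 224 as asserted in Backbone
with torch.no_grad():
    emb = model(x)                  # L2-normalized identity embeddings
print(emb.shape)                    # torch.Size([4, 512])
```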
diff --git a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_123812KB.py
deleted file mode 100644
index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_123812KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
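
A rough shape-check for the cascaded network above: `forward` expects a magnitude-spectrogram batch shaped `(batch, 2, n_fft // 2 + 1, frames)`, and `predict` crops `offset` (128) frames from each side, so the dummy input needs more than 256 frames. This assumes the surrounding `lib_v5` package (`layers_123821KB`, `spec_utils`) is importable from the repo root; the `n_fft` value is for illustration only.

```python
# Hedged sketch: instantiate the vocal-separation network above and run a dummy spectrogram batch.
import torch

from lib.uvr5_pack.lib_v5.nets_123812KB import CascadedASPPNet

n_fft = 2048                                 # illustrative; must match the spectrogram parameters
model = CascadedASPPNet(n_fft).eval()
x = torch.rand(1, 2, n_fft // 2 + 1, 272)    # (batch, channels, freq bins, time frames), frames divisible by 16
with torch.no_grad():
    out = model.predict(x)                   # soft mask applied to the mix, time-cropped by self.offset
print(out.shape)                             # torch.Size([1, 2, 1025, 16])
```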
diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/hubert_model.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/hubert_model.py
deleted file mode 100644
index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000
--- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/hubert_model.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import copy
-from typing import Optional, Tuple
-import random
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
-
-class Hubert(nn.Module):
- def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
- super().__init__()
- self._mask = mask
- self.feature_extractor = FeatureExtractor()
- self.feature_projection = FeatureProjection()
- self.positional_embedding = PositionalConvEmbedding()
- self.norm = nn.LayerNorm(768)
- self.dropout = nn.Dropout(0.1)
- self.encoder = TransformerEncoder(
- nn.TransformerEncoderLayer(
- 768, 12, 3072, activation="gelu", batch_first=True
- ),
- 12,
- )
- self.proj = nn.Linear(768, 256)
-
- self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
- self.label_embedding = nn.Embedding(num_label_embeddings, 256)
-
- def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- mask = None
- if self.training and self._mask:
- mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
- x[mask] = self.masked_spec_embed.to(x.dtype)
- return x, mask
-
- def encode(
- self, x: torch.Tensor, layer: Optional[int] = None
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- x = self.feature_extractor(x)
- x = self.feature_projection(x.transpose(1, 2))
- x, mask = self.mask(x)
- x = x + self.positional_embedding(x)
- x = self.dropout(self.norm(x))
- x = self.encoder(x, output_layer=layer)
- return x, mask
-
- def logits(self, x: torch.Tensor) -> torch.Tensor:
- logits = torch.cosine_similarity(
- x.unsqueeze(2),
- self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
- dim=-1,
- )
- return logits / 0.1
-
- def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- x, mask = self.encode(x)
- x = self.proj(x)
- logits = self.logits(x)
- return logits, mask
-
-
-class HubertSoft(Hubert):
- def __init__(self):
- super().__init__()
-
- @torch.inference_mode()
- def units(self, wav: torch.Tensor) -> torch.Tensor:
- wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
- x, _ = self.encode(wav)
- return self.proj(x)
-
-
-class FeatureExtractor(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
- self.norm0 = nn.GroupNorm(512, 512)
- self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
- self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = F.gelu(self.norm0(self.conv0(x)))
- x = F.gelu(self.conv1(x))
- x = F.gelu(self.conv2(x))
- x = F.gelu(self.conv3(x))
- x = F.gelu(self.conv4(x))
- x = F.gelu(self.conv5(x))
- x = F.gelu(self.conv6(x))
- return x
-
-
-class FeatureProjection(nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = nn.LayerNorm(512)
- self.projection = nn.Linear(512, 768)
- self.dropout = nn.Dropout(0.1)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.norm(x)
- x = self.projection(x)
- x = self.dropout(x)
- return x
-
-
-class PositionalConvEmbedding(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv = nn.Conv1d(
- 768,
- 768,
- kernel_size=128,
- padding=128 // 2,
- groups=16,
- )
- self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.conv(x.transpose(1, 2))
- x = F.gelu(x[:, :, :-1])
- return x.transpose(1, 2)
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
- ) -> None:
- super(TransformerEncoder, self).__init__()
- self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(num_layers)]
- )
- self.num_layers = num_layers
-
- def forward(
- self,
- src: torch.Tensor,
- mask: torch.Tensor = None,
- src_key_padding_mask: torch.Tensor = None,
- output_layer: Optional[int] = None,
- ) -> torch.Tensor:
- output = src
- for layer in self.layers[:output_layer]:
- output = layer(
- output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
- )
- return output
-
-
-def _compute_mask(
- shape: Tuple[int, int],
- mask_prob: float,
- mask_length: int,
- device: torch.device,
- min_masks: int = 0,
-) -> torch.Tensor:
- batch_size, sequence_length = shape
-
- if mask_length < 1:
- raise ValueError("`mask_length` has to be bigger than 0.")
-
- if mask_length > sequence_length:
- raise ValueError(
- f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`"
- )
-
- # compute number of masked spans in batch
- num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
- num_masked_spans = max(num_masked_spans, min_masks)
-
- # make sure num masked indices <= sequence_length
- if num_masked_spans * mask_length > sequence_length:
- num_masked_spans = sequence_length // mask_length
-
- # SpecAugment mask to fill
- mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
- # uniform distribution to sample from, make sure that offset samples are < sequence_length
- uniform_dist = torch.ones(
- (batch_size, sequence_length - (mask_length - 1)), device=device
- )
-
- # get random indices to mask
- mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
- # expand masked indices to masked spans
- mask_indices = (
- mask_indices.unsqueeze(dim=-1)
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- offsets = (
- torch.arange(mask_length, device=device)[None, None, :]
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- mask_idxs = mask_indices + offsets
-
- # scatter indices to mask
- mask = mask.scatter(1, mask_idxs, True)
-
- return mask
-
-
-def hubert_soft(
- path: str
-) -> HubertSoft:
- r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
- Args:
- path (str): path of a pretrained model
- """
- hubert = HubertSoft()
- checkpoint = torch.load(path)
- consume_prefix_in_state_dict_if_present(checkpoint, "module.")
- hubert.load_state_dict(checkpoint)
- hubert.eval()
- return hubert
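-
-# Loading sketch (the checkpoint path is illustrative):
-#   hubert = hubert_soft("path/to/hubert-soft.pt")
-#   # the module is returned in eval mode and ready for feature extraction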
diff --git a/spaces/Salesforce/BLIP/train_caption.py b/spaces/Salesforce/BLIP/train_caption.py
deleted file mode 100644
index 7c639ac646b9a1b8074b6e9c2343b961de76db05..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/BLIP/train_caption.py
+++ /dev/null
@@ -1,206 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
-'''
-import argparse
-import os
-import ruamel_yaml as yaml
-import numpy as np
-import random
-import time
-import datetime
-import json
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-from torch.utils.data import DataLoader
-
-from models.blip import blip_decoder
-import utils
-from utils import cosine_lr_schedule
-from data import create_dataset, create_sampler, create_loader
-from data.utils import save_result, coco_caption_eval
-
-def train(model, data_loader, optimizer, epoch, device):
- # train
- model.train()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
- metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}'))
- header = 'Train Caption Epoch: [{}]'.format(epoch)
- print_freq = 50
-
- for i, (image, caption, _) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
- image = image.to(device)
-
- loss = model(image, caption)
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- metric_logger.update(loss=loss.item())
- metric_logger.update(lr=optimizer.param_groups[0]["lr"])
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("Averaged stats:", metric_logger.global_avg())
- return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}
-
-
-@torch.no_grad()
-def evaluate(model, data_loader, device, config):
- # evaluate
- model.eval()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- header = 'Caption generation:'
- print_freq = 10
-
- result = []
- for image, image_id in metric_logger.log_every(data_loader, print_freq, header):
-
- image = image.to(device)
-
- captions = model.generate(image, sample=False, num_beams=config['num_beams'], max_length=config['max_length'],
- min_length=config['min_length'])
-
- for caption, img_id in zip(captions, image_id):
- result.append({"image_id": img_id.item(), "caption": caption})
-
- return result
-
-
-def main(args, config):
- utils.init_distributed_mode(args)
-
- device = torch.device(args.device)
-
- # fix the seed for reproducibility
- seed = args.seed + utils.get_rank()
- torch.manual_seed(seed)
- np.random.seed(seed)
- random.seed(seed)
- cudnn.benchmark = True
-
- #### Dataset ####
- print("Creating captioning dataset")
- train_dataset, val_dataset, test_dataset = create_dataset('caption_coco', config)
-
- if args.distributed:
- num_tasks = utils.get_world_size()
- global_rank = utils.get_rank()
- samplers = create_sampler([train_dataset,val_dataset,test_dataset], [True,False,False], num_tasks, global_rank)
- else:
- samplers = [None, None, None]
-
- train_loader, val_loader, test_loader = create_loader([train_dataset, val_dataset, test_dataset],samplers,
- batch_size=[config['batch_size']]*3,num_workers=[4,4,4],
- is_trains=[True, False, False], collate_fns=[None,None,None])
-
- #### Model ####
- print("Creating model")
- model = blip_decoder(pretrained=config['pretrained'], image_size=config['image_size'], vit=config['vit'],
- vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'],
- prompt=config['prompt'])
-
- model = model.to(device)
-
- model_without_ddp = model
- if args.distributed:
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- model_without_ddp = model.module
-
- optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay'])
-
- best = 0
- best_epoch = 0
-
- print("Start training")
- start_time = time.time()
- for epoch in range(0, config['max_epoch']):
- if not args.evaluate:
- if args.distributed:
- train_loader.sampler.set_epoch(epoch)
-
- cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr'])
-
- train_stats = train(model, train_loader, optimizer, epoch, device)
-
- val_result = evaluate(model_without_ddp, val_loader, device, config)
- val_result_file = save_result(val_result, args.result_dir, 'val_epoch%d'%epoch, remove_duplicate='image_id')
-
- test_result = evaluate(model_without_ddp, test_loader, device, config)
- test_result_file = save_result(test_result, args.result_dir, 'test_epoch%d'%epoch, remove_duplicate='image_id')
-
- if utils.is_main_process():
- coco_val = coco_caption_eval(config['coco_gt_root'],val_result_file,'val')
- coco_test = coco_caption_eval(config['coco_gt_root'],test_result_file,'test')
-
- if args.evaluate:
- log_stats = {**{f'val_{k}': v for k, v in coco_val.eval.items()},
- **{f'test_{k}': v for k, v in coco_test.eval.items()},
- }
- with open(os.path.join(args.output_dir, "evaluate.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
- else:
- save_obj = {
- 'model': model_without_ddp.state_dict(),
- 'optimizer': optimizer.state_dict(),
- 'config': config,
- 'epoch': epoch,
- }
-
- if coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4'] > best:
- best = coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4']
- best_epoch = epoch
- torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth'))
-
- log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
- **{f'val_{k}': v for k, v in coco_val.eval.items()},
- **{f'test_{k}': v for k, v in coco_test.eval.items()},
- 'epoch': epoch,
- 'best_epoch': best_epoch,
- }
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
-
- if args.evaluate:
- break
- dist.barrier()
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--config', default='./configs/caption_coco.yaml')
- parser.add_argument('--output_dir', default='output/Caption_coco')
- parser.add_argument('--evaluate', action='store_true')
- parser.add_argument('--device', default='cuda')
- parser.add_argument('--seed', default=42, type=int)
- parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes')
- parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')
- parser.add_argument('--distributed', default=True, type=bool)
- args = parser.parse_args()
-
- config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
-
- args.result_dir = os.path.join(args.output_dir, 'result')
-
- Path(args.output_dir).mkdir(parents=True, exist_ok=True)
- Path(args.result_dir).mkdir(parents=True, exist_ok=True)
-
- yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w'))
-
- main(args, config)
\ No newline at end of file
diff --git a/spaces/SamerKharboush/chatGPT-Sam-Turbo/custom.css b/spaces/SamerKharboush/chatGPT-Sam-Turbo/custom.css
deleted file mode 100644
index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000
--- a/spaces/SamerKharboush/chatGPT-Sam-Turbo/custom.css
+++ /dev/null
@@ -1,162 +0,0 @@
-:root {
- --chatbot-color-light: #F3F3F3;
- --chatbot-color-dark: #121111;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2.5em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-
-#chuanhu_chatbot, #status_display {
- transition: all 0.6s;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* 亮色 */
-#chuanhu_chatbot {
- background-color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
- background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
- background-color: #95EC69 !important;
-}
-/* 对话气泡 */
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-/* 表格 */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* 行内代码 */
-code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* 代码块 */
-pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: hsla(0, 0%, 0%, 80%)!important;
- border-radius: 10px;
- padding: 1.4em 1.2em 0em 1.4em;
- margin: 1.2em 2em 1.2em 0.5em;
- color: #FFF;
- box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* 代码高亮样式 */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/Sense-X/uniformer_image_demo/README.md b/spaces/Sense-X/uniformer_image_demo/README.md
deleted file mode 100644
index d9c60ca136c04ffffac2d4d9b23a29c472bd7be9..0000000000000000000000000000000000000000
--- a/spaces/Sense-X/uniformer_image_demo/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Uniformer_image_demo
-emoji: 📉
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
-license: mit
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version`: _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/StatsByZach/app/home.py b/spaces/StatsByZach/app/home.py
deleted file mode 100644
index a60c5251f2882fe9416cbe02682ccce644b37f3c..0000000000000000000000000000000000000000
--- a/spaces/StatsByZach/app/home.py
+++ /dev/null
@@ -1,84 +0,0 @@
-##### home.py #####
-# Home page
-# Zach Andrews
-
-# Import modules
-from shiny import *
-import shinyswatch
-import plotly.express as px
-from shinywidgets import output_widget, render_widget
-import pandas as pd
-from configure import base_url
-
-# Create app
-home = App(ui.page_fluid(
- ui.tags.base(href=base_url),
- ui.tags.div(
- {"style": "width:75%;margin: 0 auto"},
- ui.tags.style(
- """
- h4 {
- margin-top: 1em;font-size:35px;
- }
- h2{
- font-size:25px;
- }
- """
- ),
- shinyswatch.theme.darkly(),ui.tags.h4("Stats By Zach"),
- ui.tags.i("A website for hockey analytics"),
- ui.navset_tab(
- ui.nav_control(
- ui.a(
- "Home",
- href="home/"
- ),
- ),
- ui.nav_menu(
- "Skater Charts",
- ui.nav_control(
- ui.a(
- "On-Ice xG Rates",
- href="skater-xg-rates/"
- ),
- ui.a(
- "On-Ice xGF%",
- href="skater-xg-percentages/"
- ),
- ),
- ),
- ui.nav_menu(
- "Goalie Charts",
- ui.nav_control(
- ui.a(
- "GSAx Timeline",
- href="gsax-timeline/"
- ),
- ui.a(
- "GSAx Leaderboard",
- href="gsax-leaderboard/"
- ),
- ui.a(
- "GSAx Comparison",
- href="gsax-comparison/"
- )
- ),
- ),ui.nav_menu(
- "Team Charts",
- ui.nav_control(
- ui.a(
- "Team xG Rates",
- href="team-xg-rates/"
- ),
- ),
- ),ui.nav_control(
- ui.a(
- "Games",
- href="games/"
- ),
- ),ui.nav_control(
- ui.a(
- "About",
- href="about/"
- ),
-    )),ui.tags.br(),ui.tags.h5("Welcome to Stats By Zach!"),ui.tags.h6("The 2023-24 NHL regular season is here, and the StatsByZach website is officially up and running for it! As I've stated before, this website is still a work in progress, with lots of work still to be done, especially in terms of styling and compatibility. Along with that, I am focusing on finding a new hosting solution, adding more charts, and making some performance enhancements as well. Thank you for paying the site a visit, and I do hope you can use my data to better understand the NHL. The website gets updated daily, and I try to make improvements on a regular basis, so please do visit the site often, and feel free to reach out to me on Twitter @StatsByZach for any feedback or suggestions. Enjoy the site, and happy hockey season!"))), None)
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.h b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.h
deleted file mode 100644
index 3a2b582ab430575ec6fdb0e0799c5c2c39a56b15..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/attach.h
+++ /dev/null
@@ -1,57 +0,0 @@
-/* ****************************************************************************
- *
- * Copyright (c) Brainwy software Ltda.
- *
- * This source code is subject to terms and conditions of the Apache License, Version 2.0. A
- * copy of the license can be found in the License.html file at the root of this distribution. If
- * you cannot locate the Apache License, Version 2.0, please send an email to
- * vspython@microsoft.com. By using this source code in any fashion, you are agreeing to be bound
- * by the terms of the Apache License, Version 2.0.
- *
- * You must not remove this notice, or any other, from this software.
- *
- * ***************************************************************************/
-
-#ifndef _ATTACH_DLL_H_
-#define _ATTACH_DLL_H_
-
-#if defined DLL_EXPORT
-#define DECLDIR __declspec(dllexport)
-#else
-#define DECLDIR __declspec(dllimport)
-#endif
-
-
-extern "C"
-{
- DECLDIR int AttachAndRunPythonCode(const char *command, int *result );
-
- /*
- * Helper to print debug information from the current process
- */
- DECLDIR int PrintDebugInfo();
-
- /*
- Could be used with ctypes (note that the threading should be initialized, so,
- doing it in a thread as below is recommended):
-
- def check():
-
- import ctypes
- lib = ctypes.cdll.LoadLibrary(r'C:\...\attach_x86.dll')
-        print('result', lib.AttachDebuggerTracing(0))
-
- t = threading.Thread(target=check)
- t.start()
- t.join()
- */
- DECLDIR int AttachDebuggerTracing(
- bool showDebugInfo,
- void* pSetTraceFunc, // Actually PyObject*, but we don't want to include it here.
- void* pTraceFunc, // Actually PyObject*, but we don't want to include it here.
- unsigned int threadId,
- void* pPyNone // Actually PyObject*, but we don't want to include it here.
- );
-}
-
-#endif
\ No newline at end of file
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/furthest_point_sample.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/furthest_point_sample.py
deleted file mode 100644
index 374b7a878f1972c183941af28ba1df216ac1a60f..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/furthest_point_sample.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'furthest_point_sampling_forward',
- 'furthest_point_sampling_with_dist_forward'
-])
-
-
-class FurthestPointSampling(Function):
-    """Uses iterative furthest point sampling to select a subset of points that
-    are maximally spread out: each newly chosen point is the one furthest from
-    the points already selected."""
-
- @staticmethod
- def forward(ctx, points_xyz: torch.Tensor,
- num_points: int) -> torch.Tensor:
- """
- Args:
- points_xyz (Tensor): (B, N, 3) where N > num_points.
- num_points (int): Number of points in the sampled set.
-
- Returns:
- Tensor: (B, num_points) indices of the sampled points.
- """
- assert points_xyz.is_contiguous()
-
- B, N = points_xyz.size()[:2]
- output = torch.cuda.IntTensor(B, num_points)
- temp = torch.cuda.FloatTensor(B, N).fill_(1e10)
-
- ext_module.furthest_point_sampling_forward(
- points_xyz,
- temp,
- output,
- b=B,
- n=N,
- m=num_points,
- )
- if torch.__version__ != 'parrots':
- ctx.mark_non_differentiable(output)
- return output
-
- @staticmethod
- def backward(xyz, a=None):
- return None, None
-
-
-class FurthestPointSamplingWithDist(Function):
-    """Same as :class:`FurthestPointSampling`, but operates on a precomputed
-    pairwise distance matrix instead of raw point coordinates."""
-
- @staticmethod
- def forward(ctx, points_dist: torch.Tensor,
- num_points: int) -> torch.Tensor:
- """
- Args:
- points_dist (Tensor): (B, N, N) Distance between each point pair.
- num_points (int): Number of points in the sampled set.
-
- Returns:
- Tensor: (B, num_points) indices of the sampled points.
- """
- assert points_dist.is_contiguous()
-
- B, N, _ = points_dist.size()
- output = points_dist.new_zeros([B, num_points], dtype=torch.int32)
- temp = points_dist.new_zeros([B, N]).fill_(1e10)
-
- ext_module.furthest_point_sampling_with_dist_forward(
- points_dist, temp, output, b=B, n=N, m=num_points)
- if torch.__version__ != 'parrots':
- ctx.mark_non_differentiable(output)
- return output
-
- @staticmethod
- def backward(xyz, a=None):
- return None, None
-
-
-furthest_point_sample = FurthestPointSampling.apply
-furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply
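-
-# Usage sketch (requires a CUDA build of the mmcv `_ext` ops and a GPU; the
-# shapes below are illustrative):
-#   xyz = torch.rand(2, 1024, 3, device='cuda')              # (B, N, 3) point cloud
-#   idx = furthest_point_sample(xyz, 256)                    # (B, 256) int32 indices
-#   sampled = torch.gather(xyz, 1, idx.long().unsqueeze(-1).expand(-1, -1, 3))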
diff --git a/spaces/TEL123/Real-CUGAN/README.md b/spaces/TEL123/Real-CUGAN/README.md
deleted file mode 100644
index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000
--- a/spaces/TEL123/Real-CUGAN/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Real CUGAN
-emoji: 🐢
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: gpl-3.0
-duplicated_from: DianXian/Real-CUGAN
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TEnngal/TEnngal/Dockerfile b/spaces/TEnngal/TEnngal/Dockerfile
deleted file mode 100644
index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000
--- a/spaces/TEnngal/TEnngal/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM weaigc/bingo:latest
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-CMD npm start
diff --git a/spaces/TEnngal/bingo/src/components/user-menu.tsx b/spaces/TEnngal/bingo/src/components/user-menu.tsx
deleted file mode 100644
index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000
--- a/spaces/TEnngal/bingo/src/components/user-menu.tsx
+++ /dev/null
@@ -1,113 +0,0 @@
-'use client'
-
-import { useEffect, useState } from 'react'
-import Image from 'next/image'
-import { toast } from 'react-hot-toast'
-import { Button } from '@/components/ui/button'
-import pkg from '../../package.json'
-import {
- DropdownMenu,
- DropdownMenuContent,
- DropdownMenuItem,
- DropdownMenuSeparator,
- DropdownMenuTrigger
-} from '@/components/ui/dropdown-menu'
-import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons'
-import SettingIcon from '@/assets/images/settings.svg'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-export function UserMenu() {
- const [host, setHost] = useState('')
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- useEffect(() => {
- setHost(location.host)
- }, [])
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
- return (
-
-
-## Results
-
-
-
-COCO Object Detection Results
-
-
-
-
-
-
-ODinW Object Detection Results
-
-
-
-
-
-
-Marrying Grounding DINO with Stable Diffusion for Image Editing
-
-
-
-
-
-
-Marrying Grounding DINO with GLIGEN for more Detailed Image Editing
-
-
-
-
-## Model
-
-The model consists of a text backbone, an image backbone, a feature enhancer, a language-guided query selection module, and a cross-modality decoder.
-
-
-
-
-## Acknowledgement
-
-Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work!
-
-We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work is available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox, [detrex](https://github.com/IDEA-Research/detrex), is available as well.
-
-Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models.
-
-
-## Citation
-
-If you find our work helpful for your research, please consider citing the following BibTeX entry.
-
-```bibtex
-@inproceedings{ShilongLiu2023GroundingDM,
- title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection},
- author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang},
- year={2023}
-}
-```
-
-
-
-
diff --git a/spaces/arbml/Ashaar/poetry_diacritizer/util/learning_rates.py b/spaces/arbml/Ashaar/poetry_diacritizer/util/learning_rates.py
deleted file mode 100644
index dd3325b4ed746f2d65e00750e40156aef6b6d851..0000000000000000000000000000000000000000
--- a/spaces/arbml/Ashaar/poetry_diacritizer/util/learning_rates.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import numpy as np
-import math
-
-
-class LearningRateDecay:
- def __init__(self, lr=0.002, warmup_steps=4000.0) -> None:
- self.lr = lr
- self.warmup_steps = warmup_steps
-
- def __call__(self, global_step) -> float:
- step = global_step + 1.0
- lr = (
- self.lr
- * self.warmup_steps ** 0.5
- * np.minimum(step * self.warmup_steps ** -1.5, step ** -0.5)
- )
-
- return lr
-
-class SquareRootScheduler:
- def __init__(self, lr=0.1):
- self.lr = lr
-
- def __call__(self, global_step):
- global_step = global_step // 1000
- return self.lr * pow(global_step + 1.0, -0.5)
-
-
-class CosineScheduler:
- def __init__(
- self, max_update, base_lr=0.02, final_lr=0, warmup_steps=0, warmup_begin_lr=0
- ):
- self.base_lr_orig = base_lr
- self.max_update = max_update
- self.final_lr = final_lr
- self.warmup_steps = warmup_steps
- self.warmup_begin_lr = warmup_begin_lr
- self.max_steps = self.max_update - self.warmup_steps
-
- def get_warmup_lr(self, global_step):
- increase = (
- (self.base_lr_orig - self.warmup_begin_lr)
- * float(global_step)
- / float(self.warmup_steps)
- )
- return self.warmup_begin_lr + increase
-
- def __call__(self, global_step):
- if global_step < self.warmup_steps:
- return self.get_warmup_lr(global_step)
- if global_step <= self.max_update:
- self.base_lr = (
- self.final_lr
- + (self.base_lr_orig - self.final_lr)
- * (
- 1
- + math.cos(
- math.pi * (global_step - self.warmup_steps) / self.max_steps
- )
- )
- / 2
- )
- return self.base_lr
-
-def adjust_learning_rate(optimizer, global_step):
- lr = LearningRateDecay()(global_step=global_step)
- for param_group in optimizer.param_groups:
- param_group["lr"] = lr
- return lr
-
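-# Usage sketch (step counts and values are illustrative):
-#   for step in range(total_steps):
-#       lr = adjust_learning_rate(optimizer, step)   # Noam-style warmup/decay with the defaults above
-#
-#   # or drive a scheduler directly:
-#   sched = CosineScheduler(max_update=100000, base_lr=0.02, final_lr=1e-5, warmup_steps=4000)
-#   lr_at_step = sched(global_step)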
diff --git a/spaces/arch-123/bingo/src/lib/isomorphic/index.ts b/spaces/arch-123/bingo/src/lib/isomorphic/index.ts
deleted file mode 100644
index d4ebae951004bc8ec388f82548f4204a6c2a0a50..0000000000000000000000000000000000000000
--- a/spaces/arch-123/bingo/src/lib/isomorphic/index.ts
+++ /dev/null
@@ -1,8 +0,0 @@
-'use client'
-
-import Debug from 'debug'
-export * from 'ifw'
-
-export const debug = typeof document === 'undefined' ? Debug('bingo')
- : process.env.NEXT_PUBLIC_DEBUG ? console.info.bind(console)
- : () => {}
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/speaker_encoder_config.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/speaker_encoder_config.py
deleted file mode 100644
index 6dceb00277ba68efe128936ff7f9456338f9753f..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/speaker_encoder_config.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from dataclasses import asdict, dataclass
-
-from TTS.encoder.configs.base_encoder_config import BaseEncoderConfig
-
-
-@dataclass
-class SpeakerEncoderConfig(BaseEncoderConfig):
- """Defines parameters for Speaker Encoder model."""
-
- model: str = "speaker_encoder"
- class_name_key: str = "speaker_name"
diff --git a/spaces/artificialguybr/video-dubbing/whisper/tests/test_tokenizer.py b/spaces/artificialguybr/video-dubbing/whisper/tests/test_tokenizer.py
deleted file mode 100644
index 09d0351e12adf783922183c95fddb961d2f1426a..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/whisper/tests/test_tokenizer.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from whisper.tokenizer import get_tokenizer
-
-
-def test_tokenizer():
- gpt2_tokenizer = get_tokenizer(multilingual=False)
- multilingual_tokenizer = get_tokenizer(multilingual=True)
-
- text = "다람쥐 헌 쳇바퀴에 타고파"
- gpt2_tokens = gpt2_tokenizer.encode(text)
- multilingual_tokens = multilingual_tokenizer.encode(text)
-
- assert gpt2_tokenizer.decode(gpt2_tokens) == text
- assert multilingual_tokenizer.decode(multilingual_tokens) == text
- assert len(gpt2_tokens) > len(multilingual_tokens)
-
-
-def test_split_on_unicode():
- multilingual_tokenizer = get_tokenizer(multilingual=True)
-
- tokens = [8404, 871, 287, 6, 246, 526, 3210, 20378]
- words, word_tokens = multilingual_tokenizer.split_tokens_on_unicode(tokens)
-
- assert words == [" elle", " est", " l", "'", "�", "é", "rit", "oire"]
- assert word_tokens == [[8404], [871], [287], [6], [246], [526], [3210], [20378]]
diff --git a/spaces/arxify/RVC-beta-v2-0618/Dockerfile b/spaces/arxify/RVC-beta-v2-0618/Dockerfile
deleted file mode 100644
index 49f62d5f9c0901931de6523721b3a97b40f34219..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/Dockerfile
+++ /dev/null
@@ -1,13 +0,0 @@
-# syntax=docker/dockerfile:1
-
-FROM python:3.10-bullseye
-
-EXPOSE 7865
-
-WORKDIR /app
-
-COPY . .
-
-RUN pip3 install -r requirements.txt
-
-CMD ["python3", "infer-web.py"]
\ No newline at end of file
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CAST.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CAST.py
deleted file mode 100644
index ff13bd4a8927e358d743964f5c2d7de0a10ce211..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_CAST.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# SelfTest/Cipher/CAST.py: Self-test for the CAST-128 (CAST5) cipher
-#
-# Written in 2008 by Dwayne C. Litzenberger
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Self-test suite for Crypto.Cipher.CAST"""
-
-import unittest
-
-from Crypto.Util.py3compat import bchr
-
-from Crypto.Cipher import CAST
-
-# This is a list of (plaintext, ciphertext, key, description) tuples.
-test_data = [
- # Test vectors from RFC 2144, B.1
- ('0123456789abcdef', '238b4fe5847e44b2',
- '0123456712345678234567893456789a',
- '128-bit key'),
-
- ('0123456789abcdef', 'eb6a711a2c02271b',
- '01234567123456782345',
- '80-bit key'),
-
- ('0123456789abcdef', '7ac816d16e9b302e',
- '0123456712',
- '40-bit key'),
-]
-
-
-class KeyLength(unittest.TestCase):
-
- def runTest(self):
- self.assertRaises(ValueError, CAST.new, bchr(0) * 4, CAST.MODE_ECB)
- self.assertRaises(ValueError, CAST.new, bchr(0) * 17, CAST.MODE_ECB)
-
-
-class TestOutput(unittest.TestCase):
-
- def runTest(self):
- # Encrypt/Decrypt data and test output parameter
-
- cipher = CAST.new(b'4'*16, CAST.MODE_ECB)
-
- pt = b'5' * 16
- ct = cipher.encrypt(pt)
-
- output = bytearray(16)
- res = cipher.encrypt(pt, output=output)
- self.assertEqual(ct, output)
- self.assertEqual(res, None)
-
- res = cipher.decrypt(ct, output=output)
- self.assertEqual(pt, output)
- self.assertEqual(res, None)
-
- output = memoryview(bytearray(16))
- cipher.encrypt(pt, output=output)
- self.assertEqual(ct, output)
-
- cipher.decrypt(ct, output=output)
- self.assertEqual(pt, output)
-
- self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16)
- self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16)
-
- shorter_output = bytearray(7)
- self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output)
- self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output)
-
-
-def get_tests(config={}):
- from .common import make_block_tests
-
- tests = make_block_tests(CAST, "CAST", test_data)
- tests.append(KeyLength())
- tests.append(TestOutput())
- return tests
-
-if __name__ == '__main__':
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/roll_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/roll_dataset.py
deleted file mode 100644
index a2915eeb3e8fb4dfb4b2bb33e0464ad0783d854c..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/roll_dataset.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from . import BaseWrapperDataset
-
-
-class RollDataset(BaseWrapperDataset):
- def __init__(self, dataset, shifts):
- super().__init__(dataset)
- self.shifts = shifts
-
- def __getitem__(self, index):
- item = self.dataset[index]
- return torch.roll(item, self.shifts)
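-
-# Usage sketch: wrap an existing dataset so each item is circularly shifted by
-# two positions (`base_dataset` is illustrative).
-#   rolled = RollDataset(base_dataset, shifts=2)
-#   rolled[0]  # == torch.roll(base_dataset[0], 2)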
diff --git a/spaces/avans06/whisper-webui-translate/src/whisper/dummyWhisperContainer.py b/spaces/avans06/whisper-webui-translate/src/whisper/dummyWhisperContainer.py
deleted file mode 100644
index dddc2dc50e78880befe29d15c924ab811413a8f8..0000000000000000000000000000000000000000
--- a/spaces/avans06/whisper-webui-translate/src/whisper/dummyWhisperContainer.py
+++ /dev/null
@@ -1,101 +0,0 @@
-from typing import List
-
-import ffmpeg
-from src.config import ModelConfig
-from src.hooks.progressListener import ProgressListener
-from src.modelCache import ModelCache
-from src.prompts.abstractPromptStrategy import AbstractPromptStrategy
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer
-
-class DummyWhisperContainer(AbstractWhisperContainer):
- def __init__(self, model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- super().__init__(model_name, device, compute_type, download_root, cache, models)
-
- def ensure_downloaded(self):
- """
- Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before
- passing the container to a subprocess.
- """
- print("[Dummy] Ensuring that the model is downloaded")
-
- def _create_model(self):
- print("[Dummy] Creating dummy whisper model " + self.model_name + " for device " + str(self.device))
- return None
-
- def create_callback(self, language: str = None, task: str = None,
- prompt_strategy: AbstractPromptStrategy = None,
- **decodeOptions: dict) -> AbstractWhisperCallback:
- """
-        Create a WhisperCallback object that can be used to transcribe audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- prompt_strategy: AbstractPromptStrategy
- The prompt strategy to use. If not specified, the prompt from Whisper will be used.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- return DummyWhisperCallback(self, language=language, task=task, prompt_strategy=prompt_strategy, **decodeOptions)
-
-class DummyWhisperCallback(AbstractWhisperCallback):
- def __init__(self, model_container: DummyWhisperContainer, **decodeOptions: dict):
- self.model_container = model_container
- self.decodeOptions = decodeOptions
-
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
-        Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
- segment_index: int
-            The index of the audio segment being transcribed.
-        prompt: str
-            The prompt text used to condition the transcription, if any.
-        detected_language: str
-            The language detected for the audio, if it is already known.
- progress_listener: ProgressListener
- A callback to receive progress updates.
- """
- print("[Dummy] Invoking dummy whisper callback for segment " + str(segment_index))
-
- # Estimate length
- if isinstance(audio, str):
-            # ffprobe reports the duration as a string, so convert it to float seconds
-            audio_length = float(ffmpeg.probe(audio)["format"]["duration"])
- # Format is pcm_s16le at a sample rate of 16000, loaded as a float32 array.
- else:
- audio_length = len(audio) / 16000
-
- # Convert the segments to a format that is easier to serialize
- whisper_segments = [{
- "text": "Dummy text for segment " + str(segment_index),
- "start": 0,
- "end": audio_length,
-
- # Extra fields added by faster-whisper
- "words": []
- }]
-
- result = {
- "segments": whisper_segments,
- "text": "Dummy text for segment " + str(segment_index),
- "language": "en" if detected_language is None else detected_language,
-
- # Extra fields added by faster-whisper
- "language_probability": 1.0,
- "duration": audio_length,
- }
-
- if progress_listener is not None:
- progress_listener.on_finished()
- return result
\ No newline at end of file
diff --git a/spaces/awacke1/AI-BigGAN-Image-Gen/README.md b/spaces/awacke1/AI-BigGAN-Image-Gen/README.md
deleted file mode 100644
index 8e1bf4ff7508f3d197f40e79c95dfbac5f1a28b9..0000000000000000000000000000000000000000
--- a/spaces/awacke1/AI-BigGAN-Image-Gen/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ImgGen🧠🖼️-AIImageGenerator
-emoji: 🧠🖼️
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/main.py b/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/main.py
deleted file mode 100644
index a57ba031d85c0a96fb39f4cb67f8225a09d5da17..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/main.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from fastapi import FastAPI
-from fastapi.staticfiles import StaticFiles
-from fastapi.responses import FileResponse
-
-from transformers import pipeline
-
-app = FastAPI()
-
-pipe_flan = pipeline("text2text-generation", model="google/flan-t5-small")
-#pipe_flan = pipeline("text2text-generation", model="google/flan-t5-large")
-#Try large rather than small? google/flan-t5-small
-
-
-@app.get("/infer_t5")
-def t5(input):
- output = pipe_flan(input)
- return {"output": output[0]["generated_text"]}
-
-app.mount("/", StaticFiles(directory="static", html=True), name="static")
-
-@app.get("/")
-def index() -> FileResponse:
- return FileResponse(path="/app/static/index.html", media_type="text/html")
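-
-# Usage sketch (host/port depend on how the app is launched; values are illustrative):
-#   uvicorn main:app --host 0.0.0.0 --port 7860
-#   curl "http://localhost:7860/infer_t5?input=Translate%20to%20German:%20Hello"
-#   # -> {"output": "..."}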
diff --git a/spaces/awacke1/HTML5.Aframe.Frogger.Test/style.css b/spaces/awacke1/HTML5.Aframe.Frogger.Test/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/HTML5.Aframe.Frogger.Test/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/awacke1/PromptSuperHeroImageGenerator/index.html b/spaces/awacke1/PromptSuperHeroImageGenerator/index.html
deleted file mode 100644
index 6250c2958a7186a4e64f21c02b0359ff5ecd7e97..0000000000000000000000000000000000000000
--- a/spaces/awacke1/PromptSuperHeroImageGenerator/index.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/awacke1/StreamlitAIPP1/app.py b/spaces/awacke1/StreamlitAIPP1/app.py
deleted file mode 100644
index d2ea9861f7c6c8cdd343ed8ea2e309962169d3d8..0000000000000000000000000000000000000000
--- a/spaces/awacke1/StreamlitAIPP1/app.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import streamlit as st
-import time
-
-def main():
- st.title("Simple Streamlit Program")
-
- # Wait for 5 seconds
- with st.spinner("Waiting for 5 seconds..."):
- time.sleep(5)
- st.success("Completed!")
-
- # File Upload
- st.header("File Upload")
- uploaded_file = st.file_uploader("Upload a file")
-
- if uploaded_file is not None:
- file_contents = uploaded_file.read()
- st.markdown("### File Contents")
- st.markdown(f"```{file_contents.decode('utf-8')}```")
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/awacke1/mixture-of-experts-dr-llama/app.py b/spaces/awacke1/mixture-of-experts-dr-llama/app.py
deleted file mode 100644
index 73282a536fdcbd216d67ae576dced4f6a83429b1..0000000000000000000000000000000000000000
--- a/spaces/awacke1/mixture-of-experts-dr-llama/app.py
+++ /dev/null
@@ -1,794 +0,0 @@
-# Imports
-import base64
-import glob
-import json
-import math
-import mistune  # used by read_file_content() to render uploaded markdown files
-import openai
-import os
-import pytz
-import re
-import requests
-import streamlit as st
-import textract
-import time
-import zipfile
-import huggingface_hub
-import dotenv
-from audio_recorder_streamlit import audio_recorder
-from bs4 import BeautifulSoup
-from collections import deque
-from datetime import datetime
-from dotenv import load_dotenv
-from huggingface_hub import InferenceClient
-from io import BytesIO
-from langchain.chat_models import ChatOpenAI
-from langchain.chains import ConversationalRetrievalChain
-from langchain.embeddings import OpenAIEmbeddings
-from langchain.memory import ConversationBufferMemory
-from langchain.text_splitter import CharacterTextSplitter
-from langchain.vectorstores import FAISS
-from openai import ChatCompletion
-from PyPDF2 import PdfReader
-from templates import bot_template, css, user_template
-from xml.etree import ElementTree as ET
-import streamlit.components.v1 as components # Import Streamlit Components for HTML5
-
-
-st.set_page_config(page_title="🐪Llama Whisperer🦙 Voice Chat🌟", layout="wide")
-
-
-def add_Med_Licensing_Exam_Dataset():
- import streamlit as st
- from datasets import load_dataset
- dataset = load_dataset("augtoma/usmle_step_1")['test'] # Using 'test' split
- st.title("USMLE Step 1 Dataset Viewer")
- if len(dataset) == 0:
- st.write("😢 The dataset is empty.")
- else:
- st.write("""
- 🔍 Use the search box to filter questions or use the grid to scroll through the dataset.
- """)
-
- # 👩🔬 Search Box
- search_term = st.text_input("Search for a specific question:", "")
-
- # 🎛 Pagination
- records_per_page = 100
- num_records = len(dataset)
- num_pages = max(int(num_records / records_per_page), 1)
-
- # Skip generating the slider if num_pages is 1 (i.e., all records fit in one page)
- if num_pages > 1:
- page_number = st.select_slider("Select page:", options=list(range(1, num_pages + 1)))
- else:
- page_number = 1 # Only one page
-
- # 📊 Display Data
- start_idx = (page_number - 1) * records_per_page
- end_idx = start_idx + records_per_page
-
- # 🧪 Apply the Search Filter
- filtered_data = []
- for record in dataset[start_idx:end_idx]:
- if isinstance(record, dict) and 'text' in record and 'id' in record:
- if search_term:
- if search_term.lower() in record['text'].lower():
- st.markdown(record)
- filtered_data.append(record)
- else:
- filtered_data.append(record)
-
- # 🌐 Render the Grid
- for record in filtered_data:
- st.write(f"## Question ID: {record['id']}")
- st.write(f"### Question:")
- st.write(f"{record['text']}")
- st.write(f"### Answer:")
- st.write(f"{record['answer']}")
- st.write("---")
-
- st.write(f"😊 Total Records: {num_records} | 📄 Displaying {start_idx+1} to {min(end_idx, num_records)}")
-
-# 1. Constants and Top Level UI Variables
-
-# My Inference API Copy
-# API_URL = 'https://qe55p8afio98s0u3.us-east-1.aws.endpoints.huggingface.cloud' # Dr Llama
-# Original:
-API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf"
-API_KEY = os.getenv('API_KEY')
-MODEL1="meta-llama/Llama-2-7b-chat-hf"
-MODEL1URL="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf"
-HF_KEY = os.getenv('HF_KEY')
-headers = {
- "Authorization": f"Bearer {HF_KEY}",
- "Content-Type": "application/json"
-}
-key = os.getenv('OPENAI_API_KEY')
-prompt = f"Write instructions to teach anyone to write a discharge plan. List the entities, features and relationships to CCDA and FHIR objects in boldface."
-should_save = st.sidebar.checkbox("💾 Save", value=True, help="Save your session data.")
-
-# 2. Prompt label button demo for LLM
-def add_witty_humor_buttons():
- with st.expander("Wit and Humor 🤣", expanded=True):
- # Tip about the Dromedary family
- st.markdown("🔬 **Fun Fact**: Dromedaries, part of the camel family, have a single hump and are adapted to arid environments. Their 'superpowers' include the ability to survive without water for up to 7 days, thanks to their specialized blood cells and water storage in their hump.")
-
- # Define button descriptions
- descriptions = {
- "Generate Limericks 😂": "Write ten random adult limericks based on quotes that are tweet length and make you laugh 🎭",
- "Wise Quotes 🧙": "Generate ten wise quotes that are tweet length 🦉",
- "Funny Rhymes 🎤": "Create ten funny rhymes that are tweet length 🎶",
- "Medical Jokes 💉": "Create ten medical jokes that are tweet length 🏥",
- "Minnesota Humor ❄️": "Create ten jokes about Minnesota that are tweet length 🌨️",
- "Top Funny Stories 📖": "Create ten funny stories that are tweet length 📚",
- "More Funny Rhymes 🎙️": "Create ten more funny rhymes that are tweet length 🎵"
- }
-
- # Create columns
- col1, col2, col3 = st.columns([1, 1, 1], gap="small")
-
- # Add buttons to columns
- if col1.button("Generate Limericks 😂"):
- StreamLLMChatResponse(descriptions["Generate Limericks 😂"])
-
- if col2.button("Wise Quotes 🧙"):
- StreamLLMChatResponse(descriptions["Wise Quotes 🧙"])
-
- if col3.button("Funny Rhymes 🎤"):
- StreamLLMChatResponse(descriptions["Funny Rhymes 🎤"])
-
- col4, col5, col6 = st.columns([1, 1, 1], gap="small")
-
- if col4.button("Medical Jokes 💉"):
- StreamLLMChatResponse(descriptions["Medical Jokes 💉"])
-
- if col5.button("Minnesota Humor ❄️"):
- StreamLLMChatResponse(descriptions["Minnesota Humor ❄️"])
-
- if col6.button("Top Funny Stories 📖"):
- StreamLLMChatResponse(descriptions["Top Funny Stories 📖"])
-
- col7 = st.columns(1, gap="small")
-
- if col7[0].button("More Funny Rhymes 🎙️"):
- StreamLLMChatResponse(descriptions["More Funny Rhymes 🎙️"])
-
-def SpeechSynthesis(result):
-    # Minimal read-aloud page: the generated text is placed in a textarea and
-    # spoken with the browser's SpeechSynthesis API when the button is clicked.
-    documentHTML5 = '''
-    <!DOCTYPE html>
-    <html>
-    <head>
-        <title>Read It Aloud</title>
-        <script type="text/javascript">
-            function readAloud() {
-                const text = document.getElementById("textArea").value;
-                const speech = new SpeechSynthesisUtterance(text);
-                window.speechSynthesis.speak(speech);
-            }
-        </script>
-    </head>
-    <body>
-        <h1>🔊 Read It Aloud</h1>
-        <textarea id="textArea" rows="10" cols="80">''' + result + '''</textarea>
-        <br>
-        <button onclick="readAloud()">🔊 Read Aloud</button>
-    </body>
-    </html>
-    '''
-
- components.html(documentHTML5, width=1280, height=1024)
- #return result
-
-
-# 3. Stream Llama Response
-# @st.cache_resource
-def StreamLLMChatResponse(prompt):
- try:
- endpoint_url = API_URL
- hf_token = API_KEY
- client = InferenceClient(endpoint_url, token=hf_token)
- gen_kwargs = dict(
- max_new_tokens=512,
- top_k=30,
- top_p=0.9,
- temperature=0.2,
- repetition_penalty=1.02,
- stop_sequences=["\nUser:", "<|endoftext|>", ""],
- )
- stream = client.text_generation(prompt, stream=True, details=True, **gen_kwargs)
- report=[]
- res_box = st.empty()
- collected_chunks=[]
- collected_messages=[]
- allresults=''
- for r in stream:
- if r.token.special:
- continue
- if r.token.text in gen_kwargs["stop_sequences"]:
- break
- collected_chunks.append(r.token.text)
- chunk_message = r.token.text
- collected_messages.append(chunk_message)
- try:
- report.append(r.token.text)
- if len(r.token.text) > 0:
- result="".join(report).strip()
- res_box.markdown(f'*{result}*')
-
- except:
- st.write('Stream llm issue')
- SpeechSynthesis(result)
- return result
- except:
-        st.write('The Llama model is asleep. Starting it up now on an A10 - please wait about 5 minutes and retry while KEDA scales up from zero to activate the running container(s).')
-
-# 4. Run query with payload
-def query(payload):
- response = requests.post(API_URL, headers=headers, json=payload)
- st.markdown(response.json())
- return response.json()
-def get_output(prompt):
- return query({"inputs": prompt})
-
-# 5. Auto name generated output files from time and content
-def generate_filename(prompt, file_type):
- central = pytz.timezone('US/Central')
- safe_date_time = datetime.now(central).strftime("%m%d_%H%M")
- replaced_prompt = prompt.replace(" ", "_").replace("\n", "_")
- safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:45]
- return f"{safe_date_time}_{safe_prompt}.{file_type}"
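-# e.g. generate_filename("Write a discharge plan", "txt") might return
-# "0915_1432_Write_a_discharge_plan.txt" (the prefix is the current US/Central time).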
-
-# 6. Speech transcription via OpenAI service
-def transcribe_audio(openai_key, file_path, model):
- openai.api_key = openai_key
- OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions"
- headers = {
- "Authorization": f"Bearer {openai_key}",
- }
- with open(file_path, 'rb') as f:
- data = {'file': f}
- response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model})
- if response.status_code == 200:
- st.write(response.json())
- chatResponse = chat_with_model(response.json().get('text'), '') # *************************************
- transcript = response.json().get('text')
- filename = generate_filename(transcript, 'txt')
- response = chatResponse
- user_prompt = transcript
- create_file(filename, user_prompt, response, should_save)
- return transcript
- else:
- st.write(response.json())
- st.error("Error in API call.")
- return None
-
-# 7. Auto stop on silence audio control for recording WAV files
-def save_and_play_audio(audio_recorder):
- audio_bytes = audio_recorder(key='audio_recorder')
- if audio_bytes:
- filename = generate_filename("Recording", "wav")
- with open(filename, 'wb') as f:
- f.write(audio_bytes)
- st.audio(audio_bytes, format="audio/wav")
- return filename
- return None
-
-# 8. File creator that interprets type and creates output file for text, markdown and code
-def create_file(filename, prompt, response, should_save=True):
- if not should_save:
- return
- base_filename, ext = os.path.splitext(filename)
- if ext in ['.txt', '.htm', '.md']:
- with open(f"{base_filename}.md", 'w') as file:
- try:
- content = prompt.strip() + '\r\n' + response
- file.write(content)
- except:
- st.write('.')
-
- #has_python_code = re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response)
- #has_python_code = bool(re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response))
- #if has_python_code:
- # python_code = re.findall(r"```python([\s\S]*?)```", response)[0].strip()
- # with open(f"{base_filename}-Code.py", 'w') as file:
- # file.write(python_code)
- # with open(f"{base_filename}.md", 'w') as file:
- # content = prompt.strip() + '\r\n' + response
- # file.write(content)
-
-def truncate_document(document, length):
- return document[:length]
-def divide_document(document, max_length):
- return [document[i:i+max_length] for i in range(0, len(document), max_length)]
-
-# 9. Sidebar with UI controls to review and re-run prompts and continue responses
-@st.cache_resource
-def get_table_download_link(file_path):
- with open(file_path, 'r') as file:
- data = file.read()
-
- b64 = base64.b64encode(data.encode()).decode()
- file_name = os.path.basename(file_path)
- ext = os.path.splitext(file_name)[1] # get the file extension
- if ext == '.txt':
- mime_type = 'text/plain'
- elif ext == '.py':
- mime_type = 'text/plain'
- elif ext == '.xlsx':
- mime_type = 'text/plain'
- elif ext == '.csv':
- mime_type = 'text/plain'
- elif ext == '.htm':
- mime_type = 'text/html'
- elif ext == '.md':
- mime_type = 'text/markdown'
- else:
- mime_type = 'application/octet-stream' # general binary data type
- href = f'{file_name}'
- return href
-
-
-def CompressXML(xml_text):
- root = ET.fromstring(xml_text)
- for elem in list(root.iter()):
- if isinstance(elem.tag, str) and 'Comment' in elem.tag:
- elem.parent.remove(elem)
- return ET.tostring(root, encoding='unicode', method="xml")
-
-# 10. Read in and provide UI for past files
-@st.cache_resource
-def read_file_content(file,max_length):
- if file.type == "application/json":
- content = json.load(file)
- return str(content)
- elif file.type == "text/html" or file.type == "text/htm":
- content = BeautifulSoup(file, "html.parser")
- return content.text
- elif file.type == "application/xml" or file.type == "text/xml":
- tree = ET.parse(file)
- root = tree.getroot()
- xml = CompressXML(ET.tostring(root, encoding='unicode'))
- return xml
- elif file.type == "text/markdown" or file.type == "text/md":
- md = mistune.create_markdown()
- content = md(file.read().decode())
- return content
- elif file.type == "text/plain":
- return file.getvalue().decode()
- else:
- return ""
-
-# 11. Chat with GPT - Caution on quota - now favoring fastest AI pipeline STT Whisper->LLM Llama->TTS
-@st.cache_resource
-def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'):
- model = model_choice
- conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
- conversation.append({'role': 'user', 'content': prompt})
- if len(document_section)>0:
- conversation.append({'role': 'assistant', 'content': document_section})
- start_time = time.time()
- report = []
- res_box = st.empty()
- collected_chunks = []
- collected_messages = []
- for chunk in openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=conversation, temperature=0.5, stream=True):
- collected_chunks.append(chunk)
- chunk_message = chunk['choices'][0]['delta']
- collected_messages.append(chunk_message)
- content=chunk["choices"][0].get("delta",{}).get("content")
- try:
- report.append(content)
- if len(content) > 0:
- result = "".join(report).strip()
- res_box.markdown(f'*{result}*')
- except:
- st.write(' ')
- full_reply_content = ''.join([m.get('content', '') for m in collected_messages])
- st.write("Elapsed time:")
- st.write(time.time() - start_time)
- return full_reply_content
-
-# 12. Embedding VectorDB for LLM query of documents to text to compress inputs and prompt together as Chat memory using Langchain
-@st.cache_resource
-def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'):
- conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
- conversation.append({'role': 'user', 'content': prompt})
- if len(file_content)>0:
- conversation.append({'role': 'assistant', 'content': file_content})
- response = openai.ChatCompletion.create(model=model_choice, messages=conversation)
- return response['choices'][0]['message']['content']
-
-def extract_mime_type(file):
- if isinstance(file, str):
- pattern = r"type='(.*?)'"
- match = re.search(pattern, file)
- if match:
- return match.group(1)
- else:
- raise ValueError(f"Unable to extract MIME type from {file}")
-    elif hasattr(file, "type"):  # objects returned by st.file_uploader expose a .type attribute
-        return file.type
-    else:
-        raise TypeError("Input should be a string or a Streamlit UploadedFile object")
-
-def extract_file_extension(file):
- # get the file name directly from the UploadedFile object
- file_name = file.name
- pattern = r".*?\.(.*?)$"
- match = re.search(pattern, file_name)
- if match:
- return match.group(1)
- else:
- raise ValueError(f"Unable to extract file extension from {file_name}")
-
-# Normalize input as text from PDF and other formats
-@st.cache_resource
-def pdf2txt(docs):
- text = ""
- for file in docs:
- file_extension = extract_file_extension(file)
- st.write(f"File type extension: {file_extension}")
- if file_extension.lower() in ['py', 'txt', 'html', 'htm', 'xml', 'json']:
- text += file.getvalue().decode('utf-8')
- elif file_extension.lower() == 'pdf':
- from PyPDF2 import PdfReader
- pdf = PdfReader(BytesIO(file.getvalue()))
- for page in range(len(pdf.pages)):
- text += pdf.pages[page].extract_text() # new PyPDF2 syntax
- return text
-
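-# Split raw text into ~1000-character chunks with 200 characters of overlap so context carries across chunk boundaries.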
-def txt2chunks(text):
- text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len)
- return text_splitter.split_text(text)
-
-# Vector Store using FAISS
-@st.cache_resource
-def vector_store(text_chunks):
- embeddings = OpenAIEmbeddings(openai_api_key=key)
- return FAISS.from_texts(texts=text_chunks, embedding=embeddings)
-
-# Memory and Retrieval chains
-@st.cache_resource
-def get_chain(vectorstore):
- llm = ChatOpenAI()
- memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)
- return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory)
-
-def process_user_input(user_question):
- response = st.session_state.conversation({'question': user_question})
- st.session_state.chat_history = response['chat_history']
- for i, message in enumerate(st.session_state.chat_history):
- template = user_template if i % 2 == 0 else bot_template
- st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True)
- filename = generate_filename(user_question, 'txt')
- response = message.content
- user_prompt = user_question
- create_file(filename, user_prompt, response, should_save)
-
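-# Break a long prompt into word-aligned chunks of roughly max_length characters each.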
-def divide_prompt(prompt, max_length):
- words = prompt.split()
- chunks = []
- current_chunk = []
- current_length = 0
- for word in words:
- if len(word) + current_length <= max_length:
- current_length += len(word) + 1
- current_chunk.append(word)
- else:
- chunks.append(' '.join(current_chunk))
- current_chunk = [word]
- current_length = len(word)
- chunks.append(' '.join(current_chunk))
- return chunks
-
-
-# 13. Provide way of saving all and deleting all to give way of reviewing output and saving locally before clearing it
-
-@st.cache_resource
-def create_zip_of_files(files):
- zip_name = "all_files.zip"
- with zipfile.ZipFile(zip_name, 'w') as zipf:
- for file in files:
- zipf.write(file)
- return zip_name
-
-@st.cache_resource
-def get_zip_download_link(zip_file):
- with open(zip_file, 'rb') as f:
- data = f.read()
- b64 = base64.b64encode(data).decode()
-    href = f'<a href="data:application/zip;base64,{b64}" download="{zip_file}">Download All</a>'
- return href
-
-# 14. Inference Endpoints for Whisper (best fastest STT) on NVIDIA T4 and Llama (best fastest AGI LLM) on NVIDIA A10
-# My Inference Endpoint
-API_URL_IE = f'https://tonpixzfvq3791u9.us-east-1.aws.endpoints.huggingface.cloud'
-# Original
-API_URL_IE = "https://api-inference.huggingface.co/models/openai/whisper-small.en"
-MODEL2 = "openai/whisper-small.en"
-MODEL2_URL = "https://huggingface.co/openai/whisper-small.en"
-#headers = {
-# "Authorization": "Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
-# "Content-Type": "audio/wav"
-#}
-HF_KEY = os.getenv('HF_KEY')
-headers = {
- "Authorization": f"Bearer {HF_KEY}",
- "Content-Type": "audio/wav"
-}
-
-#@st.cache_resource
-def query(filename):
- with open(filename, "rb") as f:
- data = f.read()
- response = requests.post(API_URL_IE, headers=headers, data=data)
- return response.json()
-
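-# Build a timestamped, filesystem-safe filename from the first ~90 characters of the prompt.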
-def generate_filename(prompt, file_type):
- central = pytz.timezone('US/Central')
- safe_date_time = datetime.now(central).strftime("%m%d_%H%M")
- replaced_prompt = prompt.replace(" ", "_").replace("\n", "_")
- safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:90]
- return f"{safe_date_time}_{safe_prompt}.{file_type}"
-
-# 15. Audio recorder to Wav file
-def save_and_play_audio(audio_recorder):
- audio_bytes = audio_recorder()
- if audio_bytes:
- filename = generate_filename("Recording", "wav")
- with open(filename, 'wb') as f:
- f.write(audio_bytes)
- st.audio(audio_bytes, format="audio/wav")
- return filename
-
-# 16. Speech transcription to file output
-def transcribe_audio(filename):
- output = query(filename)
- return output
-
-
-def whisper_main():
- st.title("Speech to Text")
- st.write("Record your speech and get the text.")
-
- # Audio, transcribe, GPT:
- filename = save_and_play_audio(audio_recorder)
- if filename is not None:
-        transcription = transcribe_audio(filename)
-        if not isinstance(transcription, dict) or 'text' not in transcription:
-            st.write('Whisper model is asleep. Starting up now on T4 GPU - please give 5 minutes then retry as it scales up from zero to activate running container(s).')
-            return
-        transcript = transcription['text']
-        st.write(transcript)
- response = StreamLLMChatResponse(transcript)
- # st.write(response) - redundant with streaming result?
-        filename = generate_filename(transcript, "txt")  # generate_filename adds the dot itself
- create_file(filename, transcript, response, should_save)
- #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
-import streamlit as st
-
-# Sample function to demonstrate a response, replace with your own logic
-def StreamMedChatResponse(topic):
- st.write(f"Showing resources or questions related to: {topic}")
-
-def add_multi_system_agent_topics():
- with st.expander("Multi-System Agent AI Topics 🤖", expanded=True):
- st.markdown("🤖 **Explore Multi-System Agent AI Topics**: This section provides a variety of topics related to multi-system agent AI systems.")
-
- # Define multi-system agent AI topics and descriptions
- descriptions = {
- "Reinforcement Learning 🎮": "Questions related to reinforcement learning algorithms and applications 🕹️",
- "Natural Language Processing 🗣️": "Questions about natural language processing techniques and chatbot development 🗨️",
- "Multi-Agent Systems 🤝": "Questions pertaining to multi-agent systems and cooperative AI interactions 🤖",
- "Conversational AI 🗨️": "Questions on building conversational AI agents and chatbots for various platforms 💬",
- "Distributed AI Systems 🌐": "Questions about distributed AI systems and their implementation in networked environments 🌐",
- "AI Ethics and Bias 🤔": "Questions related to ethics and bias considerations in AI systems and decision-making 🧠",
- "AI in Healthcare 🏥": "Questions about the application of AI in healthcare and medical diagnosis 🩺",
- "AI in Autonomous Vehicles 🚗": "Questions on the use of AI in autonomous vehicles and self-driving technology 🚗"
- }
-
- # Create columns
- col1, col2, col3, col4 = st.columns([1, 1, 1, 1], gap="small")
-
- # Add buttons to columns
- if col1.button("Reinforcement Learning 🎮"):
- st.write(descriptions["Reinforcement Learning 🎮"])
- StreamLLMChatResponse(descriptions["Reinforcement Learning 🎮"])
-
- if col2.button("Natural Language Processing 🗣️"):
- st.write(descriptions["Natural Language Processing 🗣️"])
- StreamLLMChatResponse(descriptions["Natural Language Processing 🗣️"])
-
- if col3.button("Multi-Agent Systems 🤝"):
- st.write(descriptions["Multi-Agent Systems 🤝"])
- StreamLLMChatResponse(descriptions["Multi-Agent Systems 🤝"])
-
- if col4.button("Conversational AI 🗨️"):
- st.write(descriptions["Conversational AI 🗨️"])
- StreamLLMChatResponse(descriptions["Conversational AI 🗨️"])
-
- col5, col6, col7, col8 = st.columns([1, 1, 1, 1], gap="small")
-
- if col5.button("Distributed AI Systems 🌐"):
- st.write(descriptions["Distributed AI Systems 🌐"])
- StreamLLMChatResponse(descriptions["Distributed AI Systems 🌐"])
-
- if col6.button("AI Ethics and Bias 🤔"):
- st.write(descriptions["AI Ethics and Bias 🤔"])
- StreamLLMChatResponse(descriptions["AI Ethics and Bias 🤔"])
-
- if col7.button("AI in Healthcare 🏥"):
- st.write(descriptions["AI in Healthcare 🏥"])
- StreamLLMChatResponse(descriptions["AI in Healthcare 🏥"])
-
- if col8.button("AI in Autonomous Vehicles 🚗"):
- st.write(descriptions["AI in Autonomous Vehicles 🚗"])
- StreamLLMChatResponse(descriptions["AI in Autonomous Vehicles 🚗"])
-
-
-# 17. Main
-def main():
-
- st.title("Try Some Topics:")
- prompt = f"Write ten funny jokes that are tweet length stories that make you laugh. Show as markdown outline with emojis for each."
-
- # Add Wit and Humor buttons
- # add_witty_humor_buttons()
- # Calling the function to add the multi-system agent AI topics buttons
- add_multi_system_agent_topics()
-
- example_input = st.text_input("Enter your example text:", value=prompt, help="Enter text to get a response from DromeLlama.")
- if st.button("Run Prompt With DromeLlama", help="Click to run the prompt."):
- try:
- StreamLLMChatResponse(example_input)
- except:
- st.write('DromeLlama is asleep. Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).')
-
- openai.api_key = os.getenv('OPENAI_KEY')
- menu = ["txt", "htm", "xlsx", "csv", "md", "py"]
- choice = st.sidebar.selectbox("Output File Type:", menu)
- model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301'))
- user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100)
- collength, colupload = st.columns([2,3]) # adjust the ratio as needed
- with collength:
- max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000)
- with colupload:
- uploaded_file = st.file_uploader("Add a file for context:", type=["pdf", "xml", "json", "xlsx", "csv", "html", "htm", "md", "txt"])
- document_sections = deque()
- document_responses = {}
- if uploaded_file is not None:
- file_content = read_file_content(uploaded_file, max_length)
- document_sections.extend(divide_document(file_content, max_length))
- if len(document_sections) > 0:
- if st.button("👁️ View Upload"):
- st.markdown("**Sections of the uploaded file:**")
- for i, section in enumerate(list(document_sections)):
- st.markdown(f"**Section {i+1}**\n{section}")
- st.markdown("**Chat with the model:**")
- for i, section in enumerate(list(document_sections)):
- if i in document_responses:
- st.markdown(f"**Section {i+1}**\n{document_responses[i]}")
- else:
- if st.button(f"Chat about Section {i+1}"):
- st.write('Reasoning with your inputs...')
- response = chat_with_model(user_prompt, section, model_choice)
- st.write('Response:')
- st.write(response)
- document_responses[i] = response
- filename = generate_filename(f"{user_prompt}_section_{i+1}", choice)
- create_file(filename, user_prompt, response, should_save)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
- if st.button('💬 Chat'):
- st.write('Reasoning with your inputs...')
- user_prompt_sections = divide_prompt(user_prompt, max_length)
- full_response = ''
- for prompt_section in user_prompt_sections:
- response = chat_with_model(prompt_section, ''.join(list(document_sections)), model_choice)
- full_response += response + '\n' # Combine the responses
- response = full_response
- st.write('Response:')
- st.write(response)
- filename = generate_filename(user_prompt, choice)
- create_file(filename, user_prompt, response, should_save)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
- # Compose a file sidebar of past encounters
- all_files = glob.glob("*.*")
- all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names
- all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order
- if st.sidebar.button("🗑 Delete All"):
- for file in all_files:
- os.remove(file)
- st.experimental_rerun()
- if st.sidebar.button("⬇️ Download All"):
- zip_file = create_zip_of_files(all_files)
- st.sidebar.markdown(get_zip_download_link(zip_file), unsafe_allow_html=True)
- file_contents=''
- next_action=''
- for file in all_files:
- col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed
- with col1:
- if st.button("🌐", key="md_"+file): # md emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='md'
- with col2:
- st.markdown(get_table_download_link(file), unsafe_allow_html=True)
- with col3:
- if st.button("📂", key="open_"+file): # open emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='open'
- with col4:
- if st.button("🔍", key="read_"+file): # search emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='search'
- with col5:
- if st.button("🗑", key="delete_"+file):
- os.remove(file)
- st.experimental_rerun()
-
-
- if len(file_contents) > 0:
- if next_action=='open':
- file_content_area = st.text_area("File Contents:", file_contents, height=500)
- if next_action=='md':
- st.markdown(file_contents)
- if next_action=='search':
- file_content_area = st.text_area("File Contents:", file_contents, height=500)
- st.write('Reasoning with your inputs...')
-
- # new - llama
- response = StreamLLMChatResponse(file_contents)
-                filename = generate_filename(user_prompt, "md")  # generate_filename adds the dot itself
- create_file(filename, file_contents, response, should_save)
- SpeechSynthesis(response)
-
- # old - gpt
- #response = chat_with_model(user_prompt, file_contents, model_choice)
- #filename = generate_filename(file_contents, choice)
- #create_file(filename, user_prompt, response, should_save)
-
- st.experimental_rerun()
-
- # Feedback
- # Step: Give User a Way to Upvote or Downvote
- feedback = st.radio("Step 8: Give your feedback", ("👍 Upvote", "👎 Downvote"))
- if feedback == "👍 Upvote":
- st.write("You upvoted 👍. Thank you for your feedback!")
- else:
- st.write("You downvoted 👎. Thank you for your feedback!")
-
- load_dotenv()
- st.write(css, unsafe_allow_html=True)
- st.header("Chat with documents :books:")
- user_question = st.text_input("Ask a question about your documents:")
- if user_question:
- process_user_input(user_question)
- with st.sidebar:
- st.subheader("Your documents")
- docs = st.file_uploader("import documents", accept_multiple_files=True)
- with st.spinner("Processing"):
-                raw = pdf2txt(docs) if docs else ''  # nothing to process until the user uploads files
- if len(raw) > 0:
- length = str(len(raw))
- text_chunks = txt2chunks(raw)
- vectorstore = vector_store(text_chunks)
- st.session_state.conversation = get_chain(vectorstore)
- st.markdown('# AI Search Index of Length:' + length + ' Created.') # add timing
- filename = generate_filename(raw, 'txt')
- create_file(filename, raw, '', should_save)
-
-# 18. Run AI Pipeline
-if __name__ == "__main__":
- whisper_main()
- main()
- add_Med_Licensing_Exam_Dataset()
\ No newline at end of file
diff --git a/spaces/azer123456789/nicky007-stable-diffusion-logo-fine-tuned/README.md b/spaces/azer123456789/nicky007-stable-diffusion-logo-fine-tuned/README.md
deleted file mode 100644
index 9b5ae4179dd00a721dfef4521be7c253e11efc81..0000000000000000000000000000000000000000
--- a/spaces/azer123456789/nicky007-stable-diffusion-logo-fine-tuned/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Nicky007 Stable Diffusion Logo Fine Tuned
-emoji: 🌖
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/math/Math3Node.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/math/Math3Node.js
deleted file mode 100644
index 5bc866146fe38adab12a7972f5238b2c423040fd..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/math/Math3Node.js
+++ /dev/null
@@ -1,121 +0,0 @@
-/**
- * @author sunag / http://www.sunag.com.br/
- */
-
-import { TempNode } from '../core/TempNode.js';
-
-function Math3Node( a, b, c, method ) {
-
- TempNode.call( this );
-
- this.a = a;
- this.b = b;
- this.c = c;
-
- this.method = method;
-
-}
-
-Math3Node.MIX = 'mix';
-Math3Node.CLAMP = 'clamp';
-Math3Node.REFRACT = 'refract';
-Math3Node.SMOOTHSTEP = 'smoothstep';
-Math3Node.FACEFORWARD = 'faceforward';
-
-Math3Node.prototype = Object.create( TempNode.prototype );
-Math3Node.prototype.constructor = Math3Node;
-Math3Node.prototype.nodeType = "Math3";
-
-Math3Node.prototype.getType = function ( builder ) {
-
- var a = builder.getTypeLength( this.a.getType( builder ) );
- var b = builder.getTypeLength( this.b.getType( builder ) );
- var c = builder.getTypeLength( this.c.getType( builder ) );
-
- if ( a > b && a > c ) {
-
- return this.a.getType( builder );
-
- } else if ( b > c ) {
-
- return this.b.getType( builder );
-
- }
-
- return this.c.getType( builder );
-
-};
-
-Math3Node.prototype.generate = function ( builder, output ) {
-
- var a, b, c,
- al = builder.getTypeLength( this.a.getType( builder ) ),
- bl = builder.getTypeLength( this.b.getType( builder ) ),
- cl = builder.getTypeLength( this.c.getType( builder ) ),
- type = this.getType( builder );
-
-	// optimizer
-
- switch ( this.method ) {
-
- case Math3Node.REFRACT:
-
- a = this.a.build( builder, type );
- b = this.b.build( builder, type );
- c = this.c.build( builder, 'f' );
-
- break;
-
- case Math3Node.MIX:
-
- a = this.a.build( builder, type );
- b = this.b.build( builder, type );
- c = this.c.build( builder, cl === 1 ? 'f' : type );
-
- break;
-
- default:
-
- a = this.a.build( builder, type );
- b = this.b.build( builder, type );
- c = this.c.build( builder, type );
-
- break;
-
- }
-
- return builder.format( this.method + '( ' + a + ', ' + b + ', ' + c + ' )', type, output );
-
-};
-
-Math3Node.prototype.copy = function ( source ) {
-
- TempNode.prototype.copy.call( this, source );
-
- this.a = source.a;
- this.b = source.b;
- this.c = source.c;
- this.method = source.method;
-
-};
-
-Math3Node.prototype.toJSON = function ( meta ) {
-
- var data = this.getJSONNode( meta );
-
- if ( ! data ) {
-
- data = this.createJSONNode( meta );
-
- data.a = this.a.toJSON( meta ).uuid;
- data.b = this.b.toJSON( meta ).uuid;
- data.c = this.c.toJSON( meta ).uuid;
- data.method = this.method;
-
- }
-
- return data;
-
-};
-
-export { Math3Node };
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Interpolations.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Interpolations.js
deleted file mode 100644
index 0fb812c1ff9a78f234b36bd3b1dc768fcf479eda..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/Interpolations.js
+++ /dev/null
@@ -1,81 +0,0 @@
-/**
- * @author zz85 / http://www.lab4games.net/zz85/blog
- *
- * Bezier Curves formulas obtained from
- * http://en.wikipedia.org/wiki/Bézier_curve
- */
-
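-// Uniform Catmull-Rom interpolation between p1 and p2, using the neighboring points p0 and p3 to derive the tangents.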
-function CatmullRom( t, p0, p1, p2, p3 ) {
-
- var v0 = ( p2 - p0 ) * 0.5;
- var v1 = ( p3 - p1 ) * 0.5;
- var t2 = t * t;
- var t3 = t * t2;
- return ( 2 * p1 - 2 * p2 + v0 + v1 ) * t3 + ( - 3 * p1 + 3 * p2 - 2 * v0 - v1 ) * t2 + v0 * t + p1;
-
-}
-
-//
-
-function QuadraticBezierP0( t, p ) {
-
- var k = 1 - t;
- return k * k * p;
-
-}
-
-function QuadraticBezierP1( t, p ) {
-
- return 2 * ( 1 - t ) * t * p;
-
-}
-
-function QuadraticBezierP2( t, p ) {
-
- return t * t * p;
-
-}
-
-function QuadraticBezier( t, p0, p1, p2 ) {
-
- return QuadraticBezierP0( t, p0 ) + QuadraticBezierP1( t, p1 ) +
- QuadraticBezierP2( t, p2 );
-
-}
-
-//
-
-function CubicBezierP0( t, p ) {
-
- var k = 1 - t;
- return k * k * k * p;
-
-}
-
-function CubicBezierP1( t, p ) {
-
- var k = 1 - t;
- return 3 * k * k * t * p;
-
-}
-
-function CubicBezierP2( t, p ) {
-
- return 3 * ( 1 - t ) * t * t * p;
-
-}
-
-function CubicBezierP3( t, p ) {
-
- return t * t * t * p;
-
-}
-
-function CubicBezier( t, p0, p1, p2, p3 ) {
-
- return CubicBezierP0( t, p0 ) + CubicBezierP1( t, p1 ) + CubicBezierP2( t, p2 ) +
- CubicBezierP3( t, p3 );
-
-}
-
-export { CatmullRom, QuadraticBezier, CubicBezier };
diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621111527.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621111527.py
deleted file mode 100644
index 152de6689bf9d96901148e8acb07268812c99d11..0000000000000000000000000000000000000000
--- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621111527.py
+++ /dev/null
@@ -1,37 +0,0 @@
-#-*- coding : utf-8-*-
-import base64
-from subprocess import STDOUT
-import streamlit as st
-import pandas as pd
-import camelot as cam # extracting tables from PDFs
-
-st.title("PDF Table Extractor")
-
-input_pdf = st.file_uploader(label = "", type = 'pdf')
-
-background = st.selectbox("Are the table lines transparent (process background)?", (False, True))
-extractor_mode = st.selectbox("Single-page OR full-document extraction", ("Single-page extraction", "Full-document extraction"))
-
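-# Extract every table camelot finds on the given page(s), write each one to its own sheet of an Excel workbook, then offer the workbook for download.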
-def extractor(page,result_name):
- tables_all= cam.read_pdf("input.pdf", pages=page, process_background=background)
- result_all = pd.ExcelWriter(result_name, engine='xlsxwriter')
- for i in range(0,len(tables_all)):
- table = tables_all[i].df
- sheetname = str(i)
- table.to_excel(result_all, sheetname,index=False)
- result_all.save()
- with open(result_name,'rb') as f:
-        st.download_button('Extraction finished, click to download!', f, file_name=result_name, mime="application/vnd.ms-excel")
-
-
-if input_pdf is not None:
- # byte object into a PDF file
- with open("input.pdf", "wb") as f:
- base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8')
- f.write(base64.b64decode(base64_pdf))
- f.close()
-    if extractor_mode == "Single-page extraction":
-        page_number = st.text_input("Enter the PDF page number that contains the table, e.g. 3", value = 1)
-        extractor(page_number,"result.xlsx")
-    if extractor_mode == "Full-document extraction":
- extractor("all","result_all.xlsx")
\ No newline at end of file
diff --git a/spaces/bennyguo/threestudio/Dockerfile b/spaces/bennyguo/threestudio/Dockerfile
deleted file mode 100644
index a0870b339ac159f8e8953776ad7224c5f6a2c05d..0000000000000000000000000000000000000000
--- a/spaces/bennyguo/threestudio/Dockerfile
+++ /dev/null
@@ -1,67 +0,0 @@
-# Reference:
-# https://github.com/cvpaperchallenge/Ascender
-# https://github.com/nerfstudio-project/nerfstudio
-
-FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
-
-ARG USER_NAME=dreamer
-ARG GROUP_NAME=dreamers
-ARG UID=1000
-ARG GID=1000
-
-# Set compute capability for nerfacc and tiny-cuda-nn
-# See https://developer.nvidia.com/cuda-gpus and limit number to speed-up build
-ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX"
-ENV TCNN_CUDA_ARCHITECTURES=90;89;86;80;75;70;61;60
-# Speed-up build for RTX 30xx
-# ENV TORCH_CUDA_ARCH_LIST="8.6"
-# ENV TCNN_CUDA_ARCHITECTURES=86
-# Speed-up build for RTX 40xx
-# ENV TORCH_CUDA_ARCH_LIST="8.9"
-# ENV TCNN_CUDA_ARCHITECTURES=89
-
-ENV CUDA_HOME=/usr/local/cuda
-ENV PATH=${CUDA_HOME}/bin:/home/${USER_NAME}/.local/bin:${PATH}
-ENV LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
-ENV LIBRARY_PATH=${CUDA_HOME}/lib64/stubs:${LIBRARY_PATH}
-
-# apt install by root user
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- build-essential \
- curl \
- git \
- libegl1-mesa-dev \
- libgl1-mesa-dev \
- libgles2-mesa-dev \
- libglib2.0-0 \
- libsm6 \
- libxext6 \
- libxrender1 \
- python-is-python3 \
- python3.10-dev \
- python3-pip \
- wget \
- && rm -rf /var/lib/apt/lists/*
-
-# Change user to non-root user
-RUN groupadd -g ${GID} ${GROUP_NAME} \
- && useradd -ms /bin/sh -u ${UID} -g ${GID} ${USER_NAME}
-USER ${USER_NAME}
-
-RUN pip install --upgrade pip setuptools ninja
-RUN pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118
-# Install nerfacc and tiny-cuda-nn before installing requirements.txt
-# because these two installations are time consuming and error prone
-# RUN pip install git+https://github.com/KAIR-BAIR/nerfacc.git@v0.5.2
-RUN pip install nerfacc==0.5.2 -f https://nerfacc-bucket.s3.us-west-2.amazonaws.com/whl/torch-2.0.0_cu118.html
-RUN pip install git+https://github.com/NVlabs/tiny-cuda-nn.git#subdirectory=bindings/torch
-
-COPY requirements.txt /tmp
-RUN cd /tmp && pip install -r requirements.txt
-
-# avoid caching the old version
-ADD "https://api.github.com/repos/threestudio-project/threestudio/commits?per_page=1" latest_commit
-RUN git clone https://github.com/threestudio-project/threestudio.git /home/${USER_NAME}/threestudio
-WORKDIR /home/${USER_NAME}/threestudio
-RUN git checkout 27d69d9845016c8b8aa0bac92ab6d4fea8d1e1b8
-CMD ["python", "gradio_app.py", "launch", "--listen", "--hf-space"]
diff --git a/spaces/bilby/bilby-retrievalqa/README.md b/spaces/bilby/bilby-retrievalqa/README.md
deleted file mode 100644
index 92bbe4919f5b2a207fab5faf9d70063784004340..0000000000000000000000000000000000000000
--- a/spaces/bilby/bilby-retrievalqa/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Bilby Retrievalqa
-emoji: 🏆
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bpHigh/AI-Research-Buddy/README.md b/spaces/bpHigh/AI-Research-Buddy/README.md
deleted file mode 100644
index df611cc4f31d5c3883cc2edcc1e14df96c454e41..0000000000000000000000000000000000000000
--- a/spaces/bpHigh/AI-Research-Buddy/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI Research Buddy
-emoji: 💻
-colorFrom: purple
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/losses.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/losses.py
deleted file mode 100644
index be293e739bdc2d91273f30fb789befe7c8b49a43..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/adversarial/losses.py
+++ /dev/null
@@ -1,228 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility module to handle adversarial losses without requiring to mess up the main training loop.
-"""
-
-import typing as tp
-
-import flashy
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-ADVERSARIAL_LOSSES = ['mse', 'hinge', 'hinge2']
-
-
-AdvLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor], torch.Tensor]]
-FeatLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]]
-
-
-class AdversarialLoss(nn.Module):
- """Adversary training wrapper.
-
- Args:
- adversary (nn.Module): The adversary module will be used to estimate the logits given the fake and real samples.
- We assume here the adversary output is ``Tuple[List[torch.Tensor], List[List[torch.Tensor]]]``
- where the first item is a list of logits and the second item is a list of feature maps.
- optimizer (torch.optim.Optimizer): Optimizer used for training the given module.
- loss (AdvLossType): Loss function for generator training.
- loss_real (AdvLossType): Loss function for adversarial training on logits from real samples.
- loss_fake (AdvLossType): Loss function for adversarial training on logits from fake samples.
- loss_feat (FeatLossType): Feature matching loss function for generator training.
- normalize (bool): Whether to normalize by number of sub-discriminators.
-
- Example of usage:
- adv_loss = AdversarialLoss(adversaries, optimizer, loss, loss_real, loss_fake)
- for real in loader:
- noise = torch.randn(...)
- fake = model(noise)
- adv_loss.train_adv(fake, real)
- loss, _ = adv_loss(fake, real)
- loss.backward()
- """
- def __init__(self,
- adversary: nn.Module,
- optimizer: torch.optim.Optimizer,
- loss: AdvLossType,
- loss_real: AdvLossType,
- loss_fake: AdvLossType,
- loss_feat: tp.Optional[FeatLossType] = None,
- normalize: bool = True):
- super().__init__()
- self.adversary: nn.Module = adversary
- flashy.distrib.broadcast_model(self.adversary)
- self.optimizer = optimizer
- self.loss = loss
- self.loss_real = loss_real
- self.loss_fake = loss_fake
- self.loss_feat = loss_feat
- self.normalize = normalize
-
- def _save_to_state_dict(self, destination, prefix, keep_vars):
- # Add the optimizer state dict inside our own.
- super()._save_to_state_dict(destination, prefix, keep_vars)
- destination[prefix + 'optimizer'] = self.optimizer.state_dict()
- return destination
-
- def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs):
- # Load optimizer state.
- self.optimizer.load_state_dict(state_dict.pop(prefix + 'optimizer'))
- super()._load_from_state_dict(state_dict, prefix, *args, **kwargs)
-
- def get_adversary_pred(self, x):
- """Run adversary model, validating expected output format."""
- logits, fmaps = self.adversary(x)
- assert isinstance(logits, list) and all([isinstance(t, torch.Tensor) for t in logits]), \
- f'Expecting a list of tensors as logits but {type(logits)} found.'
- assert isinstance(fmaps, list), f'Expecting a list of features maps but {type(fmaps)} found.'
- for fmap in fmaps:
- assert isinstance(fmap, list) and all([isinstance(f, torch.Tensor) for f in fmap]), \
- f'Expecting a list of tensors as feature maps but {type(fmap)} found.'
- return logits, fmaps
-
- def train_adv(self, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
- """Train the adversary with the given fake and real example.
-
- We assume the adversary output is the following format: Tuple[List[torch.Tensor], List[List[torch.Tensor]]].
- The first item being the logits and second item being a list of feature maps for each sub-discriminator.
-
- This will automatically synchronize gradients (with `flashy.distrib.eager_sync_model`)
- and call the optimizer.
- """
- loss = torch.tensor(0., device=fake.device)
- all_logits_fake_is_fake, _ = self.get_adversary_pred(fake.detach())
- all_logits_real_is_fake, _ = self.get_adversary_pred(real.detach())
- n_sub_adversaries = len(all_logits_fake_is_fake)
- for logit_fake_is_fake, logit_real_is_fake in zip(all_logits_fake_is_fake, all_logits_real_is_fake):
- loss += self.loss_fake(logit_fake_is_fake) + self.loss_real(logit_real_is_fake)
-
- if self.normalize:
- loss /= n_sub_adversaries
-
- self.optimizer.zero_grad()
- with flashy.distrib.eager_sync_model(self.adversary):
- loss.backward()
- self.optimizer.step()
-
- return loss
-
- def forward(self, fake: torch.Tensor, real: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]:
- """Return the loss for the generator, i.e. trying to fool the adversary,
- and feature matching loss if provided.
- """
- adv = torch.tensor(0., device=fake.device)
- feat = torch.tensor(0., device=fake.device)
- with flashy.utils.readonly(self.adversary):
- all_logits_fake_is_fake, all_fmap_fake = self.get_adversary_pred(fake)
- all_logits_real_is_fake, all_fmap_real = self.get_adversary_pred(real)
- n_sub_adversaries = len(all_logits_fake_is_fake)
- for logit_fake_is_fake in all_logits_fake_is_fake:
- adv += self.loss(logit_fake_is_fake)
- if self.loss_feat:
- for fmap_fake, fmap_real in zip(all_fmap_fake, all_fmap_real):
- feat += self.loss_feat(fmap_fake, fmap_real)
-
- if self.normalize:
- adv /= n_sub_adversaries
- feat /= n_sub_adversaries
-
- return adv, feat
-
-
-def get_adv_criterion(loss_type: str) -> tp.Callable:
- assert loss_type in ADVERSARIAL_LOSSES
- if loss_type == 'mse':
- return mse_loss
- elif loss_type == 'hinge':
- return hinge_loss
- elif loss_type == 'hinge2':
- return hinge2_loss
- raise ValueError('Unsupported loss')
-
-
-def get_fake_criterion(loss_type: str) -> tp.Callable:
- assert loss_type in ADVERSARIAL_LOSSES
- if loss_type == 'mse':
- return mse_fake_loss
- elif loss_type in ['hinge', 'hinge2']:
- return hinge_fake_loss
- raise ValueError('Unsupported loss')
-
-
-def get_real_criterion(loss_type: str) -> tp.Callable:
- assert loss_type in ADVERSARIAL_LOSSES
- if loss_type == 'mse':
- return mse_real_loss
- elif loss_type in ['hinge', 'hinge2']:
- return hinge_real_loss
- raise ValueError('Unsupported loss')
-
-
-def mse_real_loss(x: torch.Tensor) -> torch.Tensor:
- return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x))
-
-
-def mse_fake_loss(x: torch.Tensor) -> torch.Tensor:
- return F.mse_loss(x, torch.tensor(0., device=x.device).expand_as(x))
-
-
-def hinge_real_loss(x: torch.Tensor) -> torch.Tensor:
- return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x)))
-
-
-def hinge_fake_loss(x: torch.Tensor) -> torch.Tensor:
- return -torch.mean(torch.min(-x - 1, torch.tensor(0., device=x.device).expand_as(x)))
-
-
-def mse_loss(x: torch.Tensor) -> torch.Tensor:
- if x.numel() == 0:
- return torch.tensor([0.0], device=x.device)
- return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x))
-
-
-def hinge_loss(x: torch.Tensor) -> torch.Tensor:
- if x.numel() == 0:
- return torch.tensor([0.0], device=x.device)
- return -x.mean()
-
-
-def hinge2_loss(x: torch.Tensor) -> torch.Tensor:
- if x.numel() == 0:
-        return torch.tensor([0.0], device=x.device)
- return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x)))
-
-
-class FeatureMatchingLoss(nn.Module):
- """Feature matching loss for adversarial training.
-
- Args:
- loss (nn.Module): Loss to use for feature matching (default=torch.nn.L1).
- normalize (bool): Whether to normalize the loss.
- by number of feature maps.
- """
- def __init__(self, loss: nn.Module = torch.nn.L1Loss(), normalize: bool = True):
- super().__init__()
- self.loss = loss
- self.normalize = normalize
-
- def forward(self, fmap_fake: tp.List[torch.Tensor], fmap_real: tp.List[torch.Tensor]) -> torch.Tensor:
- assert len(fmap_fake) == len(fmap_real) and len(fmap_fake) > 0
- feat_loss = torch.tensor(0., device=fmap_fake[0].device)
- feat_scale = torch.tensor(0., device=fmap_fake[0].device)
- n_fmaps = 0
- for (feat_fake, feat_real) in zip(fmap_fake, fmap_real):
- assert feat_fake.shape == feat_real.shape
- n_fmaps += 1
- feat_loss += self.loss(feat_fake, feat_real)
- feat_scale += torch.mean(torch.abs(feat_real))
-
- if self.normalize:
- feat_loss /= n_fmaps
-
- return feat_loss
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/tracing.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/tracing.py
deleted file mode 100644
index 577df4e2f4ad0a1a309d31d7c28311be11f87247..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/tracing.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import inspect
-import torch
-
-from detectron2.utils.env import TORCH_VERSION
-
-try:
- from torch.fx._symbolic_trace import is_fx_tracing as is_fx_tracing_current
-
- tracing_current_exists = True
-except ImportError:
- tracing_current_exists = False
-
-try:
- from torch.fx._symbolic_trace import _orig_module_call
-
- tracing_legacy_exists = True
-except ImportError:
- tracing_legacy_exists = False
-
-
-@torch.jit.ignore
-def is_fx_tracing_legacy() -> bool:
- """
- Returns a bool indicating whether torch.fx is currently symbolically tracing a module.
- Can be useful for gating module logic that is incompatible with symbolic tracing.
- """
- return torch.nn.Module.__call__ is not _orig_module_call
-
-
-@torch.jit.ignore
-def is_fx_tracing() -> bool:
- """Returns whether execution is currently in
- Torch FX tracing mode"""
- if TORCH_VERSION >= (1, 10) and tracing_current_exists:
- return is_fx_tracing_current()
- elif tracing_legacy_exists:
- return is_fx_tracing_legacy()
- else:
- # Can't find either current or legacy tracing indication code.
- # Enabling this assert_fx_safe() call regardless of tracing status.
- return False
-
-
-@torch.jit.ignore
-def assert_fx_safe(condition: bool, message: str) -> torch.Tensor:
- """An FX-tracing safe version of assert.
- Avoids erroneous type assertion triggering when types are masked inside
- an fx.proxy.Proxy object during tracing.
- Args: condition - either a boolean expression or a string representing
- the condition to test. If this assert triggers an exception when tracing
- due to dynamic control flow, try encasing the expression in quotation
- marks and supplying it as a string."""
- # Must return a concrete tensor for compatibility with PyTorch <=1.8.
- # If <=1.8 compatibility is not needed, return type can be converted to None
- if not is_fx_tracing():
- try:
- if isinstance(condition, str):
- caller_frame = inspect.currentframe().f_back
- torch._assert(
- eval(condition, caller_frame.f_globals, caller_frame.f_locals), message
- )
- return torch.ones(1)
- else:
- torch._assert(condition, message)
- return torch.ones(1)
- except torch.fx.proxy.TraceError as e:
- print(
- "Found a non-FX compatible assertion. Skipping the check. Failure is shown below"
- + str(e)
- )
- return torch.zeros(1)
diff --git a/spaces/cahya/indonesian-whisperer/README.md b/spaces/cahya/indonesian-whisperer/README.md
deleted file mode 100644
index 6c1dd64da2f9226f205fc5e6bddf22f947e2906f..0000000000000000000000000000000000000000
--- a/spaces/cahya/indonesian-whisperer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Indonesian Whisperer
-emoji: 🇮🇩
-colorFrom: purple
-colorTo: red
-sdk: docker
-pinned: true
-license: cc
-tags:
-- whisper-event
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/losses/__init__.py b/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/losses/__init__.py
deleted file mode 100644
index b03080a907cb5cb4b316ceb74866ddbc406b33bf..0000000000000000000000000000000000000000
--- a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/losses/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .stft_loss import * # NOQA
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py
deleted file mode 100644
index 40cf18131810307157a9a7d1f6d5922b00fd73d5..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from ..common.optim import SGD as optimizer
-from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
-from ..common.data.coco_panoptic_separated import dataloader
-from ..common.models.panoptic_fpn import model
-from ..common.train import train
-
-model.backbone.bottom_up.freeze_at = 2
-train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/keypoint_rcnn_fpn.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/keypoint_rcnn_fpn.py
deleted file mode 100644
index 56b3994df249884d4816fc9a5c7f553a9ab6f400..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/keypoint_rcnn_fpn.py
+++ /dev/null
@@ -1,33 +0,0 @@
-from detectron2.config import LazyCall as L
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.roi_heads import KRCNNConvDeconvUpsampleHead
-
-from .mask_rcnn_fpn import model
-
-[model.roi_heads.pop(x) for x in ["mask_in_features", "mask_pooler", "mask_head"]]
-
-model.roi_heads.update(
- num_classes=1,
- keypoint_in_features=["p2", "p3", "p4", "p5"],
- keypoint_pooler=L(ROIPooler)(
- output_size=14,
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- keypoint_head=L(KRCNNConvDeconvUpsampleHead)(
- input_shape=ShapeSpec(channels=256, width=14, height=14),
- num_keypoints=17,
- conv_dims=[512] * 8,
- loss_normalizer="visible",
- ),
-)
-
-# Detectron1 uses 2000 proposals per-batch, but this option is per-image in detectron2.
-# 1000 proposals per-image is found to hurt box AP.
-# Therefore we increase it to 1500 per-image.
-model.proposal_generator.post_nms_topk = (1500, 1000)
-
-# Keypoint AP degrades (though box AP improves) when using plain L1 loss
-model.roi_heads.box_predictor.smooth_l1_beta = 0.5
diff --git a/spaces/cchuang2009/News-Forum/app.py b/spaces/cchuang2009/News-Forum/app.py
deleted file mode 100644
index cee952e680289672e550d61bb54af422b35f3f53..0000000000000000000000000000000000000000
--- a/spaces/cchuang2009/News-Forum/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# load required modules
-import pandas as pd
-from datetime import datetime
-import streamlit as st
-
-st.set_page_config(page_title="News Archive", page_icon=":newspaper:", layout="wide")
-
-# Read CSV file
-#df = pd.read_csv("news.csv")
-df = pd.read_csv("https://raw.githubusercontent.com/cchuang2009/streamlit-News-Forum/main/news.csv")
-
-# Convert date column to datetime
-df["Published_date"] = pd.to_datetime(df["Published_date"])
-
-# Sort by date, newest first
-df = df.sort_values("Published_date", ascending=False)
-
-# Set default selection to the current year and month
-now = datetime.now()
-default_year_month = now.strftime("%Y-%b")
-
-# Get unique year-month combinations from the dataframe to build the month options
-year_months = df["Published_date"].dt.strftime("%Y-%b").unique()
-months = sorted(year_months, reverse=True)
-
-# Add the current year-month to the months list if it's not already there
-if default_year_month not in months:
- months.append(default_year_month)
-
-
-# Sidebar menu for selecting the month
-selected_month = st.sidebar.selectbox("Select Month", months, index=months.index(default_year_month))
-
-# Keyword search box
-search_term = st.sidebar.text_input("Search News", "")
-
-# Filter the dataframe by the selected month and search term
-filtered_df = df[(df["Published_date"].dt.strftime("%Y-%b") == selected_month) & (df["Title"].str.contains(search_term, case=False))]
-
-# Display the selected news
-st.write(f"## News for :blue[{selected_month}]")
-
-for title, source, date in filtered_df[["Title", "Source", "Published_date"]].itertuples(index=False):
- with st.expander(f'**{title}**'):
- st.write(f"{source}", unsafe_allow_html=True)
- st.write(f"*Published on :orange[{date.date()}]*")
-
-# Show the last 5 news articles in the sidebar
-st.sidebar.markdown("## Last 5 News Articles")
-last_5_articles = df.head()[["Title", "Source", "Published_date"]].values.tolist()[::-1]
-for article in last_5_articles:
- title, source, date = article
- st.sidebar.markdown(f"[{title}] - *Published on :orange[{date.date()}]*")
-
-# If no month is selected, show the most recent news article in the main area
-if not selected_month:
- st.write(f"# Latest News: [{df.iloc[0]['Title']}]({df.iloc[0]['Source']})")
diff --git a/spaces/ccolas/TastyPiano/src/music/pipeline/synth2midi.py b/spaces/ccolas/TastyPiano/src/music/pipeline/synth2midi.py
deleted file mode 100644
index b1257a3fa3ec84e51f2ef4bd9861b7d9ede68219..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/music/pipeline/synth2midi.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import mido
-mido.set_backend('mido.backends.pygame')
-from mido import Message, MidiFile, MidiTrack
-import time
-import pynput
-import sys
-sys.path.append('../../')
-from src.music.config import SYNTH_RECORDED_MIDI_PATH
-from datetime import datetime
-
-#TODO: debug this with other cable, keyboard and sound card
-global KEY_PRESSED
-KEY_PRESSED = None
-
-def on_press(key):
- global KEY_PRESSED
- try:
- KEY_PRESSED = key.name
- except:
- pass
-
-def on_release(key):
- global KEY_PRESSED
- KEY_PRESSED = None
-
-
-def is_pressed(key):
- global KEY_PRESSED
- return KEY_PRESSED == key
-
-# keyboard listener
-listener = pynput.keyboard.Listener(on_press=on_press, on_release=on_release)
-listener.start()
-
-LEN_MIDI_RECORDINGS = 30
-class MidiRecorder:
- def __init__(self, place='', len_midi_recordings=LEN_MIDI_RECORDINGS):
- self.place = place
- self.len_midi_recordings = len_midi_recordings
- self.port = mido.open_input(mido.get_input_names()[0])
-
- def get_filename(self):
- now = datetime.now()
- return self.place + '_' + now.strftime("%b_%d_%Y_%Hh%Mm%Ss") + '.mid'
-
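-    # Drain and return all MIDI messages currently pending on the input port (non-blocking).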
- def read_last_midi_msgs(self):
- return list(self.port.iter_pending())
-
- def live_read(self):
- while not is_pressed('esc'):
- for msg in self.read_last_midi_msgs():
- print(msg)
-
- def check_if_recording_started(self, msgs, t_init):
- started = False
- if len(msgs) > 0:
- for m in msgs:
- if m.type == 'note_on':
- started = True
- t_init = time.time()
- return started, t_init
-
- def create_empty_midi(self):
- mid = MidiFile()
- track = MidiTrack()
- mid.tracks.append(track)
- track.append(Message('program_change', program=0, time=0))
- return mid, track
-
- def record_next_N_seconds(self, n=None, saving_path=None):
- if saving_path is None:
-            saving_path = SYNTH_RECORDED_MIDI_PATH + self.get_filename()
- if n is None:
- n = self.len_midi_recordings
-
-        print(f'Recording the next {n} secs.'
- f'\n\tRecording starts when the first key is pressed;'
- f'\n\tPress Enter to end the recording;'
- f'\n\tPress BackSpace (<--) to cancel the recording;'
- f'\n\tSaving to {saving_path}')
- try:
- mid, track = self.create_empty_midi()
- started = False
- backspace_pressed = False
- t_init = time.time()
- while not is_pressed('enter') and (time.time() - t_init) < n:
- msgs = self.read_last_midi_msgs()
- if not started:
- started, t_init = self.check_if_recording_started(msgs, t_init)
- if started:
- print("\n\t--> First note pressed, it's on!")
- for m in msgs:
- print(m)
- if m.type == 'note_on' and m.velocity == 0:
- m_off = Message(type='note_off', velocity=127, note=m.note, channel=m.channel, time=m.time)
- track.append(m_off)
- track.append(m)
- if is_pressed('backspace'):
- backspace_pressed = True
- print('\n \t--> Recording cancelled! (you pressed BackSpace)')
- break
- # save the file
- if not backspace_pressed and len(mid.tracks[0]) > 0:
- mid.save(saving_path)
- print(f'\n--> Recording saved, duration: {mid.length:.2f} secs, {len(mid.tracks[0])} events.')
- except:
- print('\n --> The recording failed.')
-
-
- def run(self):
- # with pynput.Listener(
- # on_press=self.on_press) as listener:
- # listener.join()
- ready_msg = False
- print('Starting the recording loop!\n\tPress BackSpace to cancel the current recording;\n\tPress Esc to quit the loop (only works while not recording)')
- while True:
- if not ready_msg:
- print('-------\nReady to record!')
- print('Press space to start a recording\n')
- ready_msg = True
-
- if is_pressed('space'):
- self.record_next_N_seconds()
- ready_msg = False
- if is_pressed('esc'):
- print('End of the recording session. See you soon!')
- break
-
-
-midi_recorder = MidiRecorder(place='home')
-midi_recorder.live_read()
-# midi_recorder.run()
-
-
-# try:
-# controls[msg.control] = msg.value
-# except:
-# notes.append(msg.note)
-# port = mido.open_input()
-# while True:
-# for msg in port.iter_pending():
-# print(msg)
-#
-# print('start pause')
-# time.sleep(5)
-# print('stop pause')
\ No newline at end of file
diff --git a/spaces/cfwef/gpt/theme.py b/spaces/cfwef/gpt/theme.py
deleted file mode 100644
index 1a186aacabf5d982cbe9426a198f2a0b4bdef9d1..0000000000000000000000000000000000000000
--- a/spaces/cfwef/gpt/theme.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import gradio as gr
-
-# Colors available in gradio themes (gr.themes.utils.colors.*):
-# slate, gray, zinc, neutral, stone, red, orange, amber, yellow, lime, green,
-# emerald, teal, cyan, sky, blue, indigo, violet, purple, fuchsia, pink, rose
-
-def adjust_theme():
- try:
- color_er = gr.themes.utils.colors.pink
- set_theme = gr.themes.Default(
- primary_hue=gr.themes.utils.colors.orange,
- neutral_hue=gr.themes.utils.colors.gray,
- font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")],
- font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")])
- set_theme.set(
- # Colors
- input_background_fill_dark="*neutral_800",
- # Transition
- button_transition="none",
- # Shadows
- button_shadow="*shadow_drop",
- button_shadow_hover="*shadow_drop_lg",
- button_shadow_active="*shadow_inset",
- input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset",
- input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset",
- input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset",
- checkbox_label_shadow="*shadow_drop",
- block_shadow="*shadow_drop",
- form_gap_width="1px",
- # Button borders
- input_border_width="1px",
- input_background_fill="white",
- # Gradients
- stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)",
- stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)",
- error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)",
- error_background_fill_dark="*background_fill_primary",
- checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)",
- checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
- checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)",
- checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
- button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)",
- button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)",
- button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)",
- button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)",
- button_primary_border_color_dark="*primary_500",
- button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)",
- button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)",
- button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)",
- button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)",
- button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})",
- button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})",
- button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})",
- button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})",
- button_cancel_border_color=color_er.c200,
- button_cancel_border_color_dark=color_er.c600,
- button_cancel_text_color=color_er.c600,
- button_cancel_text_color_dark="white",
- )
- except:
-        set_theme = None; print('gradio version is too old; custom fonts and colors are not available')
- return set_theme
-
-advanced_css = """
-/* Tables: 1em vertical margin, collapsed cell borders, and visible empty cells. */
-.markdown-body table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-
-/* Table cells: 5px padding and a 1.2px border using --border-color-primary. */
-.markdown-body th, .markdown-body td {
- border: 1.2px solid var(--border-color-primary);
- padding: 5px;
-}
-
-/* Table header background: rgba(175,184,193,0.2). */
-.markdown-body thead {
- background-color: rgba(175,184,193,0.2);
-}
-
-/* Table header cells: 0.5em / 0.2em padding. */
-.markdown-body thead th {
- padding: .5em .2em;
-}
-
-/* Adjust the default list indentation so list markers align with the text. */
-.markdown-body ol, .markdown-body ul {
- padding-inline-start: 2em !important;
-}
-
-/* Chat bubble styling: rounded corners, max width, shadows, etc. */
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- /* padding: var(--spacing-xl) !important; */
- /* font-size: var(--text-md) !important; */
- /* line-height: var(--line-md) !important; */
- /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
- /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
-}
-[data-testid = "bot"] {
- max-width: 95%;
- /* width: auto !important; */
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 100%;
- /* width: auto !important; */
- border-bottom-right-radius: 0 !important;
-}
-
-/* Inline code: light gray background, rounded corners, and spacing. */
-.markdown-body code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks: background color, padding, margins, and rounded corners. */
-.markdown-body pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: rgba(175,184,193,0.2);
- border-radius: 10px;
- padding: 1em;
- margin: 1em 2em 1em 0.5em;
-}
-"""
\ No newline at end of file
diff --git a/spaces/chansung/llm-discord-bot/health_check_200.py b/spaces/chansung/llm-discord-bot/health_check_200.py
deleted file mode 100644
index 15d7a435d6e6af9d4d888e68233d84437d2df38e..0000000000000000000000000000000000000000
--- a/spaces/chansung/llm-discord-bot/health_check_200.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import sys
-from http.server import BaseHTTPRequestHandler, HTTPServer
-
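-# Minimal handler that answers every GET with an empty HTTP 200 response so the Space passes platform health checks.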
-class S(BaseHTTPRequestHandler):
- def _set_headers(self):
- self.send_response(200)
- self.send_header('Content-type', 'application/json')
- self.end_headers()
-
- def do_GET(self):
- self._set_headers()
- self.wfile.write(b"")
-
-def run_dummy_server(server_class=HTTPServer, handler_class=S, port=7860):
- server_address = ('', port)
- httpd = server_class(server_address, handler_class)
- print('Starting httpd...')
- httpd.serve_forever()
-
-run_dummy_server()
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/src/fsner/__init__.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/src/fsner/__init__.py
deleted file mode 100644
index 130813cc119c1689912b3de28abb59cb18a92045..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/src/fsner/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .model import FSNERModel
-from .tokenizer_utils import FSNERTokenizerUtils
-
-
-__all__ = ["FSNERModel", "FSNERTokenizerUtils"]
diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/modeling_tf_outputs.py b/spaces/chendl/compositional_test/transformers/src/transformers/modeling_tf_outputs.py
deleted file mode 100644
index f8148b169543fa022e67bbc56f5d75291ea7612d..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/src/transformers/modeling_tf_outputs.py
+++ /dev/null
@@ -1,989 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import warnings
-from dataclasses import dataclass
-from typing import List, Optional, Tuple
-
-import tensorflow as tf
-
-from .utils import ModelOutput
-
-
-@dataclass
-class TFBaseModelOutput(ModelOutput):
- """
- Base class for model's outputs, with potential hidden states and attentions.
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- last_hidden_state: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFBaseModelOutputWithNoAttention(ModelOutput):
- """
- Base class for model's outputs, with potential hidden states.
-
- Args:
- last_hidden_state (`tf.Tensor` shape `(batch_size, num_channels, height, width)`):
- Sequence of hidden-states at the output of the last layer of the model.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
- the output of each layer) of shape `(batch_size, num_channels, height, width)`.
-
- Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- """
-
- last_hidden_state: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor, ...]] = None
-
-
-@dataclass
-class TFBaseModelOutputWithPooling(ModelOutput):
- """
- Base class for model's outputs that also contains a pooling of the last hidden states.
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`):
- Last layer hidden-state of the first token of the sequence (classification token) further processed by a
- Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
- prediction (classification) objective during pretraining.
-
- This output is usually *not* a good summary of the semantic content of the input, you're often better with
- averaging or pooling the sequence of hidden-states for the whole input sequence.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- last_hidden_state: tf.Tensor = None
- pooler_output: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFBaseModelOutputWithPoolingAndNoAttention(ModelOutput):
- """
- Base class for model's outputs that also contains a pooling of the last hidden states.
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`):
- Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`):
- Last layer hidden-state after a pooling operation on the spatial dimensions.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
- the output of each layer) of shape `(batch_size, num_channels, height, width)`.
-
- Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- """
-
- last_hidden_state: tf.Tensor = None
- pooler_output: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor, ...]] = None
-
-
-@dataclass
-class TFBaseModelOutputWithPoolingAndCrossAttentions(ModelOutput):
- """
- Base class for model's outputs that also contains a pooling of the last hidden states.
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`):
- Last layer hidden-state of the first token of the sequence (classification token) further processed by a
- Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
- prediction (classification) objective during pretraining.
-
- This output is usually *not* a good summary of the semantic content of the input, you're often better with
- averaging or pooling the sequence of hidden-states for the whole input sequence.
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
- `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
- weighted average in the cross-attention heads.
- """
-
- last_hidden_state: tf.Tensor = None
- pooler_output: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
- cross_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFBaseModelOutputWithPast(ModelOutput):
- """
- Base class for model's outputs that may also contain a past key/values (to speed up sequential decoding).
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
-
- If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
- hidden_size)` is output.
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
- `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- last_hidden_state: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFBaseModelOutputWithCrossAttentions(ModelOutput):
- """
- Base class for model's outputs, with potential hidden states and attentions.
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
- weighted average in the cross-attention heads.
- """
-
- last_hidden_state: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
- cross_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFBaseModelOutputWithPastAndCrossAttentions(ModelOutput):
- """
- Base class for model's outputs that may also contain a past key/values (to speed up sequential decoding).
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
-
- If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
- hidden_size)` is output.
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
- `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
- weighted average in the cross-attention heads.
- """
-
- last_hidden_state: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
- cross_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSeq2SeqModelOutput(ModelOutput):
- """
- Base class for model encoder's outputs that also contains: pre-computed hidden states that can speed up sequential
- decoding.
-
- Args:
- last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the decoder of the model.
-
- If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
- hidden_size)` is output.
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
- used (see `past_key_values` input) to speed up sequential decoding.
- decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
- weighted average in the cross-attention heads.
- encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder of the model.
- encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- """
-
- last_hidden_state: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- decoder_attentions: Optional[Tuple[tf.Tensor]] = None
- cross_attentions: Optional[Tuple[tf.Tensor]] = None
- encoder_last_hidden_state: Optional[tf.Tensor] = None
- encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- encoder_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFCausalLMOutput(ModelOutput):
- """
- Base class for causal language model (or autoregressive) outputs.
-
- Args:
- loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
- Language modeling loss (for next-token prediction).
- logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFCausalLMOutputWithPast(ModelOutput):
- """
- Base class for causal language model (or autoregressive) outputs.
-
- Args:
- loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
- Language modeling loss (for next-token prediction).
- logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
- `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFCausalLMOutputWithCrossAttentions(ModelOutput):
- """
- Base class for causal language model (or autoregressive) outputs.
-
- Args:
- loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
- Language modeling loss (for next-token prediction).
- logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
- weighted average in the cross-attention heads.
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
- `past_key_values` input) to speed up sequential decoding.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
- cross_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFMaskedLMOutput(ModelOutput):
- """
- Base class for masked language models outputs.
-
- Args:
- loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
- Masked language modeling (MLM) loss.
- logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSeq2SeqLMOutput(ModelOutput):
- """
- Base class for sequence-to-sequence language models outputs.
-
- Args:
- loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
- Language modeling loss.
- logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
- used (see `past_key_values` input) to speed up sequential decoding.
- decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
- weighted average in the cross-attention heads.
- encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder of the model.
- encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- decoder_attentions: Optional[Tuple[tf.Tensor]] = None
- cross_attentions: Optional[Tuple[tf.Tensor]] = None
- encoder_last_hidden_state: Optional[tf.Tensor] = None
- encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- encoder_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFNextSentencePredictorOutput(ModelOutput):
- """
- Base class for outputs of models predicting if two sentences are consecutive or not.
-
- Args:
- loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `next_sentence_label` is provided):
- Next sentence prediction loss.
- logits (`tf.Tensor` of shape `(batch_size, 2)`):
- Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
- before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSequenceClassifierOutput(ModelOutput):
- """
- Base class for outputs of sentence classification models.
-
- Args:
- loss (`tf.Tensor` of shape `(batch_size, )`, *optional*, returned when `labels` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSeq2SeqSequenceClassifierOutput(ModelOutput):
- """
- Base class for outputs of sequence-to-sequence sentence classification models.
-
- Args:
- loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `label` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
- used (see `past_key_values` input) to speed up sequential decoding.
- decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
- encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder of the model.
- encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- decoder_attentions: Optional[Tuple[tf.Tensor]] = None
- cross_attentions: Optional[Tuple[tf.Tensor]] = None
- encoder_last_hidden_state: Optional[tf.Tensor] = None
- encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- encoder_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSemanticSegmenterOutput(ModelOutput):
- """
- Base class for outputs of semantic segmentation models.
-
- Args:
- loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (`tf.Tensor` of shape `(batch_size, config.num_labels, logits_height, logits_width)`):
- Classification scores for each pixel.
-
-
-
- The logits returned do not necessarily have the same size as the `pixel_values` passed as inputs. This is
- to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the
- original image size as post-processing. You should always check your logits shape and resize as needed.
-
-
-
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
- the output of each layer) of shape `(batch_size, patch_size, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSemanticSegmenterOutputWithNoAttention(ModelOutput):
- """
- Base class for outputs of semantic segmentation models that do not output attention scores.
-
- Args:
- loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (`tf.Tensor` of shape `(batch_size, config.num_labels, logits_height, logits_width)`):
- Classification scores for each pixel.
-
-
-
- The logits returned do not necessarily have the same size as the `pixel_values` passed as inputs. This is
- to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the
- original image size as post-processing. You should always check your logits shape and resize as needed.
-
-
-
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
- the output of each layer) of shape `(batch_size, patch_size, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFImageClassifierOutput(ModelOutput):
- """
- Base class for outputs of image classification models.
-
- Args:
- loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
- the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called
- feature maps) of the model at the output of each stage.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFMultipleChoiceModelOutput(ModelOutput):
- """
- Base class for outputs of multiple choice models.
-
- Args:
- loss (`tf.Tensor` of shape *(batch_size, )*, *optional*, returned when `labels` is provided):
- Classification loss.
- logits (`tf.Tensor` of shape `(batch_size, num_choices)`):
- *num_choices* is the second dimension of the input tensors. (see *input_ids* above).
-
- Classification scores (before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFTokenClassifierOutput(ModelOutput):
- """
- Base class for outputs of token classification models.
-
- Args:
- loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of unmasked labels, returned when `labels` is provided) :
- Classification loss.
- logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.num_labels)`):
- Classification scores (before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFQuestionAnsweringModelOutput(ModelOutput):
- """
- Base class for outputs of question answering models.
-
- Args:
- loss (`tf.Tensor` of shape `(batch_size, )`, *optional*, returned when `start_positions` and `end_positions` are provided):
- Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
- start_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
- Span-start scores (before SoftMax).
- end_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
- Span-end scores (before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- start_logits: tf.Tensor = None
- end_logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSeq2SeqQuestionAnsweringModelOutput(ModelOutput):
- """
- Base class for outputs of sequence-to-sequence question answering models.
-
- Args:
- loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
- Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
- start_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
- Span-start scores (before SoftMax).
- end_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
- Span-end scores (before SoftMax).
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
- used (see `past_key_values` input) to speed up sequential decoding.
- decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder of the model.
- encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
- self-attention heads.
- """
-
- loss: Optional[tf.Tensor] = None
- start_logits: tf.Tensor = None
- end_logits: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- decoder_attentions: Optional[Tuple[tf.Tensor]] = None
- encoder_last_hidden_state: Optional[tf.Tensor] = None
- encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
- encoder_attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFSequenceClassifierOutputWithPast(ModelOutput):
- """
- Base class for outputs of sentence classification models.
-
- Args:
- loss (`tf.Tensor` of shape `(batch_size, )`, *optional*, returned when `labels` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
- sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
- `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
- `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- past_key_values: Optional[List[tf.Tensor]] = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
-
-@dataclass
-class TFImageClassifierOutputWithNoAttention(ModelOutput):
- """
- Base class for outputs of image classification models.
-
- Args:
- loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
- the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called
- feature maps) of the model at the output of each stage.
- """
-
- loss: Optional[tf.Tensor] = None
- logits: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor, ...]] = None
-
-
-@dataclass
-class TFMaskedImageModelingOutput(ModelOutput):
- """
- Base class for outputs of masked image completion / in-painting models.
-
- Args:
- loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `bool_masked_pos` is provided):
- Reconstruction loss.
- reconstruction (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`):
- Reconstructed / completed images.
- hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when
- `config.output_hidden_states=True`):
- Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
- the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called
- feature maps) of the model at the output of each stage.
- attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when
- `config.output_attentions=True`):
- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`.
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- loss: Optional[tf.Tensor] = None
- reconstruction: tf.Tensor = None
- hidden_states: Optional[Tuple[tf.Tensor]] = None
- attentions: Optional[Tuple[tf.Tensor]] = None
-
- @property
- def logits(self):
- warnings.warn(
- "logits attribute is deprecated and will be removed in version 5 of Transformers."
- " Please use the reconstruction attribute to retrieve the final output instead.",
- FutureWarning,
- )
- return self.reconstruction
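All of the classes above derive from `ModelOutput`, which behaves like both a dataclass and an ordered mapping: fields can be read by attribute, by key, or by position, and fields left as `None` are skipped when the output is converted to a tuple. A minimal sketch of how they are typically consumed (the BERT checkpoint is only an example; any TF checkpoint behaves the same way):

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("hello world", return_tensors="tf")
# Asking for hidden states / attentions populates the corresponding Optional fields.
outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

print(type(outputs).__name__)            # e.g. TFBaseModelOutputWithPoolingAndCrossAttentions
print(outputs.last_hidden_state.shape)   # (batch_size, sequence_length, hidden_size)
print(outputs["pooler_output"].shape)    # key access works too
print(len(outputs.hidden_states))        # embeddings + one tensor per layer
assert outputs[0] is outputs.last_hidden_state  # positional access
```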
diff --git a/spaces/chopey/DhivehiTransliteration/app.py b/spaces/chopey/DhivehiTransliteration/app.py
deleted file mode 100644
index 03e80c8d17a7d8ddc2d5f9532b068e13f4daf9e0..0000000000000000000000000000000000000000
--- a/spaces/chopey/DhivehiTransliteration/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import gradio as gr
-import torch
-from transformers import T5Tokenizer, T5ForConditionalGeneration, Trainer, TrainingArguments
-
-
-def transliteration(source_word):
- #return "Hello " + name + "!!"
- #source_word = "Manik aai Ameela maqaamun vakikuran hushahalhaifi"
- source_word_str = source_word.lower()
-
- tokenizer = T5Tokenizer.from_pretrained("chopey/dvt5-base")
- model = T5ForConditionalGeneration.from_pretrained('chopey/model_t5_base')
-
- input_ids = tokenizer.encode(source_word_str, return_tensors="pt")
- output_ids = model.generate(input_ids, max_length=512)
- transliteration = tokenizer.decode(output_ids[0], skip_special_tokens=True)
-
- return transliteration
-
-# Expose the transliteration function through a simple Gradio text-to-text interface.
-iface = gr.Interface(fn=transliteration, inputs="text", outputs="text")
-iface.launch()
-
-
-
-
-
-
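Note that the function above reloads both the tokenizer and the model from the Hub on every request, which is the main reason each call is slow. A sketch of the more common pattern — same checkpoints as the deleted file, loaded once at import time:

```python
import gradio as gr
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load once at startup instead of inside the request handler.
tokenizer = T5Tokenizer.from_pretrained("chopey/dvt5-base")
model = T5ForConditionalGeneration.from_pretrained("chopey/model_t5_base")

def transliteration(source_word: str) -> str:
    input_ids = tokenizer.encode(source_word.lower(), return_tensors="pt")
    output_ids = model.generate(input_ids, max_length=512)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

gr.Interface(fn=transliteration, inputs="text", outputs="text").launch()
```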
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/FontFile.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/FontFile.py
deleted file mode 100644
index 5ec0a6632e3182382467688662ebc5e6c324da91..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/FontFile.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# base class for raster font file parsers
-#
-# history:
-# 1997-06-05 fl created
-# 1997-08-19 fl restrict image width
-#
-# Copyright (c) 1997-1998 by Secret Labs AB
-# Copyright (c) 1997-1998 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import os
-
-from . import Image, _binary
-
-WIDTH = 800
-
-
-def puti16(fp, values):
- """Write network order (big-endian) 16-bit sequence"""
- for v in values:
- if v < 0:
- v += 65536
- fp.write(_binary.o16be(v))
-
-
-class FontFile:
- """Base class for raster font file handlers."""
-
- bitmap = None
-
- def __init__(self):
- self.info = {}
- self.glyph = [None] * 256
-
- def __getitem__(self, ix):
- return self.glyph[ix]
-
- def compile(self):
- """Create metrics and bitmap"""
-
- if self.bitmap:
- return
-
- # create bitmap large enough to hold all data
- h = w = maxwidth = 0
- lines = 1
- for glyph in self:
- if glyph:
- d, dst, src, im = glyph
- h = max(h, src[3] - src[1])
- w = w + (src[2] - src[0])
- if w > WIDTH:
- lines += 1
- w = src[2] - src[0]
- maxwidth = max(maxwidth, w)
-
- xsize = maxwidth
- ysize = lines * h
-
- if xsize == 0 and ysize == 0:
- return ""
-
- self.ysize = h
-
- # paste glyphs into bitmap
- self.bitmap = Image.new("1", (xsize, ysize))
- self.metrics = [None] * 256
- x = y = 0
- for i in range(256):
- glyph = self[i]
- if glyph:
- d, dst, src, im = glyph
- xx = src[2] - src[0]
- # yy = src[3] - src[1]
- x0, y0 = x, y
- x = x + xx
- if x > WIDTH:
- x, y = 0, y + h
- x0, y0 = x, y
- x = xx
- s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0
- self.bitmap.paste(im.crop(src), s)
- self.metrics[i] = d, dst, s
-
- def save(self, filename):
- """Save font"""
-
- self.compile()
-
- # font data
- self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG")
-
- # font metrics
- with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp:
- fp.write(b"PILfont\n")
- fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!!
- fp.write(b"DATA\n")
- for id in range(256):
- m = self.metrics[id]
- if not m:
- puti16(fp, [0] * 10)
- else:
- puti16(fp, m[0] + m[1] + m[2])
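`FontFile` itself is only the base class; PIL's concrete parsers (for example `BdfFontFile` or `PcfFontFile`) subclass it, and `save()` writes the `.pil` metrics file plus the `.pbm` bitmap that `ImageFont.load()` reads back. A sketch of the typical round trip — the `.bdf` filename is illustrative:

```python
from PIL import BdfFontFile, ImageFont

with open("courier.bdf", "rb") as fp:
    font_file = BdfFontFile.BdfFontFile(fp)   # a FontFile subclass

font_file.save("courier")                     # writes courier.pil + courier.pbm
pil_font = ImageFont.load("courier.pil")      # usable as ImageDraw.text(..., font=pil_font)
```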
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/summary.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/summary.py
deleted file mode 100644
index ef152cad769074d092e34b03a337b5c896560415..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/summary.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from typing import Optional
-
-from clickhouse_connect.datatypes.registry import get_from_name
-
-from clickhouse_connect.driver.query import QueryResult
-
-
-class QuerySummary:
- summary = {}
-
- def __init__(self, summary: Optional[dict] = None):
- if summary is not None:
- self.summary = summary
-
- @property
- def written_rows(self) -> int:
- return int(self.summary.get('written_rows', 0))
-
- def written_bytes(self) -> int:
- return int(self.summary.get('written_bytes', 0))
-
- def query_id(self) -> str:
- return self.summary.get('query_id', '')
-
- def as_query_result(self) -> QueryResult:
- data = []
- column_names = []
- column_types = []
- str_type = get_from_name('String')
- int_type = get_from_name('Int64')
- for key, value in self.summary.items():
- column_names.append(key)
- if value.isnumeric():
- data.append(int(value))
- column_types.append(int_type)
- else:
- data.append(value)
- column_types.append(str_type)
- return QueryResult([data], column_names=tuple(column_names), column_types=tuple(column_types))
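`QuerySummary` wraps the per-query statistics the ClickHouse server reports back to the driver, and write-style calls such as `client.insert` typically hand it back. A rough sketch — host and table names are placeholders, and note that in this vendored version only `written_rows` is a property, while `written_bytes()` and `query_id()` are plain methods:

```python
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")  # placeholder connection
summary = client.insert("my_table", [[1, "a"], [2, "b"]], column_names=["id", "val"])

print(summary.written_rows)         # property
print(summary.written_bytes())      # plain method in this version
result = summary.as_query_result()  # single-row QueryResult built from the summary dict
```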
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cycler.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cycler.py
deleted file mode 100644
index f86b68de64b8066b98d8fa2d92bf5983ea582237..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cycler.py
+++ /dev/null
@@ -1,501 +0,0 @@
-"""
-Cycler
-======
-
-Cycling through combinations of values, producing dictionaries.
-
-You can add cyclers::
-
- from cycler import cycler
- cc = (cycler(color=list('rgb')) +
- cycler(linestyle=['-', '--', '-.']))
- for d in cc:
- print(d)
-
-Results in::
-
- {'color': 'r', 'linestyle': '-'}
- {'color': 'g', 'linestyle': '--'}
- {'color': 'b', 'linestyle': '-.'}
-
-
-You can multiply cyclers::
-
- from cycler import cycler
- cc = (cycler(color=list('rgb')) *
- cycler(linestyle=['-', '--', '-.']))
- for d in cc:
- print(d)
-
-Results in::
-
- {'color': 'r', 'linestyle': '-'}
- {'color': 'r', 'linestyle': '--'}
- {'color': 'r', 'linestyle': '-.'}
- {'color': 'g', 'linestyle': '-'}
- {'color': 'g', 'linestyle': '--'}
- {'color': 'g', 'linestyle': '-.'}
- {'color': 'b', 'linestyle': '-'}
- {'color': 'b', 'linestyle': '--'}
- {'color': 'b', 'linestyle': '-.'}
-"""
-
-
-import copy
-from functools import reduce
-from itertools import product, cycle
-from operator import mul, add
-
-__version__ = '0.10.0'
-
-
-def _process_keys(left, right):
- """
- Helper function to compose cycler keys.
-
- Parameters
- ----------
- left, right : iterable of dictionaries or None
- The cyclers to be composed.
-
- Returns
- -------
- keys : set
- The keys in the composition of the two cyclers.
- """
- l_peek = next(iter(left)) if left is not None else {}
- r_peek = next(iter(right)) if right is not None else {}
- l_key = set(l_peek.keys())
- r_key = set(r_peek.keys())
- if l_key & r_key:
- raise ValueError("Can not compose overlapping cycles")
- return l_key | r_key
-
-
-def concat(left, right):
- r"""
- Concatenate `Cycler`\s, as if chained using `itertools.chain`.
-
- The keys must match exactly.
-
- Examples
- --------
- >>> num = cycler('a', range(3))
- >>> let = cycler('a', 'abc')
- >>> num.concat(let)
- cycler('a', [0, 1, 2, 'a', 'b', 'c'])
-
- Returns
- -------
- `Cycler`
- The concatenated cycler.
- """
- if left.keys != right.keys:
- raise ValueError("Keys do not match:\n"
- "\tIntersection: {both!r}\n"
- "\tDisjoint: {just_one!r}".format(
- both=left.keys & right.keys,
- just_one=left.keys ^ right.keys))
- _l = left.by_key()
- _r = right.by_key()
- return reduce(add, (_cycler(k, _l[k] + _r[k]) for k in left.keys))
-
-
-class Cycler:
- """
- Composable cycles.
-
-    This class has composition methods:
-
- ``+``
- for 'inner' products (zip)
-
- ``+=``
- in-place ``+``
-
- ``*``
- for outer products (`itertools.product`) and integer multiplication
-
- ``*=``
- in-place ``*``
-
- and supports basic slicing via ``[]``.
-
- Parameters
- ----------
- left, right : Cycler or None
- The 'left' and 'right' cyclers.
- op : func or None
- Function which composes the 'left' and 'right' cyclers.
- """
-
- def __call__(self):
- return cycle(self)
-
- def __init__(self, left, right=None, op=None):
- """
- Semi-private init.
-
- Do not use this directly, use `cycler` function instead.
- """
- if isinstance(left, Cycler):
- self._left = Cycler(left._left, left._right, left._op)
- elif left is not None:
- # Need to copy the dictionary or else that will be a residual
- # mutable that could lead to strange errors
- self._left = [copy.copy(v) for v in left]
- else:
- self._left = None
-
- if isinstance(right, Cycler):
- self._right = Cycler(right._left, right._right, right._op)
- elif right is not None:
- # Need to copy the dictionary or else that will be a residual
- # mutable that could lead to strange errors
- self._right = [copy.copy(v) for v in right]
- else:
- self._right = None
-
- self._keys = _process_keys(self._left, self._right)
- self._op = op
-
- def __contains__(self, k):
- return k in self._keys
-
- @property
- def keys(self):
- """The keys this Cycler knows about."""
- return set(self._keys)
-
- def change_key(self, old, new):
- """
- Change a key in this cycler to a new name.
- Modification is performed in-place.
-
- Does nothing if the old key is the same as the new key.
- Raises a ValueError if the new key is already a key.
- Raises a KeyError if the old key isn't a key.
- """
- if old == new:
- return
- if new in self._keys:
- raise ValueError(
- "Can't replace {old} with {new}, {new} is already a key"
- .format(old=old, new=new)
- )
- if old not in self._keys:
- raise KeyError("Can't replace {old} with {new}, {old} is not a key"
- .format(old=old, new=new))
-
- self._keys.remove(old)
- self._keys.add(new)
-
- if self._right is not None and old in self._right.keys:
- self._right.change_key(old, new)
-
- # self._left should always be non-None
- # if self._keys is non-empty.
- elif isinstance(self._left, Cycler):
- self._left.change_key(old, new)
- else:
- # It should be completely safe at this point to
- # assume that the old key can be found in each
- # iteration.
- self._left = [{new: entry[old]} for entry in self._left]
-
- @classmethod
- def _from_iter(cls, label, itr):
- """
- Class method to create 'base' Cycler objects
- that do not have a 'right' or 'op' and for which
- the 'left' object is not another Cycler.
-
- Parameters
- ----------
- label : str
- The property key.
-
- itr : iterable
- Finite length iterable of the property values.
-
- Returns
- -------
- `Cycler`
- New 'base' cycler.
- """
- ret = cls(None)
- ret._left = list({label: v} for v in itr)
- ret._keys = {label}
- return ret
-
- def __getitem__(self, key):
- # TODO : maybe add numpy style fancy slicing
- if isinstance(key, slice):
- trans = self.by_key()
- return reduce(add, (_cycler(k, v[key]) for k, v in trans.items()))
- else:
- raise ValueError("Can only use slices with Cycler.__getitem__")
-
- def __iter__(self):
- if self._right is None:
- for left in self._left:
- yield dict(left)
- else:
- for a, b in self._op(self._left, self._right):
- out = {}
- out.update(a)
- out.update(b)
- yield out
-
- def __add__(self, other):
- """
- Pair-wise combine two equal length cyclers (zip).
-
- Parameters
- ----------
- other : Cycler
- """
- if len(self) != len(other):
- raise ValueError("Can only add equal length cycles, "
- f"not {len(self)} and {len(other)}")
- return Cycler(self, other, zip)
-
- def __mul__(self, other):
- """
- Outer product of two cyclers (`itertools.product`) or integer
- multiplication.
-
- Parameters
- ----------
- other : Cycler or int
- """
- if isinstance(other, Cycler):
- return Cycler(self, other, product)
- elif isinstance(other, int):
- trans = self.by_key()
- return reduce(add, (_cycler(k, v*other) for k, v in trans.items()))
- else:
- return NotImplemented
-
- def __rmul__(self, other):
- return self * other
-
- def __len__(self):
- op_dict = {zip: min, product: mul}
- if self._right is None:
- return len(self._left)
- l_len = len(self._left)
- r_len = len(self._right)
- return op_dict[self._op](l_len, r_len)
-
- def __iadd__(self, other):
- """
- In-place pair-wise combine two equal length cyclers (zip).
-
- Parameters
- ----------
- other : Cycler
- """
- if not isinstance(other, Cycler):
- raise TypeError("Cannot += with a non-Cycler object")
- # True shallow copy of self is fine since this is in-place
- old_self = copy.copy(self)
- self._keys = _process_keys(old_self, other)
- self._left = old_self
- self._op = zip
- self._right = Cycler(other._left, other._right, other._op)
- return self
-
- def __imul__(self, other):
- """
- In-place outer product of two cyclers (`itertools.product`).
-
- Parameters
- ----------
- other : Cycler
- """
- if not isinstance(other, Cycler):
- raise TypeError("Cannot *= with a non-Cycler object")
- # True shallow copy of self is fine since this is in-place
- old_self = copy.copy(self)
- self._keys = _process_keys(old_self, other)
- self._left = old_self
- self._op = product
- self._right = Cycler(other._left, other._right, other._op)
- return self
-
- def __eq__(self, other):
- if len(self) != len(other):
- return False
- if self.keys ^ other.keys:
- return False
- return all(a == b for a, b in zip(self, other))
-
- def __ne__(self, other):
- return not (self == other)
-
- __hash__ = None
-
- def __repr__(self):
- op_map = {zip: '+', product: '*'}
- if self._right is None:
- lab = self.keys.pop()
- itr = list(v[lab] for v in self)
- return f"cycler({lab!r}, {itr!r})"
- else:
- op = op_map.get(self._op, '?')
- msg = "({left!r} {op} {right!r})"
- return msg.format(left=self._left, op=op, right=self._right)
-
- def _repr_html_(self):
-        # a table showing the value of each key through a full cycle
-        output = "<table>"
-        sorted_keys = sorted(self.keys, key=repr)
-        for key in sorted_keys:
-            output += f"<th>{key!r}</th>"
-        for d in iter(self):
-            output += "<tr>"
-            for k in sorted_keys:
-                output += f"<td>{d[k]!r}</td>"
-            output += "</tr>"
-        output += "</table>"
-        return output
-
- def by_key(self):
- """
- Values by key.
-
- This returns the transposed values of the cycler. Iterating
- over a `Cycler` yields dicts with a single value for each key,
- this method returns a `dict` of `list` which are the values
- for the given key.
-
- The returned value can be used to create an equivalent `Cycler`
- using only `+`.
-
- Returns
- -------
- transpose : dict
- dict of lists of the values for each key.
- """
-
- # TODO : sort out if this is a bottle neck, if there is a better way
- # and if we care.
-
- keys = self.keys
- out = {k: list() for k in keys}
-
- for d in self:
- for k in keys:
- out[k].append(d[k])
- return out
-
- # for back compatibility
- _transpose = by_key
-
- def simplify(self):
- """
- Simplify the cycler into a sum (but no products) of cyclers.
-
- Returns
- -------
- simple : Cycler
- """
- # TODO: sort out if it is worth the effort to make sure this is
-        # balanced. Currently it is
-        # (((a + b) + c) + d) vs
-        # ((a + b) + (c + d))
-        # I would believe that there are some performance implications
- trans = self.by_key()
- return reduce(add, (_cycler(k, v) for k, v in trans.items()))
-
- concat = concat
-
-
-def cycler(*args, **kwargs):
- """
- Create a new `Cycler` object from a single positional argument,
- a pair of positional arguments, or the combination of keyword arguments.
-
- cycler(arg)
- cycler(label1=itr1[, label2=iter2[, ...]])
- cycler(label, itr)
-
- Form 1 simply copies a given `Cycler` object.
-
- Form 2 composes a `Cycler` as an inner product of the
- pairs of keyword arguments. In other words, all of the
- iterables are cycled simultaneously, as if through zip().
-
- Form 3 creates a `Cycler` from a label and an iterable.
- This is useful for when the label cannot be a keyword argument
- (e.g., an integer or a name that has a space in it).
-
- Parameters
- ----------
- arg : Cycler
- Copy constructor for Cycler (does a shallow copy of iterables).
- label : name
- The property key. In the 2-arg form of the function,
- the label can be any hashable object. In the keyword argument
- form of the function, it must be a valid python identifier.
- itr : iterable
- Finite length iterable of the property values.
- Can be a single-property `Cycler` that would
- be like a key change, but as a shallow copy.
-
- Returns
- -------
- cycler : Cycler
- New `Cycler` for the given property
-
- """
- if args and kwargs:
-        raise TypeError("cycler() can only accept positional OR keyword "
- "arguments -- not both.")
-
- if len(args) == 1:
- if not isinstance(args[0], Cycler):
- raise TypeError("If only one positional argument given, it must "
- "be a Cycler instance.")
- return Cycler(args[0])
- elif len(args) == 2:
- return _cycler(*args)
- elif len(args) > 2:
- raise TypeError("Only a single Cycler can be accepted as the lone "
- "positional argument. Use keyword arguments instead.")
-
- if kwargs:
- return reduce(add, (_cycler(k, v) for k, v in kwargs.items()))
-
-    raise TypeError("Must have at least one positional or keyword argument")
-
-
-def _cycler(label, itr):
- """
- Create a new `Cycler` object from a property name and iterable of values.
-
- Parameters
- ----------
- label : hashable
- The property key.
- itr : iterable
- Finite length iterable of the property values.
-
- Returns
- -------
- cycler : Cycler
- New `Cycler` for the given property
- """
- if isinstance(itr, Cycler):
- keys = itr.keys
- if len(keys) != 1:
- msg = "Can not create Cycler from a multi-property Cycler"
- raise ValueError(msg)
-
- lab = keys.pop()
- # Doesn't need to be a new list because
- # _from_iter() will be creating that new list anyway.
- itr = (v[lab] for v in itr)
-
- return Cycler._from_iter(label, itr)
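The file removed here matches the standalone `cycler` package published on PyPI, so the composition behaviour documented in the module docstring can still be reproduced against the installed package. A short usage sketch, assuming `cycler` is available:

```python
from cycler import cycler

# Inner product (+): zip two equal-length cycles together.
cc = cycler(color=["r", "g", "b"]) + cycler(linestyle=["-", "--", "-."])
print(len(cc))      # 3
print(cc.by_key())  # lists of values per key, e.g. {'color': ['r', 'g', 'b'], ...}

# Outer product (*): every colour combined with every linestyle (4 dicts).
for style in cycler(color=["r", "g"]) * cycler(linestyle=["-", "--"]):
    print(style)
```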
diff --git a/spaces/cihyFjudo/fairness-paper-search/Tamil Actress Yvijaya Hot Sex Photo.md b/spaces/cihyFjudo/fairness-paper-search/Tamil Actress Yvijaya Hot Sex Photo.md
deleted file mode 100644
index 1d1ac7a5e77d9da712aa960dcda1b3f8b92ef862..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Tamil Actress Yvijaya Hot Sex Photo.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/The Motu Patlu - King of Kings Full Movie Download Experience the Thrill of Nuclear Fusion in Animation.md b/spaces/cihyFjudo/fairness-paper-search/The Motu Patlu - King of Kings Full Movie Download Experience the Thrill of Nuclear Fusion in Animation.md
deleted file mode 100644
index d394254926225aaa34ecdb2bcc737c9b7e524619..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/The Motu Patlu - King of Kings Full Movie Download Experience the Thrill of Nuclear Fusion in Animation.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
the Motu Patlu - King of Kings full movie download
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageMode.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageMode.py
deleted file mode 100644
index a0b33514296df734501c553493b0a535eca49046..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageMode.py
+++ /dev/null
@@ -1,90 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# standard mode descriptors
-#
-# History:
-# 2006-03-20 fl Added
-#
-# Copyright (c) 2006 by Secret Labs AB.
-# Copyright (c) 2006 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import sys
-
-# mode descriptor cache
-_modes = None
-
-
-class ModeDescriptor:
- """Wrapper for mode strings."""
-
- def __init__(self, mode, bands, basemode, basetype, typestr):
- self.mode = mode
- self.bands = bands
- self.basemode = basemode
- self.basetype = basetype
- self.typestr = typestr
-
- def __str__(self):
- return self.mode
-
-
-def getmode(mode):
- """Gets a mode descriptor for the given mode."""
- global _modes
- if not _modes:
- # initialize mode cache
- modes = {}
- endian = "<" if sys.byteorder == "little" else ">"
- for m, (basemode, basetype, bands, typestr) in {
- # core modes
- # Bits need to be extended to bytes
- "1": ("L", "L", ("1",), "|b1"),
- "L": ("L", "L", ("L",), "|u1"),
- "I": ("L", "I", ("I",), endian + "i4"),
- "F": ("L", "F", ("F",), endian + "f4"),
- "P": ("P", "L", ("P",), "|u1"),
- "RGB": ("RGB", "L", ("R", "G", "B"), "|u1"),
- "RGBX": ("RGB", "L", ("R", "G", "B", "X"), "|u1"),
- "RGBA": ("RGB", "L", ("R", "G", "B", "A"), "|u1"),
- "CMYK": ("RGB", "L", ("C", "M", "Y", "K"), "|u1"),
- "YCbCr": ("RGB", "L", ("Y", "Cb", "Cr"), "|u1"),
- # UNDONE - unsigned |u1i1i1
- "LAB": ("RGB", "L", ("L", "A", "B"), "|u1"),
- "HSV": ("RGB", "L", ("H", "S", "V"), "|u1"),
- # extra experimental modes
- "RGBa": ("RGB", "L", ("R", "G", "B", "a"), "|u1"),
- "BGR;15": ("RGB", "L", ("B", "G", "R"), "|u1"),
- "BGR;16": ("RGB", "L", ("B", "G", "R"), "|u1"),
- "BGR;24": ("RGB", "L", ("B", "G", "R"), "|u1"),
- "LA": ("L", "L", ("L", "A"), "|u1"),
- "La": ("L", "L", ("L", "a"), "|u1"),
- "PA": ("RGB", "L", ("P", "A"), "|u1"),
- }.items():
- modes[m] = ModeDescriptor(m, bands, basemode, basetype, typestr)
- # mapping modes
- for i16mode, typestr in {
- # I;16 == I;16L, and I;32 == I;32L
- "I;16": "u2",
- "I;16BS": ">i2",
- "I;16N": endian + "u2",
- "I;16NS": endian + "i2",
- "I;32": "u4",
- "I;32L": "i4",
-        "I;32LS": "<i4",
-    >>> import sys
- >>> handler = logging.StreamHandler(sys.stdout)
- >>> formatter = LevelFormatter(
- ... fmt={
- ... '*': '[%(levelname)s] %(message)s',
- ... 'DEBUG': '%(name)s [%(levelname)s] %(message)s',
- ... 'INFO': '%(message)s',
- ... })
- >>> handler.setFormatter(formatter)
- >>> log = logging.getLogger('test')
- >>> log.setLevel(logging.DEBUG)
- >>> log.addHandler(handler)
- >>> log.debug('this uses a custom format string')
- test [DEBUG] this uses a custom format string
- >>> log.info('this also uses a custom format string')
- this also uses a custom format string
- >>> log.warning("this one uses the default format string")
- [WARNING] this one uses the default format string
- """
-
- def __init__(self, fmt=None, datefmt=None, style="%"):
- if style != "%":
- raise ValueError(
- "only '%' percent style is supported in both python 2 and 3"
- )
- if fmt is None:
- fmt = DEFAULT_FORMATS
- if isinstance(fmt, str):
- default_format = fmt
- custom_formats = {}
- elif isinstance(fmt, Mapping):
- custom_formats = dict(fmt)
- default_format = custom_formats.pop("*", None)
- else:
- raise TypeError("fmt must be a str or a dict of str: %r" % fmt)
- super(LevelFormatter, self).__init__(default_format, datefmt)
- self.default_format = self._fmt
- self.custom_formats = {}
- for level, fmt in custom_formats.items():
- level = logging._checkLevel(level)
- self.custom_formats[level] = fmt
-
- def format(self, record):
- if self.custom_formats:
- fmt = self.custom_formats.get(record.levelno, self.default_format)
- if self._fmt != fmt:
- self._fmt = fmt
- # for python >= 3.2, _style needs to be set if _fmt changes
- if PercentStyle:
- self._style = PercentStyle(fmt)
- return super(LevelFormatter, self).format(record)
-
-
-def configLogger(**kwargs):
- """A more sophisticated logging system configuation manager.
-
- This is more or less the same as :py:func:`logging.basicConfig`,
- with some additional options and defaults.
-
- The default behaviour is to create a ``StreamHandler`` which writes to
- sys.stderr, set a formatter using the ``DEFAULT_FORMATS`` strings, and add
- the handler to the top-level library logger ("fontTools").
-
- A number of optional keyword arguments may be specified, which can alter
- the default behaviour.
-
- Args:
-
- logger: Specifies the logger name or a Logger instance to be
- configured. (Defaults to "fontTools" logger). Unlike ``basicConfig``,
- this function can be called multiple times to reconfigure a logger.
- If the logger or any of its children already exists before the call is
- made, they will be reset before the new configuration is applied.
- filename: Specifies that a ``FileHandler`` be created, using the
- specified filename, rather than a ``StreamHandler``.
- filemode: Specifies the mode to open the file, if filename is
- specified. (If filemode is unspecified, it defaults to ``a``).
- format: Use the specified format string for the handler. This
- argument also accepts a dictionary of format strings keyed by
- level name, to allow customising the records appearance for
- specific levels. The special ``'*'`` key is for 'any other' level.
- datefmt: Use the specified date/time format.
- level: Set the logger level to the specified level.
- stream: Use the specified stream to initialize the StreamHandler. Note
- that this argument is incompatible with ``filename`` - if both
- are present, ``stream`` is ignored.
- handlers: If specified, this should be an iterable of already created
- handlers, which will be added to the logger. Any handler in the
- list which does not have a formatter assigned will be assigned the
- formatter created in this function.
- filters: If specified, this should be an iterable of already created
- filters. If the ``handlers`` do not already have filters assigned,
- these filters will be added to them.
- propagate: All loggers have a ``propagate`` attribute which determines
- whether to continue searching for handlers up the logging hierarchy.
- If not provided, the "propagate" attribute will be set to ``False``.
- """
- # using kwargs to enforce keyword-only arguments in py2.
- handlers = kwargs.pop("handlers", None)
- if handlers is None:
- if "stream" in kwargs and "filename" in kwargs:
- raise ValueError(
- "'stream' and 'filename' should not be " "specified together"
- )
- else:
- if "stream" in kwargs or "filename" in kwargs:
- raise ValueError(
- "'stream' or 'filename' should not be "
- "specified together with 'handlers'"
- )
- if handlers is None:
- filename = kwargs.pop("filename", None)
- mode = kwargs.pop("filemode", "a")
- if filename:
- h = logging.FileHandler(filename, mode)
- else:
- stream = kwargs.pop("stream", None)
- h = logging.StreamHandler(stream)
- handlers = [h]
- # By default, the top-level library logger is configured.
- logger = kwargs.pop("logger", "fontTools")
- if not logger or isinstance(logger, str):
- # empty "" or None means the 'root' logger
- logger = logging.getLogger(logger)
- # before (re)configuring, reset named logger and its children (if exist)
- _resetExistingLoggers(parent=logger.name)
- # use DEFAULT_FORMATS if 'format' is None
- fs = kwargs.pop("format", None)
- dfs = kwargs.pop("datefmt", None)
- # XXX: '%' is the only format style supported on both py2 and 3
- style = kwargs.pop("style", "%")
- fmt = LevelFormatter(fs, dfs, style)
- filters = kwargs.pop("filters", [])
- for h in handlers:
- if h.formatter is None:
- h.setFormatter(fmt)
- if not h.filters:
- for f in filters:
- h.addFilter(f)
- logger.addHandler(h)
- if logger.name != "root":
- # stop searching up the hierarchy for handlers
- logger.propagate = kwargs.pop("propagate", False)
- # set a custom severity level
- level = kwargs.pop("level", None)
- if level is not None:
- logger.setLevel(level)
- if kwargs:
- keys = ", ".join(kwargs.keys())
- raise ValueError("Unrecognised argument(s): %s" % keys)
-
-
-def _resetExistingLoggers(parent="root"):
- """Reset the logger named 'parent' and all its children to their initial
- state, if they already exist in the current configuration.
- """
- root = logging.root
- # get sorted list of all existing loggers
- existing = sorted(root.manager.loggerDict.keys())
- if parent == "root":
- # all the existing loggers are children of 'root'
- loggers_to_reset = [parent] + existing
- elif parent not in existing:
- # nothing to do
- return
- elif parent in existing:
- loggers_to_reset = [parent]
- # collect children, starting with the entry after parent name
- i = existing.index(parent) + 1
- prefixed = parent + "."
- pflen = len(prefixed)
- num_existing = len(existing)
- while i < num_existing:
- if existing[i][:pflen] == prefixed:
- loggers_to_reset.append(existing[i])
- i += 1
- for name in loggers_to_reset:
- if name == "root":
- root.setLevel(logging.WARNING)
- for h in root.handlers[:]:
- root.removeHandler(h)
- for f in root.filters[:]:
-                root.removeFilter(f)
- root.disabled = False
- else:
- logger = root.manager.loggerDict[name]
- logger.level = logging.NOTSET
- logger.handlers = []
- logger.filters = []
- logger.propagate = True
- logger.disabled = False
-
-
-class Timer(object):
- """Keeps track of overall time and split/lap times.
-
- >>> import time
- >>> timer = Timer()
- >>> time.sleep(0.01)
- >>> print("First lap:", timer.split())
- First lap: ...
- >>> time.sleep(0.02)
- >>> print("Second lap:", timer.split())
- Second lap: ...
- >>> print("Overall time:", timer.time())
- Overall time: ...
-
- Can be used as a context manager inside with-statements.
-
- >>> with Timer() as t:
- ... time.sleep(0.01)
- >>> print("%0.3f seconds" % t.elapsed)
- 0... seconds
-
- If initialised with a logger, it can log the elapsed time automatically
- upon exiting the with-statement.
-
- >>> import logging
- >>> log = logging.getLogger("my-fancy-timer-logger")
- >>> configLogger(logger=log, level="DEBUG", format="%(message)s", stream=sys.stdout)
- >>> with Timer(log, 'do something'):
- ... time.sleep(0.01)
- Took ... to do something
-
- The same Timer instance, holding a reference to a logger, can be reused
- in multiple with-statements, optionally with different messages or levels.
-
- >>> timer = Timer(log)
- >>> with timer():
- ... time.sleep(0.01)
- elapsed time: ...s
- >>> with timer('redo it', level=logging.INFO):
- ... time.sleep(0.02)
- Took ... to redo it
-
- It can also be used as a function decorator to log the time elapsed to run
- the decorated function.
-
- >>> @timer()
- ... def test1():
- ... time.sleep(0.01)
- >>> @timer('run test 2', level=logging.INFO)
- ... def test2():
- ... time.sleep(0.02)
- >>> test1()
- Took ... to run 'test1'
- >>> test2()
- Took ... to run test 2
- """
-
-    # timeit.default_timer chooses the most accurate clock for each platform
- _time = timeit.default_timer
- default_msg = "elapsed time: %(time).3fs"
- default_format = "Took %(time).3fs to %(msg)s"
-
- def __init__(self, logger=None, msg=None, level=None, start=None):
- self.reset(start)
- if logger is None:
- for arg in ("msg", "level"):
- if locals().get(arg) is not None:
- raise ValueError("'%s' can't be specified without a 'logger'" % arg)
- self.logger = logger
- self.level = level if level is not None else TIME_LEVEL
- self.msg = msg
-
- def reset(self, start=None):
- """Reset timer to 'start_time' or the current time."""
- if start is None:
- self.start = self._time()
- else:
- self.start = start
- self.last = self.start
- self.elapsed = 0.0
-
- def time(self):
- """Return the overall time (in seconds) since the timer started."""
- return self._time() - self.start
-
- def split(self):
- """Split and return the lap time (in seconds) in between splits."""
- current = self._time()
- self.elapsed = current - self.last
- self.last = current
- return self.elapsed
-
- def formatTime(self, msg, time):
- """Format 'time' value in 'msg' and return formatted string.
- If 'msg' contains a '%(time)' format string, try to use that.
- Otherwise, use the predefined 'default_format'.
- If 'msg' is empty or None, fall back to 'default_msg'.
- """
- if not msg:
- msg = self.default_msg
- if msg.find("%(time)") < 0:
- msg = self.default_format % {"msg": msg, "time": time}
- else:
- try:
- msg = msg % {"time": time}
- except (KeyError, ValueError):
- pass # skip if the format string is malformed
- return msg
-
- def __enter__(self):
- """Start a new lap"""
- self.last = self._time()
- self.elapsed = 0.0
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- """End the current lap. If timer has a logger, log the time elapsed,
- using the format string in self.msg (or the default one).
- """
- time = self.split()
- if self.logger is None or exc_type:
- # if there's no logger attached, or if any exception occurred in
- # the with-statement, exit without logging the time
- return
- message = self.formatTime(self.msg, time)
- # Allow log handlers to see the individual parts to facilitate things
- # like a server accumulating aggregate stats.
- msg_parts = {"msg": self.msg, "time": time}
- self.logger.log(self.level, message, msg_parts)
-
- def __call__(self, func_or_msg=None, **kwargs):
- """If the first argument is a function, return a decorator which runs
- the wrapped function inside Timer's context manager.
- Otherwise, treat the first argument as a 'msg' string and return an updated
- Timer instance, referencing the same logger.
- A 'level' keyword can also be passed to override self.level.
- """
- if isinstance(func_or_msg, Callable):
- func = func_or_msg
- # use the function name when no explicit 'msg' is provided
- if not self.msg:
- self.msg = "run '%s'" % func.__name__
-
- @wraps(func)
- def wrapper(*args, **kwds):
- with self:
- return func(*args, **kwds)
-
- return wrapper
- else:
- msg = func_or_msg or kwargs.get("msg")
- level = kwargs.get("level", self.level)
- return self.__class__(self.logger, msg, level)
-
- def __float__(self):
- return self.elapsed
-
- def __int__(self):
- return int(self.elapsed)
-
- def __str__(self):
- return "%.3f" % self.elapsed
-
-
-class ChannelsFilter(logging.Filter):
- """Provides a hierarchical filter for log entries based on channel names.
-
- Filters out records emitted from a list of enabled channel names,
- including their children. It works the same as the ``logging.Filter``
- class, but allows the user to specify multiple channel names.
-
- >>> import sys
- >>> handler = logging.StreamHandler(sys.stdout)
- >>> handler.setFormatter(logging.Formatter("%(message)s"))
- >>> filter = ChannelsFilter("A.B", "C.D")
- >>> handler.addFilter(filter)
- >>> root = logging.getLogger()
- >>> root.addHandler(handler)
- >>> root.setLevel(level=logging.DEBUG)
- >>> logging.getLogger('A.B').debug('this record passes through')
- this record passes through
- >>> logging.getLogger('A.B.C').debug('records from children also pass')
- records from children also pass
- >>> logging.getLogger('C.D').debug('this one as well')
- this one as well
- >>> logging.getLogger('A.B.').debug('also this one')
- also this one
- >>> logging.getLogger('A.F').debug('but this one does not!')
- >>> logging.getLogger('C.DE').debug('neither this one!')
- """
-
- def __init__(self, *names):
- self.names = names
- self.num = len(names)
- self.lengths = {n: len(n) for n in names}
-
- def filter(self, record):
- if self.num == 0:
- return True
- for name in self.names:
- nlen = self.lengths[name]
- if name == record.name:
- return True
- elif record.name.find(name, 0, nlen) == 0 and record.name[nlen] == ".":
- return True
- return False
-
-
-class CapturingLogHandler(logging.Handler):
- def __init__(self, logger, level):
- super(CapturingLogHandler, self).__init__(level=level)
- self.records = []
- if isinstance(logger, str):
- self.logger = logging.getLogger(logger)
- else:
- self.logger = logger
-
- def __enter__(self):
- self.original_disabled = self.logger.disabled
- self.original_level = self.logger.level
- self.original_propagate = self.logger.propagate
-
- self.logger.addHandler(self)
- self.logger.setLevel(self.level)
- self.logger.disabled = False
- self.logger.propagate = False
-
- return self
-
- def __exit__(self, type, value, traceback):
- self.logger.removeHandler(self)
- self.logger.setLevel(self.original_level)
- self.logger.disabled = self.original_disabled
- self.logger.propagate = self.original_propagate
-
- return self
-
- def emit(self, record):
- self.records.append(record)
-
- def assertRegex(self, regexp, msg=None):
- import re
-
- pattern = re.compile(regexp)
- for r in self.records:
- if pattern.search(r.getMessage()):
- return True
- if msg is None:
- msg = "Pattern '%s' not found in logger records" % regexp
- assert 0, msg
-
-
-class LogMixin(object):
- """Mixin class that adds logging functionality to another class.
-
- You can define a new class that subclasses from ``LogMixin`` as well as
- other base classes through multiple inheritance.
- All instances of that class will have a ``log`` property that returns
- a ``logging.Logger`` named after their respective ``.``.
-
- For example:
-
- >>> class BaseClass(object):
- ... pass
- >>> class MyClass(LogMixin, BaseClass):
- ... pass
- >>> a = MyClass()
- >>> isinstance(a.log, logging.Logger)
- True
- >>> print(a.log.name)
- fontTools.misc.loggingTools.MyClass
- >>> class AnotherClass(MyClass):
- ... pass
- >>> b = AnotherClass()
- >>> isinstance(b.log, logging.Logger)
- True
- >>> print(b.log.name)
- fontTools.misc.loggingTools.AnotherClass
- """
-
- @property
- def log(self):
- if not hasattr(self, "_log"):
- name = ".".join((self.__class__.__module__, self.__class__.__name__))
- self._log = logging.getLogger(name)
- return self._log
-
-
-def deprecateArgument(name, msg, category=UserWarning):
- """Raise a warning about deprecated function argument 'name'."""
- warnings.warn("%r is deprecated; %s" % (name, msg), category=category, stacklevel=3)
-
-
-def deprecateFunction(msg, category=UserWarning):
- """Decorator to raise a warning when a deprecated function is called."""
-
- def decorator(func):
- @wraps(func)
- def wrapper(*args, **kwargs):
- warnings.warn(
- "%r is deprecated; %s" % (func.__name__, msg),
- category=category,
- stacklevel=2,
- )
- return func(*args, **kwargs)
-
- return wrapper
-
- return decorator
-
-
-if __name__ == "__main__":
- import doctest
-
- sys.exit(doctest.testmod(optionflags=doctest.ELLIPSIS).failed)
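The helpers removed here ship with fontTools as `fontTools.misc.loggingTools`, so the behaviour documented in the docstrings above can be exercised against the installed package. A brief sketch combining `configLogger` and `Timer`, assuming fontTools is installed; the logger name and message are arbitrary:

```python
import logging
import sys
import time

from fontTools.misc.loggingTools import Timer, configLogger

log = logging.getLogger("demo")
configLogger(
    logger=log,
    level="DEBUG",
    stream=sys.stdout,
    format={"*": "[%(levelname)s] %(message)s", "INFO": "%(message)s"},
)

# On exit the Timer logs something like "Took 0.050s to sleep a little" at INFO.
with Timer(log, "sleep a little", level=logging.INFO):
    time.sleep(0.05)
```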
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/G_M_A_P_.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/G_M_A_P_.py
deleted file mode 100644
index 39b0050c5f0591a2b36c21242863655ca1f3ef47..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/G_M_A_P_.py
+++ /dev/null
@@ -1,142 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import tobytes, tostr, safeEval
-from . import DefaultTable
-
-GMAPFormat = """
- > # big endian
- tableVersionMajor: H
- tableVersionMinor: H
- flags: H
- recordsCount: H
- recordsOffset: H
- fontNameLength: H
-"""
-# psFontName is a byte string which follows the record above. This is zero padded
-# to the beginning of the records array. The recordsOffset is 32-bit aligned.
-
-GMAPRecordFormat1 = """
- > # big endian
- UV: L
- cid: H
- gid: H
- ggid: H
- name: 32s
-"""
-
-
-class GMAPRecord(object):
- def __init__(self, uv=0, cid=0, gid=0, ggid=0, name=""):
- self.UV = uv
- self.cid = cid
- self.gid = gid
- self.ggid = ggid
- self.name = name
-
- def toXML(self, writer, ttFont):
- writer.begintag("GMAPRecord")
- writer.newline()
- writer.simpletag("UV", value=self.UV)
- writer.newline()
- writer.simpletag("cid", value=self.cid)
- writer.newline()
- writer.simpletag("gid", value=self.gid)
- writer.newline()
-        writer.simpletag("glyphletGid", value=self.ggid)
- writer.newline()
- writer.simpletag("GlyphletName", value=self.name)
- writer.newline()
- writer.endtag("GMAPRecord")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- value = attrs["value"]
- if name == "GlyphletName":
- self.name = value
- else:
- setattr(self, name, safeEval(value))
-
- def compile(self, ttFont):
- if self.UV is None:
- self.UV = 0
- nameLen = len(self.name)
- if nameLen < 32:
- self.name = self.name + "\0" * (32 - nameLen)
- data = sstruct.pack(GMAPRecordFormat1, self)
- return data
-
- def __repr__(self):
- return (
- "GMAPRecord[ UV: "
- + str(self.UV)
- + ", cid: "
- + str(self.cid)
- + ", gid: "
- + str(self.gid)
- + ", ggid: "
- + str(self.ggid)
- + ", Glyphlet Name: "
- + str(self.name)
- + " ]"
- )
-
-
-class table_G_M_A_P_(DefaultTable.DefaultTable):
-
- dependencies = []
-
- def decompile(self, data, ttFont):
- dummy, newData = sstruct.unpack2(GMAPFormat, data, self)
- self.psFontName = tostr(newData[: self.fontNameLength])
- assert (
- self.recordsOffset % 4
- ) == 0, "GMAP error: recordsOffset is not 32 bit aligned."
- newData = data[self.recordsOffset :]
- self.gmapRecords = []
- for i in range(self.recordsCount):
- gmapRecord, newData = sstruct.unpack2(
- GMAPRecordFormat1, newData, GMAPRecord()
- )
- gmapRecord.name = gmapRecord.name.strip("\0")
- self.gmapRecords.append(gmapRecord)
-
- def compile(self, ttFont):
- self.recordsCount = len(self.gmapRecords)
- self.fontNameLength = len(self.psFontName)
- self.recordsOffset = 4 * (((self.fontNameLength + 12) + 3) // 4)
- data = sstruct.pack(GMAPFormat, self)
- data = data + tobytes(self.psFontName)
- data = data + b"\0" * (self.recordsOffset - len(data))
- for record in self.gmapRecords:
- data = data + record.compile(ttFont)
- return data
-
- def toXML(self, writer, ttFont):
- writer.comment("Most of this table will be recalculated by the compiler")
- writer.newline()
- formatstring, names, fixes = sstruct.getformat(GMAPFormat)
- for name in names:
- value = getattr(self, name)
- writer.simpletag(name, value=value)
- writer.newline()
- writer.simpletag("PSFontName", value=self.psFontName)
- writer.newline()
- for gmapRecord in self.gmapRecords:
- gmapRecord.toXML(writer, ttFont)
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "GMAPRecord":
- if not hasattr(self, "gmapRecords"):
- self.gmapRecords = []
- gmapRecord = GMAPRecord()
- self.gmapRecords.append(gmapRecord)
- for element in content:
- if isinstance(element, str):
- continue
- name, attrs, content = element
- gmapRecord.fromXML(name, attrs, content, ttFont)
- else:
- value = attrs["value"]
- if name == "PSFontName":
- self.psFontName = value
- else:
- setattr(self, name, safeEval(value))
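Each `GMAPRecord` above compiles to 42 bytes laid out as `GMAPRecordFormat1`: a big-endian uint32 `UV`, three uint16 fields, and a 32-byte padded name. An illustrative sketch of that layout using the standard `struct` module instead of fontTools' `sstruct`; the glyph values are invented:

```python
import struct

def pack_gmap_record(uv, cid, gid, ggid, name):
    # ">LHHH32s" mirrors GMAPRecordFormat1: UV, cid, gid, ggid, 32-byte name.
    return struct.pack(">LHHH32s", uv, cid, gid, ggid, name.encode("ascii"))

record = pack_gmap_record(0x4E2D, 1, 2, 3, "uni4E2D")
print(len(record))  # 42 bytes: 4 + 2 + 2 + 2 + 32
```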
diff --git a/spaces/cncn102/bingo1/src/lib/bots/bing/tts.ts b/spaces/cncn102/bingo1/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/cncn102/bingo1/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
-    if (!synth || (text?.trim()?.length ?? 0) < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
- private async doSpeek() {
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
- await Promise.all([sleep(1000), this.doSpeek()])
- }
- this.speaking = false
- }
-}
diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py
deleted file mode 100644
index 76e4b272b479a26c63d120c818c140870cd8c287..0000000000000000000000000000000000000000
--- a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .backbone import build_backbone
diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/version.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/version.py
deleted file mode 100644
index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000
--- a/spaces/codelion/Grounding_DINO_demo/groundingdino/version.py
+++ /dev/null
@@ -1 +0,0 @@
-__version__ = '0.1.0'
diff --git a/spaces/coding-alt/IF/README.md b/spaces/coding-alt/IF/README.md
deleted file mode 100644
index bfb11d4a094e88ea1eecdfe4489a5e868664587e..0000000000000000000000000000000000000000
--- a/spaces/coding-alt/IF/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: IF
-emoji: 🔥
-colorFrom: pink
-colorTo: red
-sdk: docker
-python_version: 3.10.11
-app_file: app.py
-pinned: false
-license: other
-duplicated_from: DeepFloyd/IF
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/cabac.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/cabac.h
deleted file mode 100644
index fdbf86b45e741fc6a8bf4728cdf00b5fefe1e08c..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/cabac.h
+++ /dev/null
@@ -1,108 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_ARM_CABAC_H
-#define AVCODEC_ARM_CABAC_H
-
-#include "config.h"
-#if HAVE_ARMV6T2_INLINE
-
-#include "libavutil/attributes.h"
-#include "libavutil/internal.h"
-#include "libavcodec/cabac.h"
-
-#define get_cabac_inline get_cabac_inline_arm
-static av_always_inline int get_cabac_inline_arm(CABACContext *c,
- uint8_t *const state)
-{
- int bit;
- void *reg_b, *reg_c, *tmp;
-
- __asm__ volatile(
- "ldrb %[bit] , [%[state]] \n\t"
- "add %[r_b] , %[tables] , %[lps_off] \n\t"
- "mov %[tmp] , %[range] \n\t"
- "and %[range] , %[range] , #0xC0 \n\t"
- "add %[r_b] , %[r_b] , %[bit] \n\t"
- "ldrb %[range] , [%[r_b], %[range], lsl #1] \n\t"
- "add %[r_b] , %[tables] , %[norm_off] \n\t"
- "sub %[r_c] , %[tmp] , %[range] \n\t"
- "lsl %[tmp] , %[r_c] , #17 \n\t"
- "cmp %[tmp] , %[low] \n\t"
- "it gt \n\t"
- "movgt %[range] , %[r_c] \n\t"
- "itt cc \n\t"
- "mvncc %[bit] , %[bit] \n\t"
- "subcc %[low] , %[low] , %[tmp] \n\t"
- "add %[r_c] , %[tables] , %[mlps_off] \n\t"
- "ldrb %[tmp] , [%[r_b], %[range]] \n\t"
- "ldrb %[r_b] , [%[r_c], %[bit]] \n\t"
- "lsl %[low] , %[low] , %[tmp] \n\t"
- "lsl %[range] , %[range] , %[tmp] \n\t"
- "uxth %[r_c] , %[low] \n\t"
- "strb %[r_b] , [%[state]] \n\t"
- "tst %[r_c] , %[r_c] \n\t"
- "bne 2f \n\t"
- "ldr %[r_c] , [%[c], %[byte]] \n\t"
-#if UNCHECKED_BITSTREAM_READER
- "ldrh %[tmp] , [%[r_c]] \n\t"
- "add %[r_c] , %[r_c] , #2 \n\t"
- "str %[r_c] , [%[c], %[byte]] \n\t"
-#else
- "ldr %[r_b] , [%[c], %[end]] \n\t"
- "ldrh %[tmp] , [%[r_c]] \n\t"
- "cmp %[r_c] , %[r_b] \n\t"
- "itt lt \n\t"
- "addlt %[r_c] , %[r_c] , #2 \n\t"
- "strlt %[r_c] , [%[c], %[byte]] \n\t"
-#endif
- "sub %[r_c] , %[low] , #1 \n\t"
- "add %[r_b] , %[tables] , %[norm_off] \n\t"
- "eor %[r_c] , %[low] , %[r_c] \n\t"
- "rev %[tmp] , %[tmp] \n\t"
- "lsr %[r_c] , %[r_c] , #15 \n\t"
- "lsr %[tmp] , %[tmp] , #15 \n\t"
- "ldrb %[r_c] , [%[r_b], %[r_c]] \n\t"
- "movw %[r_b] , #0xFFFF \n\t"
- "sub %[tmp] , %[tmp] , %[r_b] \n\t"
- "rsb %[r_c] , %[r_c] , #7 \n\t"
- "lsl %[tmp] , %[tmp] , %[r_c] \n\t"
- "add %[low] , %[low] , %[tmp] \n\t"
- "2: \n\t"
- : [bit]"=&r"(bit),
- [low]"+&r"(c->low),
- [range]"+&r"(c->range),
- [r_b]"=&r"(reg_b),
- [r_c]"=&r"(reg_c),
- [tmp]"=&r"(tmp)
- : [c]"r"(c),
- [state]"r"(state),
- [tables]"r"(ff_h264_cabac_tables),
- [byte]"M"(offsetof(CABACContext, bytestream)),
- [end]"M"(offsetof(CABACContext, bytestream_end)),
- [norm_off]"I"(H264_NORM_SHIFT_OFFSET),
- [lps_off]"I"(H264_LPS_RANGE_OFFSET),
- [mlps_off]"I"(H264_MLPS_STATE_OFFSET + 128)
- : "memory", "cc"
- );
-
- return bit & 1;
-}
-#endif /* HAVE_ARMV6T2_INLINE */
-
-#endif /* AVCODEC_ARM_CABAC_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_jpeg_syntax_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_jpeg_syntax_template.c
deleted file mode 100644
index e06abdc674b8f2f029dec09b892fcccf60409632..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_jpeg_syntax_template.c
+++ /dev/null
@@ -1,196 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-static int FUNC(frame_header)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawFrameHeader *current)
-{
- int err, i;
-
- HEADER("Frame Header");
-
- u(16, Lf, 8, 8 + 3 * JPEG_MAX_COMPONENTS);
-
- u(8, P, 2, 16);
- u(16, Y, 0, JPEG_MAX_HEIGHT);
- u(16, X, 1, JPEG_MAX_WIDTH);
- u(8, Nf, 1, JPEG_MAX_COMPONENTS);
-
- for (i = 0; i < current->Nf; i++) {
- us(8, C[i], i, 0, JPEG_MAX_COMPONENTS);
- us(4, H[i], i, 1, 4);
- us(4, V[i], i, 1, 4);
- us(8, Tq[i], i, 0, 3);
- }
-
- return 0;
-}
-
-static int FUNC(quantisation_table)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawQuantisationTable *current)
-{
- int err, i;
-
- u(4, Pq, 0, 1);
- u(4, Tq, 0, 3);
-
- if (current->Pq) {
- for (i = 0; i < 64; i++)
- us(16, Q[i], i, 1, 255);
- } else {
- for (i = 0; i < 64; i++)
- us(8, Q[i], i, 1, 255);
- }
-
- return 0;
-}
-
-static int FUNC(dqt)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawQuantisationTableSpecification *current)
-{
- int err, i, n;
-
- HEADER("Quantisation Tables");
-
- u(16, Lq, 2, 2 + 4 * 65);
- n = current->Lq / 65;
-
- for (i = 0; i < n; i++)
-        CHECK(FUNC(quantisation_table)(ctx, rw, &current->table[i]));
-
- return 0;
-}
-
-static int FUNC(huffman_table)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawHuffmanTable *current)
-{
- int err, i, j, ij;
-
- u(4, Tc, 0, 1);
- u(4, Th, 0, 3);
-
- for (i = 0; i < 16; i++)
- us(8, L[i], i, 0, 255);
-
- ij = 0;
- for (i = 0; i < 16; i++) {
- for (j = 0; j < current->L[i]; j++) {
- if (ij >= FF_ARRAY_ELEMS(current->V))
- return AVERROR_INVALIDDATA;
- us(8, V[ij], ij, 0, 255);
- ++ij;
- }
- }
-
- return 0;
-}
-
-static int FUNC(dht)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawHuffmanTableSpecification *current)
-{
- int err, i, j, n;
-
- HEADER("Huffman Tables");
-
- u(16, Lh, 2, 2 + 8 * (1 + 16 + 256));
-
- n = 2;
- for (i = 0; n < current->Lh; i++) {
- if (i >= 8)
- return AVERROR_INVALIDDATA;
-
-        CHECK(FUNC(huffman_table)(ctx, rw, &current->table[i]));
-
- ++n;
- for (j = 0; j < 16; j++)
- n += 1 + current->table[i].L[j];
- }
-
- return 0;
-}
-
-static int FUNC(scan_header)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawScanHeader *current)
-{
- int err, j;
-
- HEADER("Scan");
-
- u(16, Ls, 6, 6 + 2 * JPEG_MAX_COMPONENTS);
-
- u(8, Ns, 1, 4);
- for (j = 0; j < current->Ns; j++) {
- us(8, Cs[j], j, 0, JPEG_MAX_COMPONENTS);
- us(4, Td[j], j, 0, 3);
- us(4, Ta[j], j, 0, 3);
- }
-
- u(8, Ss, 0, 63);
- u(8, Se, 0, 63);
- u(4, Ah, 0, 13);
- u(4, Al, 0, 15);
-
- return 0;
-}
-
-static int FUNC(application_data)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawApplicationData *current)
-{
- int err, i;
-
- HEADER("Application Data");
-
- u(16, Lp, 2, 65535);
-
- if (current->Lp > 2) {
-#ifdef READ
- current->Ap_ref = av_buffer_alloc(current->Lp - 2);
- if (!current->Ap_ref)
- return AVERROR(ENOMEM);
- current->Ap = current->Ap_ref->data;
-#endif
-
- for (i = 0; i < current->Lp - 2; i++)
- us(8, Ap[i], i, 0, 255);
- }
-
- return 0;
-}
-
-static int FUNC(comment)(CodedBitstreamContext *ctx, RWContext *rw,
- JPEGRawComment *current)
-{
- int err, i;
-
- HEADER("Comment");
-
- u(16, Lc, 2, 65535);
-
- if (current->Lc > 2) {
-#ifdef READ
- current->Cm_ref = av_buffer_alloc(current->Lc - 2);
- if (!current->Cm_ref)
- return AVERROR(ENOMEM);
- current->Cm = current->Cm_ref->data;
-#endif
-
- for (i = 0; i < current->Lc - 2; i++)
- us(8, Cm[i], i, 0, 255);
- }
-
- return 0;
-}
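The syntax template above parses length-prefixed JPEG marker segments (frame header, DQT, DHT, scan header, APPn, COM), where each `L*` length field counts its own two bytes but not the marker. A rough standalone Python sketch of walking that segment structure in a JPEG file, not part of FFmpeg; `list_segments` is a hypothetical helper:

```python
import struct
import sys

def list_segments(path):
    """Print the marker and declared length of each segment before SOS."""
    with open(path, "rb") as f:
        data = f.read()
    pos = 2  # skip SOI (0xFFD8)
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        if marker == 0xDA:  # SOS: entropy-coded data follows, stop here
            print("SOS")
            break
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        print(f"marker 0xFF{marker:02X}, segment length {length}")
        pos += 2 + length  # 2 marker bytes plus the self-inclusive length

if __name__ == "__main__":
    list_segments(sys.argv[1])
```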
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cljrdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cljrdec.c
deleted file mode 100644
index 914f853c8fd9247b9acf98af0210dd54fc982d32..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cljrdec.c
+++ /dev/null
@@ -1,93 +0,0 @@
-/*
- * Cirrus Logic AccuPak (CLJR) decoder
- * Copyright (c) 2003 Alex Beregszaszi
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * Cirrus Logic AccuPak decoder.
- */
-
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "decode.h"
-#include "get_bits.h"
-
-static int decode_frame(AVCodecContext *avctx, AVFrame *p,
- int *got_frame, AVPacket *avpkt)
-{
- const uint8_t *buf = avpkt->data;
- int buf_size = avpkt->size;
- GetBitContext gb;
- int x, y, ret;
-
- if (avctx->height <= 0 || avctx->width <= 0) {
- av_log(avctx, AV_LOG_ERROR, "Invalid width or height\n");
- return AVERROR_INVALIDDATA;
- }
-
- if (buf_size / avctx->height < avctx->width) {
- av_log(avctx, AV_LOG_ERROR,
- "Resolution larger than buffer size. Invalid header?\n");
- return AVERROR_INVALIDDATA;
- }
-
- if ((ret = ff_get_buffer(avctx, p, 0)) < 0)
- return ret;
- p->pict_type = AV_PICTURE_TYPE_I;
- p->key_frame = 1;
-
- init_get_bits(&gb, buf, buf_size * 8);
-
- for (y = 0; y < avctx->height; y++) {
- uint8_t *luma = &p->data[0][y * p->linesize[0]];
- uint8_t *cb = &p->data[1][y * p->linesize[1]];
- uint8_t *cr = &p->data[2][y * p->linesize[2]];
- for (x = 0; x < avctx->width; x += 4) {
- luma[3] = (get_bits(&gb, 5)*33) >> 2;
- luma[2] = (get_bits(&gb, 5)*33) >> 2;
- luma[1] = (get_bits(&gb, 5)*33) >> 2;
- luma[0] = (get_bits(&gb, 5)*33) >> 2;
- luma += 4;
- *(cb++) = get_bits(&gb, 6) << 2;
- *(cr++) = get_bits(&gb, 6) << 2;
- }
- }
-
- *got_frame = 1;
-
- return buf_size;
-}
-
-static av_cold int decode_init(AVCodecContext *avctx)
-{
- avctx->pix_fmt = AV_PIX_FMT_YUV411P;
- return 0;
-}
-
-const FFCodec ff_cljr_decoder = {
- .p.name = "cljr",
- CODEC_LONG_NAME("Cirrus Logic AccuPak"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_CLJR,
- .init = decode_init,
- FF_CODEC_DECODE_CB(decode_frame),
- .p.capabilities = AV_CODEC_CAP_DR1,
-};
-
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flvdec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flvdec.h
deleted file mode 100644
index d5aff74a9828c5b77221fd106ccd1bbb313d7ae0..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flvdec.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/*
- * FLV decoder header.
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_FLVDEC_H
-#define AVCODEC_FLVDEC_H
-
-#include "mpegvideo.h"
-
-int ff_flv_decode_picture_header(MpegEncContext *s);
-
-#endif /* AVCODEC_FLVDEC_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ivi.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ivi.h
deleted file mode 100644
index 06cd4d95ff23263ad801d0411ef7cfdca27e62d2..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ivi.h
+++ /dev/null
@@ -1,342 +0,0 @@
-/*
- * common functions for Indeo Video Interactive codecs (Indeo4 and Indeo5)
- *
- * Copyright (c) 2009 Maxim Poliakovski
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * This file contains structures and macros shared by both Indeo4 and
- * Indeo5 decoders.
- */
-
-#ifndef AVCODEC_IVI_H
-#define AVCODEC_IVI_H
-
-#include "avcodec.h"
-#include "get_bits.h"
-#include <stdint.h>
-
-/**
- * Indeo 4 frame types.
- */
-enum {
- IVI4_FRAMETYPE_INTRA = 0,
- IVI4_FRAMETYPE_INTRA1 = 1, ///< intra frame with slightly different bitstream coding
- IVI4_FRAMETYPE_INTER = 2, ///< non-droppable P-frame
- IVI4_FRAMETYPE_BIDIR = 3, ///< bidirectional frame
- IVI4_FRAMETYPE_INTER_NOREF = 4, ///< droppable P-frame
- IVI4_FRAMETYPE_NULL_FIRST = 5, ///< empty frame with no data
- IVI4_FRAMETYPE_NULL_LAST = 6 ///< empty frame with no data
-};
-
-#define IVI_VLC_BITS 13 ///< max number of bits of the ivi's huffman codes
-#define IVI5_IS_PROTECTED 0x20
-
-/**
- * huffman codebook descriptor
- */
-typedef struct IVIHuffDesc {
- int32_t num_rows;
- uint8_t xbits[16];
-} IVIHuffDesc;
-
-/**
- * macroblock/block huffman table descriptor
- */
-typedef struct IVIHuffTab {
- int32_t tab_sel; /// index of one of the predefined tables
- /// or "7" for custom one
- VLC *tab; /// pointer to the table associated with tab_sel
-
- /// the following are used only when tab_sel == 7
- IVIHuffDesc cust_desc; /// custom Huffman codebook descriptor
- VLC cust_tab; /// vlc table for custom codebook
-} IVIHuffTab;
-
-enum {
- IVI_MB_HUFF = 0, /// Huffman table is used for coding macroblocks
- IVI_BLK_HUFF = 1 /// Huffman table is used for coding blocks
-};
-
-
-/**
- * Common scan patterns (defined in ivi_common.c)
- */
-extern const uint8_t ff_ivi_vertical_scan_8x8[64];
-extern const uint8_t ff_ivi_horizontal_scan_8x8[64];
-extern const uint8_t ff_ivi_direct_scan_4x4[16];
-
-
-/**
- * Declare inverse transform function types
- */
-typedef void (InvTransformPtr)(const int32_t *in, int16_t *out, ptrdiff_t pitch, const uint8_t *flags);
-typedef void (DCTransformPtr) (const int32_t *in, int16_t *out, ptrdiff_t pitch, int blk_size);
-
-
-/**
- * run-value (RLE) table descriptor
- */
-typedef struct RVMapDesc {
- uint8_t eob_sym; ///< end of block symbol
- uint8_t esc_sym; ///< escape symbol
- uint8_t runtab[256];
- int8_t valtab[256];
-} RVMapDesc;
-
-extern const RVMapDesc ff_ivi_rvmap_tabs[9];
-
-
-/**
- * information for Indeo macroblock (16x16, 8x8 or 4x4)
- */
-typedef struct IVIMbInfo {
- int16_t xpos;
- int16_t ypos;
- uint32_t buf_offs; ///< address in the output buffer for this mb
- uint8_t type; ///< macroblock type: 0 - INTRA, 1 - INTER
- uint8_t cbp; ///< coded block pattern
- int8_t q_delta; ///< quant delta
- int8_t mv_x; ///< motion vector (x component)
- int8_t mv_y; ///< motion vector (y component)
- int8_t b_mv_x; ///< second motion vector (x component)
- int8_t b_mv_y; ///< second motion vector (y component)
-} IVIMbInfo;
-
-
-/**
- * information for Indeo tile
- */
-typedef struct IVITile {
- int xpos;
- int ypos;
- int width;
- int height;
- int mb_size;
- int is_empty; ///< = 1 if this tile doesn't contain any data
- int data_size; ///< size of the data in bytes
- int num_MBs; ///< number of macroblocks in this tile
- IVIMbInfo *mbs; ///< array of macroblock descriptors
- IVIMbInfo *ref_mbs; ///< ptr to the macroblock descriptors of the reference tile
-} IVITile;
-
-
-/**
- * information for Indeo wavelet band
- */
-typedef struct IVIBandDesc {
- int plane; ///< plane number this band belongs to
- int band_num; ///< band number
- int width;
- int height;
- int aheight; ///< aligned band height
- const uint8_t *data_ptr; ///< ptr to the first byte of the band data
- int data_size; ///< size of the band data
- int16_t *buf; ///< pointer to the output buffer for this band
- int16_t *ref_buf; ///< pointer to the reference frame buffer (for motion compensation)
- int16_t *b_ref_buf; ///< pointer to the second reference frame buffer (for motion compensation)
- int16_t *bufs[4]; ///< array of pointers to the band buffers
- ptrdiff_t pitch; ///< pitch associated with the buffers above
- int is_empty; ///< = 1 if this band doesn't contain any data
- int mb_size; ///< macroblock size
- int blk_size; ///< block size
- int is_halfpel; ///< precision of the motion compensation: 0 - fullpel, 1 - halfpel
- int inherit_mv; ///< tells if motion vector is inherited from reference macroblock
- int inherit_qdelta; ///< tells if quantiser delta is inherited from reference macroblock
- int qdelta_present; ///< tells if Qdelta signal is present in the bitstream (Indeo5 only)
- int quant_mat; ///< dequant matrix index
- int glob_quant; ///< quant base for this band
- const uint8_t *scan; ///< ptr to the scan pattern
- int scan_size; ///< size of the scantable
-
- IVIHuffTab blk_vlc; ///< vlc table for decoding block data
-
- int num_corr; ///< number of correction entries
- uint8_t corr[61*2]; ///< rvmap correction pairs
- int rvmap_sel; ///< rvmap table selector
- RVMapDesc *rv_map; ///< ptr to the RLE table for this band
- int num_tiles; ///< number of tiles in this band
- IVITile *tiles; ///< array of tile descriptors
- InvTransformPtr *inv_transform;
- int transform_size;
- DCTransformPtr *dc_transform;
- int is_2d_trans; ///< 1 indicates that the two-dimensional inverse transform is used
- int32_t checksum; ///< for debug purposes
- int checksum_present;
- int bufsize; ///< band buffer size in bytes
- const uint16_t *intra_base; ///< quantization matrix for intra blocks
- const uint16_t *inter_base; ///< quantization matrix for inter blocks
- const uint8_t *intra_scale; ///< quantization coefficient for intra blocks
- const uint8_t *inter_scale; ///< quantization coefficient for inter blocks
-} IVIBandDesc;
-
-
-/**
- * color plane (luma or chroma) information
- */
-typedef struct IVIPlaneDesc {
- uint16_t width;
- uint16_t height;
- uint8_t num_bands; ///< number of bands this plane subdivided into
- IVIBandDesc *bands; ///< array of band descriptors
-} IVIPlaneDesc;
-
-
-typedef struct IVIPicConfig {
- uint16_t pic_width;
- uint16_t pic_height;
- uint16_t chroma_width;
- uint16_t chroma_height;
- uint16_t tile_width;
- uint16_t tile_height;
- uint8_t luma_bands;
- uint8_t chroma_bands;
-} IVIPicConfig;
-
-typedef struct IVI45DecContext {
- GetBitContext gb;
- RVMapDesc rvmap_tabs[9]; ///< local corrected copy of the static rvmap tables
-
- uint32_t frame_num;
- int frame_type;
- int prev_frame_type; ///< frame type of the previous frame
- uint32_t data_size; ///< size of the frame data in bytes from picture header
- int is_scalable;
- const uint8_t *frame_data; ///< input frame data pointer
- int inter_scal; ///< signals a sequence of scalable inter frames
- uint32_t frame_size; ///< frame size in bytes
- uint32_t pic_hdr_size; ///< picture header size in bytes
- uint8_t frame_flags;
- uint16_t checksum; ///< frame checksum
-
- IVIPicConfig pic_conf;
- IVIPlaneDesc planes[3]; ///< color planes
-
- int buf_switch; ///< used to switch between three buffers
- int dst_buf; ///< buffer index for the currently decoded frame
- int ref_buf; ///< inter frame reference buffer index
- int ref2_buf; ///< temporal storage for switching buffers
- int b_ref_buf; ///< second reference frame buffer index
-
- IVIHuffTab mb_vlc; ///< current macroblock table descriptor
- IVIHuffTab blk_vlc; ///< current block table descriptor
-
- uint8_t rvmap_sel;
- uint8_t in_imf;
- uint8_t in_q; ///< flag for explicitly stored quantiser delta
- uint8_t pic_glob_quant;
- uint8_t unknown1;
-
- uint16_t gop_hdr_size;
- uint8_t gop_flags;
- uint32_t lock_word;
-
- int show_indeo4_info;
- uint8_t has_b_frames;
- uint8_t has_transp; ///< transparency mode status: 1 - enabled
- uint8_t uses_tiling;
- uint8_t uses_haar;
- uint8_t uses_fullpel;
-
- int (*decode_pic_hdr) (struct IVI45DecContext *ctx, AVCodecContext *avctx);
- int (*decode_band_hdr) (struct IVI45DecContext *ctx, IVIBandDesc *band, AVCodecContext *avctx);
- int (*decode_mb_info) (struct IVI45DecContext *ctx, IVIBandDesc *band, IVITile *tile, AVCodecContext *avctx);
- void (*switch_buffers) (struct IVI45DecContext *ctx);
- int (*is_nonnull_frame)(struct IVI45DecContext *ctx);
-
- int gop_invalid;
- int buf_invalid[4];
-
- int is_indeo4;
-
- AVFrame *p_frame;
- int got_p_frame;
-} IVI45DecContext;
-
-/** compare some properties of two pictures */
-static inline int ivi_pic_config_cmp(IVIPicConfig *str1, IVIPicConfig *str2)
-{
- return str1->pic_width != str2->pic_width || str1->pic_height != str2->pic_height ||
- str1->chroma_width != str2->chroma_width || str1->chroma_height != str2->chroma_height ||
- str1->tile_width != str2->tile_width || str1->tile_height != str2->tile_height ||
- str1->luma_bands != str2->luma_bands || str1->chroma_bands != str2->chroma_bands;
-}
-
-/** calculate number of tiles in a stride */
-#define IVI_NUM_TILES(stride, tile_size) (((stride) + (tile_size) - 1) / (tile_size))
-
-/** calculate number of macroblocks in a tile */
-#define IVI_MBs_PER_TILE(tile_width, tile_height, mb_size) \
- ((((tile_width) + (mb_size) - 1) / (mb_size)) * (((tile_height) + (mb_size) - 1) / (mb_size)))
-
-/** convert unsigned values into signed ones (the sign is in the LSB) */
-#define IVI_TOSIGNED(val) (-(((val) >> 1) ^ -((val) & 1)))
-
-/** scale motion vector */
-static inline int ivi_scale_mv(int mv, int mv_scale)
-{
- return (mv + (mv > 0) + (mv_scale - 1)) >> mv_scale;
-}
-
-/**
- * Initialize static codes used for macroblock and block decoding.
- */
-void ff_ivi_init_static_vlc(void);
-
-/**
- * Decode a huffman codebook descriptor from the bitstream
- * and select specified huffman table.
- *
- * @param[in,out] gb the GetBit context
- * @param[in] desc_coded flag signalling if table descriptor was coded
- * @param[in] which_tab codebook purpose (IVI_MB_HUFF or IVI_BLK_HUFF)
- * @param[out] huff_tab pointer to the descriptor of the selected table
- * @param[in] avctx AVCodecContext pointer
- * @return zero on success, negative value otherwise
- */
-int ff_ivi_dec_huff_desc(GetBitContext *gb, int desc_coded, int which_tab,
- IVIHuffTab *huff_tab, AVCodecContext *avctx);
-
-/**
- * Initialize planes (prepares descriptors, allocates buffers etc).
- *
- * @param[in,out] planes pointer to the array of the plane descriptors
- * @param[in] cfg pointer to the ivi_pic_config structure describing picture layout
- * @param[in] is_indeo4 flag signalling if it is Indeo 4 or not
- * @return result code: 0 - OK
- */
-int ff_ivi_init_planes(AVCodecContext *avctx, IVIPlaneDesc *planes,
- const IVIPicConfig *cfg, int is_indeo4);
-
-/**
- * Initialize tile and macroblock descriptors.
- *
- * @param[in,out] planes pointer to the array of the plane descriptors
- * @param[in] tile_width tile width
- * @param[in] tile_height tile height
- * @return result code: 0 - OK
- */
-int ff_ivi_init_tiles(IVIPlaneDesc *planes, int tile_width, int tile_height);
-
-int ff_ivi_decode_frame(AVCodecContext *avctx, AVFrame *data,
- int *got_frame, AVPacket *avpkt);
-int ff_ivi_decode_close(AVCodecContext *avctx);
-
-#endif /* AVCODEC_IVI_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopusdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopusdec.c
deleted file mode 100644
index 9b9a6103430828fc7f98efbdd6c59a9b7de845c1..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopusdec.c
+++ /dev/null
@@ -1,252 +0,0 @@
-/*
- * Opus decoder using libopus
- * Copyright (c) 2012 Nicolas George
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <opus.h>
-#include <opus_multistream.h>
-
-#include "libavutil/internal.h"
-#include "libavutil/intreadwrite.h"
-#include "libavutil/ffmath.h"
-#include "libavutil/opt.h"
-
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "decode.h"
-#include "internal.h"
-#include "mathops.h"
-#include "libopus.h"
-#include "vorbis_data.h"
-
-struct libopus_context {
- AVClass *class;
- OpusMSDecoder *dec;
- int pre_skip;
-#ifndef OPUS_SET_GAIN
- union { int i; double d; } gain;
-#endif
-#ifdef OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST
- int apply_phase_inv;
-#endif
-};
-
-#define OPUS_HEAD_SIZE 19
-
-static av_cold int libopus_decode_init(AVCodecContext *avc)
-{
- struct libopus_context *opus = avc->priv_data;
- int ret, channel_map = 0, gain_db = 0, nb_streams, nb_coupled, channels;
- uint8_t mapping_arr[8] = { 0, 1 }, *mapping;
-
- channels = avc->extradata_size >= 10 ? avc->extradata[9] : (avc->ch_layout.nb_channels == 1) ? 1 : 2;
- if (channels <= 0) {
- av_log(avc, AV_LOG_WARNING,
- "Invalid number of channels %d, defaulting to stereo\n", channels);
- channels = 2;
- }
-
- avc->sample_rate = 48000;
- avc->sample_fmt = avc->request_sample_fmt == AV_SAMPLE_FMT_FLT ?
- AV_SAMPLE_FMT_FLT : AV_SAMPLE_FMT_S16;
- av_channel_layout_uninit(&avc->ch_layout);
- if (channels > 8) {
- avc->ch_layout.order = AV_CHANNEL_ORDER_UNSPEC;
- avc->ch_layout.nb_channels = channels;
- } else {
- av_channel_layout_copy(&avc->ch_layout, &ff_vorbis_ch_layouts[channels - 1]);
- }
-
- if (avc->extradata_size >= OPUS_HEAD_SIZE) {
- opus->pre_skip = AV_RL16(avc->extradata + 10);
- gain_db = sign_extend(AV_RL16(avc->extradata + 16), 16);
- channel_map = AV_RL8 (avc->extradata + 18);
- }
- if (avc->extradata_size >= OPUS_HEAD_SIZE + 2 + channels) {
- nb_streams = avc->extradata[OPUS_HEAD_SIZE + 0];
- nb_coupled = avc->extradata[OPUS_HEAD_SIZE + 1];
- if (nb_streams + nb_coupled != channels)
- av_log(avc, AV_LOG_WARNING, "Inconsistent channel mapping.\n");
- mapping = avc->extradata + OPUS_HEAD_SIZE + 2;
- } else {
- if (channels > 2 || channel_map) {
- av_log(avc, AV_LOG_ERROR,
- "No channel mapping for %d channels.\n", channels);
- return AVERROR(EINVAL);
- }
- nb_streams = 1;
- nb_coupled = channels > 1;
- mapping = mapping_arr;
- }
-
- if (channels > 2 && channels <= 8) {
- const uint8_t *vorbis_offset = ff_vorbis_channel_layout_offsets[channels - 1];
- int ch;
-
- /* Remap channels from Vorbis order to ffmpeg order */
- for (ch = 0; ch < channels; ch++)
- mapping_arr[ch] = mapping[vorbis_offset[ch]];
- mapping = mapping_arr;
- }
-
- opus->dec = opus_multistream_decoder_create(avc->sample_rate, channels,
- nb_streams, nb_coupled,
- mapping, &ret);
- if (!opus->dec) {
- av_log(avc, AV_LOG_ERROR, "Unable to create decoder: %s\n",
- opus_strerror(ret));
- return ff_opus_error_to_averror(ret);
- }
-
-#ifdef OPUS_SET_GAIN
- ret = opus_multistream_decoder_ctl(opus->dec, OPUS_SET_GAIN(gain_db));
- if (ret != OPUS_OK)
- av_log(avc, AV_LOG_WARNING, "Failed to set gain: %s\n",
- opus_strerror(ret));
-#else
- {
- double gain_lin = ff_exp10(gain_db / (20.0 * 256));
- if (avc->sample_fmt == AV_SAMPLE_FMT_FLT)
- opus->gain.d = gain_lin;
- else
- opus->gain.i = FFMIN(gain_lin * 65536, INT_MAX);
- }
-#endif
-
-#ifdef OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST
- ret = opus_multistream_decoder_ctl(opus->dec,
- OPUS_SET_PHASE_INVERSION_DISABLED(!opus->apply_phase_inv));
- if (ret != OPUS_OK)
- av_log(avc, AV_LOG_WARNING,
- "Unable to set phase inversion: %s\n",
- opus_strerror(ret));
-#endif
-
- /* Decoder delay (in samples) at 48kHz */
- avc->delay = avc->internal->skip_samples = opus->pre_skip;
-
- return 0;
-}
-
-static av_cold int libopus_decode_close(AVCodecContext *avc)
-{
- struct libopus_context *opus = avc->priv_data;
-
- if (opus->dec) {
- opus_multistream_decoder_destroy(opus->dec);
- opus->dec = NULL;
- }
- return 0;
-}
-
-#define MAX_FRAME_SIZE (960 * 6)
-
-static int libopus_decode(AVCodecContext *avc, AVFrame *frame,
- int *got_frame_ptr, AVPacket *pkt)
-{
- struct libopus_context *opus = avc->priv_data;
- int ret, nb_samples;
-
- frame->nb_samples = MAX_FRAME_SIZE;
- if ((ret = ff_get_buffer(avc, frame, 0)) < 0)
- return ret;
-
- if (avc->sample_fmt == AV_SAMPLE_FMT_S16)
- nb_samples = opus_multistream_decode(opus->dec, pkt->data, pkt->size,
- (opus_int16 *)frame->data[0],
- frame->nb_samples, 0);
- else
- nb_samples = opus_multistream_decode_float(opus->dec, pkt->data, pkt->size,
- (float *)frame->data[0],
- frame->nb_samples, 0);
-
- if (nb_samples < 0) {
- av_log(avc, AV_LOG_ERROR, "Decoding error: %s\n",
- opus_strerror(nb_samples));
- return ff_opus_error_to_averror(nb_samples);
- }
-
-#ifndef OPUS_SET_GAIN
- {
- int i = avc->ch_layout.nb_channels * nb_samples;
- if (avc->sample_fmt == AV_SAMPLE_FMT_FLT) {
- float *pcm = (float *)frame->data[0];
- for (; i > 0; i--, pcm++)
- *pcm = av_clipf(*pcm * opus->gain.d, -1, 1);
- } else {
- int16_t *pcm = (int16_t *)frame->data[0];
- for (; i > 0; i--, pcm++)
- *pcm = av_clip_int16(((int64_t)opus->gain.i * *pcm) >> 16);
- }
- }
-#endif
-
- frame->nb_samples = nb_samples;
- *got_frame_ptr = 1;
-
- return pkt->size;
-}
-
-static void libopus_flush(AVCodecContext *avc)
-{
- struct libopus_context *opus = avc->priv_data;
-
- opus_multistream_decoder_ctl(opus->dec, OPUS_RESET_STATE);
- /* The stream can have been extracted by a tool that is not Opus-aware.
- Therefore, any packet can become the first of the stream. */
- avc->internal->skip_samples = opus->pre_skip;
-}
-
-
-#define OFFSET(x) offsetof(struct libopus_context, x)
-#define FLAGS AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_DECODING_PARAM
-static const AVOption libopusdec_options[] = {
-#ifdef OPUS_SET_PHASE_INVERSION_DISABLED_REQUEST
- { "apply_phase_inv", "Apply intensity stereo phase inversion", OFFSET(apply_phase_inv), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, FLAGS },
-#endif
- { NULL },
-};
-
-static const AVClass libopusdec_class = {
- .class_name = "libopusdec",
- .item_name = av_default_item_name,
- .option = libopusdec_options,
- .version = LIBAVUTIL_VERSION_INT,
-};
-
-
-const FFCodec ff_libopus_decoder = {
- .p.name = "libopus",
- CODEC_LONG_NAME("libopus Opus"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_OPUS,
- .priv_data_size = sizeof(struct libopus_context),
- .init = libopus_decode_init,
- .close = libopus_decode_close,
- FF_CODEC_DECODE_CB(libopus_decode),
- .flush = libopus_flush,
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF,
- .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
- FF_CODEC_CAP_INIT_CLEANUP,
- .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_FLT,
- AV_SAMPLE_FMT_S16,
- AV_SAMPLE_FMT_NONE },
- .p.priv_class = &libopusdec_class,
- .p.wrapper_name = "libopus",
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/m101.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/m101.c
deleted file mode 100644
index 3def577b746f6c5bba37b0c68d5c96d8ae159c44..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/m101.c
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Copyright (c) 2016 Michael Niedermayer
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/intreadwrite.h"
-
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "decode.h"
-
-
-static av_cold int m101_decode_init(AVCodecContext *avctx)
-{
- if (avctx->extradata_size < 6*4) {
- avpriv_request_sample(avctx, "Missing or too small extradata (size %d)", avctx->extradata_size);
- return AVERROR_INVALIDDATA;
- }
-
- if (avctx->extradata[2*4] == 10)
- avctx->pix_fmt = AV_PIX_FMT_YUV422P10;
- else if (avctx->extradata[2*4] == 8) {
- avctx->pix_fmt = AV_PIX_FMT_YUYV422;
- } else {
- avpriv_request_sample(avctx, "BPS %d", avctx->extradata[2*4]);
- return AVERROR_INVALIDDATA;
- }
-
- return 0;
-}
-
-static int m101_decode_frame(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame, AVPacket *avpkt)
-{
- const uint8_t *buf = avpkt->data;
- int stride, ret;
- int x, y;
- int min_stride = 2 * avctx->width;
- int bits = avctx->extradata[2*4];
-
- stride = AV_RL32(avctx->extradata + 5*4);
-
- if (avctx->pix_fmt == AV_PIX_FMT_YUV422P10)
- min_stride = (avctx->width + 15) / 16 * 40;
-
- if (stride < min_stride || avpkt->size < stride * (uint64_t)avctx->height) {
- av_log(avctx, AV_LOG_ERROR, "stride (%d) is invalid for packet sized %d\n",
- stride, avpkt->size);
- return AVERROR_INVALIDDATA;
- }
-
- if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
- return ret;
- frame->pict_type = AV_PICTURE_TYPE_I;
- frame->key_frame = 1;
- frame->interlaced_frame = ((avctx->extradata[3*4] & 3) != 3);
- if (frame->interlaced_frame)
- frame->top_field_first = avctx->extradata[3*4] & 1;
-
- for (y = 0; y < avctx->height; y++) {
- int src_y = y;
- if (frame->interlaced_frame)
- src_y = ((y&1)^frame->top_field_first) ? y/2 : (y/2 + avctx->height/2);
- if (bits == 8) {
- uint8_t *line = frame->data[0] + y*frame->linesize[0];
- memcpy(line, buf + src_y*stride, 2*avctx->width);
- } else {
- int block;
- uint16_t *luma = (uint16_t*)&frame->data[0][y*frame->linesize[0]];
- uint16_t *cb = (uint16_t*)&frame->data[1][y*frame->linesize[1]];
- uint16_t *cr = (uint16_t*)&frame->data[2][y*frame->linesize[2]];
- for (block = 0; 16*block < avctx->width; block ++) {
- const uint8_t *buf_src = buf + src_y*stride + 40*block;
- for (x = 0; x < 16 && x + 16*block < avctx->width; x++) {
- int xd = x + 16*block;
- if (x&1) {
- luma [xd] = (4*buf_src[2*x + 0]) + ((buf_src[32 + (x>>1)]>>4)&3);
- } else {
- luma [xd] = (4*buf_src[2*x + 0]) + (buf_src[32 + (x>>1)] &3);
- cb[xd>>1] = (4*buf_src[2*x + 1]) + ((buf_src[32 + (x>>1)]>>2)&3);
- cr[xd>>1] = (4*buf_src[2*x + 3]) + (buf_src[32 + (x>>1)]>>6);
- }
- }
- }
- }
- }
-
- *got_frame = 1;
- return avpkt->size;
-}
-
-const FFCodec ff_m101_decoder = {
- .p.name = "m101",
- CODEC_LONG_NAME("Matrox Uncompressed SD"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_M101,
- .init = m101_decode_init,
- FF_CODEC_DECODE_CB(m101_decode_frame),
- .p.capabilities = AV_CODEC_CAP_DR1,
-};
diff --git a/spaces/congsaPfin/Manga-OCR/logs/6play apk Everything You Need to Know About the Best Streaming Platform in France.md b/spaces/congsaPfin/Manga-OCR/logs/6play apk Everything You Need to Know About the Best Streaming Platform in France.md
deleted file mode 100644
index 1dd75a18fde617f0d7189aeed67a40a56b5c0d3d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/6play apk Everything You Need to Know About the Best Streaming Platform in France.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
6play apk: A Streaming App for Live and Replay TV
-
If you are looking for a streaming app that lets you watch live and replay TV from various channels, you might want to check out 6play apk. This app is developed by M6 Distribution Digital, a French media company that owns several TV channels, such as M6, W9, 6ter, Gulli, Paris Première, and Téva. With 6play apk, you can enjoy unlimited access to more than 6,500 hours of exclusive programs (TV & Digital) and a unique personalized experience. You can also discover original programs available exclusively on 6play.
6play apk is an Android app that allows you to watch live and replay TV from the M6 Group channels and other partners. You can also access free 24/24 6play channels that offer continuous streaming of your favorite programs. You can also subscribe to the premium option to enjoy ad-free viewing, offline download, cast to TV, and connected TV features.
-
Features of 6play apk
-
Here are some of the features that make 6play apk a great streaming app for live and replay TV:
-
Live TV
-
You can watch M6, W9, 6ter, Gulli, Téva and Paris Première live on your Android device. You can find live your series, major sporting events, entertainment, kids programs, and news magazines.
-
Streaming
-
You can find all your favorite programs on demand, such as Love Island, New house for a new life, An almost perfect dinner, Married at first sight, and many more. You can also resume playing your programs on all your screens where you left off.
-
Free 24/24 6play Channels
-
You can enjoy your favorite programs continuously 24/24 with the free 6play channels, such as Konbini 24/24, Forbidden Zone 24/24, Criminal Investigations 24/24, Telenovelas 24/24, One day a story 24/24, Vice 24/24, Love Island France.
-
-
Original Programs
-
You can discover original programs available exclusively on 6play, such as Married at first sight: life after, Vip House Tour, Fan of... You can also watch cult series like NCIS Hawaii, 9-1-1; movies like 7 years in Tibet; documentaries like Lady Diana, Harry and Meghan: the big unboxing.
-
Live by 6play Channel
-
You can experience all the emotions of live events thanks to the Live by 6play channel. You can watch major sporting events like MMA - Cage Warriors; and find exclusive concerts.
-
Recommendations
-
You can enjoy a personalized experience with selections of programs recommended for you. You can also discover collections designed for you: Konbini, K for Korea, History in Series, The Best of Reality Series...
-
Preferences
-
You can manage your preferences simply in one click thanks to "My List". You can access your favorite programs in one click in your personalized space.
-
Multi-Screen Recovery
-
You can start playing a program on your mobile, tablet or computer and finish it on another screen.
-
The Premium
-
You can take advantage of the Paris Première or téva channels, live or in replay, whenever you want on all screens, in a non-binding subscription.
How to download and install 6play apk?
-
To download and install 6play apk on your Android device, you need to follow these steps:
-
-
Go to the Google Play Store and search for 6play, TV, Replay & Streaming.
-
Tap on the Install button and wait for the app to download.
-
Once the app is installed, open it and sign in with your 6play account or create one for free.
-
Enjoy watching live and replay TV from various channels and original programs on 6play.
-
-
If you want to download and install 6play apk on your Android TV, you need to follow these steps:
-
-
Go to the Google Play Store on your Android TV and search for 6Play for ANDROID TV.
-
Tap on the Install button and wait for the app to download.
-
Once the app is installed, open it and sign in with your 6play account or create one for free.
-
Enjoy watching live and replay TV from various channels and original programs on 6play.
-
-
Pros and cons of 6play apk
-
Like any streaming app, 6play apk has its pros and cons. Here are some of them:
-
-
| Pros | Cons |
| --- | --- |
| A wide range of programs from various channels and genres | Some programs are geo-restricted or require a premium subscription |
| Free 24/24 6play channels that offer continuous streaming of your favorite programs | Ads may interrupt your viewing experience unless you subscribe to the premium option |
| Original programs available exclusively on 6play | The app may not be compatible with some devices or regions |
| A personalized experience with recommendations and preferences | The app may have some bugs or glitches that affect its performance |
| A multi-screen recovery feature that lets you resume playing your programs on any device | The app may consume a lot of data or battery if you stream a lot of content |
| A premium option that offers ad-free viewing, offline download, cast to TV, and connected TV features | The premium option costs €1.99 per month per channel, which may be expensive for some users |
-
-
Conclusion
-
6play apk is a streaming app that lets you watch live and replay TV from various channels, such as M6, W9, 6ter, Gulli, Paris Première, and Téva. You can also access free 24/24 6play channels that offer continuous streaming of your favorite programs. You can also discover original programs available exclusively on 6play. You can enjoy a personalized experience with recommendations and preferences. You can also resume playing your programs on any device with the multi-screen recovery feature. You can also subscribe to the premium option to enjoy ad-free viewing, offline download, cast to TV, and connected TV features.
-
If you are looking for a streaming app that lets you watch live and replay TV from various channels, you might want to check out 6play apk. You can download it from the Google Play Store for your Android device or Android TV. You can also visit the official website of 6play for more information.
-
FAQs
-
Here are some frequently asked questions about 6play apk:
-
-
Q: Is 6play apk safe to use?
-A: Yes, 6play apk is safe to use as long as you download it from the official sources, such as the Google Play Store or the official website of 6play. You should also avoid downloading any modded or hacked versions of the app as they may contain malware or viruses.
Q: Is 6play apk legal?
-A: Yes, 6play apk is legal as long as you use it in accordance with the terms and conditions of the app and the content providers. You should also respect the intellectual property rights of the creators and owners of the programs you watch on the app.
Q: How can I contact the support team of 6play apk?
-A: If you have any comments, questions, or issues with the app, you can contact the support team of 6play apk by sending an email to contact@6play.fr or by filling out the contact form on the app or the website. You can also check the FAQ section on the app or the website for more information.
Q: How can I cancel my premium subscription to 6play apk?
-A: If you want to cancel your premium subscription to 6play apk, you need to follow these steps:
-
Go to the Google Play Store and tap on the Menu icon.
-
Tap on Subscriptions and find your 6play premium subscription.
-
Tap on Cancel subscription and follow the instructions.
-
You will receive a confirmation email from Google Play.
-
-
Note that you can still access your premium features until the end of your current billing cycle.
-
Q: How can I cast 6play apk to my TV?
-A: If you want to cast 6play apk to your TV, you need to have a Chromecast device or a compatible smart TV. You also need to have the 6play app installed on your Android device and connected to the same Wi-Fi network as your TV. Then, you need to follow these steps:
-
Open the 6play app on your Android device and select the program you want to watch.
-
Tap on the Cast icon on the top right corner of the screen.
-
Select your TV from the list of available devices.
-
The program will start playing on your TV.
-
-
To stop casting, tap on the Cast icon again and select Disconnect.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Resources Pubg Mobile How to Get the Best Gaming Experience.md b/spaces/congsaPfin/Manga-OCR/logs/Download Resources Pubg Mobile How to Get the Best Gaming Experience.md
deleted file mode 100644
index 7ac53663237bdcd1a844c5b2409bdbeb55ee5eb1..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Resources Pubg Mobile How to Get the Best Gaming Experience.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
How to Download Resources PUBG Mobile: A Complete Guide
-
PUBG Mobile is one of the most popular and addictive battle royale games in the world. It offers a variety of maps, modes, weapons, skins, and other features that make it fun and exciting. However, to enjoy all these features, you need to download some additional resources from the game or from external sources. In this article, we will show you how to download resources PUBG mobile easily and safely.
-
What are Resources PUBG Mobile?
-
Resources PUBG mobile are files that contain data and graphics for different aspects of the game. They include maps, modes, skins, sounds, effects, and more. They are essential for running the game smoothly and enhancing your gaming experience.
Why do you need to download resources PUBG mobile?
-
You need to download resources PUBG mobile for several reasons:
-
-
To access new maps and modes that are added in every update
-
To customize your character and weapons with different skins and outfits
-
To improve the game performance and reduce lagging issues
-
To get rewards and benefits from downloading certain resource packs
-
-
What are the types of resources PUBG mobile?
-
There are several types of resources PUBG mobile that you can download from the game or from external sources. Here are some of them:
-
-
| Type | Description | Size |
| --- | --- | --- |
| Recommended Resource Pack | Contains core resources for PUBG mobile. Download for a better gaming experience. | 776 MB |
| Classic Maps | Contains classic mode maps that haven't been downloaded. | Varies depending on the map |
| Themed Modes | Contains new game modes from the latest version that haven't been downloaded. | Varies depending on the mode |
| Arena Maps | Contains arena mode maps that haven't been downloaded. | Varies depending on the map |
| System Resource Pack | Contains system resources from the latest version that haven't been downloaded. | Varies depending on the version |
| Classic Graphics Pack | Contains graphics resources from older game versions. | 826 MB |
-
-
How to download resources PUBG mobile from the game?
-
The easiest way to download resources PUBG mobile is from the game itself. Here are the steps you need to follow:
-
-
Step 1: Open the game and go to settings
-
Launch PUBG mobile on your device and tap on the up arrow button on the bottom right corner of the screen. Then, tap on settings.
-
Step 2: Select the download option and choose the resource packs you want
-
On the settings menu, tap on the download option. You will see a list of resource packs that are available for download. You can tap on each pack to see its description, size, and rewards. Select the packs you want to download by tapping on the download button next to them.
-
Step 3: Wait for the download to complete and collect your rewards
-
After you have selected the resource packs you want, wait for the download to complete. You can see the progress bar and the remaining time on the screen. You can also pause or resume the download at any time. Once the download is finished, you can collect your rewards by tapping on the claim button. You will get some items such as silver fragments, BP, and coupons.
-
How to download resources PUBG mobile from external sources?
-
Another way to download resources PUBG mobile is from external sources. This method is useful if you want to save some data or if you have trouble downloading from the game. However, you need to be careful and only use trusted and verified websites that offer PUBG mobile OBB files. Here are the steps you need to follow:
-
Step 1: Find a reliable website that offers PUBG mobile OBB files
-
An OBB file is a data file that contains additional resources for PUBG mobile. You can find many websites that offer PUBG mobile OBB files for different versions of the game. However, not all of them are safe and secure. You need to do some research and check the reviews and ratings of the website before downloading anything. Some of the reputable websites that offer PUBG mobile OBB files are APKPure, APKMirror, and APKCombo.
-
Step 2: Download the OBB file and copy it to your device storage
-
Once you have found a reliable website, choose the OBB file that matches your game version and device compatibility. Download the OBB file to your computer or directly to your device. If you download it to your computer, you need to copy it to your device storage using a USB cable or a file manager app. The OBB file should be placed in the Android/OBB/com.tencent.ig folder on your device storage.
-
Step 3: Install the APK file and launch the game
-
In addition to the OBB file, you also need to install the APK file of PUBG mobile on your device. The APK file is an application file that contains the game itself. You can download it from the same website as the OBB file or from the official PUBG mobile website. After downloading the APK file, install it on your device by allowing unknown sources in your settings. Then, launch the game and enjoy.
-
Tips and tricks for downloading resources PUBG mobile
-
To make sure that you download resources PUBG mobile successfully and efficiently, here are some tips and tricks that you can follow:
-
Use a stable and fast internet connection
-
The most important thing for downloading resources PUBG mobile is having a good internet connection. You need a stable and fast internet connection to avoid interruptions, errors, or corruption of files. You can use Wi-Fi or mobile data, but make sure that you have enough data allowance and signal strength.
-
Check your device storage space before downloading
-
Another important thing for downloading resources PUBG mobile is having enough storage space on your device. You need to check how much space you have left before downloading any resource pack or OBB file. You can do this by going to settings > storage on your device. If you don't have enough space, you need to delete some unwanted or unnecessary files or apps from your device.
-
Delete unwanted or outdated resource packs to save space
-
If you have downloaded many resource packs or OBB files in the past, you may not need them anymore or they may be outdated. You can delete them from your device to save some space and avoid cluttering your storage. You can do this by going to settings > download in PUBG mobile and tapping on the delete button next to each resource pack or OBB file.
-
Conclusion
-
Downloading resources PUBG mobile is a simple and easy process that can enhance your gaming experience and performance. You can download resources PUBG mobile from the game itself or from external sources, depending on your preference and convenience. However, you need to be careful and only use trusted and verified websites that offer PUBG mobile OBB files. You also need to have a stable and fast internet connection, enough storage space, and delete unwanted or outdated resource packs or OBB files. We hope this article has helped you learn how to download resources PUBG mobile easily and safely.
-
FAQs
-
Here are some frequently asked questions about downloading resources PUBG mobile:
-
Q: How long does it take to download resources PUBG mobile?
-
A: The time it takes to download resources PUBG mobile depends on several factors, such as the size of the resource pack or OBB file, the speed of your internet connection, and the performance of your device. Generally, it can take from a few minutes to a few hours to download resources PUBG mobile.
-
Q: How can I update my PUBG mobile to the latest version?
-
A: You can update your PUBG mobile to the latest version by going to the Google Play Store or the App Store and tapping on the update button. Alternatively, you can download the latest APK file and OBB file from a reliable website and install them on your device.
-
Q: What are the benefits of downloading resources PUBG mobile?
-
A: The benefits of downloading resources PUBG mobile are:
-
-
You can access new maps and modes that are added in every update
-
You can customize your character and weapons with different skins and outfits
-
You can improve the game performance and reduce lagging issues
-
You can get rewards and benefits from downloading certain resource packs
-
-
Q: What are the risks of downloading resources PUBG mobile from external sources?
-
A: The risks of downloading resources PUBG mobile from external sources are:
-
-
You may download corrupted or infected files that can harm your device or compromise your data
-
You may download outdated or incompatible files that can cause errors or crashes in the game
-
You may violate the terms and conditions of PUBG mobile and get banned from the game
-
-
Q: How can I delete resources PUBG mobile from my device?
-
A: You can delete resources PUBG mobile from your device by going to settings > download in PUBG mobile and tapping on the delete button next to each resource pack or OBB file. You can also delete them manually by going to Android/OBB/com.tencent.ig folder on your device storage and deleting the unwanted files.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Umlando Download Music The Best Amapiano Songs of 2022.md b/spaces/congsaPfin/Manga-OCR/logs/Umlando Download Music The Best Amapiano Songs of 2022.md
deleted file mode 100644
index 2cb98e3ca4829bcde9be56458fd17b8f57983bf6..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Umlando Download Music The Best Amapiano Songs of 2022.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
Umlando Download Music: How to Enjoy Free and Legal Music from South Africa
-
If you are a fan of South African music, you might have heard of Umlando, a popular dance style that features catchy beats and vocals. Umlando music is a fusion of traditional and modern influences, and it has become a sensation among music lovers around the world. But how can you download Umlando music for free and legally? And where can you listen to Umlando music online? In this article, we will answer these questions and more.
Umlando is a Zulu word that means history. It is used to refer to the study of the past, especially the history of people. The word is also used in the context of dance, where it refers to a popular dance style that originated in South Africa.
-
The meaning of Umlando
-
Umlando music is a genre of dance music that incorporates elements of traditional Zulu music, such as drums, chants, and melodies, with modern influences, such as electronic beats, synths, and vocals. Umlando music is inspired by the history and culture of the Zulu people, as well as their struggles and achievements. Umlando music celebrates the diversity and richness of South African music, and it aims to connect people across generations and backgrounds.
-
The popularity of Umlando music
-
Umlando music has gained popularity in recent years, thanks to the efforts of talented artists and producers who have created catchy and innovative songs. Some of the most famous Umlando artists include 9umba, Toss, Mdoovar, Sir Trill, Sino Msolo, Lady Du, Young Stunna, Slade, and many more. Their songs have been featured on various platforms, such as Apple Music, Wynk Music, YouTube, and others. Umlando music has also attracted fans from different countries, who enjoy the upbeat and energetic vibe of the genre.
-
How to download Umlando music for free and legally
-
If you want to download Umlando music for free and legally, you have several options to choose from. There are many websites that offer free music downloads, and some of them specialize in Umlando music or other genres of South African music. Here are some of the best free music download sites for Umlando music:
-
The best free music download sites for Umlando music
-
-
Free Music Archive: This website provides free access to thousands of songs that can be downloaded or streamed online. You can search for Umlando music by using tags like "South Africa", "Zulu", "Amapiano", or "Dance". You can also browse through curated collections or trending tracks. You can download songs in MP3 format without creating an account.
-
Jamendo Music: This website allows artists to upload their music under Creative Commons licenses, which means that they give permission for anyone to download or stream their songs for free. You can find Umlando music by using filters like "Genre", "Mood", or "Instrument". You can also listen to online radio channels or playlists that feature Umlando music. You can download songs in MP3 format after creating a free account.
-
Internet Archive: This website is a digital library that archives various types of media, including audio files. You can find Umlando music by searching through categories like "Audio", "Music", or "Live Music Archive". You can also use keywords like "Umlando", "South Africa", or "Zulu". You can download songs in various formats, such as MP3, OGG, or FLAC, without creating an account.
-
-
The benefits of downloading Umlando music
-
Downloading Umlando music for free and legally has many benefits, such as:
-
-
Supporting the artists: By downloading Umlando music from legitimate sources, you are showing your appreciation and respect for the artists who created the music. You are also helping them to gain more exposure and recognition for their work.
-
Enjoying offline access: By downloading Umlando music to your device, you can enjoy listening to it anytime and anywhere, even without an internet connection. You can also create your own playlists and share them with your friends.
-
Controlling the quality: By downloading Umlando music in high-quality formats, such as MP3 or FLAC, you can ensure that you get the best sound experience possible. You can also adjust the volume and equalizer settings to suit your preferences.
-
-
How to listen to Umlando music online
-
If you prefer to listen to Umlando music online, you have many options as well. There are many streaming services that offer access to Umlando music, and some of them are free or have free trials. Here are some of the best streaming services for Umlando music:
-
The best streaming services for Umlando music
-
-
-
| Streaming Service | Features | Price |
| --- | --- | --- |
| Spotify | Offers a large library of Umlando music and other genres; allows you to create and follow playlists and podcasts; provides personalized recommendations and curated radio stations; supports offline mode and cross-device syncing | Free with ads and limited skips; $9.99/month for Premium with no ads and unlimited skips |
| Apple Music | Offers a large library of Umlando music and other genres; allows you to create and follow playlists and podcasts; provides personalized recommendations and curated radio stations; supports offline mode and cross-device syncing | Free for 3 months, then $9.99/month |
| Deezer | Offers a large library of Umlando music and other genres; allows you to create and follow playlists and podcasts; provides personalized recommendations and curated radio stations; supports offline mode and cross-device syncing | Free with ads and limited skips; $9.99/month for Premium with no ads and unlimited skips |
| SoundCloud | Offers a large library of Umlando music and other genres; allows you to upload your own music and discover new artists; provides personalized recommendations and curated radio stations; supports offline mode and cross-device syncing | Free with ads and limited skips; $9.99/month for SoundCloud Go+ with no ads and unlimited skips |
-
-
The advantages of streaming Umlando music
-
Streaming Umlando music online has many advantages, such as:
-
-
-
Exploring new music: By streaming Umlando music online, you can discover new songs and artists that you might not find otherwise. You can also listen to different genres and styles of music that are related to Umlando music.
-
Saving storage space: By streaming Umlando music online, you can avoid using up your device's storage space with downloaded files. You can also access your music from any device that has an internet connection.
-
Staying updated: By streaming Umlando music online, you can always listen to the latest releases and trends in the genre. You can also get notified when your favorite artists drop new songs or albums.
-
-
Conclusion
-
Umlando music is a genre of dance music that originated in South Africa. It fuses traditional Zulu music with modern influences and celebrates the history and culture of the Zulu people. It is popular with music lovers around the world, who enjoy its catchy beats and vocals. You can download or stream Umlando music for free and legally from the websites and services described above, depending on your preferences. Either way, you will surely enjoy the music.
-
FAQs
-
Here are some frequently asked questions about Umlando music and how to download or stream it:
-
-
What is the difference between Umlando and Amapiano? Amapiano is another genre of dance music that originated in South Africa. It is similar to Umlando in some aspects, such as using electronic beats and synths, but it also has influences from jazz, soul, and kwaito. Amapiano is more mellow and smooth than Umlando, which is more upbeat and energetic.
-
Is Umlando music legal to download or stream? Yes, Umlando music is legal to download or stream, as long as you use legitimate sources that have the permission of the artists or the rights holders. You should avoid using illegal or pirated websites or services that may infringe on the intellectual property rights of the creators.
-
How can I support Umlando artists? You can support Umlando artists by downloading or streaming their music from official platforms, such as Apple Music, Wynk Music, YouTube, Spotify, Deezer, or SoundCloud. You can also follow them on social media, share their music with your friends, or buy their merchandise or tickets to their shows.
-
What are some of the best Umlando songs to listen to? There are many great Umlando songs to listen to, but here are some of the most popular ones: 9umba & Toss - "uThixo"; Mdoovar - "Ntwana Ka God"; Sir Trill & Sino Msolo - "Isibonelo"; Lady Du & Young Stunna - "Catalia"; Slade - "Barman".
-
Where can I learn more about Umlando music and culture? You can learn more about Umlando music and culture by visiting websites or blogs that cover South African music, such as SA Music Mag, Zkhiphani, or Fakaza. You can also watch documentaries or videos that feature Umlando artists or dancers, such as "Umlando: The History of Dance in South Africa" or "Umlando: The Dance That Moves South Africa".
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Barcode Generator And Overprinter V6610 TOP Crack.md b/spaces/contluForse/HuggingGPT/assets/Barcode Generator And Overprinter V6610 TOP Crack.md
deleted file mode 100644
index ce2266bb5f0179121e068d4a618bc629f92fcf1e..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Barcode Generator And Overprinter V6610 TOP Crack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Barcode Generator And Overprinter V6610 Crack · thriller michael jackson 1080p vs 720p · system programming and operating system d m ... 4d29de3e1b
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/CHESS Chessbase Fritz Powerbook.rar.md b/spaces/contluForse/HuggingGPT/assets/CHESS Chessbase Fritz Powerbook.rar.md
deleted file mode 100644
index bdcfbc9a8e39f7400e8ac8d7f34b8317a9b5c5c3..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/CHESS Chessbase Fritz Powerbook.rar.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
-When the former superhero movie star (Keaton) sets out to return to Broadway, he struggles with himself. In particular, he struggles with the "minor question": if he becomes a star again, what is he going to do with his life?
-In fact, he hasn't decided yet, and although he has enough money to buy himself a house and a car, he doesn't plan to spend that money on himself.
-He doesn't handle money very well, and while he doesn't care much about how he lives his life, he doesn't want to lose it.
-But it's hard for him.
-And if he decides to go through with it, it could destroy him.
-I don't know why it took me so long to write this. 8a78ff9644
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Business Management Book By Cb Gupta Pdf Download.md b/spaces/diacanFperku/AutoGPT/Business Management Book By Cb Gupta Pdf Download.md
deleted file mode 100644
index 0fad8c4856c6c5756ce4348b13c3293ae25c16bb..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Business Management Book By Cb Gupta Pdf Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Business Management Book By Cb Gupta Pdf Download pdf Download Business Organisation And Management Cb Gupta Pdf Epub book pdf free download ... 4d29de3e1b
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (3 Idiots Subtitle Indonesia 720p).md b/spaces/diacanFperku/AutoGPT/HD Online Player (3 Idiots Subtitle Indonesia 720p).md
deleted file mode 100644
index 90a28f7fa316928d295a4df30007c683448d23d5..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/HD Online Player (3 Idiots Subtitle Indonesia 720p).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
This study is one of the recent research results of Dr. Kandou N. He, funded by the Ford Foundation of America. He is a leading expert and researcher on development and education in Indonesia, and the founder of The Human Right Documentation Center, the first independent human rights research center in the world, which advocates for human rights on a weekly basis. Hrdolyth (as mentioned above) differs from other human rights research centers in its methods: it focuses on the human being and their experience rather than on mechanisms and history, and it aims to give a voice to those who are ignored and unhired, unrepresented and uncounted by the human rights and development community. You can read Dr. Kandou N. He as one of the authors of the document below (this is also the last page of the article).
-
HD Online Player (3 Idiots Subtitle Indonesia 720p)
Can the gap be bridged? Can we narrow it down to a gap of 1.5 or 0.5 points? Who can say? All we can say with certainty is that the current level of living conditions in Indonesia is not good. The gap between rich and poor has to be tackled urgently. While all of us have a deep interest in the success of the Indonesian people, there is a wide misunderstanding regarding the methods and goals of the human rights and development community in Indonesia. Reliable knowledge of where the gap actually lies is not available; in other words, when we want to analyze the gap, we lack reliable data, relevant information, and the capacity to do so. Nevertheless, why should it be so hard to narrow the gap if it actually exists? The gap is not created by some evil intent, but rather by mistakes of national planning and policy.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Polyfx 3ds Max 2016 LINK Crack.md b/spaces/diacanFperku/AutoGPT/Polyfx 3ds Max 2016 LINK Crack.md
deleted file mode 100644
index 9f408b719c52dc005d760b33ab096f2c0c5fb378..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Polyfx 3ds Max 2016 LINK Crack.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Use PolyFX in 3ds Max 2016
-
PolyFX is a powerful tool that lets you break an object into parts and animate them. It has many options for fine-tuning the animation and several additional tools, making it a great solution for promotional videos, game development, and similar work. In this article, we will show you how to use PolyFX in 3ds Max 2016.
Step 1: Download and Install PolyFX
PolyFX is a script that you can download from ScriptSpot. After downloading the zip file, extract it and copy the PolyFX.mzp file to your 3ds Max scripts folder. Then run 3ds Max, go to MaxScript > Run Script, and select the PolyFX.mzp file. This will install PolyFX and add it to your Customize User Interface dialog.
-
Step 2: Create an Object and Apply PolyFX
-
Create any object that you want to break into parts and animate. For example, we will create a simple text object with the word "PolyFX". Then go to Customize > Customize User Interface and find PolyFX under the Category > PolyFX. Drag and drop it to any toolbar or menu. Click on the PolyFX button and select your object. This will apply PolyFX to your object and open the PolyFX settings window.
-
Step 3: Adjust the Settings
-
In the PolyFX settings window, you can adjust various parameters to control how your object will break and animate. The most important ones are:
-
-
-
Mode: This determines how your object will be divided into parts. You can choose from Face, Element, Voronoi, Slice, or Custom. Each mode has its own sub-options that you can tweak.
-
Animation: This determines how your parts will move. You can choose from No Animation, Fall Down, Fly Away, Bounce, or Custom Animation. Each mode has its own sub-options that you can tweak.
-
Noise: This adds some random variation to the position, rotation, and scale of your parts.
-
Collapse Mode: This determines how your parts will collapse back to the original object. You can choose from No Collapse, Collapse Backwards, or Collapse Forwards.
-
Display Mode: This determines how your parts will be displayed in the viewport. You can choose from Show All Parts, Show Only Selected Part, or Show Only Animated Parts.
-
Select Part: This allows you to select a specific part of your object and modify its settings individually.
-
Create Keyframes: This creates keyframes for your animation based on the current frame range.
-
Delete Keyframes: This deletes all keyframes for your animation.
-
Reset Settings: This resets all settings to their default values.
-
About / Help / Donate / Buy / Update / Uninstall / Close: These are self-explanatory buttons that provide additional information or actions related to PolyFX.
-
-
For example, we will set the Mode to Voronoi with 50 parts, the Animation to Fly Away with Speed of 10, the Noise to Position with Strength of 5, and the Collapse Mode to No Collapse. Then we will create keyframes for our animation.
-
Step 4: Preview and Render Your Animation
-
To preview your animation, you can use the standard 3ds Max playback controls or scrub the timeline. You can also use the Display Mode to show only the animated parts.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Ratiomaster 1.6 Download Pc LINK.md b/spaces/diacanFperku/AutoGPT/Ratiomaster 1.6 Download Pc LINK.md
deleted file mode 100644
index ed27b6b238a323f7115fc53cb480612d753ed58e..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Ratiomaster 1.6 Download Pc LINK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Windows 10 Counter-strike 1.6. If you want to download the free version of the CS 1.6 WINDOWS 10 compatible game, first of all, you need to ... 4d29de3e1b
-
-
-
diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/indexing/codecs/decompress_residuals.cpp b/spaces/diagaiwei/ir_chinese_medqa/colbert/indexing/codecs/decompress_residuals.cpp
deleted file mode 100644
index 1c675a6c95edf5189cfe4b9b36f5e9b1907940fd..0000000000000000000000000000000000000000
--- a/spaces/diagaiwei/ir_chinese_medqa/colbert/indexing/codecs/decompress_residuals.cpp
+++ /dev/null
@@ -1,23 +0,0 @@
-#include <torch/extension.h>
-
-torch::Tensor decompress_residuals_cuda(
- const torch::Tensor binary_residuals, const torch::Tensor bucket_weights,
- const torch::Tensor reversed_bit_map,
- const torch::Tensor bucket_weight_combinations, const torch::Tensor codes,
- const torch::Tensor centroids, const int dim, const int nbits);
-
-torch::Tensor decompress_residuals(
- const torch::Tensor binary_residuals, const torch::Tensor bucket_weights,
- const torch::Tensor reversed_bit_map,
- const torch::Tensor bucket_weight_combinations, const torch::Tensor codes,
- const torch::Tensor centroids, const int dim, const int nbits) {
- // Add input verification
- return decompress_residuals_cuda(
- binary_residuals, bucket_weights, reversed_bit_map,
- bucket_weight_combinations, codes, centroids, dim, nbits);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("decompress_residuals_cpp", &decompress_residuals,
- "Decompress residuals");
-}
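For readers unfamiliar with PyTorch C++ extensions, here is a hedged Python sketch of how a binding like this is typically JIT-compiled and called. The companion `decompress_residuals.cu` source name and the commented call signature are assumptions based on the declarations above, not part of this repo's build setup.

```python
from torch.utils.cpp_extension import load

# JIT-compile the binding together with its (assumed) CUDA kernel source.
ext = load(
    name="decompress_residuals_cpp",
    sources=["decompress_residuals.cpp", "decompress_residuals.cu"],
    verbose=True,
)

# The bound function mirrors the C++ declaration above:
# ext.decompress_residuals_cpp(binary_residuals, bucket_weights, reversed_bit_map,
#                              bucket_weight_combinations, codes, centroids, dim, nbits)
```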
diff --git a/spaces/digitalxingtong/Azusa-Bert-VITS2/monotonic_align/core.py b/spaces/digitalxingtong/Azusa-Bert-VITS2/monotonic_align/core.py
deleted file mode 100644
index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Azusa-Bert-VITS2/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
\ No newline at end of file
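As a usage note (not part of the original module), the sketch below shows the in-place contract of `maximum_path_jit`: `values` holds per-item score matrices, `t_ys`/`t_xs` give the valid lengths, and `paths` is filled with a 0/1 monotonic alignment. The shapes, lengths, and import path are illustrative assumptions.

```python
import numpy as np
from monotonic_align.core import maximum_path_jit  # assumes the package layout above

batch, max_t_y, max_t_x = 2, 6, 4
values = np.random.rand(batch, max_t_y, max_t_x).astype(np.float32)  # C-contiguous float32 scores
paths = np.zeros((batch, max_t_y, max_t_x), dtype=np.int32)          # output buffer, filled in place
t_ys = np.array([6, 5], dtype=np.int32)                              # per-item target lengths
t_xs = np.array([4, 3], dtype=np.int32)                              # per-item source lengths

maximum_path_jit(paths, values, t_ys, t_xs)  # mutates `paths` and `values` in place
print(paths[0])  # one monotonic 0/1 alignment path per batch item
```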
diff --git a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/README_zh.md b/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/README_zh.md
deleted file mode 100644
index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/README_zh.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/modules/training.py b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/modules/training.py
deleted file mode 100644
index 82e42f4d2928197564c0efd371ca4c3aaaae4e15..0000000000000000000000000000000000000000
--- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/modules/training.py
+++ /dev/null
@@ -1,495 +0,0 @@
-import json
-import logging
-import math
-import sys
-import threading
-import time
-import traceback
-from pathlib import Path
-
-import gradio as gr
-import torch
-import transformers
-from datasets import Dataset, load_dataset
-from peft import (LoraConfig, get_peft_model, prepare_model_for_int8_training,
- set_peft_model_state_dict)
-
-from modules import shared, ui
-from modules.evaluate import calculate_perplexity, generate_markdown_table, save_past_evaluations
-from server import get_available_loras, get_available_models
-
-# This mapping is from a very recent commit, not yet released.
-# If not available, default to a backup map for some common model types.
-try:
- from peft.utils.other import \
- TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING as \
- model_to_lora_modules
- from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
-    MODEL_CLASSES = {v: k for k, v in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.items()}
-except:
- standard_modules = ["q_proj", "v_proj"]
- model_to_lora_modules = {"llama": standard_modules, "opt": standard_modules, "gptj": standard_modules, "gpt_neox": ["query_key_value"]}
- MODEL_CLASSES = {
- "LlamaForCausalLM": "llama",
- "OPTForCausalLM": "opt",
- "GPTJForCausalLM": "gptj",
- "GPTNeoXForCausalLM": "gpt_neox"
- }
-
-WANT_INTERRUPT = False
-
-PARAMETERS = ["lora_name", "always_override", "save_steps", "micro_batch_size", "batch_size", "epochs", "learning_rate", "lr_scheduler_type", "lora_rank", "lora_alpha", "lora_dropout", "cutoff_len", "dataset", "eval_dataset", "format", "eval_steps", "raw_text_file", "overlap_len", "newline_favor_len", "higher_rank_limit", "warmup_steps", "optimizer"]
-
-
-def get_datasets(path: str, ext: str):
- return ['None'] + sorted(set([k.stem for k in Path(path).glob(f'*.{ext}') if k.stem != 'put-trainer-datasets-here']), key=str.lower)
-
-
-def create_train_interface():
- with gr.Tab('Train LoRA', elem_id='lora-train-tab'):
- gr.Markdown("Confused? [[Click here for a guide]](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Training-LoRAs.md)")
-
- with gr.Row():
- lora_name = gr.Textbox(label='Name', info='The name of your new LoRA file')
- always_override = gr.Checkbox(label='Override Existing Files', value=False, info='If the name given is the same as an existing file, checking this will replace that file. Leaving unchecked will load that file and continue from it (must use the same rank value as the original had).')
- save_steps = gr.Number(label='Save every n steps', value=0, info='If above 0, a checkpoint of the LoRA will be saved every time this many steps pass.')
-
- with gr.Row():
- copy_from = gr.Dropdown(label='Copy parameters from', value='None', choices=get_available_loras())
- ui.create_refresh_button(copy_from, lambda: None, lambda: {'choices': get_available_loras()}, 'refresh-button')
-
- with gr.Row():
- # TODO: Implement multi-device support.
- micro_batch_size = gr.Slider(label='Micro Batch Size', value=4, minimum=1, maximum=128, step=1, info='Per-device batch size (NOTE: multiple devices not yet implemented). Increasing this will increase VRAM usage.')
- batch_size = gr.Slider(label='Batch Size', value=128, minimum=0, maximum=1024, step=4, info='Global batch size. The two batch sizes together determine gradient accumulation (gradientAccum = batch / microBatch). Higher gradient accum values lead to better quality training.')
-
- with gr.Row():
- epochs = gr.Number(label='Epochs', value=3, info='Number of times every entry in the dataset should be fed into training. So 1 means feed each item in once, 5 means feed it in five times, etc.')
- learning_rate = gr.Textbox(label='Learning Rate', value='3e-4', info='Learning rate, in scientific notation. 3e-4 is a good starting base point. 1e-2 is extremely high, 1e-6 is extremely low.')
- lr_scheduler_type = gr.Dropdown(label='LR Scheduler', value='linear', choices=['linear', 'constant', 'constant_with_warmup', 'cosine', 'cosine_with_restarts', 'polynomial', 'inverse_sqrt'], info='Learning rate scheduler - defines how the learning rate changes over time. "Constant" means never change, "linear" means to go in a straight line from the learning rate down to 0, cosine follows a curve, etc.')
-
- # TODO: What is the actual maximum rank? Likely distinct per model. This might be better to somehow be on a log scale.
- lora_rank = gr.Slider(label='LoRA Rank', value=32, minimum=0, maximum=1024, step=4, info='LoRA Rank, or dimension count. Higher values produce a larger file with better control over the model\'s content. Smaller values produce a smaller file with less overall control. Small values like 4 or 8 are great for stylistic guidance, higher values like 128 or 256 are good for teaching content upgrades, extremely high values (1024+) are difficult to train but may improve fine-detail learning for large datasets. Higher ranks also require higher VRAM.')
- lora_alpha = gr.Slider(label='LoRA Alpha', value=64, minimum=0, maximum=2048, step=4, info='LoRA Alpha. This divided by the rank becomes the scaling of the LoRA. Higher means stronger. A good standard value is twice your Rank.')
-
- cutoff_len = gr.Slider(label='Cutoff Length', minimum=0, maximum=2048, value=256, step=32, info='Cutoff length for text input. Essentially, how long of a line of text to feed in at a time. Higher values require drastically more VRAM.')
-
- with gr.Tab(label='Formatted Dataset'):
- with gr.Row():
- dataset = gr.Dropdown(choices=get_datasets('training/datasets', 'json'), value='None', label='Dataset', info='The dataset file to use for training.')
- ui.create_refresh_button(dataset, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'json')}, 'refresh-button')
- eval_dataset = gr.Dropdown(choices=get_datasets('training/datasets', 'json'), value='None', label='Evaluation Dataset', info='The (optional) dataset file used to evaluate the model after training.')
- ui.create_refresh_button(eval_dataset, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'json')}, 'refresh-button')
- format = gr.Dropdown(choices=get_datasets('training/formats', 'json'), value='None', label='Data Format', info='The format file used to decide how to format the dataset input.')
- ui.create_refresh_button(format, lambda: None, lambda: {'choices': get_datasets('training/formats', 'json')}, 'refresh-button')
-
- eval_steps = gr.Number(label='Evaluate every n steps', value=100, info='If an evaluation dataset is given, test it every time this many steps pass.')
-
- with gr.Tab(label="Raw text file"):
- with gr.Row():
- raw_text_file = gr.Dropdown(choices=get_datasets('training/datasets', 'txt'), value='None', label='Text file', info='The raw text file to use for training.')
- ui.create_refresh_button(raw_text_file, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'txt')}, 'refresh-button')
-
- with gr.Row():
- overlap_len = gr.Slider(label='Overlap Length', minimum=0, maximum=512, value=128, step=16, info='Overlap length - ie how many tokens from the prior chunk of text to include into the next chunk. (The chunks themselves will be of a size determined by Cutoff Length below). Setting overlap to exactly half the cutoff length may be ideal.')
- newline_favor_len = gr.Slider(label='Prefer Newline Cut Length', minimum=0, maximum=512, value=128, step=16, info='Length (in characters, not tokens) of the maximum distance to shift an overlap cut by to ensure chunks cut at newlines. If too low, cuts may occur in the middle of lines.')
-
- with gr.Accordion(label='Advanced Options', open=False):
- lora_dropout = gr.Slider(label='LoRA Dropout', minimum=0.0, maximum=1.0, step=0.025, value=0.05, info='Percentage probability for dropout of LoRA layers. This can help reduce overfitting. Most users should leave at default.')
- warmup_steps = gr.Number(label='Warmup Steps', value=100, info='For this many steps at the start, the learning rate will be lower than normal. This helps the trainer prepare the model and precompute statistics to improve the quality of training after the start.')
- optimizer = gr.Dropdown(label='Optimizer', value='adamw_torch', choices=['adamw_hf', 'adamw_torch', 'adamw_torch_fused', 'adamw_torch_xla', 'adamw_apex_fused', 'adafactor', 'adamw_bnb_8bit', 'adamw_anyprecision', 'sgd', 'adagrad'], info='Different optimizer implementation options, for advanced users. Effects of different options are not well documented yet.')
-
- with gr.Row():
- higher_rank_limit = gr.Checkbox(label='Enable higher ranks', value=False, info='If checked, changes Rank/Alpha slider above to go much higher. This will not work without a datacenter-class GPU.')
-
- with gr.Row():
- start_button = gr.Button("Start LoRA Training")
- stop_button = gr.Button("Interrupt")
-
- output = gr.Markdown(value="Ready")
-
- with gr.Tab('Perplexity evaluation', elem_id='evaluate-tab'):
- with gr.Row():
- with gr.Column():
- models = gr.Dropdown(get_available_models(), label='Models', multiselect=True)
- evaluate_text_file = gr.Dropdown(choices=['wikitext', 'ptb', 'ptb_new'] + get_datasets('training/datasets', 'txt')[1:], value='wikitext', label='Input dataset', info='The raw text file on which the model will be evaluated. The first options are automatically downloaded: wikitext, ptb, and ptb_new. The next options are your local text files under training/datasets.')
- with gr.Row():
- stride_length = gr.Slider(label='Stride', minimum=1, maximum=2048, value=512, step=1, info='Used to make the evaluation faster at the cost of accuracy. 1 = slowest but most accurate. 512 is a common value.')
- max_length = gr.Slider(label='max_length', minimum=0, maximum=8096, value=0, step=1, info='The context for each evaluation. If set to 0, the maximum context length for the model will be used.')
-
- with gr.Row():
- start_current_evaluation = gr.Button("Evaluate loaded model")
- start_evaluation = gr.Button("Evaluate selected models")
- stop_evaluation = gr.Button("Interrupt")
-
- with gr.Column():
- evaluation_log = gr.Markdown(value='')
-
- evaluation_table = gr.Dataframe(value=generate_markdown_table(), interactive=True)
- save_comments = gr.Button('Save comments')
-
- # Training events
- all_params = [lora_name, always_override, save_steps, micro_batch_size, batch_size, epochs, learning_rate, lr_scheduler_type, lora_rank, lora_alpha, lora_dropout, cutoff_len, dataset, eval_dataset, format, eval_steps, raw_text_file, overlap_len, newline_favor_len, higher_rank_limit, warmup_steps, optimizer]
- copy_from.change(do_copy_params, [copy_from] + all_params, all_params)
- start_button.click(do_train, all_params, output)
- stop_button.click(do_interrupt, None, None, queue=False)
- higher_rank_limit.change(change_rank_limit, [higher_rank_limit], [lora_rank, lora_alpha])
-
- # Evaluation events. For some reason, the interrupt event
- # doesn't work with the .then() syntax, so I write them one
- # by one in this ugly but functional way.
- ev = start_evaluation.click(calculate_perplexity, [models, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- tmp = gr.State('')
- start_current_evaluation.click(lambda: ['current model'], None, tmp)
- ev_cur = start_current_evaluation.click(calculate_perplexity, [tmp, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_current_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- stop_evaluation.click(None, None, None, cancels=[ev, ev_cur], queue=False)
- save_comments.click(
- save_past_evaluations, evaluation_table, None).then(
- lambda: "Comments saved.", None, evaluation_log, show_progress=False)
-
-
-def do_interrupt():
- global WANT_INTERRUPT
- WANT_INTERRUPT = True
-
-
-def do_copy_params(lora_name: str, *args):
- f_name = f"{shared.args.lora_dir}/{clean_path(None, lora_name)}/training_parameters.json"
- if Path(f_name).is_file():
- with open(f_name, 'r', encoding='utf-8') as format_file:
- params: dict[str, str] = json.load(format_file)
- else:
- params = {}
-
- result = list()
- for i in range(0, len(PARAMETERS)):
- key = PARAMETERS[i]
- if key in params:
- result.append(params[key])
- else:
- result.append(args[i])
-
- return result
-
-
-def change_rank_limit(use_higher_ranks: bool):
- mult = 2 if use_higher_ranks else 1
- return {"maximum": 1024 * mult, "__type__": "update"}, {"maximum": 2048 * mult, "__type__": "update"}
-
-
-def clean_path(base_path: str, path: str):
- """"Strips unusual symbols and forcibly builds a path as relative to the intended directory."""
- # TODO: Probably could do with a security audit to guarantee there's no ways this can be bypassed to target an unwanted path.
- # Or swap it to a strict whitelist of [a-zA-Z_0-9]
- path = path.replace('\\', '/').replace('..', '_')
- if base_path is None:
- return path
-
- return f'{Path(base_path).absolute()}/{path}'
-
-
-def do_train(lora_name: str, always_override: bool, save_steps: int, micro_batch_size: int, batch_size: int, epochs: int, learning_rate: str, lr_scheduler_type: str, lora_rank: int, lora_alpha: int, lora_dropout: float, cutoff_len: int, dataset: str, eval_dataset: str, format: str, eval_steps: int, raw_text_file: str, overlap_len: int, newline_favor_len: int, higher_rank_limit: bool, warmup_steps: int, optimizer: str):
-
- if shared.args.monkey_patch:
- from monkeypatch.peft_tuners_lora_monkey_patch import \
- replace_peft_model_with_gptq_lora_model
- replace_peft_model_with_gptq_lora_model()
-
- global WANT_INTERRUPT
- WANT_INTERRUPT = False
-
- # == Input validation / processing ==
- yield "Prepping..."
- lora_file_path = clean_path(None, lora_name)
- if lora_file_path.strip() == '':
- yield "Missing or invalid LoRA file name input."
- return
-
- lora_file_path = f"{shared.args.lora_dir}/{lora_file_path}"
- actual_lr = float(learning_rate)
- model_type = type(shared.model).__name__
-
- if model_type in MODEL_CLASSES:
- model_id = MODEL_CLASSES[model_type]
- else:
- model_id = "llama"
- if model_type == "PeftModelForCausalLM":
- if len(shared.args.lora_names) > 0:
- yield "You are trying to train a LoRA while you already have another LoRA loaded. This will work, but may have unexpected effects. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logging.warning("Training LoRA over top of another LoRA. May have unexpected effects.")
- else:
- yield "Model ID not matched due to LoRA loading. Consider reloading base model. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logging.warning("Model ID not matched due to LoRA loading. Consider reloading base model.")
- else:
- yield "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. Unexpected errors may follow. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logging.warning(f"LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: {model_type})")
-
- time.sleep(5)
-
- if shared.args.wbits > 0 and not shared.args.monkey_patch:
- yield "LoRA training in 4-bit requires loading with `--monkey-patch`"
- return
-
- elif not shared.args.load_in_8bit and shared.args.wbits <= 0:
- yield "It is highly recommended you use `--load-in-8bit` for LoRA training. *(Will continue anyway in 2 seconds, press `Interrupt` to stop.)*"
- logging.warning("It is highly recommended you use `--load-in-8bit` for LoRA training.")
- time.sleep(2) # Give it a moment for the message to show in UI before continuing
-
- if cutoff_len <= 0 or micro_batch_size <= 0 or batch_size <= 0 or actual_lr <= 0 or lora_rank <= 0 or lora_alpha <= 0:
- yield "Cannot input zeroes."
- return
-
- gradient_accumulation_steps = batch_size // micro_batch_size
- shared.tokenizer.pad_token_id = 0
- shared.tokenizer.padding_side = "left"
-
- def tokenize(prompt):
- result = shared.tokenizer(prompt, truncation=True, max_length=cutoff_len + 1, padding="max_length")
- return {
- "input_ids": result["input_ids"][:-1],
- "attention_mask": result["attention_mask"][:-1],
- }
-
- # == Prep the dataset, format, etc ==
- if raw_text_file not in ['None', '']:
- logging.info("Loading raw text file dataset...")
- with open(clean_path('training/datasets', f'{raw_text_file}.txt'), 'r', encoding='utf-8') as file:
- raw_text = file.read()
-
- tokens = shared.tokenizer.encode(raw_text)
- del raw_text # Note: could be a gig for a large dataset, so delete redundant data as we go to be safe on RAM
- tokens = list(split_chunks(tokens, cutoff_len - overlap_len))
- for i in range(1, len(tokens)):
- tokens[i] = tokens[i - 1][-overlap_len:] + tokens[i]
-
- text_chunks = [shared.tokenizer.decode(x) for x in tokens]
- del tokens
- if newline_favor_len > 0:
- text_chunks = [cut_chunk_for_newline(x, newline_favor_len) for x in text_chunks]
-
- train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
- del text_chunks
- eval_data = None
-
- else:
- if dataset in ['None', '']:
- yield "**Missing dataset choice input, cannot continue.**"
- return
-
- if format in ['None', '']:
- yield "**Missing format choice input, cannot continue.**"
- return
-
- with open(clean_path('training/formats', f'{format}.json'), 'r', encoding='utf-8') as formatFile:
- format_data: dict[str, str] = json.load(formatFile)
-
- def generate_prompt(data_point: dict[str, str]):
- for options, data in format_data.items():
- if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
- for key, val in data_point.items():
- if val is not None:
- data = data.replace(f'%{key}%', val)
- return data
- raise RuntimeError(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"')
-
- def generate_and_tokenize_prompt(data_point):
- prompt = generate_prompt(data_point)
- return tokenize(prompt)
-
- logging.info("Loading JSON datasets...")
- data = load_dataset("json", data_files=clean_path('training/datasets', f'{dataset}.json'))
- train_data = data['train'].map(generate_and_tokenize_prompt)
-
- if eval_dataset == 'None':
- eval_data = None
- else:
- eval_data = load_dataset("json", data_files=clean_path('training/datasets', f'{eval_dataset}.json'))
- eval_data = eval_data['train'].map(generate_and_tokenize_prompt)
-
- # == Start prepping the model itself ==
- if not hasattr(shared.model, 'lm_head') or hasattr(shared.model.lm_head, 'weight'):
- logging.info("Getting model ready...")
- prepare_model_for_int8_training(shared.model)
-
- logging.info("Prepping for training...")
- config = LoraConfig(
- r=lora_rank,
- lora_alpha=lora_alpha,
- target_modules=model_to_lora_modules[model_id],
- lora_dropout=lora_dropout,
- bias="none",
- task_type="CAUSAL_LM"
- )
-
- try:
- logging.info("Creating LoRA model...")
- lora_model = get_peft_model(shared.model, config)
- if not always_override and Path(f"{lora_file_path}/adapter_model.bin").is_file():
- logging.info("Loading existing LoRA data...")
- state_dict_peft = torch.load(f"{lora_file_path}/adapter_model.bin")
- set_peft_model_state_dict(lora_model, state_dict_peft)
- except:
- yield traceback.format_exc()
- return
-
- if shared.args.monkey_patch:
- for n, m in lora_model.named_modules():
- if '4bit' in str(type(m)):
- if m.is_v1_model:
- m.zeros = m.zeros.half()
-
- m.scales = m.scales.half()
-
- class Tracked():
- def __init__(self):
- self.current_steps = 0
- self.max_steps = 0
- self.did_save = False
-
- tracked = Tracked()
- actual_save_steps = math.ceil(save_steps / gradient_accumulation_steps)
-
- class Callbacks(transformers.TrainerCallback):
- def on_step_begin(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
- tracked.current_steps = state.global_step * gradient_accumulation_steps
- tracked.max_steps = state.max_steps * gradient_accumulation_steps
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
- elif state.global_step > 0 and actual_save_steps > 0 and state.global_step % actual_save_steps == 0:
- lora_model.save_pretrained(f"{lora_file_path}/checkpoint-{tracked.current_steps}/")
-
- def on_substep_end(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
- tracked.current_steps += 1
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
-
- trainer = transformers.Trainer(
- model=lora_model,
- train_dataset=train_data,
- eval_dataset=eval_data,
- args=transformers.TrainingArguments(
- per_device_train_batch_size=micro_batch_size,
- gradient_accumulation_steps=gradient_accumulation_steps,
- warmup_steps=math.ceil(warmup_steps / gradient_accumulation_steps),
- num_train_epochs=epochs,
- learning_rate=actual_lr,
- fp16=False if shared.args.cpu else True,
- optim=optimizer,
- logging_steps=5,
- evaluation_strategy="steps" if eval_data is not None else "no",
- eval_steps=math.ceil(eval_steps / gradient_accumulation_steps) if eval_data is not None else None,
- save_strategy="no",
- output_dir=lora_file_path,
- lr_scheduler_type=lr_scheduler_type,
- load_best_model_at_end=True if eval_data is not None else False,
- # TODO: Enable multi-device support
- ddp_find_unused_parameters=None,
- no_cuda=shared.args.cpu
- ),
- data_collator=transformers.DataCollatorForLanguageModeling(shared.tokenizer, mlm=False),
- callbacks=list([Callbacks()])
- )
-
- lora_model.config.use_cache = False
-
- if torch.__version__ >= "2" and sys.platform != "win32":
- lora_model = torch.compile(lora_model)
-
- # == Save parameters for reuse ==
- with open(f"{lora_file_path}/training_parameters.json", 'w', encoding='utf-8') as file:
- vars = locals()
- json.dump({x: vars[x] for x in PARAMETERS}, file)
-
- # == Main run and monitor loop ==
- logging.info("Starting training...")
- yield "Starting..."
- if WANT_INTERRUPT:
- yield "Interrupted before start."
- return
-
- def threaded_run():
- trainer.train()
- # Note: save in the thread in case the gradio thread breaks (eg browser closed)
- lora_model.save_pretrained(lora_file_path)
- logging.info("LoRA training run is completed and saved.")
- tracked.did_save = True
-
- thread = threading.Thread(target=threaded_run)
- thread.start()
- last_step = 0
- start_time = time.perf_counter()
-
- while thread.is_alive():
- time.sleep(0.5)
- if WANT_INTERRUPT:
- yield "Interrupting, please wait... *(Run will stop after the current training step completes.)*"
-
- elif tracked.current_steps != last_step:
- last_step = tracked.current_steps
- time_elapsed = time.perf_counter() - start_time
- if time_elapsed <= 0:
- timer_info = ""
- total_time_estimate = 999
- else:
- its = tracked.current_steps / time_elapsed
- if its > 1:
- timer_info = f"`{its:.2f}` it/s"
- else:
- timer_info = f"`{1.0/its:.2f}` s/it"
-
- total_time_estimate = (1.0 / its) * (tracked.max_steps)
-
- yield f"Running... **{tracked.current_steps}** / **{tracked.max_steps}** ... {timer_info}, {format_time(time_elapsed)} / {format_time(total_time_estimate)} ... {format_time(total_time_estimate - time_elapsed)} remaining"
-
- # Saving in the train thread might fail if an error occurs, so save here if so.
- if not tracked.did_save:
- logging.info("Training complete, saving...")
- lora_model.save_pretrained(lora_file_path)
-
- if WANT_INTERRUPT:
- logging.info("Training interrupted.")
- yield f"Interrupted. Incomplete LoRA saved to `{lora_file_path}`"
- else:
- logging.info("Training complete!")
- yield f"Done! LoRA saved to `{lora_file_path}`"
-
-
-def split_chunks(arr, step):
- for i in range(0, len(arr), step):
- yield arr[i:i + step]
-
-
-def cut_chunk_for_newline(chunk: str, max_length: int):
- if '\n' not in chunk:
- return chunk
-
- first_newline = chunk.index('\n')
- if first_newline < max_length:
- chunk = chunk[first_newline + 1:]
-
- if '\n' not in chunk:
- return chunk
-
- last_newline = chunk.rindex('\n')
- if len(chunk) - last_newline < max_length:
- chunk = chunk[:last_newline]
-
- return chunk
-
-
-def format_time(seconds: float):
- if seconds < 120:
- return f"`{seconds:.0f}` seconds"
-
- minutes = seconds / 60
- if minutes < 120:
- return f"`{minutes:.0f}` minutes"
-
- hours = minutes / 60
- return f"`{hours:.0f}` hours"
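To make do_train()'s dataset-format handling above easier to follow, here is a self-contained sketch of the keyset-matching logic from generate_prompt(). The two format templates are illustrative assumptions, not files shipped with the repo.

```python
# Illustrative format-file contents (assumed): keys are comma-separated field
# names, values are templates with %field% placeholders.
format_data = {
    "instruction,output": "### Instruction:\n%instruction%\n\n### Response:\n%output%",
    "instruction,input,output": "### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%",
}

def generate_prompt(data_point: dict) -> str:
    # Pick the template whose key set equals the data point's non-empty fields,
    # then substitute every %key% placeholder (mirrors do_train() above).
    for options, template in format_data.items():
        if set(options.split(',')) == {k for k, v in data_point.items() if v is not None and len(v.strip()) > 0}:
            for key, val in data_point.items():
                if val is not None:
                    template = template.replace(f'%{key}%', val)
            return template
    raise RuntimeError(f'Data-point "{data_point}" has no keyset match')

print(generate_prompt({"instruction": "Summarize.", "input": "", "output": "Done."}))
# Uses the "instruction,output" template because "input" is empty.
```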
diff --git a/spaces/ds520/bingo/src/pages/api/blob.ts b/spaces/ds520/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
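For comparison only, here is a rough Python analogue of this proxy route (assuming Flask and requests are available; none of this exists in the repo): fetch the upstream blob by `bcid` and stream the bytes back with the upstream content type.

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)
API_DOMAIN = "https://www.bing.com"

@app.route("/api/blob")
def blob():
    bcid = request.args.get("bcid", "")
    upstream = requests.get(f"{API_DOMAIN}/images/blob", params={"bcid": bcid}, stream=True)
    # Stream the body through, preserving the upstream status and content type.
    return Response(
        upstream.iter_content(chunk_size=8192),
        status=upstream.status_code,
        content_type=upstream.headers.get("content-type", "application/octet-stream"),
    )
```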
diff --git a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/entry/app.fd7ab095.js b/spaces/dylanebert/gaussian-viewer/public/_app/immutable/entry/app.fd7ab095.js
deleted file mode 100644
index 043efa7f3e5830a9e16df976e6e7b114ea746b6c..0000000000000000000000000000000000000000
--- a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/entry/app.fd7ab095.js
+++ /dev/null
@@ -1 +0,0 @@
-import{s as A,a as B,o as U,t as j,b as P}from"../chunks/scheduler.8b74b908.js";import{S as W,i as z,s as F,e as h,c as G,a as g,t as d,b as R,d as p,f as w,g as H,h as J,j as K,k as N,l as m,m as M,n as Q,o as X,p as L,q as k,r as v,u as C,v as E,w as y}from"../chunks/index.c146e4e6.js";const Y="modulepreload",Z=function(o,e){return new URL(o,e).href},D={},S=function(e,n,i){if(!n||n.length===0)return e();const s=document.getElementsByTagName("link");return Promise.all(n.map(f=>{if(f=Z(f,i),f in D)return;D[f]=!0;const t=f.endsWith(".css"),r=t?'[rel="stylesheet"]':"";if(!!i)for(let a=s.length-1;a>=0;a--){const _=s[a];if(_.href===f&&(!t||_.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${f}"]${r}`))return;const c=document.createElement("link");if(c.rel=t?"stylesheet":Y,t||(c.as="script",c.crossOrigin=""),c.href=f,document.head.appendChild(c),t)return new Promise((a,_)=>{c.addEventListener("load",a),c.addEventListener("error",()=>_(new Error(`Unable to preload CSS for ${f}`)))})})).then(()=>e()).catch(f=>{const t=new Event("vite:preloadError",{cancelable:!0});if(t.payload=f,window.dispatchEvent(t),!t.defaultPrevented)throw f})},re={};function $(o){let e,n,i;var s=o[1][0];function f(t,r){return{props:{data:t[3],form:t[2]}}}return s&&(e=k(s,f(o)),o[12](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&C(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),g(t,n,r),i=!0},p(t,r){if(r&2&&s!==(s=t[1][0])){if(e){L();const l=e;d(l.$$.fragment,1,0,()=>{y(l,1)}),R()}s?(e=k(s,f(t)),t[12](e),v(e.$$.fragment),p(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else if(s){const l={};r&8&&(l.data=t[3]),r&4&&(l.form=t[2]),e.$set(l)}},i(t){i||(e&&p(e.$$.fragment,t),i=!0)},o(t){e&&d(e.$$.fragment,t),i=!1},d(t){t&&w(n),o[12](null),e&&y(e,t)}}}function x(o){let e,n,i;var s=o[1][0];function f(t,r){return{props:{data:t[3],$$slots:{default:[ee]},$$scope:{ctx:t}}}}return s&&(e=k(s,f(o)),o[11](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&C(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),g(t,n,r),i=!0},p(t,r){if(r&2&&s!==(s=t[1][0])){if(e){L();const l=e;d(l.$$.fragment,1,0,()=>{y(l,1)}),R()}s?(e=k(s,f(t)),t[11](e),v(e.$$.fragment),p(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else if(s){const l={};r&8&&(l.data=t[3]),r&8215&&(l.$$scope={dirty:r,ctx:t}),e.$set(l)}},i(t){i||(e&&p(e.$$.fragment,t),i=!0)},o(t){e&&d(e.$$.fragment,t),i=!1},d(t){t&&w(n),o[11](null),e&&y(e,t)}}}function ee(o){let e,n,i;var s=o[1][1];function f(t,r){return{props:{data:t[4],form:t[2]}}}return s&&(e=k(s,f(o)),o[10](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&C(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),g(t,n,r),i=!0},p(t,r){if(r&2&&s!==(s=t[1][1])){if(e){L();const l=e;d(l.$$.fragment,1,0,()=>{y(l,1)}),R()}s?(e=k(s,f(t)),t[10](e),v(e.$$.fragment),p(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else if(s){const l={};r&16&&(l.data=t[4]),r&4&&(l.form=t[2]),e.$set(l)}},i(t){i||(e&&p(e.$$.fragment,t),i=!0)},o(t){e&&d(e.$$.fragment,t),i=!1},d(t){t&&w(n),o[10](null),e&&y(e,t)}}}function I(o){let e,n=o[6]&&O(o);return{c(){e=H("div"),n&&n.c(),this.h()},l(i){e=J(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var s=K(e);n&&n.l(s),s.forEach(w),this.h()},h(){N(e,"id","svelte-announcer"),N(e,"aria-live","assertive"),N(e,"aria-atomic","true"),m(e,"position","absolute"),m(e,"left","0"),m(e,"top","0"),m(e,"clip","rect(0 0 0 
0)"),m(e,"clip-path","inset(50%)"),m(e,"overflow","hidden"),m(e,"white-space","nowrap"),m(e,"width","1px"),m(e,"height","1px")},m(i,s){g(i,e,s),n&&n.m(e,null)},p(i,s){i[6]?n?n.p(i,s):(n=O(i),n.c(),n.m(e,null)):n&&(n.d(1),n=null)},d(i){i&&w(e),n&&n.d()}}}function O(o){let e;return{c(){e=M(o[7])},l(n){e=Q(n,o[7])},m(n,i){g(n,e,i)},p(n,i){i&128&&X(e,n[7])},d(n){n&&w(e)}}}function te(o){let e,n,i,s,f;const t=[x,$],r=[];function l(a,_){return a[1][1]?0:1}e=l(o),n=r[e]=t[e](o);let c=o[5]&&I(o);return{c(){n.c(),i=F(),c&&c.c(),s=h()},l(a){n.l(a),i=G(a),c&&c.l(a),s=h()},m(a,_){r[e].m(a,_),g(a,i,_),c&&c.m(a,_),g(a,s,_),f=!0},p(a,[_]){let b=e;e=l(a),e===b?r[e].p(a,_):(L(),d(r[b],1,1,()=>{r[b]=null}),R(),n=r[e],n?n.p(a,_):(n=r[e]=t[e](a),n.c()),p(n,1),n.m(i.parentNode,i)),a[5]?c?c.p(a,_):(c=I(a),c.c(),c.m(s.parentNode,s)):c&&(c.d(1),c=null)},i(a){f||(p(n),f=!0)},o(a){d(n),f=!1},d(a){a&&(w(i),w(s)),r[e].d(a),c&&c.d(a)}}}function ne(o,e,n){let{stores:i}=e,{page:s}=e,{constructors:f}=e,{components:t=[]}=e,{form:r}=e,{data_0:l=null}=e,{data_1:c=null}=e;B(i.page.notify);let a=!1,_=!1,b=null;U(()=>{const u=i.page.subscribe(()=>{a&&(n(6,_=!0),j().then(()=>{n(7,b=document.title||"untitled page")}))});return n(5,a=!0),u});function T(u){P[u?"unshift":"push"](()=>{t[1]=u,n(0,t)})}function V(u){P[u?"unshift":"push"](()=>{t[0]=u,n(0,t)})}function q(u){P[u?"unshift":"push"](()=>{t[0]=u,n(0,t)})}return o.$$set=u=>{"stores"in u&&n(8,i=u.stores),"page"in u&&n(9,s=u.page),"constructors"in u&&n(1,f=u.constructors),"components"in u&&n(0,t=u.components),"form"in u&&n(2,r=u.form),"data_0"in u&&n(3,l=u.data_0),"data_1"in u&&n(4,c=u.data_1)},o.$$.update=()=>{o.$$.dirty&768&&i.page.set(s)},[t,f,r,l,c,a,_,b,i,s,T,V,q]}class oe extends W{constructor(e){super(),z(this,e,ne,te,A,{stores:8,page:9,constructors:1,components:0,form:2,data_0:3,data_1:4})}}const ae=[()=>S(()=>import("../nodes/0.2a6e7c35.js"),["../nodes/0.2a6e7c35.js","../chunks/scheduler.8b74b908.js","../chunks/index.c146e4e6.js"],import.meta.url),()=>S(()=>import("../nodes/1.8eb04061.js"),["../nodes/1.8eb04061.js","../chunks/scheduler.8b74b908.js","../chunks/index.c146e4e6.js","../chunks/singletons.6b4734db.js"],import.meta.url),()=>S(()=>import("../nodes/2.f3e7c0be.js"),["../nodes/2.f3e7c0be.js","../chunks/scheduler.8b74b908.js","../chunks/index.c146e4e6.js","../assets/2.9877725f.css"],import.meta.url)],le=[],fe={"/":[2]},ce={handleError:({error:o})=>{console.error(o)}};export{fe as dictionary,ce as hooks,re as matchers,ae as nodes,oe as root,le as server_loads};
diff --git a/spaces/ehugfaces/stabilityai-stable-diffusion-2-1/app.py b/spaces/ehugfaces/stabilityai-stable-diffusion-2-1/app.py
deleted file mode 100644
index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000
--- a/spaces/ehugfaces/stabilityai-stable-diffusion-2-1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch()
\ No newline at end of file
diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/criteria/__init__.py b/spaces/emc348/faces-through-time/models/StyleCLIP/criteria/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ennet/ChatDev/camel/prompts/prompt_templates.py b/spaces/ennet/ChatDev/camel/prompts/prompt_templates.py
deleted file mode 100644
index cc1cb40c23d7f05d60b515502da4765d773e3078..0000000000000000000000000000000000000000
--- a/spaces/ennet/ChatDev/camel/prompts/prompt_templates.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-# Licensed under the Apache License, Version 2.0 (the “License”);
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an “AS IS” BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-import warnings
-from typing import Any, Optional
-
-from camel.prompts import TaskPromptTemplateDict, TextPrompt
-from camel.typing import RoleType, TaskType
-
-
-class PromptTemplateGenerator:
- r"""A class for generating prompt templates for tasks.
-
- Args:
- task_prompt_template_dict (TaskPromptTemplateDict, optional):
- A dictionary of task prompt templates for each task type. If not
- provided, an empty dictionary is used as default.
- """
-
- def __init__(
- self,
- task_prompt_template_dict: Optional[TaskPromptTemplateDict] = None,
- ) -> None:
- self.task_prompt_template_dict = (task_prompt_template_dict or TaskPromptTemplateDict())
-
- def get_prompt_from_key(self, task_type: TaskType, key: Any) -> TextPrompt:
- r"""Generates a text prompt using the specified :obj:`task_type` and
- :obj:`key`.
-
- Args:
- task_type (TaskType): The type of task.
- key (Any): The key used to generate the prompt.
-
- Returns:
- TextPrompt: The generated text prompt.
-
- Raises:
- KeyError: If failed to generate prompt using the specified
- :obj:`task_type` and :obj:`key`.
- """
- try:
- print(task_type, key)
- return self.task_prompt_template_dict[task_type][key]
-
- except KeyError:
- raise KeyError("Failed to get generate prompt template for "
- f"task: {task_type.value} from key: {key}.")
-
- def get_system_prompt(
- self,
- task_type: TaskType,
- role_type: RoleType,
- ) -> TextPrompt:
- r"""Generates a text prompt for the system role, using the specified
- :obj:`task_type` and :obj:`role_type`.
-
- Args:
- task_type (TaskType): The type of task.
- role_type (RoleType): The type of role, either "USER" or
- "ASSISTANT".
-
- Returns:
- TextPrompt: The generated text prompt.
-
- Raises:
- KeyError: If failed to generate prompt using the specified
- :obj:`task_type` and :obj:`role_type`.
- """
- try:
- return self.get_prompt_from_key(task_type, role_type)
-
- except KeyError:
- prompt = "You are a helpful assistant."
-
- warnings.warn("Failed to get system prompt template for "
- f"task: {task_type.value}, role: {role_type.value}. "
- f"Set template to: {prompt}")
-
- return TextPrompt(prompt)
-
- def get_generate_tasks_prompt(
- self,
- task_type: TaskType,
- ) -> TextPrompt:
- r"""Gets the prompt for generating tasks for a given task type.
-
- Args:
- task_type (TaskType): The type of the task.
-
- Returns:
- TextPrompt: The generated prompt for generating tasks.
- """
- return self.get_prompt_from_key(task_type, "generate_tasks")
-
- def get_task_specify_prompt(
- self,
- task_type: TaskType,
- ) -> TextPrompt:
- r"""Gets the prompt for specifying a task for a given task type.
-
- Args:
- task_type (TaskType): The type of the task.
-
- Returns:
- TextPrompt: The generated prompt for specifying a task.
- """
- return self.get_prompt_from_key(task_type, "task_specify_prompt")
diff --git a/spaces/eunjae/LoRA-DreamBooth-Training-UI/inference.py b/spaces/eunjae/LoRA-DreamBooth-Training-UI/inference.py
deleted file mode 100644
index ce0f2b08df75e6d62f06c4119f1dc859930de032..0000000000000000000000000000000000000000
--- a/spaces/eunjae/LoRA-DreamBooth-Training-UI/inference.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from __future__ import annotations
-
-import gc
-import pathlib
-
-import gradio as gr
-import PIL.Image
-import torch
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-from huggingface_hub import ModelCard
-
-
-class InferencePipeline:
- def __init__(self, hf_token: str | None = None):
- self.hf_token = hf_token
- self.pipe = None
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.lora_model_id = None
- self.base_model_id = None
-
- def clear(self) -> None:
- self.lora_model_id = None
- self.base_model_id = None
- del self.pipe
- self.pipe = None
- torch.cuda.empty_cache()
- gc.collect()
-
- @staticmethod
- def check_if_model_is_local(lora_model_id: str) -> bool:
- return pathlib.Path(lora_model_id).exists()
-
- @staticmethod
- def get_model_card(model_id: str,
- hf_token: str | None = None) -> ModelCard:
- if InferencePipeline.check_if_model_is_local(model_id):
- card_path = (pathlib.Path(model_id) / 'README.md').as_posix()
- else:
- card_path = model_id
- return ModelCard.load(card_path, token=hf_token)
-
- @staticmethod
- def get_base_model_info(lora_model_id: str,
- hf_token: str | None = None) -> str:
- card = InferencePipeline.get_model_card(lora_model_id, hf_token)
- return card.data.base_model
-
- def load_pipe(self, lora_model_id: str) -> None:
- if lora_model_id == self.lora_model_id:
- return
- base_model_id = self.get_base_model_info(lora_model_id, self.hf_token)
- if base_model_id != self.base_model_id:
- if self.device.type == 'cpu':
- pipe = DiffusionPipeline.from_pretrained(
- base_model_id, use_auth_token=self.hf_token)
- else:
- pipe = DiffusionPipeline.from_pretrained(
- base_model_id,
- torch_dtype=torch.float16,
- use_auth_token=self.hf_token)
- pipe = pipe.to(self.device)
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(
- pipe.scheduler.config)
- self.pipe = pipe
- self.pipe.unet.load_attn_procs( # type: ignore
- lora_model_id, use_auth_token=self.hf_token)
-
- self.lora_model_id = lora_model_id # type: ignore
- self.base_model_id = base_model_id # type: ignore
-
- def run(
- self,
- lora_model_id: str,
- prompt: str,
- lora_scale: float,
- seed: int,
- n_steps: int,
- guidance_scale: float,
- ) -> PIL.Image.Image:
- if not torch.cuda.is_available():
- raise gr.Error('CUDA is not available.')
-
- self.load_pipe(lora_model_id)
-
- generator = torch.Generator(device=self.device).manual_seed(seed)
- out = self.pipe(
- prompt,
- num_inference_steps=n_steps,
- guidance_scale=guidance_scale,
- generator=generator,
- cross_attention_kwargs={'scale': lora_scale},
- ) # type: ignore
- return out.images[0]
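A hedged usage sketch for the pipeline above; the LoRA repo id, prompt, and sampler settings are placeholders, and a CUDA GPU is required because run() raises gr.Error otherwise.

```python
from inference import InferencePipeline  # assumes this file is importable as `inference`

pipe = InferencePipeline(hf_token=None)
image = pipe.run(
    lora_model_id="your-username/your-lora-repo",  # placeholder: a LoRA repo whose model card sets base_model
    prompt="a photo of a sks dog in a bucket",
    lora_scale=0.8,
    seed=0,
    n_steps=25,
    guidance_scale=7.5,
)
image.save("out.png")
```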
diff --git a/spaces/f2api/gpt-academic/crazy_functions/test_project/python/dqn/policies.py b/spaces/f2api/gpt-academic/crazy_functions/test_project/python/dqn/policies.py
deleted file mode 100644
index 4ecf39a5fc04b24ad1b809232b186728366987b6..0000000000000000000000000000000000000000
--- a/spaces/f2api/gpt-academic/crazy_functions/test_project/python/dqn/policies.py
+++ /dev/null
@@ -1,237 +0,0 @@
-from typing import Any, Dict, List, Optional, Type
-
-import gym
-import torch as th
-from torch import nn
-
-from stable_baselines3.common.policies import BasePolicy, register_policy
-from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
-from stable_baselines3.common.type_aliases import Schedule
-
-
-class QNetwork(BasePolicy):
- """
- Action-Value (Q-Value) network for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- features_extractor: nn.Module,
- features_dim: int,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- normalize_images: bool = True,
- ):
- super(QNetwork, self).__init__(
- observation_space,
- action_space,
- features_extractor=features_extractor,
- normalize_images=normalize_images,
- )
-
- if net_arch is None:
- net_arch = [64, 64]
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.features_extractor = features_extractor
- self.features_dim = features_dim
- self.normalize_images = normalize_images
- action_dim = self.action_space.n # number of actions
- q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
- self.q_net = nn.Sequential(*q_net)
-
- def forward(self, obs: th.Tensor) -> th.Tensor:
- """
- Predict the q-values.
-
- :param obs: Observation
- :return: The estimated Q-Value for each action.
- """
- return self.q_net(self.extract_features(obs))
-
- def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
- q_values = self.forward(observation)
- # Greedy action
- action = q_values.argmax(dim=1).reshape(-1)
- return action
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_arch,
- features_dim=self.features_dim,
- activation_fn=self.activation_fn,
- features_extractor=self.features_extractor,
- )
- )
- return data
-
-
-class DQNPolicy(BasePolicy):
- """
- Policy class with Q-Value Net and target net for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param features_extractor_kwargs: Keyword arguments
- to pass to the features extractor.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(DQNPolicy, self).__init__(
- observation_space,
- action_space,
- features_extractor_class,
- features_extractor_kwargs,
- optimizer_class=optimizer_class,
- optimizer_kwargs=optimizer_kwargs,
- )
-
- if net_arch is None:
- if features_extractor_class == FlattenExtractor:
- net_arch = [64, 64]
- else:
- net_arch = []
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.normalize_images = normalize_images
-
- self.net_args = {
- "observation_space": self.observation_space,
- "action_space": self.action_space,
- "net_arch": self.net_arch,
- "activation_fn": self.activation_fn,
- "normalize_images": normalize_images,
- }
-
- self.q_net, self.q_net_target = None, None
- self._build(lr_schedule)
-
- def _build(self, lr_schedule: Schedule) -> None:
- """
- Create the network and the optimizer.
-
- :param lr_schedule: Learning rate schedule
- lr_schedule(1) is the initial learning rate
- """
-
- self.q_net = self.make_q_net()
- self.q_net_target = self.make_q_net()
- self.q_net_target.load_state_dict(self.q_net.state_dict())
-
- # Setup optimizer with initial learning rate
- self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)
-
- def make_q_net(self) -> QNetwork:
- # Make sure we always have separate networks for features extractors etc
- net_args = self._update_features_extractor(self.net_args, features_extractor=None)
- return QNetwork(**net_args).to(self.device)
-
- def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self._predict(obs, deterministic=deterministic)
-
- def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self.q_net._predict(obs, deterministic=deterministic)
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_args["net_arch"],
- activation_fn=self.net_args["activation_fn"],
- lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone
- optimizer_class=self.optimizer_class,
- optimizer_kwargs=self.optimizer_kwargs,
- features_extractor_class=self.features_extractor_class,
- features_extractor_kwargs=self.features_extractor_kwargs,
- )
- )
- return data
-
-
-MlpPolicy = DQNPolicy
-
-
-class CnnPolicy(DQNPolicy):
- """
- Policy class for DQN when using images as input.
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(CnnPolicy, self).__init__(
- observation_space,
- action_space,
- lr_schedule,
- net_arch,
- activation_fn,
- features_extractor_class,
- features_extractor_kwargs,
- normalize_images,
- optimizer_class,
- optimizer_kwargs,
- )
-
-
-register_policy("MlpPolicy", MlpPolicy)
-register_policy("CnnPolicy", CnnPolicy)
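-
-
-# Usage sketch (not part of the upstream stable-baselines3 file): a minimal,
-# illustrative construction of the MlpPolicy defined above. It assumes the
-# classic gym API (env.reset() returning only the observation) that this module
-# targets; the environment id and learning rate are arbitrary example values.
-if __name__ == "__main__":
-    env = gym.make("CartPole-v1")
-    policy = MlpPolicy(
-        observation_space=env.observation_space,
-        action_space=env.action_space,
-        lr_schedule=lambda _: 1e-4,  # constant learning-rate schedule
-    )
-    obs = th.as_tensor(env.reset(), dtype=th.float32).unsqueeze(0)
-    print(policy(obs))  # greedy action selected by the online Q-network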
diff --git a/spaces/failfast/2D-GameCreator/src/pages/index.tsx b/spaces/failfast/2D-GameCreator/src/pages/index.tsx
deleted file mode 100644
index 76c8430aed11adfbe9fd45b4f9e0f03ae01ea727..0000000000000000000000000000000000000000
--- a/spaces/failfast/2D-GameCreator/src/pages/index.tsx
+++ /dev/null
@@ -1,38 +0,0 @@
-import Stack from "@mui/material/Stack";
-import CssBaseline from "@mui/material/CssBaseline";
-import { Container } from "@mui/material";
-import Footer from "@/components/footer";
-import Title from "@/components/title";
-import Introduction from "@/components/Introduction";
-import UnderTheHood from "@/components/UnderTheHood";
-import Examples from "@/components/Examples";
-import GameCreator from "@/components/GameCreator";
-import HowToUse from "@/components/HowToUse";
-import Troubleshooting from "@/components/Troubleshooting";
-
-export default function Home() {
- return (
- <>
-    {/* Layout reconstructed from the imports above; component order and props
-        are assumed, not the author's original markup. */}
-    <CssBaseline />
-    <Container>
-      <Stack>
-        <Title />
-        <Introduction />
-        <UnderTheHood />
-        <Examples />
-        <GameCreator />
-        <HowToUse />
-        <Troubleshooting />
-      </Stack>
-    </Container>
-    <Footer />
-  </>
- );
-}
diff --git a/spaces/falterWliame/Face_Mask_Detection/Electrax 2 Vst !LINK! Crack Siteinstmank.md b/spaces/falterWliame/Face_Mask_Detection/Electrax 2 Vst !LINK! Crack Siteinstmank.md
deleted file mode 100644
index 5c3e874fd96b8fe41af1711198196d8cdd83dbd2..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Electrax 2 Vst !LINK! Crack Siteinstmank.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download and Install Electrax 2 Vst Crack Siteinstmank
-
Electrax 2 Vst is a powerful and versatile synthesizer plugin that can create a wide range of sounds and effects. However, it is not free and requires a license to use. If you want to try it out without paying, you might be tempted to look for a cracked version online. But be careful, because some of these sites might contain malware or viruses that can harm your computer.
-
One of the sites that claims to offer Electrax 2 Vst Crack is Siteinstmank. This site promises to provide a working link to download and install the plugin for free. But is it safe and reliable? In this article, we will show you how to download and install Electrax 2 Vst Crack Siteinstmank, and what are the risks and consequences of doing so.
The first step is to visit the Siteinstmank website. You can find it by searching for "Electrax 2 Vst Crack Siteinstmank" on Google or any other search engine. The site looks like this:
-
-
As you can see, the site has a lot of ads and pop-ups that might distract you or redirect you to other pages. Be careful not to click on any of them, as they might contain malware or viruses. Also, do not enter any personal information or credit card details on the site, as they might be stolen or misused.
-
Step 2: Download Electrax 2 Vst Crack
-
The next step is to download the Electrax 2 Vst Crack file from the site. You can find it by scrolling down to the bottom of the page, where you will see a button that says "Download Now". Click on it and wait for the download to start.
-
-
The file size is about 300 MB, so it might take some time depending on your internet speed. Once the download is complete, you will have a ZIP file named "Electrax_2_Vst_Crack_Siteinstmank.zip" on your computer.
-
Step 3: Install Electrax 2 Vst Crack
-
The final step is to install the Electrax 2 Vst Crack plugin on your computer. To do this, you need to extract the ZIP file using a program like WinRAR or 7-Zip. You will get a folder named "Electrax_2_Vst_Crack_Siteinstmank" that contains several files and folders.
-
-
Open the folder and look for a file named "Setup.exe". This is the installer for the plugin. Double-click on it and follow the instructions on the screen. You will need to choose a destination folder for the plugin and agree to the terms and conditions.
-
-
After the installation is done, you will have Electrax 2 Vst Crack plugin on your computer. You can use it with any DAW (Digital Audio Workstation) that supports VST plugins, such as FL Studio, Ableton Live, Cubase, etc.
-
-
Risks and Consequences of Using Electrax 2 Vst Crack
-
While using Electrax 2 Vst Crack might seem like a good way to save money and enjoy the plugin, there are some serious risks and consequences that you should be aware of. Here are some of them:
-
-
Legal issues: Using cracked software is illegal and violates the intellectual property rights of the developers. You could face legal action or fines if you are caught using or distributing Electrax 2 Vst Crack.
-
Moral issues: Using cracked software is unethical and unfair to the developers who spent time and money creating and updating Electrax 2 Vst. You are depriving them of their deserved income and support.
-
Technical issues: Using cracked software is risky and unreliable. You might encounter bugs, errors, crashes, or compatibility issues with your DAW or operating system. You might also miss out on updates, new features, and official support.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Blue WhatsApp for Free and Enjoy Amazing Features.md b/spaces/fatiXbelha/sd/Download Blue WhatsApp for Free and Enjoy Amazing Features.md
deleted file mode 100644
index 60671c098abee2d6b3e7e3be894f83e573af90ed..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Blue WhatsApp for Free and Enjoy Amazing Features.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
Link Download Blue WhatsApp: How to Get the Best Modified Version of WhatsApp
-
WhatsApp is one of the most popular messaging apps in the world, with over 2 billion users. However, some users may feel that the official WhatsApp app is lacking some features and customization options that they desire. That's why many people look for modified versions of WhatsApp, such as Blue WhatsApp, that offer more functionality and personalization.
-
In this article, we will show you how to download and install Blue WhatsApp on your Android phone, and how to update it to the latest version. We will also explain what Blue WhatsApp is and why you should try it.
Blue WhatsApp is the best modified version of the official WhatsApp Messenger, which you can download from our website and install on your Android phone in no time. The best thing about the Blue WhatsApp APK is that it is 100% free for all users on the internet.
-
Features of Blue WhatsApp
-
Blue WhatsApp has many features that make it superior to the official WhatsApp app, such as:
-
-
You can change the theme and color of your app according to your preference.
-
You can hide your online status, last seen, blue ticks, and typing indicator from others.
-
You can send unlimited messages, images, videos, documents, and audio files without any restrictions.
-
You can use two WhatsApp accounts on the same device with different numbers.
-
You can lock your chats with a password or fingerprint for extra security.
-
You can customize your fonts, icons, notifications, and emojis.
-
You can backup and restore your chats easily.
-
You can use anti-ban and anti-revoke features to avoid getting banned or deleted by WhatsApp.
-
-
Benefits of Blue WhatsApp
-
Blue WhatsApp has many benefits that make it worth trying, such as:
-
-
You can enjoy more privacy and control over your chats and data.
-
You can have more fun and creativity with your messages and media.
-
You can save your time and data by using less bandwidth and storage.
-
You can access more features and options that are not available in the official WhatsApp app.
-
-
How to Download and Install Blue WhatsApp on Your Android Phone
-
Downloading and installing Blue WhatsApp on your Android phone is very easy and simple. Just follow these steps:
-
Step 1: Enable Unknown Sources
-
Since Blue WhatsApp is not available on the Google Play Store, you need to enable unknown sources on your phone settings to allow installing apps from other sources. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 2: Download Blue WhatsApp APK File
-
Next, you need to download the Blue WhatsApp APK file from our website. Click on this link to go to the download page. Then, click on the download button and wait for the file to be downloaded on your phone.
-
How to download blue whatsapp plus apk
-Blue whatsapp latest version 2023 free download
-Download blue whatsapp for android phone
-Blue whatsapp mod apk download link
-Blue whatsapp update 9.71 download
-Download blue whatsapp for ios device
-Blue whatsapp features and benefits
-Download blue whatsapp for windows 10
-Blue whatsapp vs gb whatsapp comparison
-Download blue whatsapp for mac os x
-Blue whatsapp installation guide and tutorial
-Download blue whatsapp for tablet
-Blue whatsapp privacy and security settings
-Download blue whatsapp web version
-Blue whatsapp stickers and gifs download
-Download blue whatsapp for pc
-Blue whatsapp backup and restore options
-Download blue whatsapp from official website
-Blue whatsapp group chat and video call features
-Download blue whatsapp from temogroup.org[^2^]
-Blue whatsapp business and marketing tools
-Download blue whatsapp from apk mirror site
-Blue whatsapp customization and themes download
-Download blue whatsapp from google play store[^1^]
-Blue whatsapp status and stories download
-Download blue whatsapp from app store[^1^]
-Blue whatsapp anti-ban and anti-revoke features
-Download blue whatsapp from mediafire link
-Blue whatsapp emoji and fonts download
-Download blue whatsapp from uptodown site
-Blue whatsapp voice and text messages download
-Download blue whatsapp from softonic site
-Blue whatsapp dark mode and night mode features
-Download blue whatsapp from apkpure site
-Blue whatsapp hidden and lock chat features
-Download blue whatsapp from filehippo site
-Blue whatsapp online and offline status features
-Download blue whatsapp from malavida site
-Blue whatsapp auto-reply and schedule messages features
-Download blue whatsapp from mobango site
-Blue whatsapp broadcast and forward messages features
-Download blue whatsapp from 9apps site
-Blue whatsapp delete and recall messages features
-Download blue whatsapp from opera mobile store site
-Blue whatsapp pin and archive chat features
-
Step 3: Install Blue WhatsApp APK File
-
Once the file is downloaded, go to your file manager and locate the file. Tap on it and follow the instructions on the screen to install it on your phone. It may take a few minutes to complete the installation process.
-
Step 4: Verify Your Phone Number and Restore Your Chats
-
After installing Blue WhatsApp, open it and enter your phone number. You will receive a verification code via SMS or call. Enter the code and verify your number. Then, you can restore your chats from your previous backup if you have one. You can also skip this step and start fresh.
-
Congratulations! You have successfully downloaded and installed Blue WhatsApp on your Android phone. Now you can enjoy all the amazing features and benefits of this modified version of WhatsApp.
-
How to Update Blue WhatsApp to the Latest Version
-
Updating Blue WhatsApp to the latest version is also very easy and simple. You have two options to do this:
-
Option 1: Use the In-App Update Feature
-
Blue WhatsApp has an in-app update feature that notifies you when a new version is available. You can simply tap on the update button and download the latest APK file from the app itself. Then, you can install it as usual and enjoy the new features and improvements.
-
Option 2: Download the Latest APK File from the Official Website
-
You can also download the latest APK file from our website and install it manually on your phone. Just follow the same steps as above and make sure to delete the old version before installing the new one.
-
Conclusion
-
Blue WhatsApp is the best modified version of WhatsApp that offers more features and customization options than the official WhatsApp app. You can download and install Blue WhatsApp on your Android phone easily and quickly by following our guide. You can also update Blue WhatsApp to the latest version by using the in-app update feature or downloading the latest APK file from our website.
-
If you are looking for a better and more fun way to communicate with your friends and family, you should definitely try Blue WhatsApp. It will give you more privacy, control, creativity, and convenience with your messages and media. Download Blue WhatsApp today and enjoy!
-
FAQs
-
Here are some frequently asked questions about Blue WhatsApp:
-
-
Is Blue WhatsApp safe to use?
-
Yes, Blue WhatsApp is safe to use as it does not contain any viruses or malware. However, since it is a modified version of WhatsApp, it is not endorsed or supported by WhatsApp Inc. Therefore, you should use it at your own risk and discretion.
-
Will I get banned by WhatsApp for using Blue WhatsApp?
-
No, you will not get banned by WhatsApp for using Blue WhatsApp as it has anti-ban and anti-revoke features that prevent WhatsApp from detecting or deleting your account. However, you should still be careful and avoid abusing or spamming other users with Blue WhatsApp.
-
Can I use Blue WhatsApp with the official WhatsApp app?
-
No, you cannot use Blue WhatsApp with the official WhatsApp app as they have the same package name and signature. You need to uninstall the official WhatsApp app before installing Blue WhatsApp on your phone.
-
Can I use Blue WhatsApp on iOS devices?
-
No, you cannot use Blue WhatsApp on iOS devices as it is only compatible with Android devices. However, there are other modified versions of WhatsApp for iOS devices that you can try, such as Watusi or GBWhatsApp.
-
How can I contact the developers of Blue WhatsApp?
-
You can contact the developers of Blue WhatsApp by visiting their official website and filling out the contact form. You can also follow them on their social media accounts to get updates and news about Blue WhatsApp.
-
-
\ No newline at end of file
diff --git a/spaces/fclong/summary/fengshen/examples/clue_sim/finetune_clue_sim.py b/spaces/fclong/summary/fengshen/examples/clue_sim/finetune_clue_sim.py
deleted file mode 100644
index b05f6ea6ce67c35cd39dedd924df0b663fd5a8b2..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/clue_sim/finetune_clue_sim.py
+++ /dev/null
@@ -1,325 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import json
-import os
-from sklearn import metrics
-import torch
-import torch.nn as nn
-from torch.utils.data import Dataset, DataLoader, ConcatDataset
-import pytorch_lightning as pl
-from collections import defaultdict
-from transformers import AutoConfig, AutoModel, get_cosine_schedule_with_warmup
-from loss import FocalLoss, LabelSmoothingCorrectionCrossEntropy
-
-
-class CustomDataset(Dataset):
- def __init__(self, file, tokenizer, max_len, mode='no_test'):
- self.tokenizer = tokenizer
- self.max_len = max_len
- self.mode = mode
-
- self.ex_list = []
- with open('./dataset/' + file, "r", encoding='utf-8') as f:
- for line in f:
- sample = json.loads(line)
- query = sample["query"]
- title = sample["title"]
- id = int(sample["id"])
- if self.mode == 'no_test':
- relevant = int(sample["label"])
- self.ex_list.append((query, title, relevant, id))
- else:
- self.ex_list.append((query, title, id))
-
- def __len__(self):
- return len(self.ex_list)
-
- def __getitem__(self, index):
- if self.mode == 'no_test':
- query, title, relevant, id = self.ex_list[index]
- else:
- query, title, id = self.ex_list[index]
-
- inputs = self.tokenizer.encode_plus(
- query, title,
- truncation=True,
- add_special_tokens=True,
- max_length=self.max_len,
- padding='max_length',
- return_token_type_ids=True
- )
- ids = inputs['input_ids']
- mask = inputs['attention_mask']
- token_type_ids = inputs["token_type_ids"]
- if self.mode == 'no_test':
- return {
- 'ids': torch.tensor(ids, dtype=torch.long),
- 'mask': torch.tensor(mask, dtype=torch.long),
- 'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),
- 'targets': torch.tensor(relevant, dtype=torch.float),
- 'id': torch.tensor(id, dtype=torch.long)
- }
- else:
- return {
- 'ids': torch.tensor(ids, dtype=torch.long),
- 'mask': torch.tensor(mask, dtype=torch.long),
- 'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),
- 'id': torch.tensor(id, dtype=torch.long)
- }
-
-
-class CustomDataModule(pl.LightningDataModule):
- def __init__(self, args, tokenizer):
- super().__init__()
- self.args = args
- self.tokenizer = tokenizer
- self.max_len = self.args.max_seq_length
- self.train_dataset = None
- self.val_dataset = None
-
- def setup(self, stage):
- data_path = "./dataset"
- assert os.path.exists(os.path.join(data_path, 'train.json'))
- assert os.path.exists(os.path.join(data_path, 'dev.json'))
- assert os.path.exists(os.path.join(data_path, 'test_public.json'))
- if stage == 'fit':
- self.train_dataset = CustomDataset('train.json', self.tokenizer, self.max_len)
- self.val_dataset = CustomDataset('dev.json', self.tokenizer, self.max_len)
- self.test_dataset = CustomDataset('test_public.json', self.tokenizer, self.max_len)
- elif stage == 'test':
- self.test_dataset = CustomDataset('test_public.json', self.tokenizer, self.max_len)
-
- def train_dataloader(self):
- full_dataset = ConcatDataset([self.train_dataset, self.val_dataset])
- train_dataloader = DataLoader(
- full_dataset,
- batch_size=self.args.batch_size,
- num_workers=4,
- shuffle=True,
- pin_memory=True,
- drop_last=True)
- return train_dataloader
-
- def val_dataloader(self):
- val_dataloader = DataLoader(
- self.test_dataset,
- batch_size=self.args.val_batch_size,
- num_workers=4,
- shuffle=False,
- pin_memory=True,
- drop_last=False)
- return val_dataloader
-
- def test_dataloader(self):
- test_dataloader = DataLoader(
- self.test_dataset,
- batch_size=self.args.val_batch_size,
- num_workers=4,
- shuffle=False,
- pin_memory=True,
- drop_last=False)
- return test_dataloader
-
-
-class CustomModel(pl.LightningModule):
- def __init__(self, args):
- super().__init__()
- self.args = args
- self.model = self.args.model_name
- self.cache_dir = self.args.model_path
- self.scheduler = self.args.scheduler
- self.step_scheduler_after = "batch"
- self.optimizer = self.args.optimizer
- self.pooler = self.args.use_original_pooler
- self.category = self.args.cate_performance
- self.loss_func = self.args.loss_function
-
- hidden_dropout_prob: float = 0.1
- layer_norm_eps: float = 1e-7
-
- config = AutoConfig.from_pretrained(self.model, cache_dir=self.cache_dir)
-
- config.update(
- {
- "output_hidden_states": False,
- "hidden_dropout_prob": hidden_dropout_prob,
- "layer_norm_eps": layer_norm_eps,
- }
- )
- self.transformer = AutoModel.from_pretrained(self.model, config=config, cache_dir=self.cache_dir)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-        self.linear = torch.nn.Linear(config.hidden_size, self.args.num_labels, bias=True)  # three-way classification head
-
- def configure_optimizers(self):
- """Prepare optimizer and schedule"""
- model = self.transformer
- no_decay = ["bias", "LayerNorm.weight"]
- optimizer_grouped_parameters = [
- {
- "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
- "weight_decay": 0.01,
- },
- {
- "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
- "weight_decay": 0.0,
- },
- ]
-
- optimizer_index = ['Adam', 'AdamW'].index(self.optimizer)
- optimizer = [
- torch.optim.Adam(optimizer_grouped_parameters, lr=self.args.learning_rate),
- torch.optim.AdamW(optimizer_grouped_parameters, lr=self.args.learning_rate)][optimizer_index]
-
- scheduler_index = ['StepLR', 'CosineWarmup', 'CosineAnnealingLR'].index(self.scheduler)
- scheduler = [
- torch.optim.lr_scheduler.StepLR(optimizer, step_size=self.args.warmup_step,
- gamma=self.args.warmup_proportion),
- get_cosine_schedule_with_warmup(
- optimizer,
- num_warmup_steps=int(self.args.warmup_proportion * self.total_steps),
- num_training_steps=self.total_steps,
- ),
- torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5, eta_min=2e-06)][scheduler_index]
-
- scheduler = {"scheduler": scheduler, "interval": "step", "frequency": 1}
- return [optimizer], [scheduler]
-
- def setup(self, stage=None):
- if stage != "fit":
- return
- # calculate total steps
- train_dataloader = self.trainer.datamodule.train_dataloader()
- gpus = 0 if self.trainer.gpus is None else self.trainer.gpus
- tb_size = self.args.batch_size * max(1, gpus)
- ab_size = self.trainer.accumulate_grad_batches * float(self.trainer.max_epochs)
- self.total_steps = (len(train_dataloader.dataset) // tb_size) // ab_size
-
- def loss(self, outputs, targets):
- lossf_index = ['CE', 'Focal', 'LSCE_correction'].index(self.loss_func)
- loss_fct = [nn.CrossEntropyLoss(), FocalLoss(), LabelSmoothingCorrectionCrossEntropy()][lossf_index]
- loss = loss_fct(outputs, targets)
- return loss
-
- def category_performance_measure(self, labels_right, labels_pred, num_label=3):
- text_labels = [i for i in range(num_label)]
-
-        TP = dict.fromkeys(text_labels, 0)  # number of correct predictions for each class
-        TP_FP = dict.fromkeys(text_labels, 0)  # number of samples of each class in the test data
-        TP_FN = dict.fromkeys(text_labels, 0)  # number of samples of each class in the predictions
-
- label_dict = defaultdict(list)
- for num in range(num_label):
- label_dict[num].append(str(num))
-
-        # count TP and the per-class totals
- for i in range(0, len(labels_right)):
- TP_FP[labels_right[i]] += 1
- TP_FN[labels_pred[i]] += 1
- if labels_right[i] == labels_pred[i]:
- TP[labels_right[i]] += 1
-
-        # compute precision P, recall R, and F1 for each class
- results = []
- for key in TP_FP:
- P = float(TP[key]) / float(TP_FP[key] + 1e-9)
- R = float(TP[key]) / float(TP_FN[key] + 1e-9)
- F1 = P * R * 2 / (P + R) if (P + R) != 0 else 0
- # results.append("%s:\t P:%f\t R:%f\t F1:%f" % (key, P, R, F1))
- results.append(F1)
- return results
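-
-    # Worked example (illustrative, not from the original code): with
-    # labels_right=[0, 1, 2, 1] and labels_pred=[0, 2, 2, 1], the loop above
-    # yields TP={0: 1, 1: 1, 2: 1}, TP_FP={0: 1, 1: 2, 2: 1} and
-    # TP_FN={0: 1, 1: 1, 2: 2}, so the returned per-class F1 list is
-    # approximately [1.0, 0.67, 0.67].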
-
- def monitor_metrics(self, outputs, targets):
- pred = torch.argmax(outputs, dim=1).cpu().numpy().tolist()
- targets = targets.int().cpu().numpy().tolist()
- if self.category:
- category_results = self.category_performance_measure(
- labels_right=targets,
- labels_pred=pred,
- num_label=self.args.num_labels
- )
- return {"f1": category_results}
- else:
- f1_score = metrics.f1_score(targets, pred, average="macro")
- return {"f1": f1_score}
-
- def forward(self, ids, mask, token_type_ids, labels):
- transformer_out = self.transformer(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids)
-
- if self.pooler:
- pooler_output = transformer_out.pooler_output
- else:
- sequence_output = transformer_out.last_hidden_state
- pooler_output = torch.mean(sequence_output, dim=1)
- logits = self.linear(self.dropout(pooler_output))
-
- labels_hat = torch.argmax(logits, dim=1)
- correct_count = torch.sum(labels == labels_hat)
- return logits, correct_count
-
- def predict(self, ids, mask, token_type_ids):
- transformer_out = self.transformer(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids)
- pooler_output = transformer_out.pooler_output
- logits = self.linear(self.dropout(pooler_output))
- logits = torch.argmax(logits, dim=1)
- return logits
-
- def training_step(self, batch, batch_idx):
- ids, mask, token_type_ids, labels = batch['ids'], batch['mask'], batch['token_type_ids'], batch['targets']
- logits, correct_count = self.forward(ids, mask, token_type_ids, labels)
- loss = self.loss(logits, labels.long())
- f1 = self.monitor_metrics(logits, labels)["f1"]
- self.log("train_loss", loss, logger=True, prog_bar=True)
- self.log('train_acc', correct_count.float() / len(labels), logger=True, prog_bar=True)
- if self.category:
- self.log("train_f1_key0", f1[0], logger=True, prog_bar=True)
- self.log("train_f1_key1", f1[1], logger=True, prog_bar=True)
- self.log("train_f1_key2", f1[2], logger=True, prog_bar=True)
- else:
- self.log("train_f1", f1, logger=True, prog_bar=True)
- return loss
-
- def validation_step(self, batch, batch_idx):
- ids, mask, token_type_ids, labels = batch['ids'], batch['mask'], batch['token_type_ids'], batch['targets']
- logits, correct_count = self.forward(ids, mask, token_type_ids, labels)
- loss = self.loss(logits, labels.long())
- f1 = self.monitor_metrics(logits, labels)["f1"]
- self.log("val_loss", loss, logger=True, prog_bar=True)
- self.log("val_acc", correct_count.float() / len(labels), logger=True, prog_bar=True)
- if self.category:
- self.log("val_f1_key0", f1[0], logger=True, prog_bar=True)
- self.log("val_f1_key1", f1[1], logger=True, prog_bar=True)
- self.log("val_f1_key2", f1[2], logger=True, prog_bar=True)
- else:
- self.log("val_f1", f1, logger=True, prog_bar=True)
-
- def test_step(self, batch, batch_idx):
- ids, mask, token_type_ids, labels = batch['ids'], batch['mask'], batch['token_type_ids'], batch['targets']
- logits, correct_count = self.forward(ids, mask, token_type_ids, labels)
- loss = self.loss(logits, labels.long())
- f1 = self.monitor_metrics(logits, labels)["f1"]
- self.log("test_loss", loss, logger=True, prog_bar=True)
- self.log("test_acc", correct_count.float() / len(labels), logger=True, prog_bar=True)
- if self.category:
- self.log("test_f1_key0", f1[0], logger=True, prog_bar=True)
- self.log("test_f1_key1", f1[1], logger=True, prog_bar=True)
- self.log("test_f1_key2", f1[2], logger=True, prog_bar=True)
- else:
- self.log("test_f1", f1, logger=True, prog_bar=True)
- return {"test_loss": loss, "logits": logits, "labels": labels}
-
- def predict_step(self, batch, batch_idx, dataloader_idx):
- ids, mask, token_type_ids, id = batch['ids'], batch['mask'], batch['token_type_ids'], batch['id']
- logits = self.predict(ids, mask, token_type_ids)
- return {'id': id.cpu().numpy().tolist(), 'logits': logits.cpu().numpy().tolist()}
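-
-
-# Usage sketch (not part of the original script; the tokenizer call, argument
-# object, and trainer options are illustrative assumptions): the pieces above
-# would typically be wired together with a PyTorch Lightning Trainer as follows.
-#
-#   tokenizer = AutoTokenizer.from_pretrained(args.model_name)
-#   dm = CustomDataModule(args, tokenizer)
-#   model = CustomModel(args)
-#   trainer = pl.Trainer(max_epochs=10, gpus=1)
-#   trainer.fit(model, datamodule=dm)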
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Aiming Genius for 8 Ball Pool - The Smartest Aim Tool Mod APK for Android.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Aiming Genius for 8 Ball Pool - The Smartest Aim Tool Mod APK for Android.md
deleted file mode 100644
index 212060e068619600e0b944a309940bbdcc38750a..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Aiming Genius for 8 Ball Pool - The Smartest Aim Tool Mod APK for Android.md
+++ /dev/null
@@ -1,82 +0,0 @@
-
-
How to Improve Your 8 Ball Pool Skills with an Aim Tool Mod Apk
-
Do you love playing 8 ball pool online but struggle to make accurate shots? Do you want to impress your friends and opponents with your amazing skills and win more games? If you answered yes to these questions, then you might be interested in learning how to use an aim tool mod apk for 8 ball pool.
An aim tool mod apk is a modified version of the original game app that allows you to extend the aim line and see where the cue ball and the object ball will go. It can help you make better shots, avoid scratches, and improve your overall performance. In this article, we will show you what an aim tool mod apk is, how it works, how to download and install it, and how to use it effectively. Let's get started!
-
What is 8 Ball Pool and Why is it Popular?
-
8 ball pool is one of the most popular and addictive online games in the world. It is a simulation of the real-life pool game, where you have to pot balls of your assigned color (solid or stripe) and then the black 8 ball before your opponent does. You can play with your friends or with random players from around the globe. You can also join tournaments, leagues, clubs, and events to win coins, cash, cues, and other rewards.
-
The Rules and Objectives of 8 Ball Pool
-
The rules of 8 ball pool are simple and easy to follow. You have to break the rack of balls with the cue ball, then take turns with your opponent to pot balls of your color. You can only hit your own color first, unless you have no legal shot available. You have to call the pocket for the 8 ball before you pot it. If you pot the cue ball or the 8 ball before clearing your color, or if you pot the 8 ball in the wrong pocket, you lose the game.
-
The Benefits and Challenges of Playing 8 Ball Pool Online
-
Playing 8 ball pool online has many benefits. It can help you improve your concentration, coordination, strategy, and mental skills. It can also help you relax, have fun, and socialize with other players. However, playing 8 ball pool online also has some challenges. You have to deal with lag, glitches, hackers, cheaters, and trolls. You also have to face different skill levels, table sizes, cue powers, spin effects, and time limits. To overcome these challenges, you need to practice a lot, learn from your mistakes, and use some tools and tricks.
-
8 ball pool hack apk download unlimited coins and cash
-8 ball pool mod menu apk with aim assist and mega hit
-8 ball pool long line mod apk latest version
-8 ball pool aimbot apk free download for android
-8 ball pool cheat tool apk no root
-8 ball pool unlimited guideline mod apk
-8 ball pool aim hack apk online
-8 ball pool modded apk with anti ban
-8 ball pool extended stick mod apk
-8 ball pool auto win mod apk download
-8 ball pool guideline hack apk without root
-8 ball pool mega mod apk unlimited money and cues
-8 ball pool aim tool pro apk cracked
-8 ball pool long shot mod apk
-8 ball pool perfect aim mod apk
-8 ball pool legendary cue mod apk
-8 ball pool line hack apk ios
-8 ball pool aim trainer mod apk
-8 ball pool guideline tool apk download
-8 ball pool all in one mod apk
-8 ball pool best aim mod apk
-8 ball pool cue hack mod apk
-8 ball pool easy win mod apk
-8 ball pool full power mod apk
-8 ball pool guideline extender mod apk
-8 ball pool high level mod apk
-8 ball pool instant win mod apk
-8 ball pool king cue mod apk
-8 ball pool long aim mod apk
-8 ball pool magic cue mod apk
-8 ball pool no ban mod apk download
-8 ball pool offline mode mod apk
-8 ball pool premium cue mod apk
-8 ball pool radar hack mod apk
-8 ball pool real money mod apk
-8 ball pool sniper tool mod apk
-8 ball pool unlimited cash and coins mod apk download
-8 ball pool vip cue mod apk
-8 ball pool wall hack mod apk download
-8 ball pool xmodgames hack apk download
-
What is an Aim Tool Mod Apk and How Does it Work?
-
An aim tool mod apk is a modified version of the original game app that allows you to extend the aim line and see where the cue ball and the object ball will go. It can help you aim and make accurate shots with ease, not only direct straight shots but also bank shots and cushion shots. It can also help you avoid scratches, fouls, and bad angles.
-
The Features and Functions of an Aim Tool Mod Apk
-
An aim tool mod apk has many features and functions that can enhance your gaming experience. Some of them are:
- You can extend the aim line to any length you want, from short to infinite.
- You can see the trajectory of the cue ball and the object ball, including the angles, bounces, and spins.
- You can adjust the sensitivity and accuracy of the aim line according to your preference.
- You can enable or disable the aim tool anytime you want, with a simple tap on the screen.
- You can use the aim tool on any table, cue, or game mode, without any restrictions or limitations.
-
The Advantages and Disadvantages of Using an Aim Tool Mod Apk
-
Using an aim tool mod apk can have some advantages and disadvantages. Some of them are:
- The advantages of using an aim tool mod apk are:
  - You can improve your skills and confidence in playing 8 ball pool online.
  - You can win more games, coins, cash, and rewards.
  - You can impress your friends and opponents with your amazing shots and strategies.
  - You can have more fun and enjoyment in playing 8 ball pool online.
- The disadvantages of using an aim tool mod apk are:
  - You might lose the challenge and thrill of playing 8 ball pool online.
  - You might get bored and lose interest in playing 8 ball pool online.
  - You might get banned or reported by the game developers or other players for cheating or hacking.
  - You might get viruses or malware from downloading or installing an aim tool mod apk from untrusted sources.
-
How to Download and Install an Aim Tool Mod Apk for 8 Ball Pool
-
If you want to try using an aim tool mod apk for 8 ball pool, you need to download and install it on your device. Here are some steps and tips on how to do it.
-
The Requirements and Precautions for Downloading an Aim Tool Mod Apk
-
Before you download an aim tool mod apk, you need to make sure that you have the following requirements and precautions:
- You need to have a compatible device that can run the game app and the mod apk. The device should have enough storage space, memory, battery, and internet connection.
- You need to have the original game app installed on your device. The game app should be updated to the latest version and should not be modified or hacked in any way.
- You need to have a reliable source for downloading the aim tool mod apk. The source should be safe, secure, and trusted by other users. You should avoid downloading from unknown or suspicious websites or links that might contain viruses or malware.
- You need to have a backup of your game data and device data. You should save your game progress, coins, cash, cues, and other items in a cloud account or a local storage. You should also backup your device data in case something goes wrong during the installation process.
-
The Steps and Tips for Installing an Aim Tool Mod Apk
-
After you download an aim tool mod apk, you need to install it on your device. Here are some steps and tips on how to do it.
- Step 1: Uninstall the original game app from your device. You need to do this to avoid any conflicts or errors with the mod apk. You can reinstall the original game app later if you want to.
- Step 2: Enable the unknown sources option on your device. You need to do this to allow the installation of apps from sources other than the Google Play Store. You can find this option in your device settings under security or privacy.
- Step 3: Locate the downloaded aim tool mod apk file on your device. You can find it in your downloads folder or in any other folder where you saved it.
- Step 4: Tap on the file and follow the instructions on the screen. You need to grant some permissions and accept some terms and conditions before you can install the mod apk.
- Step 5: Wait for the installation process to finish. It might take a few minutes depending on your device speed and internet connection.
- Step 6: Launch the game app and enjoy using the aim tool mod apk.
-
How to Use an Aim Tool Mod Apk to Enhance Your 8 Ball Pool Performance
-
Now that you have installed an aim tool mod apk on your device, you can use it to enhance your 8 ball pool performance. Here are some settings and options for customizing your aim tool mod apk and some strategies and tricks for applying it in different scenarios.
-
The Settings and Options for Customizing Your Aim Tool Mod Apk
-
You can customize your aim tool mod apk according to your preference and needs. Here are some settings and options that you can adjust:
- The length of the aim line. You can choose how long you want the aim line to be, from short to infinite. The longer the aim line, the more accurate your shot will be, but also the more obvious your cheating will be. You can change the length of the aim line by sliding the bar on the screen or by tapping the plus or minus buttons.
- The color and thickness of the aim line. You can choose the color and thickness of the aim line to make it more visible or less noticeable. You can change the color and thickness of the aim line by tapping the color palette or the size icons on the screen.
- The sensitivity and accuracy of the aim line. You can choose how sensitive and accurate the aim line is, depending on your skill level and preference. You can change the sensitivity and accuracy of the aim line by tapping the settings icon on the screen and adjusting the sliders or toggles.
-
The Strategies and Tricks for Applying Your Aim Tool Mod Apk in Different Scenarios
-
You can apply your aim tool mod apk in different scenarios to improve your performance and win more games. Here are some strategies and tricks that you can use:
- Use the aim tool mod apk to practice and learn. You can use the aim tool mod apk to practice your shots and learn how to use different cues, spins, angles, and bounces. You can also use it to study your opponents' moves and patterns and learn how to counter them.
- Use the aim tool mod apk to make difficult shots. You can use the aim tool mod apk to make shots that are hard to execute, such as bank shots, cushion shots, long shots, or trick shots. You can also use it to avoid scratches, fouls, and bad angles.
- Use the aim tool mod apk to win more games. You can use the aim tool mod apk to win more games, coins, cash, and rewards. You can also use it to impress your friends and opponents with your amazing skills and strategies.
- Use the aim tool mod apk sparingly and discreetly. You should not use the aim tool mod apk too often or too obviously, as it might ruin the fun and challenge of playing 8 ball pool online. It might also get you banned or reported by the game developers or other players for cheating or hacking. You should use it sparingly and discreetly, only when you need it or when you are sure that no one will notice it.
-
Conclusion and FAQs
-
In conclusion, an aim tool mod apk is a modified version of the original game app that allows you to extend the aim line and see where the cue ball and the object ball will go. It can help you improve your skills, make better shots, avoid scratches, and win more games. However, it also has some drawbacks, such as losing the challenge, getting bored, getting banned, or getting viruses. Therefore, you should use it wisely and carefully, only from trusted sources and only when necessary.
-
Here are some FAQs that you might have about using an aim tool mod apk for 8 ball pool:
-
| Question | Answer |
| --- | --- |
| Is using an aim tool mod apk legal? | Using an aim tool mod apk is not legal, as it violates the terms and conditions of the game app. It is considered cheating or hacking, which is not allowed by the game developers or other players. |
| Is using an aim tool mod apk safe? | Using an aim tool mod apk is not safe, as it might expose your device to viruses or malware from untrusted sources. It might also expose your game account to bans or reports from the game developers or other players. |
| Is using an aim tool mod apk fair? | Using an aim tool mod apk is not fair, as it gives you an unfair advantage over your opponents who are playing without it. It also takes away the fun and challenge of playing 8 ball pool online. |
| How can I find a reliable source for downloading an aim tool mod apk? | You can find a reliable source for downloading an aim tool mod apk by doing some research online. You can read reviews, ratings, comments, feedback, testimonials, and recommendations from other users who have used it before. You can also check for updates, patches, fixes, compatibility, security, and quality of the mod apk. |
| How can I avoid getting banned or reported for using an aim tool mod apk? | You can avoid getting banned or reported for using an aim tool mod apk by using it sparingly and discreetly. You should not use it too often or too obviously, as it might raise suspicion or complaints from other players. You should also respect the rules and etiquette of playing 8 ball pool online. |
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/ControlNet-Video/README.md b/spaces/fffiloni/ControlNet-Video/README.md
deleted file mode 100644
index 42db1c16fb55fde9e17d0e562ca62489ebdf04ec..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/ControlNet-Video/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ControlNet-Video
-emoji: 🕹
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.34.0
-python_version: 3.10.9
-app_file: app.py
-pinned: false
-duplicated_from: hysts/ControlNet
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@socket.io/component-emitter/Readme.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@socket.io/component-emitter/Readme.md
deleted file mode 100644
index 0f3f9b9fc37c883b3a7288a8dc5fd1ae2df57afe..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@socket.io/component-emitter/Readme.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Emitter [](https://travis-ci.org/component/emitter)
-
- Event emitter component.
-
-## Installation
-
-```
-$ component install component/emitter
-```
-
-## API
-
-### Emitter(obj)
-
- The `Emitter` may also be used as a mixin. For example
- a "plain" object may become an emitter, or you may
- extend an existing prototype.
-
- As an `Emitter` instance:
-
-```js
-var Emitter = require('emitter');
-var emitter = new Emitter;
-emitter.emit('something');
-```
-
- As a mixin:
-
-```js
-var Emitter = require('emitter');
-var user = { name: 'tobi' };
-Emitter(user);
-
-user.emit('im a user');
-```
-
- As a prototype mixin:
-
-```js
-var Emitter = require('emitter');
-Emitter(User.prototype);
-```
-
-### Emitter#on(event, fn)
-
- Register an `event` handler `fn`.
-
-### Emitter#once(event, fn)
-
- Register a single-shot `event` handler `fn`,
- removed immediately after it is invoked the
- first time.
-
-### Emitter#off(event, fn)
-
- * Pass `event` and `fn` to remove a listener.
- * Pass `event` to remove all listeners on that event.
- * Pass nothing to remove all listeners on all events.
-
-### Emitter#emit(event, ...)
-
- Emit an `event` with variable option args.
-
-### Emitter#listeners(event)
-
- Return an array of callbacks, or an empty array.
-
-### Emitter#hasListeners(event)
-
- Check if this emitter has `event` handlers.
-
-## License
-
-MIT
diff --git a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/masks/README.md b/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/masks/README.md
deleted file mode 100644
index cf176bc10fae3b03f139727147c220f2a735c806..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/masks/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Current algorithm
-
-## Choice of mask objects
-
-To identify objects that are suitable for mask generation, we use a panoptic segmentation model
-from [detectron2](https://github.com/facebookresearch/detectron2) trained on COCO. Categories of the detected instances
-belong either to the "stuff" or the "things" type. We consider that object instances should have a category
-belonging to "things". Besides, we set an upper bound on the area taken by the object: an area that is too
-large indicates that the instance is either the background or a main object that should not be removed.
-
-## Choice of position for mask
-
-We consider that the input image has size 2^n x 2^m. We downsample it using the
-[COUNTLESS](https://github.com/william-silversmith/countless) algorithm so that the width is equal to
-64 = 2^6 = 2^{downsample_levels}.
-
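-
-A minimal sketch of this repeated 2x downsampling (illustrative only: the actual code uses the
-implementation from the linked repository, and the full COUNTLESS algorithm also handles
-zero-valued labels, which the simplified version below assumes are absent):
-
-```python
-import numpy as np
-
-def countless2x2(seg: np.ndarray) -> np.ndarray:
-    """One 2x downsampling step of a label map (simplified COUNTLESS, labels > 0)."""
-    a, b = seg[0::2, 0::2], seg[0::2, 1::2]
-    c, d = seg[1::2, 0::2], seg[1::2, 1::2]
-    ab = a * (a == b)                    # a if a == b else 0
-    ac = a * (a == c)
-    bc = b * (b == c)
-    mode = np.where(ab != 0, ab, np.where(ac != 0, ac, bc))
-    return np.where(mode != 0, mode, d)  # fall back to d when no pair agrees
-
-seg = np.random.randint(1, 5, size=(512, 512))  # toy 2^9 x 2^9 label map, labels 1..4
-while seg.shape[1] > 64:                        # stop at width 64 = 2^6
-    seg = countless2x2(seg)
-print(seg.shape)  # (64, 64)
-```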
-### Augmentation
-
-There are several parameters for augmentation:
-- Scaling factor. We limit scaling to the case when a mask after scaling with pivot point in its center fits inside the
- image completely.
--
-
-### Shift
-
-
-## Select
diff --git a/spaces/fishaudio/fish-diffusion/configs/Itako.py b/spaces/fishaudio/fish-diffusion/configs/Itako.py
deleted file mode 100644
index 6a67e729346d76b322386faa57138327d8b6cc32..0000000000000000000000000000000000000000
--- a/spaces/fishaudio/fish-diffusion/configs/Itako.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- "./_base_/archs/hifi_svc.py",
-]
-
-speaker_mapping = {'itako': 0,}
-
-model = dict(
- type="HiFiSVC",
- speaker_encoder=dict(
- input_size=len(speaker_mapping),
- ),
-)
-
-preprocessing = dict(
- text_features_extractor=dict(
- type="ContentVec",
- ),
- pitch_extractor=dict(
- type="ParselMouthPitchExtractor",
- keep_zeros=False,
- f0_min=40.0,
- f0_max=1600.0,
- ),
- energy_extractor=dict(
- type="RMSEnergyExtractor",
- ),
- augmentations=[
- dict(
- type="RandomPitchShifting",
- key_shifts=[-5., 5.],
- probability=1.5,
- ),
- dict(
- type="RandomTimeStretching",
- factors=[0.8, 1.2],
- probability=0.75,
- )
- ],
-)
\ No newline at end of file
diff --git a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h
deleted file mode 100644
index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000
--- a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h
+++ /dev/null
@@ -1,433 +0,0 @@
-#pragma once
-
-#include <atomic>
-#include <utility>
-#include <cstring>
-#include <type_traits>
-#include <cstdint>
-
-#include "libipc/def.h"
-
-#include "libipc/platform/detail.h"
-#include "libipc/circ/elem_def.h"
-#include "libipc/utility/log.h"
-#include "libipc/utility/utility.h"
-
-namespace ipc {
-
-////////////////////////////////////////////////////////////////
-/// producer-consumer implementation
-////////////////////////////////////////////////////////////////
-
-template
-struct prod_cons_impl;
-
-template <>
-struct prod_cons_impl> {
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- };
-
- alignas(cache_line_size) std::atomic rd_; // read index
- alignas(cache_line_size) std::atomic wt_; // write index
-
- constexpr circ::u2_t cursor() const noexcept {
- return 0;
- }
-
- template
- bool push(W* /*wrapper*/, F&& f, E* elems) {
- auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
- if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
- return false; // full
- }
- std::forward(f)(&(elems[cur_wt].data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- /**
- * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
- * So we could just disconnect all connections of receiver, and return false.
- */
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(~static_cast(0u));
- return false;
- }
-
- template
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
- if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
- return false; // empty
- }
- std::forward(f)(&(elems[cur_rd].data_));
- std::forward(out)(true);
- rd_.fetch_add(1, std::memory_order_release);
- return true;
- }
-};
-
-template <>
-struct prod_cons_impl>
- : prod_cons_impl> {
-
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(1);
- return false;
- }
-
- template class E, std::size_t DS, std::size_t AS>
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- byte_t buff[DS];
- for (unsigned k = 0;;) {
- auto cur_rd = rd_.load(std::memory_order_relaxed);
- if (circ::index_of(cur_rd) ==
- circ::index_of(wt_.load(std::memory_order_acquire))) {
- return false; // empty
- }
- std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
- if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
- std::forward(f)(buff);
- std::forward(out)(true);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-template <>
-struct prod_cons_impl>
- : prod_cons_impl> {
-
- using flag_t = std::uint64_t;
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic f_ct_ { 0 }; // commit flag
- };
-
- alignas(cache_line_size) std::atomic ct_; // commit index
-
- template
- bool push(W* /*wrapper*/, F&& f, E* elems) {
- circ::u2_t cur_ct, nxt_ct;
- for (unsigned k = 0;;) {
- cur_ct = ct_.load(std::memory_order_relaxed);
- if (circ::index_of(nxt_ct = cur_ct + 1) ==
- circ::index_of(rd_.load(std::memory_order_acquire))) {
- return false; // full
- }
- if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
- break;
- }
- ipc::yield(k);
- }
- auto* el = elems + circ::index_of(cur_ct);
- std::forward(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release);
- while (1) {
- auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
- if (cur_ct != wt_.load(std::memory_order_relaxed)) {
- return true;
- }
- if ((~cac_ct) != cur_ct) {
- return true;
- }
- if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
- return true;
- }
- wt_.store(nxt_ct, std::memory_order_release);
- cur_ct = nxt_ct;
- nxt_ct = cur_ct + 1;
- el = elems + circ::index_of(cur_ct);
- }
- return true;
- }
-
- template
- bool force_push(W* wrapper, F&&, E*) {
- wrapper->elems()->disconnect_receiver(1);
- return false;
- }
-
- template class E, std::size_t DS, std::size_t AS>
- bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
- byte_t buff[DS];
- for (unsigned k = 0;;) {
- auto cur_rd = rd_.load(std::memory_order_relaxed);
- auto cur_wt = wt_.load(std::memory_order_acquire);
- auto id_rd = circ::index_of(cur_rd);
- auto id_wt = circ::index_of(cur_wt);
- if (id_rd == id_wt) {
- auto* el = elems + id_wt;
- auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
- if ((~cac_ct) != cur_wt) {
- return false; // empty
- }
- if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
- wt_.store(cur_wt + 1, std::memory_order_release);
- }
- k = 0;
- }
- else {
- std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
- if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
- std::forward(f)(buff);
- std::forward(out)(true);
- return true;
- }
- ipc::yield(k);
- }
- }
- }
-};
-
-template <>
-struct prod_cons_impl> {
-
- using rc_t = std::uint64_t;
-
- enum : rc_t {
- ep_mask = 0x00000000ffffffffull,
- ep_incr = 0x0000000100000000ull
- };
-
- template
- struct elem_t {
- std::aligned_storage_t data_ {};
- std::atomic rc_ { 0 }; // read-counter
- };
-
- alignas(cache_line_size) std::atomic wt_; // write index
- alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
-
- circ::u2_t cursor() const noexcept {
- return wt_.load(std::memory_order_acquire);
- }
-
- template
- bool push(W* wrapper, F&& f, E* elems) {
- E* el;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & ep_mask;
- if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
- return false; // has not finished yet
- }
- // consider rem_cc to be 0 here
- if (el->rc_.compare_exchange_weak(
- cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) {
- break;
- }
- ipc::yield(k);
- }
- std::forward<F>(f)(&(el->data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- template <typename W, typename F, typename E>
- bool force_push(W* wrapper, F&& f, E* elems) {
- E* el;
- epoch_ += ep_incr;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & ep_mask;
- if (cc & rem_cc) {
- ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
- cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
- if (cc == 0) return false; // no reader
- }
- // just compare & exchange
- if (el->rc_.compare_exchange_weak(
- cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
- break;
- }
- ipc::yield(k);
- }
- std::forward<F>(f)(&(el->data_));
- wt_.fetch_add(1, std::memory_order_release);
- return true;
- }
-
- template <typename W, typename F, typename R, typename E>
- bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
- if (cur == cursor()) return false; // acquire
- auto* el = elems + circ::index_of(cur++);
- std::forward<F>(f)(&(el->data_));
- for (unsigned k = 0;;) {
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- if ((cur_rc & ep_mask) == 0) {
- std::forward<R>(out)(true);
- return true;
- }
- auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
- if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
- std::forward<R>(out)((nxt_rc & ep_mask) == 0);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-template <>
-struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
-
- using rc_t = std::uint64_t;
- using flag_t = std::uint64_t;
-
- enum : rc_t {
- rc_mask = 0x00000000ffffffffull,
- ep_mask = 0x00ffffffffffffffull,
- ep_incr = 0x0100000000000000ull,
- ic_mask = 0xff000000ffffffffull,
- ic_incr = 0x0000000100000000ull
- };
-
- template <std::size_t DataSize, std::size_t AlignSize>
- struct elem_t {
- std::aligned_storage_t<DataSize, AlignSize> data_ {};
- std::atomic<rc_t> rc_ { 0 }; // read-counter
- std::atomic<flag_t> f_ct_ { 0 }; // commit flag
- };
-
- alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
- alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
-
- circ::u2_t cursor() const noexcept {
- return ct_.load(std::memory_order_acquire);
- }
-
- constexpr static rc_t inc_rc(rc_t rc) noexcept {
- return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
- }
-
- constexpr static rc_t inc_mask(rc_t rc) noexcept {
- return inc_rc(rc) & ~rc_mask;
- }
-
- template <typename W, typename F, typename E>
- bool push(W* wrapper, F&& f, E* elems) {
- E* el;
- circ::u2_t cur_ct;
- rc_t epoch = epoch_.load(std::memory_order_acquire);
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_relaxed);
- circ::cc_t rem_cc = cur_rc & rc_mask;
- if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
- return false; // has not finished yet
- }
- else if (!rem_cc) {
- auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
- if ((cur_fl != cur_ct) && cur_fl) {
- return false; // full
- }
- }
- // consider rem_cc to be 0 here
- if (el->rc_.compare_exchange_weak(
- cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
- epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
- break;
- }
- ipc::yield(k);
- }
- // only one thread/process would touch here at one time
- ct_.store(cur_ct + 1, std::memory_order_release);
- std::forward<F>(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
- return true;
- }
-
- template <typename W, typename F, typename E>
- bool force_push(W* wrapper, F&& f, E* elems) {
- E* el;
- circ::u2_t cur_ct;
- rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
- for (unsigned k = 0;;) {
- circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
- if (cc == 0) return false; // no reader
- el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
- // check all consumers have finished reading this element
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- circ::cc_t rem_cc = cur_rc & rc_mask;
- if (cc & rem_cc) {
- ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
- cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
- if (cc == 0) return false; // no reader
- }
- // just compare & exchange
- if (el->rc_.compare_exchange_weak(
- cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
- if (epoch == epoch_.load(std::memory_order_acquire)) {
- break;
- }
- else if (push(wrapper, std::forward<F>(f), elems)) {
- return true;
- }
- epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
- }
- ipc::yield(k);
- }
- // only one thread/process would touch here at one time
- ct_.store(cur_ct + 1, std::memory_order_release);
- std::forward<F>(f)(&(el->data_));
- // set flag & try update wt
- el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
- return true;
- }
-
- template <typename W, typename F, typename R, typename E, std::size_t N>
- bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
- auto* el = elems + circ::index_of(cur);
- auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
- if (cur_fl != ~static_cast<flag_t>(cur)) {
- return false; // empty
- }
- ++cur;
- std::forward<F>(f)(&(el->data_));
- for (unsigned k = 0;;) {
- auto cur_rc = el->rc_.load(std::memory_order_acquire);
- if ((cur_rc & rc_mask) == 0) {
- std::forward<R>(out)(true);
- el->f_ct_.store(cur + N - 1, std::memory_order_release);
- return true;
- }
- auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
- bool last_one = false;
- if ((last_one = (nxt_rc & rc_mask) == 0)) {
- el->f_ct_.store(cur + N - 1, std::memory_order_release);
- }
- if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
- std::forward<R>(out)(last_one);
- return true;
- }
- ipc::yield(k);
- }
- }
-};
-
-} // namespace ipc
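
The broadcast specializations above pack, per ring-buffer slot, a bitmask of connected readers plus an epoch counter into one atomic word: the writer stamps the slot with the current reader mask, and each reader clears its own bit once it has consumed the slot. A simplified, single-threaded Python sketch of just that bookkeeping (a toy model for intuition, not the lock-free C++ implementation above):

```python
class BroadcastSlot:
    """Toy model of one ring-buffer element's read-counter (rc_) bookkeeping."""

    def __init__(self):
        self.rc = 0       # bitmask of readers that still have to consume this slot
        self.epoch = 0    # stale-reader detection, kept as a separate field here
        self.data = None

    def write(self, data, connected_mask, epoch):
        # The writer may only reuse the slot when every reader bit from the
        # current epoch has been cleared; otherwise it reports "not finished yet".
        if self.rc and self.epoch == epoch:
            return False
        self.data, self.rc, self.epoch = data, connected_mask, epoch
        return True

    def read(self, reader_bit):
        value = self.data
        self.rc &= ~reader_bit          # this reader is done with the slot
        return value, self.rc == 0      # True when the last reader frees the slot


slot = BroadcastSlot()
slot.write("hello", connected_mask=0b011, epoch=1)  # two readers connected
print(slot.read(0b001))  # ('hello', False) -- reader 0 done, reader 1 still pending
print(slot.read(0b010))  # ('hello', True)  -- last reader, slot can be rewritten
```

The real code keeps the mask and epoch inside a single `std::uint64_t` and advances them with compare-and-swap loops so that concurrent readers and the writer never lose an update.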
diff --git a/spaces/flax-community/DietNerf-Demo/jaxnerf/__init__.py b/spaces/flax-community/DietNerf-Demo/jaxnerf/__init__.py
deleted file mode 100644
index c4cbefc3397c8c691234e616369bda8b71f721a6..0000000000000000000000000000000000000000
--- a/spaces/flax-community/DietNerf-Demo/jaxnerf/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Google Research Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
diff --git a/spaces/flax-community/Multilingual-VQA/sections/checkpoints/checkpoints.md b/spaces/flax-community/Multilingual-VQA/sections/checkpoints/checkpoints.md
deleted file mode 100644
index 8976908c2e63e608795c18aeaf81c123efaabf28..0000000000000000000000000000000000000000
--- a/spaces/flax-community/Multilingual-VQA/sections/checkpoints/checkpoints.md
+++ /dev/null
@@ -1,3 +0,0 @@
-- Pre-trained checkpoint at 60k steps: [clip-vision-bert-cc12m-60k](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k)
-- Pre-trained checkpoint at 70k steps: [clip-vision-bert-cc12m-70k](https://huggingface.co/flax-community/clip-vision-bert-cc12m-70k)
-- Fine-tuned checkpoint at 6k steps on 60k pre-trained checkpoint: [clip-vision-bert-vqa-ft-6k](https://huggingface.co/flax-community/clip-vision-bert-vqa-ft-6k)
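
For anyone reproducing results with these checkpoints, a minimal sketch of fetching one locally via `huggingface_hub` (the use of `snapshot_download` and the choice of the 60k checkpoint here are illustrative, not prescribed by this repo):

```python
from huggingface_hub import snapshot_download

# Downloads all files of the chosen checkpoint into the local HF cache and
# returns the directory path; swap in any of the repo IDs listed above.
local_path = snapshot_download(repo_id="flax-community/clip-vision-bert-cc12m-60k")
print("Checkpoint downloaded to:", local_path)
```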
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/spying.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/spying.py
deleted file mode 100644
index 31fe5d6fa1cf339a8f52d7cf3c37d8d8d18b9647..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/spying.py
+++ /dev/null
@@ -1,429 +0,0 @@
-import numpy as np
-
-from gym_minigrid.minigrid import *
-from gym_minigrid.register import register
-
-import time
-from collections import deque
-
-
-class Peer(NPC):
- """
- A dancing NPC that the agent has to copy
- """
-
- def __init__(self, color, name, env, knowledgeable=False):
- super().__init__(color)
- self.name = name
- self.npc_dir = 1 # NPC initially looks downward
- self.npc_type = 0
- self.env = env
- self.knowledgeable = knowledgeable
- self.npc_actions = []
- self.dancing_step_idx = 0
- self.actions = MiniGridEnv.Actions
- self.add_npc_direction = True
- self.available_moves = [self.rotate_left, self.rotate_right, self.go_forward, self.toggle_action]
- self.exited = False
-
- def step(self):
- if self.exited:
- return
-
- if all(np.array(self.cur_pos) == np.array(self.env.door_pos)):
- # disappear
- self.env.grid.set(*self.cur_pos, self.env.object)
- self.cur_pos = np.array([np.nan, np.nan])
-
- # close door
- self.env.object.toggle(self.env, self.cur_pos)
-
- # reset switches door
- for s in self.env.switches:
- s.is_on = False
-
- # update door
- self.env.update_door_lock()
-
- self.exited = True
-
- elif self.knowledgeable:
-
- if self.env.object.is_locked:
- first_wrong_id = np.where(self.env.get_selected_password() != self.env.password)[0][0]
- print("first_wrong_id:", first_wrong_id)
- goal_pos = self.env.switches_pos[first_wrong_id]
- act = self.path_to_toggle_pos(goal_pos)
- act()
-
- else:
- if all(self.front_pos == self.env.door_pos) and self.env.object.is_open:
- self.go_forward()
-
- else:
- act = self.path_to_toggle_pos(self.env.door_pos)
- act()
-
- else:
- self.env._rand_elem(self.available_moves)()
-
- self.env.update_door_lock()
-
-
-class SpyingGrammar(object):
-
- templates = ["Move your", "Shake your"]
- things = ["body", "head"]
-
- grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)])
-
- @classmethod
- def construct_utterance(cls, action):
- return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " "
-
-
-class SpyingEnv(MultiModalMiniGridEnv):
- """
- Environment in which the agent is instructed to go to a given object
- named using an English text string
- """
-
- def __init__(
- self,
- size=5,
- diminished_reward=True,
- step_penalty=False,
- knowledgeable=False,
- hard_password=False,
- max_steps=None,
- n_switches=3
- ):
- assert size >= 5
- self.empty_symbol = "NA \n"
- self.diminished_reward = diminished_reward
- self.step_penalty = step_penalty
- self.knowledgeable = knowledgeable
- self.hard_password = hard_password
- self.n_switches = n_switches
-
- super().__init__(
- grid_size=size,
- max_steps=max_steps or 5*size**2,
- # Set this to True for maximum speed
- see_through_walls=True,
- actions=MiniGridEnv.Actions,
- action_space=spaces.MultiDiscrete([
- len(MiniGridEnv.Actions),
- *SpyingGrammar.grammar_action_space.nvec
- ]),
- add_npc_direction=True
- )
-
- print({
- "size": size,
- "diminished_reward": diminished_reward,
- "step_penalty": step_penalty,
- })
-
- def get_selected_password(self):
- return np.array([int(s.is_on) for s in self.switches])
-
- def _gen_grid(self, width, height):
- # Create the grid
- self.grid = Grid(width, height, nb_obj_dims=4)
-
- # Randomly vary the room width and height
- width = self._rand_int(5, width+1)
- height = self._rand_int(5, height+1)
-
- self.wall_x = width - 1
- self.wall_y = height - 1
-
- # Generate the surrounding walls
- self.grid.wall_rect(0, 0, width, height)
-
- door_color = self._rand_elem(COLOR_NAMES)
-
- wall_for_door = self._rand_int(1, 4)
-
- if wall_for_door < 2:
- w = self._rand_int(1, width-1)
- h = height-1 if wall_for_door == 0 else 0
- else:
- w = width-1 if wall_for_door == 3 else 0
- h = self._rand_int(1, height-1)
-
- assert h != height-1 # door mustn't be on the bottom wall
-
- self.door_pos = (w, h)
- self.door = Door(door_color, is_locked=True)
- self.grid.set(*self.door_pos, self.door)
-
- # add the switches
- self.switches = []
- self.switches_pos = []
- for i in range(self.n_switches):
- c = COLOR_NAMES[i]
- pos = np.array([i+1, height-1])
- sw = Switch(c)
- self.grid.set(*pos, sw)
- self.switches.append(sw)
- self.switches_pos.append(pos)
-
- # sample password
- if self.hard_password:
- self.password = np.array([self._rand_int(0, 2) for _ in range(self.n_switches)])
-
- else:
- idx = self._rand_int(0, self.n_switches)
- self.password = np.zeros(self.n_switches)
- self.password[idx] = 1.0
-
- # Set a randomly coloured Dancer NPC
- color = self._rand_elem(COLOR_NAMES)
- self.peer = Peer(color, "Jim", self, knowledgeable=self.knowledgeable)
-
- # Place it on the middle left side of the room
- peer_pos = np.array((self._rand_int(1, width - 1), self._rand_int(1, height - 1)))
-
- self.grid.set(*peer_pos, self.peer)
- self.peer.init_pos = peer_pos
- self.peer.cur_pos = peer_pos
-
- # Randomize the agent's start position and orientation
- self.place_agent(size=(width, height))
-
- # Generate the mission string
- self.mission = 'exit the room'
-
- # Dummy beginning string
- self.beginning_string = "This is what you hear. \n"
- self.utterance = self.beginning_string
-
- # utterance appended at the end of each step
- self.utterance_history = ""
-
- # used for rendering
- self.conversation = self.utterance
-
- def update_door_lock(self):
- if np.array_equal(self.get_selected_password(), self.password):
- self.door.is_locked = False
- else:
- self.door.is_locked = True
- self.door.is_open = False
-
- def step(self, action):
- p_action = action[0]
- utterance_action = action[1:]
-
- obs, reward, done, info = super().step(p_action)
- self.update_door_lock()
-
- print("pass:", self.password)
-
- if p_action == self.actions.done:
- done = True
-
- self.peer.step()
-
- if all(self.agent_pos == self.door_pos):
- done = True
- if self.peer.exited:
- # only give reward if both exited
- reward = self._reward()
-
- # discount
- if self.step_penalty:
- reward = reward - 0.01
-
- # fill observation with text
- self.append_existing_utterance_to_history()
- obs = self.add_utterance_to_observation(obs)
- self.reset_utterance()
- return obs, reward, done, info
-
- def _reward(self):
- if self.diminished_reward:
- return super()._reward()
- else:
- return 1.0
-
- def render(self, *args, **kwargs):
- obs = super().render(*args, **kwargs)
- print("conversation:\n", self.conversation)
- print("utterance_history:\n", self.utterance_history)
- self.window.set_caption(self.conversation, [self.peer.name])
- return obs
-
-
-class Spying8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8)
-
-
-class Spying6x6Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=6)
-
-
-# knowledgeable
-class SpyingKnowledgeableEnv(SpyingEnv):
- def __init__(self):
- super().__init__(size=5, knowledgeable=True)
-
-class SpyingKnowledgeable6x6Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=6, knowledgeable=True)
-
-class SpyingKnowledgeable8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True)
-
-class SpyingKnowledgeableHardPassword8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, hard_password=True)
-
-class Spying508x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, max_steps=50)
-
-class SpyingKnowledgeable508x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, max_steps=50)
-
-class SpyingKnowledgeableHardPassword508x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, hard_password=True, max_steps=50)
-
-class SpyingKnowledgeable1008x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, max_steps=100)
-
-class SpyingKnowledgeable100OneSwitch8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, max_steps=100, n_switches=1)
-
-class SpyingKnowledgeable50OneSwitch5x5Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=5, knowledgeable=True, max_steps=50, n_switches=1)
-
-
-class SpyingKnowledgeable505x5Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=5, knowledgeable=True, max_steps=50, n_switches=3)
-
-class SpyingKnowledgeable50TwoSwitches8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, max_steps=50, n_switches=2)
-
-class SpyingKnowledgeable50TwoSwitchesHard8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, max_steps=50, n_switches=2, hard_password=True)
-
-
-class SpyingKnowledgeable100TwoSwitches8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, max_steps=100, n_switches=2)
-
-class SpyingKnowledgeable100TwoSwitchesHard8x8Env(SpyingEnv):
- def __init__(self):
- super().__init__(size=8, knowledgeable=True, max_steps=100, n_switches=2, hard_password=True)
-
-
-
-
-register(
- id='MiniGrid-Spying-5x5-v0',
- entry_point='gym_minigrid.envs:SpyingEnv'
-)
-
-register(
- id='MiniGrid-Spying-6x6-v0',
- entry_point='gym_minigrid.envs:Spying6x6Env'
-)
-
-register(
- id='MiniGrid-Spying-8x8-v0',
- entry_point='gym_minigrid.envs:Spying8x8Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeable-5x5-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeableEnv'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeable-6x6-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable6x6Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeable-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable8x8Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeableHardPassword-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeableHardPassword8x8Env'
-)
-
-# max len 50
-register(
- id='MiniGrid-Spying50-8x8-v0',
- entry_point='gym_minigrid.envs:Spying508x8Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeable50-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable508x8Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeableHardPassword50-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeableHardPassword508x8Env'
-)
-
-# max len 100
-register(
- id='MiniGrid-SpyingKnowledgeable100-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable1008x8Env'
-)
-
-# max len OneSwitch
-register(
- id='MiniGrid-SpyingKnowledgeable100OneSwitch-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable100OneSwitch8x8Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeable50OneSwitch-5x5-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable50OneSwitch5x5Env'
-)
-
-register(
- id='MiniGrid-SpyingUnknowledgeable50OneSwitch-5x5-v0',
- entry_point='gym_minigrid.envs:SpyingUnknowledgeable50OneSwitch5x5Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeable50-5x5-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable505x5Env'
-)
-
-register(
- id='MiniGrid-SpyingKnowledgeable50TwoSwitches-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable50TwoSwitches8x8Env'
-)
-register(
- id='MiniGrid-SpyingKnowledgeable50TwoSwitchesHard-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable50TwoSwitchesHard8x8Env'
-)
-register(
- id='MiniGrid-SpyingKnowledgeable100TwoSwitches-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable100TwoSwitches8x8Env'
-)
-register(
- id='MiniGrid-SpyingKnowledgeable100TwoSwitchesHard-8x8-v0',
- entry_point='gym_minigrid.envs:SpyingKnowledgeable100TwoSwitchesHard8x8Env'
-)
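
Because the classes above are registered with gym's registry, a typical way to exercise one of these environments is through `gym.make`. A minimal sketch, assuming this `gym_minigrid` fork is importable so the `register()` calls above have run, and using the older gym API that this code targets (the random action below is only a placeholder):

```python
import gym
import gym_minigrid  # noqa: F401 -- importing runs the register() calls

env = gym.make("MiniGrid-SpyingKnowledgeable-8x8-v0")
obs = env.reset()
for _ in range(20):
    # Action layout per SpyingGrammar: [primitive action, template index, thing index]
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()
```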
diff --git a/spaces/gagan3012/IMD/BusterNet/README.md b/spaces/gagan3012/IMD/BusterNet/README.md
deleted file mode 100644
index 338f84a05fe6ae81e3160e5a35a76dc084235b74..0000000000000000000000000000000000000000
--- a/spaces/gagan3012/IMD/BusterNet/README.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# BusterNet: Detecting Copy-Move Image Forgery with Source/Target Localization
-
-### Introduction
-We introduce a novel deep neural architecture for image copy-move forgery detection (CMFD), code-named *BusterNet*. Unlike previous efforts, BusterNet is a pure, end-to-end trainable, deep neural network solution. It features a two-branch architecture followed by a fusion module. The two branches localize potential manipulation regions via visual artifacts and copy-move regions via visual similarities, respectively. To the best of our knowledge, this is the first CMFD algorithm with discernibility to localize source/target regions.
-
-In this repository, we release many paper related things, including
-
-- a pretrained BusterNet model
-- custom layers implemented in keras-tensorflow
-- CASIA-CMFD, CoMoFoD-CMFD, and USCISI-CMFD dataset
-- python notebook to reproduce paper results
-
-### Repo Organization
-The entire repo is organized as follows:
-
-- **Data** - host all datasets
- - *CASIA-CMFD
- - *CoMoFoD-CMFD
- - *USCISI-CMFD
-- **Model** - host all model files
-- **ReadMe.md** - this file
-
-Due to the size limit, we can't host all datasets in this repo. The large ones are hosted externally; datasets marked with * must be downloaded separately. Please refer to each dataset's documentation for more detailed downloading instructions.
-
-### Python/Keras/Tensorflow
-The original model was trained with
-
-- keras.version = 2.0.7
-- tensorflow.version = 1.1.0
-
-we also test the repository with
-
-- keras.version = 2.2.2
-- tensorflow.version = 1.8.0
-
-Though small differences may be found, results are in general consistent.
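
Before loading the released model, it can help to confirm that the environment roughly matches one of the tested combinations above; a minimal check (nothing below is specific to BusterNet):

```python
import keras
import tensorflow as tf

# Tested combinations per this README: keras 2.0.7 / TF 1.1.0 and keras 2.2.2 / TF 1.8.0.
print("keras:", keras.__version__)
print("tensorflow:", tf.__version__)

# Loading the pretrained model would additionally need the repo's custom layers,
# e.g. keras.models.load_model(path, custom_objects={...}); names depend on the repo.
```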
-
-### Citation
-If you use the provided code or data in any publication, please kindly cite the following paper.
-
- @inproceedings{wu2018eccv,
- title={BusterNet: Detecting Image Copy-Move Forgery With Source/Target Localization},
- author={Wu, Yue and AbdAlmageed, Wael and Natarajan, Prem},
- booktitle={European Conference on Computer Vision (ECCV)},
- year={2018},
- organization={Springer},
- }
-
-### Contact
-- Name: Yue Wu
-- Email: yue_wu\[at\]isi.edu
-
-
-### License
-The Software is made available for academic or non-commercial purposes only. The license is for a copy of the program for an unlimited term. Individuals requesting a license for commercial use must pay for a commercial license.
-
- USC Stevens Institute for Innovation
- University of Southern California
- 1150 S. Olive Street, Suite 2300
- Los Angeles, CA 90115, USA
- ATTN: Accounting
-
-DISCLAIMER. USC MAKES NO EXPRESS OR IMPLIED WARRANTIES, EITHER IN FACT OR BY OPERATION OF LAW, BY STATUTE OR OTHERWISE, AND USC SPECIFICALLY AND EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, VALIDITY OF THE SOFTWARE OR ANY OTHER INTELLECTUAL PROPERTY RIGHTS OR NON-INFRINGEMENT OF THE INTELLECTUAL PROPERTY OR OTHER RIGHTS OF ANY THIRD PARTY. SOFTWARE IS MADE AVAILABLE AS-IS. LIMITATION OF LIABILITY. TO THE MAXIMUM EXTENT PERMITTED BY LAW, IN NO EVENT WILL USC BE LIABLE TO ANY USER OF THIS CODE FOR ANY INCIDENTAL, CONSEQUENTIAL, EXEMPLARY OR PUNITIVE DAMAGES OF ANY KIND, LOST GOODWILL, LOST PROFITS, LOST BUSINESS AND/OR ANY INDIRECT ECONOMIC DAMAGES WHATSOEVER, REGARDLESS OF WHETHER SUCH DAMAGES ARISE FROM CLAIMS BASED UPON CONTRACT, NEGLIGENCE, TORT (INCLUDING STRICT LIABILITY OR OTHER LEGAL THEORY), A BREACH OF ANY WARRANTY OR TERM OF THIS AGREEMENT, AND REGARDLESS OF WHETHER USC WAS ADVISED OR HAD REASON TO KNOW OF THE POSSIBILITY OF INCURRING SUCH DAMAGES IN ADVANCE.
-
-For commercial license pricing and annual commercial update and support pricing, please contact:
-
- Rakesh Pandit USC Stevens Institute for Innovation
- University of Southern California
- 1150 S. Olive Street, Suite 2300
- Los Angeles, CA 90115, USA
-
- Tel: +1 213-821-3552
- Fax: +1 213-821-5001
- Email: rakeshvp@usc.edu and ccto: accounting@stevens.usc.edu
diff --git a/spaces/giustiniano/real_estate_classifier/app.py b/spaces/giustiniano/real_estate_classifier/app.py
deleted file mode 100644
index 242da64f31494368d73f84be142ce781c0de529a..0000000000000000000000000000000000000000
--- a/spaces/giustiniano/real_estate_classifier/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# AUTOGENERATED! DO NOT EDIT! File to edit: ../drive/MyDrive/Colab Notebooks/room classifier to app.ipynb.
-
-# %% auto 0
-__all__ = ['learner', 'image', 'label', 'examples', 'intf', 'classify_image']
-
-# %% ../drive/MyDrive/Colab Notebooks/room classifier to app.ipynb 2
-import platform
-
-import fastbook
-import fastai
-from fastai.vision.widgets import *
-from fastai.callback.preds import load_learner
-from fastai.vision.all import *
-
-fastbook.setup_book()
-
-# %% ../drive/MyDrive/Colab Notebooks/room classifier to app.ipynb 3
-if platform.system().lower() == "windows":
- import pathlib
- posix_path = pathlib.PosixPath
- pathlib.PosixPath = pathlib.WindowsPath
-learner = load_learner("room_classifier.pk1")
-if platform.system().lower() == "windows":
- pathlib.PosixPath = posix_path
-
-
-# %% ../drive/MyDrive/Colab Notebooks/room classifier to app.ipynb 14
-def classify_image(img):
- pred, idx, probs = learner.predict(img)
- return dict(zip(learner.dls.vocab, map(float, probs)))
-
-
-
-# %% ../drive/MyDrive/Colab Notebooks/room classifier to app.ipynb 16
-import gradio as gr
-
-image = gr.inputs.Image(shape=(192, 192))
-label = gr.outputs.Label()
-out_pl = widgets.Output()
-
-examples = ["examples/test_bathroom.jfif", "examples/test_living_room.jfif", "examples/test_building.jfif"]
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
-intf.launch()
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/3D Instructor 2.2.7 Keygen.md b/spaces/gotiQspiryo/whisper-ui/examples/3D Instructor 2.2.7 Keygen.md
deleted file mode 100644
index cc7897ae14ad9229f28dd6bce5b4dddfbf97fb76..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/3D Instructor 2.2.7 Keygen.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-With the help of the Traffic Code Compliance System and the instructor's prompts, you will consolidate your knowledge of the rules of the road in different countries of the world. In this section you will find maps of different countries and cities.
-If you want to know how to get to a point of interest, choose the desired country from the list and click on its map.
-You can use the site search or, if you already have the details, enter the address you are looking for.
-
-
-
diff --git "a/spaces/gotiQspiryo/whisper-ui/examples/Eset NOD32 Antivirus 2016 TNOD 1.6.0 Final \302\240Licencias ((TOP)).md" "b/spaces/gotiQspiryo/whisper-ui/examples/Eset NOD32 Antivirus 2016 TNOD 1.6.0 Final \302\240Licencias ((TOP)).md"
deleted file mode 100644
index ab06dc70b043d53fa2f3eb0eefaa2064335ea5f8..0000000000000000000000000000000000000000
--- "a/spaces/gotiQspiryo/whisper-ui/examples/Eset NOD32 Antivirus 2016 TNOD 1.6.0 Final \302\240Licencias ((TOP)).md"
+++ /dev/null
@@ -1,6 +0,0 @@
-
Eset NOD32 Antivirus 2016 TNOD 1.6.0 Final Licencias
11th Physics Digest Pdf Download: A Guide for Students
-
-
If you are looking for a comprehensive and reliable reference book for 11th standard physics, you might want to download the 11th Physics Digest Pdf. This is a book that covers all the topics and concepts in the Maharashtra State Board syllabus, as well as provides model answers, solutions, problems, MCQs, diagrams, formulae and more. In this article, we will tell you why you should download the 11th Physics Digest Pdf and how it can help you prepare for your exams.
There are many benefits of downloading the 11th Physics Digest Pdf for your studies. Here are some of them:
-
- It is based on the new textbook and the latest question paper pattern for Standard XII.
- It contains model answers and solutions to all the questions and problems given in the Board's textbook.
- It also contains additional graded and varied questions with answers and solved problems that cover every concept in the textbook.
- It has neat, fully-labelled, authentic and easily-reproducible diagrams in two colours.
- It has formulae at a glance and memory map for instant revision.
- It has WWW links to authentic study material for interesting online learning.
- It is very useful to understand the subject well and to prepare thoroughly for HSC Board Examination as well as other competitive examinations like NEET, JEE MAIN, MHT-CET, etc.
-
How to Download the 11th Physics Digest Pdf?
-
-
If you are interested in downloading the 11th Physics Digest Pdf, you can follow these simple steps:
-
1. Go to any of the web search results that offer the 11th Physics Digest Pdf download link.
2. Click on the link and you will be redirected to a page where you can preview or download the pdf file.
3. You may need to enter your email address or phone number to access the download link.
4. You may also need to complete a captcha or a survey to verify that you are not a robot.
5. Once you have completed these steps, you can download the 11th Physics Digest Pdf file to your device.
6. You can also print it out or save it to your cloud storage for future reference.
-
What are the Topics Covered in the 11th Physics Digest Pdf?
-
-
The 11th Physics Digest Pdf covers all the topics and concepts that are included in the Maharashtra State Board syllabus for Class 11 Physics. Here are some of the topics that you will find in the book:
-
- Units and Measurements
- Mathematical Methods
- Motion in a Plane
- Laws of Motion
- Gravitation
- Mechanical Properties of Solids
- Thermal Properties of Matter
- Sound
- Optics
- Electrostatics
- Electric Current Through Conductors
- Magnetism
- Electromagnetic Waves and Communication System
- Semiconductors
-
The book also provides detailed explanations, examples, illustrations, exercises and solutions for each topic. You can use the book as a guide to learn physics concepts, revise them before exams, practice problems and test your knowledge.
-
-
Conclusion
-
-
The 11th Physics Digest Pdf is a valuable resource for students who want to excel in physics. It is a complete reference book that covers all the topics and concepts in the Maharashtra State Board syllabus. It also provides model answers, solutions, problems, MCQs, diagrams, formulae and more. You can download the book from any of the web search results that offer it. You can also use it as a guide to learn physics concepts, revise them before exams, practice problems and test your knowledge. We hope this article has helped you understand why you should download the 11th Physics Digest Pdf and how it can help you prepare for your exams.
-
How to Use the 11th Physics Digest Pdf for Your Studies?
-
-
The 11th Physics Digest Pdf is not just a book that you can download and read. It is also a book that you can use for your studies in various ways. Here are some of the ways that you can use the 11th Physics Digest Pdf for your studies:
-
- You can use it as a reference book to learn physics concepts and theories from the explanations, examples and illustrations provided in the book.
- You can use it as a revision book to revise physics concepts and formulae before exams from the formulae at a glance and memory map sections.
- You can use it as a practice book to practice physics problems and questions from the exercises, solutions, problems and MCQs given in the book.
- You can use it as a test book to test your physics knowledge and skills from the model answers, solutions, problems and MCQs given in the book.
- You can use it as a resource book to access authentic study material from the WWW links given in the book.
-
What are the Features of the 11th Physics Digest Pdf?
-
-
The 11th Physics Digest Pdf is not just a book that covers all the topics and concepts in the Maharashtra State Board syllabus. It is also a book that has many features that make it a valuable resource for students. Here are some of the features of the 11th Physics Digest Pdf:
-
- It is based on the new textbook and the latest question paper pattern for Standard XII.
- It is written by experts in physics who have years of experience in teaching and writing.
- It is designed to suit the needs and abilities of students of different levels of understanding and learning styles.
- It is updated with the latest information and developments in physics.
- It is easy to read and understand with simple language and clear presentation.
- It is attractive and appealing with colourful diagrams, pictures and graphics.
- It is interactive and engaging with stimulating in-text questions, information, tips and tricks.
-
Conclusion
-
-
The 11th Physics Digest Pdf is a must-have book for students who want to excel in physics. It is a complete reference book that covers all the topics and concepts in the Maharashtra State Board syllabus. It also provides model answers, solutions, problems, MCQs, diagrams, formulae and more. You can download the book from any of the web search results that offer it. You can also use it as a guide to learn physics concepts, revise them before exams, practice problems and test your knowledge. We hope this article has helped you understand why you should download the 11th Physics Digest Pdf and how it can help you prepare for your exams.
-
What are the Advantages of the 11th Physics Digest Pdf over Other Books?
-
-
The 11th Physics Digest Pdf is not just another book that you can find in the market. It is a book that has many advantages over other books that claim to offer similar content. Here are some of the advantages of the 11th Physics Digest Pdf over other books:
-
- It is prepared by Navneet Education Limited, a trusted and reputed name in the field of education publishing.
- It is updated and revised regularly to keep up with the changes and developments in the syllabus and the question paper pattern.
- It is comprehensive and exhaustive, covering all the topics and concepts in the syllabus in detail.
- It is accurate and authentic, providing correct and verified information and data.
- It is user-friendly and student-oriented, providing easy and clear explanations, examples and illustrations.
- It is affordable and accessible, offering a high-quality book at a reasonable price and an easy download option.
-
How to Study with the 11th Physics Digest Pdf?
-
-
The 11th Physics Digest Pdf is not just a book that you can download and use. It is also a book that you can study with to improve your physics knowledge and skills. Here are some tips on how to study with the 11th Physics Digest Pdf:
-
- Read the book thoroughly and understand the physics concepts and theories from the explanations, examples and illustrations provided in the book.
- Revise the book regularly and recall the physics concepts and formulae from the formulae at a glance and memory map sections.
- Practice with the book frequently and solve the physics problems and questions from the exercises, solutions, problems and MCQs given in the book.
- Test yourself periodically and check your physics knowledge and skills from the model answers, solutions, problems and MCQs given in the book.
- Explore more resources online and access authentic study material from the WWW links given in the book.
-
Conclusion
-
-
The 11th Physics Digest Pdf is a must-have book for students who want to excel in physics. It is a complete reference book that covers all the topics and concepts in the Maharashtra State Board syllabus. It also provides model answers, solutions, problems, MCQs, diagrams, formulae and more. You can download the book from any of the web search results that offer it. You can also use it as a guide to learn physics concepts, revise them before exams, practice problems and test your knowledge. We hope this article has helped you understand why you should download the 11th Physics Digest Pdf and how it can help you prepare for your exams.
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mimaki RasterLink Pro 5 IP Crack WORK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mimaki RasterLink Pro 5 IP Crack WORK.md
deleted file mode 100644
index ea3245a024b360c6af77bbf4ac2bdf7068cbae70..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mimaki RasterLink Pro 5 IP Crack WORK.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
How to Boost Your Printing Performance with Mimaki RasterLink Pro 5 IP Crack
-
-
Do you want to take your printing business to the next level? Do you want to produce high-quality prints with smooth gradations and accurate colors? Do you want to save time and money on color matching and software updates? If you answered yes to any of these questions, then you need Mimaki RasterLink Pro 5 IP crack. This is a software that allows you to control your Mimaki inkjet printers with ease and precision. In this article, we will tell you what Mimaki RasterLink Pro 5 IP crack is, how it works, and where to get it.
-
-
What is Mimaki RasterLink Pro 5 IP Crack?
-
-
Mimaki RasterLink Pro 5 IP crack is a cracked version of the original Mimaki RasterLink Pro 5 IP software, which is a RIP (raster image processor) software that converts digital images into printable data for Mimaki inkjet printers. By using Mimaki RasterLink Pro 5 IP crack, you can access all the features and functions of the original software without paying for the license fee or activation code.
Mimaki RasterLink Pro 5 IP crack works by providing you with several features and benefits that can improve your printing performance. Some of them are:
-
- 16 bit rendering: This feature generates smoother gradations and finer details for outstanding print quality. 16 bit rendering maximizes the output potential of all your Mimaki inkjet printers.
- DIC color collection: This feature enables you to produce desired colors according to color collections printed on the media used for the actual production. DIC color collection reduces the cost and time for color matching.
- Automatic spot color conversion: This feature automatically converts spot colors to CMYK values by Adobe Illustrator-compliant DIC color collection. This ensures stable and precise color management.
- Web update function: This feature allows you to easily update the software and download profiles from the internet.
-
Where to Get Mimaki RasterLink Pro 5 IP Crack?
-
-
To get Mimaki RasterLink Pro 5 IP crack, you need to follow these steps:
-
1. Download the Mimaki RasterLink Pro 5 IP crack file from here.
2. Extract the file using WinRAR or any other extraction tool.
3. Run the setup.exe file and follow the instructions on the screen.
4. Copy the crack file from the extracted folder and paste it into the installation directory of Mimaki RasterLink Pro 5 IP.
5. Launch the software and enjoy!
-
Note: You may need to disable your antivirus or firewall before installing or running Mimaki RasterLink Pro 5 IP crack, as it may be detected as a virus or malware by some security programs.
-
-
Conclusion
-
-
Mimaki RasterLink Pro 5 IP crack is a powerful and versatile software that can help you boost your printing performance with your Mimaki inkjet printers. It has several features and benefits that can improve your print quality and efficiency. It is also easy to get and install, as long as you follow the steps above. If you want to try Mimaki RasterLink Pro 5 IP crack, you can get it from here.
-
What are the Alternatives to Mimaki RasterLink Pro 5 IP Crack?
-
-
Mimaki RasterLink Pro 5 IP crack is not the only option for controlling your Mimaki inkjet printers. There are some alternatives that you can consider, depending on your needs and preferences. Some of them are:
-
- Mimaki RasterLink Pro 5 IP original software: This is the official software from Mimaki that requires a license fee and activation code. It has the same features and functions as Mimaki RasterLink Pro 5 IP crack, but it also comes with technical support and customer service from Mimaki. It is also more secure and reliable than Mimaki RasterLink Pro 5 IP crack.
- Mimaki RasterLink Pro 6: This is the latest version of Mimaki's RIP software that has more advanced features and functions than Mimaki RasterLink Pro 5 IP. It supports more printer models and media types, and it has a new user interface that is more intuitive and user-friendly. It also has a new color management system that can produce more accurate and consistent colors.
- Other RIP software: There are other RIP software that can work with Mimaki inkjet printers, such as Onyx, Caldera, Wasatch, etc. They have different features and functions that may suit your specific needs and preferences. However, they may not be fully compatible with Mimaki inkjet printers, and they may require additional drivers or profiles to work properly.
-
Conclusion
-
-
Mimaki RasterLink Pro 5 IP crack is a software that can help you boost your printing performance with your Mimaki inkjet printers. It has many features and benefits that can improve your print quality and efficiency. It is also easy to get and install, as long as you follow the steps above. However, it also has some disadvantages that you should be aware of, such as violating the intellectual property rights of Mimaki, exposing your computer to viruses or malware, encountering some errors or bugs, and not receiving any technical support or customer service from Mimaki. If you want to try Mimaki RasterLink Pro 5 IP crack, you can get it from here. If you want to explore other alternatives to Mimaki RasterLink Pro 5 IP crack, you can check out Mimaki RasterLink Pro 5 IP original software, Mimaki RasterLink Pro 6, or other RIP software.
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/C3520 Flash Loader 75 4 CSC V02 Citrus Lite [TOP].md b/spaces/inreVtussa/clothingai/Examples/C3520 Flash Loader 75 4 CSC V02 Citrus Lite [TOP].md
deleted file mode 100644
index 0636ffd82acd6bd53a6cb03151fa875c2e6382cf..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/C3520 Flash Loader 75 4 CSC V02 Citrus Lite [TOP].md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
-```python
-import usesless
-
-message_id = ""
-token = usesless.Account.create(logging=True)  # create a fresh account token (or paste an existing token string here)
-while True:
- prompt = input("Question: ")
- if prompt == "!stop":
- break
-
- req = usesless.Completion.create(prompt=prompt, parentMessageId=message_id, token=token)
-
- print(f"Answer: {req['text']}")
- message_id = req["id"]
-```
diff --git a/spaces/kaidorespy/CompVis-stable-diffusion-v1-4/app.py b/spaces/kaidorespy/CompVis-stable-diffusion-v1-4/app.py
deleted file mode 100644
index e1e1025c8f06010197c50917ac9dd1ddeaf7e5aa..0000000000000000000000000000000000000000
--- a/spaces/kaidorespy/CompVis-stable-diffusion-v1-4/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/CompVis/stable-diffusion-v1-4").launch()
\ No newline at end of file
diff --git a/spaces/kananj/Daytona-Beach-Ambassador/daytona.py b/spaces/kananj/Daytona-Beach-Ambassador/daytona.py
deleted file mode 100644
index 5b62190d5f7228c022342a1bd143874e456570cb..0000000000000000000000000000000000000000
--- a/spaces/kananj/Daytona-Beach-Ambassador/daytona.py
+++ /dev/null
@@ -1,291 +0,0 @@
-DAYTONA_smaller = """\
-You are a helpful, respectful and honest ambassador for Daytona Beach Florida. Answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
-If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Only answer questions about Daytona Beach Florida. Present Daytona Beach Florida as a fun, exciting and family friendly place to visit. Encourage the user to visit Daytona Beach. Start the conversation by introducing yourself as an ambassador for Daytona Beach. See if they have any specific questions, or if not, ask them if they would like to hear some interesting details about Daytona Beach.\
-
-The following is the knowledge base about Daytona Beach Florida:\
-
-Daytona Beach, or simply Daytona, is a coastal resort city in east-central Florida. Located on the eastern edge of Volusia County near the Atlantic coastline, its population was 72,647 at the 2020 census. Daytona Beach is approximately 42 miles (67.6 km) northeast of Orlando, 90 miles (144.8 km) southeast of Jacksonville, and 231 miles (371.8 km) northwest of Miami. It is part of the Deltona–Daytona Beach–Ormond Beach metropolitan area which has a population of about 600,000 and is also a principal city of the Fun Coast region of Florida.
-
-Daytona Beach is historically known for its beach, where the hard-packed sand allows motorized vehicles on the beach in restricted areas.[5] This hard-packed sand made Daytona Beach a mecca for motorsports, and the old Daytona Beach and Road Course hosted races for over 50 years. This was replaced in 1959 by Daytona International Speedway. The city is also the headquarters of NASCAR.
-
-Daytona Beach hosts large groups of out-of-towners during the year, who visit the city for various events, notably Speedweeks in early February when over 200,000 NASCAR fans come to attend the season-opening Daytona 500. Other events include the NASCAR Coke Zero Sugar 400 race in August, Bike Week in early March, Biketoberfest in late October, and the 24 Hours of Daytona endurance race in January.
-
-Climate chart
-Month / Precipitation totals in inches / Average max temperature in F / Average min. temperature in F
-
-J / 3.1 / 68 / 47
-F / 2.7 / 71 / 50
-M / 3.8 / 75 / 54
-A / 2.5 / 79 / 59
-M / 3.3 / 85 / 65
-J / 5.7 / 88 / 71
-J / 5.2 / 90 / 73
-A / 6.1 / 90 / 73
-S / 6.6 / 87 / 72
-O / 4.5 / 82 / 66
-N / 3.0 / 76 / 57
-D / 2.7 / 70 / 51
-
-Daytona Beach is located at 29°12′N 81°2′W (29.2073, −81.0379). According to the United States Census Bureau, the city has a total area of 64.93 sq mi (168 km2). of which 58.68 sq mi (152 km2) is land and 6.25 sq mi (16 km2) is water, with water thus comprising 9.6% of the total area.
-
-The city of Daytona Beach is split in two by the Halifax River lagoon, part of the Intracoastal Waterway, and sits on the Atlantic Ocean. It is bordered on the north by Holly Hill and Ormond Beach and on the south by Daytona Beach Shores, South Daytona and Port Orange.
-
-Climate
-Daytona Beach has a humid subtropical climate (Köppen climate classification Cfa), which is typical of the Gulf and South Atlantic states. As is typical of much of Florida, there are two seasons in Daytona Beach; the warmer, wetter season (late May through October) and the cooler and drier season (November through April).
-
-In summer, temperatures are relatively stable and there is an average of only 8 days annually with a maximum at or above 95 °F (35 °C); the last 100 °F (38 °C) reading was seen on August 2, 1999. The Bermuda High pumps hot and unstable tropical air from the Bahamas and Gulf of Mexico, resulting in daily, but brief thundershowers. This results in the months of June through September accounting for a majority of the average annual rainfall of 51.25 in (1,302 mm).
-
-In winter, Daytona Beach has weather conditions typical of other cities on the Florida peninsula. On average, the coolest month is January, with a normal monthly mean temperature of 58.8 °F (14.9 °C). It is the only month where the average high temperature falls below 70.0 °F (21.1 °C). Occasional cold fronts can bring freezes, which from 1991 to 2020 were seen on an average of 3.0 nights annually; however, minima below 25 °F (−4 °C) are very rare, and were last seen on December 28, 2010. Like much of Florida, Daytona Beach often can be very dry in late winter and early spring, and brush fires and water restrictions can be an issue.
-
-Official record temperatures range from 15 °F (−9 °C) on January 21, 1985, up to 102 °F (39 °C) on July 15, 1981, and June 24, 1944; the record cold daily maximum is 33 °F (1 °C) on Christmas day 1983, while, conversely, the record warm daily minimum is 82 °F (28 °C) on September 1 and 10–11, 2008 and August 25, 2020. Annual rainfall has ranged from 31.36 in (797 mm) in 2006 and 1956, up to 79.29 in (2,014 mm) in 1953. The most rainfall to have occurred in a calendar day was 12.85 in (326 mm) on October 10, 1924, which contributed to 24.82 in (630 mm) of rain that fell that month, the most of any calendar month.
-
-
-Culture
-
-Museum of Arts and Sciences
-The Museum of Arts and Sciences is the primary cultural facility for Daytona Beach and Volusia County. Other museums located in the city include the Southeast Museum of Photography and the Halifax Historical Museum. The Museum of Arts and Sciences is actually a collection of museums and galleries and includes the Klancke Environmental Complex, the Cuban Museum, Root Family Museum featuring one of the largest Coca-Cola collections in the world, the Dow American Gallery and the Bouchelle Center for Decorative Arts which together form what is probably one of the finest collections of furniture and decorative arts in the Southeast. It also includes the Cici and Hyatt Brown Museum of Art, which houses the largest collection of Florida art in the world. There are also changing exhibitions and a children's science center opened in 2008. Since 1952, the non-profit Daytona Beach Symphony Society has sponsored performances by U.S. and international orchestras, opera and dance companies each season at the Peabody Auditorium.[28]
-
-Driving on the packed sand at Daytona Beach
-Daytona Beach has over 23 miles (37 km) of white sandy beaches open to pedestrians without time restrictions.[29] Cars can be driven on some of the beaches during daylight hours.[29] There are more than ten waterfront parks in Daytona Beach.[30] Thong bikinis are prohibited in all areas of Daytona Beach,[31] with a penalty of up to $500 and 60 days in jail.[32]
-
-Sports
-
-The start of the 2015 Daytona 500 at Daytona International Speedway
-
-Daytona Beach Golf Course, South Course
-Daytona Beach is home to the headquarters of the LPGA, NASCAR, IMSA, International Speedway Corporation, in Florida.
-
-Motorsports
-The Daytona International Speedway hosts the annual 24 Hours of Daytona (Rolex 24 at Daytona) and Daytona 500 races, among other events.
-
-Baseball
-In addition to motorsports, Daytona is also the home of the Daytona Tortugas, a minor league baseball team of the Low-A Southeast who play at Jackie Robinson Ballpark; it was established in 1993 and currently has 6 championships.
-
-Golf
-There are a number of golf courses in Daytona Beach.
-
-Daytona Beach Golf Course: Two courses, North and South Courses designed in 1922.
-LPGA International: The golf club offers two 18-hole courses, Hills and Jones (originally Legends and Champions).
-Special events
-The city attracts over 8 million tourists each year. Special events that draw visitors to Daytona Beach include:
-
-Speedweeks (Daytona 500 NASCAR race, Rolex 24 sports car race, and others)
-Coke Zero Sugar 400, NASCAR race held on the first Saturday of July (formerly called the Pepsi 400 and the Firecracker 400)
-Daytona Beach Bike Week Daytona 200 motorcycle races, bike shows and biker reunion in March
-Spring break (date varies, usually the first and second week of March)
-During motorcycle events (Bike Week and Biketoberfest), several hundred thousand bikers from all over the world visit the greater Daytona Beach area. The city is also often associated with spring break, though the efforts of the local government to discourage rowdiness, combined with the rise of other spring break destinations, have affected Daytona's preeminence as a spring break destination. It is the destination of Dayton 2 Daytona, an annual event that draws over 3,000 University of Dayton college students since 1977.
-Shopping
-Volusia Mall, 1700 West International Speedway Blvd. The largest shopping mall in Daytona Beach. Anchored by Sears, JCPenney, Macy's, and Dillard's.
-Ocean Walk Shoppes, 250 North Atlantic Ave. Open-air shopping center, located in the heart of the beach area.
-Tanger Outlets, located in the southeast quadrant of Interstate 95 and LPGA Blvd. The 380,000 square feet (35,000 m2) retail center was completed in November 2016.
-"""
-
-DAYTONA_mid = """\
-
-You are a helpful, respectful and honest ambassador for Daytona Beach Florida. Answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
-If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Only answer questions about Daytona Beach Florida. Present Daytona Beach Florida as a fun, exciting and family friendly place to visit. Encourage the user to visit Daytona Beach. Start the conversation by introducing yourself as an ambassador for Daytona Beach. See if they have any specific questions, or if not, ask them if they would like to hear some interesting details about Daytona Beach.\
-
-The following is the knowledge base about Daytona Beach Florida:\
-
-Daytona Beach, or simply Daytona, is a coastal resort city in east-central Florida. Located on the eastern edge of Volusia County near the Atlantic coastline, its population was 72,647 at the 2020 census. Daytona Beach is approximately 42 miles (67.6 km) northeast of Orlando, 90 miles (144.8 km) southeast of Jacksonville, and 231 miles (371.8 km) northwest of Miami. It is part of the Deltona–Daytona Beach–Ormond Beach metropolitan area which has a population of about 600,000 and is also a principal city of the Fun Coast region of Florida.
-
-Daytona Beach is historically known for its beach, where the hard-packed sand allows motorized vehicles on the beach in restricted areas.[5] This hard-packed sand made Daytona Beach a mecca for motorsports, and the old Daytona Beach and Road Course hosted races for over 50 years. This was replaced in 1959 by Daytona International Speedway. The city is also the headquarters of NASCAR.
-
-Daytona Beach hosts large groups of out-of-towners during the year, who visit the city for various events, notably Speedweeks in early February when over 200,000 NASCAR fans come to attend the season-opening Daytona 500. Other events include the NASCAR Coke Zero Sugar 400 race in August, Bike Week in early March, Biketoberfest in late October, and the 24 Hours of Daytona endurance race in January.
-
-Climate chart
-Month / Precipitation totals in inches / Average max temperature in F / Average min. temperature in F
-
-J / 3.1 / 68 / 47
-F / 2.7 / 71 / 50
-M / 3.8 / 75 / 54
-A / 2.5 / 79 / 59
-M / 3.3 / 85 / 65
-J / 5.7 / 88 / 71
-J / 5.2 / 90 / 73
-A / 6.1 / 90 / 73
-S / 6.6 / 87 / 72
-O / 4.5 / 82 / 66
-N / 3.0 / 76 / 57
-D / 2.7 / 70 / 51
-
-Daytona Beach is located at 29°12′N 81°2′W (29.2073, −81.0379). According to the United States Census Bureau, the city has a total area of 64.93 sq mi (168 km2). of which 58.68 sq mi (152 km2) is land and 6.25 sq mi (16 km2) is water, with water thus comprising 9.6% of the total area.
-
-The city of Daytona Beach is split in two by the Halifax River lagoon, part of the Intracoastal Waterway, and sits on the Atlantic Ocean. It is bordered on the north by Holly Hill and Ormond Beach and on the south by Daytona Beach Shores, South Daytona and Port Orange.
-
-Climate
-Daytona Beach has a humid subtropical climate (Köppen climate classification Cfa), which is typical of the Gulf and South Atlantic states. As is typical of much of Florida, there are two seasons in Daytona Beach; the warmer, wetter season (late May through October) and the cooler and drier season (November through April).
-
-In summer, temperatures are relatively stable and there is an average of only 8 days annually with a maximum at or above 95 °F (35 °C); the last 100 °F (38 °C) reading was seen on August 2, 1999. The Bermuda High pumps hot and unstable tropical air from the Bahamas and Gulf of Mexico, resulting in daily, but brief thundershowers. This results in the months of June through September accounting for a majority of the average annual rainfall of 51.25 in (1,302 mm).
-
-In winter, Daytona Beach has weather conditions typical of other cities on the Florida peninsula. On average, the coolest month is January, with a normal monthly mean temperature of 58.8 °F (14.9 °C). It is the only month where the average high temperature falls below 70.0 °F (21.1 °C). Occasional cold fronts can bring freezes, which from 1991 to 2020 were seen on an average of 3.0 nights annually; however, minima below 25 °F (−4 °C) are very rare, and were last seen on December 28, 2010. Like much of Florida, Daytona Beach often can be very dry in late winter and early spring, and brush fires and water restrictions can be an issue.
-
-Official record temperatures range from 15 °F (−9 °C) on January 21, 1985, up to 102 °F (39 °C) on July 15, 1981, and June 24, 1944; the record cold daily maximum is 33 °F (1 °C) on Christmas day 1983, while, conversely, the record warm daily minimum is 82 °F (28 °C) on September 1 and 10–11, 2008 and August 25, 2020. Annual rainfall has ranged from 31.36 in (797 mm) in 2006 and 1956, up to 79.29 in (2,014 mm) in 1953. The most rainfall to have occurred in a calendar day was 12.85 in (326 mm) on October 10, 1924, which contributed to 24.82 in (630 mm) of rain that fell that month, the most of any calendar month.
-
-
-Culture
-
-Museum of Arts and Sciences
-The Museum of Arts and Sciences is the primary cultural facility for Daytona Beach and Volusia County. Other museums in the city include the Southeast Museum of Photography and the Halifax Historical Museum. The Museum of Arts and Sciences is a collection of museums and galleries that includes the Klancke Environmental Complex, the Cuban Museum, the Root Family Museum (featuring one of the largest Coca-Cola collections in the world), and the Dow American Gallery and the Bouchelle Center for Decorative Arts, which together form what is arguably one of the finest collections of furniture and decorative arts in the Southeast. It also includes the Cici and Hyatt Brown Museum of Art, which houses the largest collection of Florida art in the world. The museum also offers changing exhibitions and a children's science center that opened in 2008. Since 1952, the non-profit Daytona Beach Symphony Society has sponsored performances by U.S. and international orchestras and opera and dance companies each season at the Peabody Auditorium.[28]
-
-Driving on the packed sand at Daytona Beach
-Daytona Beach has over 23 miles (37 km) of white sandy beaches open to pedestrians without time restrictions.[29] Cars can be driven on some of the beaches during daylight hours.[29] There are more than ten waterfront parks in Daytona Beach.[30] Thong bikinis are prohibited in all areas of Daytona Beach,[31] with a penalty of up to $500 and 60 days in jail.[32]
-
-Sports
-
-The start of the 2015 Daytona 500 at Daytona International Speedway
-
-Daytona Beach Golf Course, South Course
-Daytona Beach is home to the headquarters of the LPGA, NASCAR, IMSA, and International Speedway Corporation.
-
-Motorsports
-The Daytona International Speedway hosts the annual 24 Hours of Daytona (Rolex 24 at Daytona) and Daytona 500 races, among other events.
-
-Baseball
-In addition to motorsports, Daytona is also the home of the Daytona Tortugas, a minor league baseball team in the Low-A Southeast that plays at Jackie Robinson Ballpark; the franchise was established in 1993 and has won six championships.
-
-Golf
-There are a number of golf courses in Daytona Beach.
-
-Daytona Beach Golf Course: Two courses, North and South Courses designed in 1922.
-LPGA International: The golf club offers two 18-hole courses, Hills and Jones (originally Legends and Champions).
-Special events
-The city attracts over 8 million tourists each year. Special events that draw visitors to Daytona Beach include:
-
-Speedweeks (Daytona 500 NASCAR race, Rolex 24 sports car race, and others)
-Coke Zero Sugar 400, NASCAR race held on the first Saturday of July (formerly called the Pepsi 400 and the Firecracker 400)
-Daytona Beach Bike Week (Daytona 200 motorcycle races, bike shows, and biker reunion) in March
-Spring break (date varies, usually the first and second week of March)
-During motorcycle events (Bike Week and Biketoberfest), several hundred thousand bikers from all over the world visit the greater Daytona Beach area. The city is also often associated with spring break, though local government efforts to discourage rowdiness, combined with the rise of other spring break destinations, have eroded Daytona's preeminence as a spring break destination. The city is also the destination of Dayton 2 Daytona, an annual event, held since 1977, that draws over 3,000 University of Dayton students.
-Shopping
-Volusia Mall, 1700 West International Speedway Blvd. The largest shopping mall in Daytona Beach. Anchored by Sears, JCPenney, Macy's, and Dillard's.
-Ocean Walk Shoppes, 250 North Atlantic Ave. Open-air shopping center, located in the heart of the beach area.
-Tanger Outlets, located in the southeast quadrant of Interstate 95 and LPGA Blvd. The 380,000 square feet (35,000 m2) retail center was completed in November 2016.
-Transportation
-Airports
-
-Aerial view of Daytona Beach International Airport.
-Passenger airline service is provided at Daytona Beach International Airport (DAB), which is centrally located within the city adjacent to Daytona International Speedway. The site was first used as an airport with terminals constructed in 1952 and 1958. The present facility was built in 1992 at a cost of $46 million and includes both a domestic terminal and an international terminal. Despite the new facilities, DAB has had difficulty attracting and retaining carriers; Continental Airlines, AirTran Airways, and United Airlines discontinued flights to Daytona in 2007 and 2008.[37] LTU and American Airlines also served Daytona Beach during the 1980s and 1990s, with their flights ending in 1994 and 1997.
-
-Current passenger airlines serving DAB include Delta Air Lines (with nonstop service to Atlanta) and American Airlines (with non-stop service to Charlotte). Both carriers offer connecting service from those cities to destinations worldwide. International flights from DAB fly to destinations in the Bahamas through air taxi and charter services Airgate Aviation and IslandPass; non-stop flights are available from DAB to Marsh Harbour, Treasure Cay, and North Eleuthera. Sunwing Airlines also operates seasonal flights from Toronto Pearson International Airport.[38] DAB is also heavily used for general aviation, largely due to Embry–Riddle Aeronautical University, whose campus is located at the airport.
-
-Larger airports nearby are Orlando International Airport and Jacksonville International Airport, each of which is approximately 90 minutes away.
-
-Buses
-
-The Volusia County Parking Garage in Daytona Beach provides a place for visitors to park and walk around.
-Daytona Beach is served by Greyhound Bus Lines, which has a terminal located at 138 South Ridgewood Avenue (US 1). The Greyhound routes from Daytona Beach connect with hubs in Jacksonville and Orlando.
-Votran is the local bus service provided by Volusia County.
-Automobiles
-Daytona Beach is easily accessible by I-95 that runs north and south and I-4 connecting Daytona Beach with Orlando and Tampa. US 1 (Ridgewood Avenue) also passes north–south through Daytona Beach. US 92 (International Speedway Boulevard) runs east–west through Daytona Beach. SR A1A is a scenic north–south route along the beach.
-
-The Volusia County Parking Garage is located at 701 Earl Street at North Atlantic Avenue (SR A1A). The garage is strategically located next to the Ocean Center and Daytona Lagoon, and across the street from the Hilton Hotel and Ocean Walk Shoppes. Over one thousand parking spaces are available inside the garage, which also houses an intermodal transfer station for Votran.
-
-Bridges
-There are four bridges over the Halifax River (and Intracoastal Waterway) at Daytona Beach. They include (starting from furthest downstream) the Veterans Memorial Bridge (which carries CR 4050 traffic), the Broadway Bridge (which carries US 92 traffic), the Main Street Bridge (which carries CR 4040 traffic), and the Seabreeze Bridge (which carries SR 430 traffic). All four bridges are toll-free.[39] In June 2016, the Veterans Memorial Bridge was closed as part of a three-year project to demolish the drawbridge and replace it with a high-span bridge.[40]
-
-Veterans Memorial Bridge
-Broadway Bridge
-Main Street Bridge
-Seabreeze Bridge
-
-Rail
-
-Daytona Beach railroad station, ca. 1926
-Passenger railroad service to Daytona Beach was established no later than 1889 by the Jacksonville, St. Augustine and Halifax River Railway, predecessor of the Florida East Coast Railroad (FEC). Long-distance trains such as the City of Miami and the South Wind (both from Chicago), East Coast Champion (from New York City) and the Havana Special (New York City) made stops at Daytona Beach.[41][42][43] Long distance routes were diverted to Atlantic Coast Line Railroad and Seaboard Air Line Railroad routes on the Florida interior south of the Jacksonville Union Station, following the beginning of a labor dispute on the FEC in 1963.[44][45] Passenger trains continued calling at Daytona Beach until July 31, 1968, when the FEC terminated passenger operations system-wide.[46] The FEC currently operates freight trains through Daytona Beach.
-
-Daytona Beach is served by Amtrak by way of a Thruway Motorcoach connection between the beachside and Amtrak's DeLand Station, 28 miles (45 km) to the west. There, the service connects northbound with train 92, the Silver Star, and train 98, the Silver Meteor. Southbound connections from Daytona Beach are limited to Silver Meteor southbound train 97. The DeLand – Daytona Beach service is Amtrak's only Florida Thruway Motorcoach route provided by a taxi-cab, rather than a bus.
-
-Points of interest
-National Historic Places
-
-Tarragona Arch
-
-The beach in Daytona Beach near the border with Ormond Beach
-The Abbey
-Mary McLeod Bethune Home
-Bethune–Cookman College Historic District
-Delos A. Blodgett House
-City Island
-City Island Ball Park
-Cypress Street Elementary School
-Daytona Beach Bandshell and Oceanfront Park Complex
-Daytona Beach Surfside Historic District
-Bartholomew J. Donnelly House
-El Pino Parque Historic District
-Amos Kling House
-S.H. Kress and Co. Building
-Merchants Bank Building
-Olds Hall
-Rogers House
-Seabreeze Historic District
-Seybold Baking Company Factory
-South Beach Street Historic District
-South Peninsula Historic District
-South Ridgewood Elementary School
-Southwest Daytona Beach Black Heritage District
-Tarragona Tower
-Howard Thurman House
-Tourist Church
-US Post Office
-White Hall
-S. Cornelia Young Memorial Library
-Other points of interest
-Daytona 500 Experience
-Daytona International Speedway
-Daytona Beach Boardwalk
-Daytona Lagoon Water Park
-Halifax Historical Museum
-Jackie Robinson Ballpark
-Main Street Pier
-Mary McLeod Bethune Performing Arts Center and Visual Arts Gallery
-Museum of Arts and Sciences
-News Journal Center
-Southeast Museum of Photography
-The Ocean Center
-List of Registered Historic Buildings in Daytona Beach, Florida
-
-
-Notable people
-Duane Allman and Gregg Allman, musicians
-Perry Baker, rugby player for U.S. national team
-Fulgencio Batista, 19th President of Cuba
-Pete Carr, musician
-Vince Carter, basketball player, 8-time NBA All-Star
-Ed Charles, former Major League Baseball player
-Bill France Sr., founder of NASCAR
-Roland G. Fryer Jr., economist; in 2007, at age 30, he became the youngest African-American to be given tenure at Harvard University
-Lee H. Hamilton, former Indiana U.S. congressman
-Danielle Harris, actress
-Carrenza Howard, baseball pitcher
-Zora Neale Hurston, writer, anthropologist
-Alex Kinsey, singer
-E. J. Kuale, professional football player
-Gary Russell Libby, art historian, curator, and former director of Museum of Arts and Sciences
-Ryan Lochte, swimmer, winner of 12 Olympic medals including six gold
-Martin Mayhew, pro football player and executive
-Mary McLeod Bethune, educator and civil rights activist
-Walter M. Miller Jr., author of A Canticle for Leibowitz
-Jane Morgan, singer[48]
-Matthew Tyler Musto, musician
-Kevin Nash, professional WWE wrestler
-No Kum-sok, North Korean defector
-Ransom Eli Olds, automobile pioneer
-Pavlina Osta, radio host
-Josef Papp, engineer
-Kitty Pryde, rapper
-Glen "Fireball" Roberts, NASCAR driver
-Jackie Robinson, professional baseball player
-Bob Ross, artist and television host
-Galen Seaman, lawyer, Wisconsin State Assemblyman, and mayor of Daytona Beach
-David Sholtz, 26th governor of Florida
-Mike Skinner, NASCAR driver
-Marc-Aurèle de Foy Suzor-Coté, painter
-Howard Thurman, author and theologian
-Denzel Washington, actor
-Eric Weems, professional football player
-T. K. Wetherell, president of Florida State University
-Robert Wright, musical theater writer
-Aileen Wuornos, serial killer executed in 2002
-Smokey Yunick, mechanic and motor racing innovator
-"""
\ No newline at end of file
diff --git a/spaces/kcagle/AutoGPT/autogpt/workspace.py b/spaces/kcagle/AutoGPT/autogpt/workspace.py
deleted file mode 100644
index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000
--- a/spaces/kcagle/AutoGPT/autogpt/workspace.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from __future__ import annotations
-
-import os
-from pathlib import Path
-
-from autogpt.config import Config
-
-CFG = Config()
-
-# Set a dedicated folder for file I/O
-WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
-
-# Create the directory if it doesn't exist
-if not os.path.exists(WORKSPACE_PATH):
- os.makedirs(WORKSPACE_PATH)
-
-
-def path_in_workspace(relative_path: str | Path) -> Path:
- """Get full path for item in workspace
-
- Parameters:
- relative_path (str | Path): Path to translate into the workspace
-
- Returns:
- Path: Absolute path for the given path in the workspace
- """
- return safe_path_join(WORKSPACE_PATH, relative_path)
-
-
-def safe_path_join(base: Path, *paths: str | Path) -> Path:
- """Join one or more path components, asserting the resulting path is within the workspace.
-
- Args:
- base (Path): The base path
- *paths (str): The paths to join to the base path
-
- Returns:
- Path: The joined path
- """
- joined_path = base.joinpath(*paths).resolve()
-
- if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
- raise ValueError(
- f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
- )
-
- return joined_path
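-
-
-if __name__ == "__main__":
-    # Minimal usage sketch (the paths below are illustrative only): a relative
-    # path resolves inside the workspace, while a traversal attempt raises
-    # ValueError when CFG.restrict_to_workspace is enabled.
-    print(path_in_workspace("notes/todo.txt"))
-    try:
-        path_in_workspace("../outside.txt")
-    except ValueError as err:
-        print(err)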
diff --git a/spaces/kdrkdrkdr/HoshinoTTS/monotonic_align/core.py b/spaces/kdrkdrkdr/HoshinoTTS/monotonic_align/core.py
deleted file mode 100644
index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000
--- a/spaces/kdrkdrkdr/HoshinoTTS/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
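-    """Fill `paths` with the maximum-value monotonic alignment for each batch item.
-
-    `values` has shape (b, t_y, t_x); `t_ys`/`t_xs` hold the valid lengths per item
-    (shapes inferred from the jit signature above). A dynamic programme accumulates
-    scores forward in `values`, then a backward pass marks the selected path with
-    1s in `paths`.
-    """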
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
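-
-
-if __name__ == "__main__":
-    # Toy self-check sketch (shapes and dtypes follow the jit signature above;
-    # the input values are random, so the marked path is only illustrative).
-    import numpy as np
-
-    b, t_y, t_x = 1, 6, 4
-    paths = np.zeros((b, t_y, t_x), dtype=np.int32)
-    values = np.random.randn(b, t_y, t_x).astype(np.float32)
-    maximum_path_jit(paths, values,
-                     np.array([t_y], dtype=np.int32),
-                     np.array([t_x], dtype=np.int32))
-    print(paths[0])  # a 0/1 monotonic alignment ending at (t_y - 1, t_x - 1)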
diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py b/spaces/keithhon/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py
deleted file mode 100644
index 1d973d9b8b9ab547571abc5a3f5ea86226a25924..0000000000000000000000000000000000000000
--- a/spaces/keithhon/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from utils.display import *
-from utils.dsp import *
-
-
-class WaveRNN(nn.Module) :
- def __init__(self, hidden_size=896, quantisation=256) :
- super(WaveRNN, self).__init__()
-
- self.hidden_size = hidden_size
- self.split_size = hidden_size // 2
-
- # The main matmul
- self.R = nn.Linear(self.hidden_size, 3 * self.hidden_size, bias=False)
-
- # Output fc layers
- self.O1 = nn.Linear(self.split_size, self.split_size)
- self.O2 = nn.Linear(self.split_size, quantisation)
- self.O3 = nn.Linear(self.split_size, self.split_size)
- self.O4 = nn.Linear(self.split_size, quantisation)
-
- # Input fc layers
- self.I_coarse = nn.Linear(2, 3 * self.split_size, bias=False)
- self.I_fine = nn.Linear(3, 3 * self.split_size, bias=False)
-
- # biases for the gates
- self.bias_u = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_r = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_e = nn.Parameter(torch.zeros(self.hidden_size))
-
- # display num params
- self.num_params()
-
-
- def forward(self, prev_y, prev_hidden, current_coarse) :
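-        """Run one WaveRNN step (shapes inferred from the layer definitions above).
-
-        prev_y:         (batch, 2)  previous coarse/fine samples, scaled to [-1, 1] as in generate()
-        prev_hidden:    (batch, hidden_size) previous recurrent state
-        current_coarse: (batch, 1)  coarse sample for the current step
-        Returns logits over the `quantisation` bins for the coarse and fine parts,
-        plus the updated hidden state.
-        """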
-
- # Main matmul - the projection is split 3 ways
- R_hidden = self.R(prev_hidden)
-        R_u, R_r, R_e = torch.split(R_hidden, self.hidden_size, dim=1)
-
- # Project the prev input
- coarse_input_proj = self.I_coarse(prev_y)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project the prev input and current coarse sample
- fine_input = torch.cat([prev_y, current_coarse], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # concatenate for the gates
- I_u = torch.cat([I_coarse_u, I_fine_u], dim=1)
- I_r = torch.cat([I_coarse_r, I_fine_r], dim=1)
- I_e = torch.cat([I_coarse_e, I_fine_e], dim=1)
-
- # Compute all gates for coarse and fine
- u = F.sigmoid(R_u + I_u + self.bias_u)
- r = F.sigmoid(R_r + I_r + self.bias_r)
- e = F.tanh(r * R_e + I_e + self.bias_e)
- hidden = u * prev_hidden + (1. - u) * e
-
- # Split the hidden state
- hidden_coarse, hidden_fine = torch.split(hidden, self.split_size, dim=1)
-
- # Compute outputs
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
-
- return out_coarse, out_fine, hidden
-
-
- def generate(self, seq_len):
- with torch.no_grad():
- # First split up the biases for the gates
- b_coarse_u, b_fine_u = torch.split(self.bias_u, self.split_size)
- b_coarse_r, b_fine_r = torch.split(self.bias_r, self.split_size)
- b_coarse_e, b_fine_e = torch.split(self.bias_e, self.split_size)
-
- # Lists for the two output seqs
- c_outputs, f_outputs = [], []
-
- # Some initial inputs
- out_coarse = torch.LongTensor([0]).cuda()
- out_fine = torch.LongTensor([0]).cuda()
-
-            # We'll need a hidden state
- hidden = self.init_hidden()
-
- # Need a clock for display
- start = time.time()
-
- # Loop for generation
- for i in range(seq_len) :
-
- # Split into two hidden states
- hidden_coarse, hidden_fine = \
- torch.split(hidden, self.split_size, dim=1)
-
- # Scale and concat previous predictions
- out_coarse = out_coarse.unsqueeze(0).float() / 127.5 - 1.
- out_fine = out_fine.unsqueeze(0).float() / 127.5 - 1.
- prev_outputs = torch.cat([out_coarse, out_fine], dim=1)
-
- # Project input
- coarse_input_proj = self.I_coarse(prev_outputs)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project hidden state and split 6 ways
- R_hidden = self.R(hidden)
- R_coarse_u , R_fine_u, \
- R_coarse_r, R_fine_r, \
- R_coarse_e, R_fine_e = torch.split(R_hidden, self.split_size, dim=1)
-
- # Compute the coarse gates
- u = F.sigmoid(R_coarse_u + I_coarse_u + b_coarse_u)
- r = F.sigmoid(R_coarse_r + I_coarse_r + b_coarse_r)
- e = F.tanh(r * R_coarse_e + I_coarse_e + b_coarse_e)
- hidden_coarse = u * hidden_coarse + (1. - u) * e
-
- # Compute the coarse output
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- posterior = F.softmax(out_coarse, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_coarse = distrib.sample()
- c_outputs.append(out_coarse)
-
- # Project the [prev outputs and predicted coarse sample]
- coarse_pred = out_coarse.float() / 127.5 - 1.
- fine_input = torch.cat([prev_outputs, coarse_pred.unsqueeze(0)], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # Compute the fine gates
- u = F.sigmoid(R_fine_u + I_fine_u + b_fine_u)
- r = F.sigmoid(R_fine_r + I_fine_r + b_fine_r)
- e = F.tanh(r * R_fine_e + I_fine_e + b_fine_e)
- hidden_fine = u * hidden_fine + (1. - u) * e
-
- # Compute the fine output
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
- posterior = F.softmax(out_fine, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_fine = distrib.sample()
- f_outputs.append(out_fine)
-
- # Put the hidden state back together
- hidden = torch.cat([hidden_coarse, hidden_fine], dim=1)
-
- # Display progress
- speed = (i + 1) / (time.time() - start)
- stream('Gen: %i/%i -- Speed: %i', (i + 1, seq_len, speed))
-
- coarse = torch.stack(c_outputs).squeeze(1).cpu().data.numpy()
- fine = torch.stack(f_outputs).squeeze(1).cpu().data.numpy()
- output = combine_signal(coarse, fine)
-
- return output, coarse, fine
-
- def init_hidden(self, batch_size=1) :
- return torch.zeros(batch_size, self.hidden_size).cuda()
-
- def num_params(self) :
- parameters = filter(lambda p: p.requires_grad, self.parameters())
- parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
- print('Trainable Parameters: %.3f million' % parameters)
\ No newline at end of file
diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/checkpoint/__init__.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/checkpoint/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/kevinwang676/M4Singer/utils/__init__.py b/spaces/kevinwang676/M4Singer/utils/__init__.py
deleted file mode 100644
index 4ea5c5a67e038c2213247dfb905942882c090a77..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/M4Singer/utils/__init__.py
+++ /dev/null
@@ -1,250 +0,0 @@
-import glob
-import logging
-import re
-import time
-from collections import defaultdict
-import os
-import sys
-import shutil
-import types
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torch.distributed as dist
-from torch import nn
-
-
-def tensors_to_scalars(metrics):
- new_metrics = {}
- for k, v in metrics.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
- if type(v) is dict:
- v = tensors_to_scalars(v)
- new_metrics[k] = v
- return new_metrics
-
-
-class AvgrageMeter(object):
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.avg = 0
- self.sum = 0
- self.cnt = 0
-
- def update(self, val, n=1):
- self.sum += val * n
- self.cnt += n
- self.avg = self.sum / self.cnt
-
-
-def collate_1d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None, shift_id=1):
- """Convert a list of 1d tensors into a padded 2d tensor."""
- size = max(v.size(0) for v in values) if max_len is None else max_len
- res = values[0].new(len(values), size).fill_(pad_idx)
-
- def copy_tensor(src, dst):
- assert dst.numel() == src.numel()
- if shift_right:
- dst[1:] = src[:-1]
- dst[0] = shift_id
- else:
- dst.copy_(src)
-
- for i, v in enumerate(values):
- copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
- return res
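-
-
-# Usage sketch (illustrative): pad a ragged list of 1-D tensors with zeros.
-#   >>> collate_1d([torch.tensor([1, 2, 3]), torch.tensor([4])], pad_idx=0)
-#   tensor([[1, 2, 3],
-#           [4, 0, 0]])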
-
-
-def collate_2d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None):
- """Convert a list of 2d tensors into a padded 3d tensor."""
- size = max(v.size(0) for v in values) if max_len is None else max_len
- res = values[0].new(len(values), size, values[0].shape[1]).fill_(pad_idx)
-
- def copy_tensor(src, dst):
- assert dst.numel() == src.numel()
- if shift_right:
- dst[1:] = src[:-1]
- else:
- dst.copy_(src)
-
- for i, v in enumerate(values):
- copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
- return res
-
-
-def _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
- if len(batch) == 0:
- return 0
- if len(batch) == max_sentences:
- return 1
- if num_tokens > max_tokens:
- return 1
- return 0
-
-
-def batch_by_size(
- indices, num_tokens_fn, max_tokens=None, max_sentences=None,
- required_batch_size_multiple=1, distributed=False
-):
- """
- Yield mini-batches of indices bucketed by size. Batches may contain
- sequences of different lengths.
-
- Args:
- indices (List[int]): ordered list of dataset indices
- num_tokens_fn (callable): function that returns the number of tokens at
- a given index
- max_tokens (int, optional): max number of tokens in each batch
- (default: None).
- max_sentences (int, optional): max number of sentences in each
- batch (default: None).
- required_batch_size_multiple (int, optional): require batch size to
- be a multiple of N (default: 1).
- """
- max_tokens = max_tokens if max_tokens is not None else sys.maxsize
- max_sentences = max_sentences if max_sentences is not None else sys.maxsize
- bsz_mult = required_batch_size_multiple
-
- if isinstance(indices, types.GeneratorType):
- indices = np.fromiter(indices, dtype=np.int64, count=-1)
-
- sample_len = 0
- sample_lens = []
- batch = []
- batches = []
- for i in range(len(indices)):
- idx = indices[i]
- num_tokens = num_tokens_fn(idx)
- sample_lens.append(num_tokens)
- sample_len = max(sample_len, num_tokens)
- assert sample_len <= max_tokens, (
- "sentence at index {} of size {} exceeds max_tokens "
- "limit of {}!".format(idx, sample_len, max_tokens)
- )
- num_tokens = (len(batch) + 1) * sample_len
-
- if _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
- mod_len = max(
- bsz_mult * (len(batch) // bsz_mult),
- len(batch) % bsz_mult,
- )
- batches.append(batch[:mod_len])
- batch = batch[mod_len:]
- sample_lens = sample_lens[mod_len:]
- sample_len = max(sample_lens) if len(sample_lens) > 0 else 0
- batch.append(idx)
- if len(batch) > 0:
- batches.append(batch)
- return batches
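-
-
-# Worked example (illustrative): five items with token counts [3, 4, 2, 9, 4] and
-# max_tokens=12 yield [[0, 1, 2], [3], [4]]; each batch's padded size
-# (len(batch) * longest item in it) stays at or below 12.
-#   lens = [3, 4, 2, 9, 4]
-#   batch_by_size(list(range(5)), lambda i: lens[i], max_tokens=12)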
-
-
-def make_positions(tensor, padding_idx):
- """Replace non-padding symbols with their position numbers.
-
- Position numbers begin at padding_idx+1. Padding symbols are ignored.
- """
- # The series of casts and type-conversions here are carefully
- # balanced to both work with ONNX export and XLA. In particular XLA
- # prefers ints, cumsum defaults to output longs, and ONNX doesn't know
- # how to handle the dtype kwarg in cumsum.
- mask = tensor.ne(padding_idx).int()
- return (
- torch.cumsum(mask, dim=1).type_as(mask) * mask
- ).long() + padding_idx
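-
-
-# Example (illustrative), with padding_idx=0:
-#   make_positions(torch.tensor([[5, 6, 0], [7, 0, 0]]), padding_idx=0)
-#   -> tensor([[1, 2, 0], [1, 0, 0]])  (positions start at padding_idx + 1)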
-
-
-def softmax(x, dim):
- return F.softmax(x, dim=dim, dtype=torch.float32)
-
-
-def unpack_dict_to_list(samples):
- samples_ = []
- bsz = samples.get('outputs').size(0)
- for i in range(bsz):
- res = {}
- for k, v in samples.items():
- try:
- res[k] = v[i]
- except:
- pass
- samples_.append(res)
- return samples_
-
-
-def load_ckpt(cur_model, ckpt_base_dir, prefix_in_ckpt='model', force=True, strict=True):
- if os.path.isfile(ckpt_base_dir):
- base_dir = os.path.dirname(ckpt_base_dir)
- checkpoint_path = [ckpt_base_dir]
- else:
- base_dir = ckpt_base_dir
- checkpoint_path = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key=
- lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))
- if len(checkpoint_path) > 0:
- checkpoint_path = checkpoint_path[-1]
- state_dict = torch.load(checkpoint_path, map_location="cpu")["state_dict"]
- state_dict = {k[len(prefix_in_ckpt) + 1:]: v for k, v in state_dict.items()
- if k.startswith(f'{prefix_in_ckpt}.')}
- if not strict:
- cur_model_state_dict = cur_model.state_dict()
- unmatched_keys = []
- for key, param in state_dict.items():
- if key in cur_model_state_dict:
- new_param = cur_model_state_dict[key]
- if new_param.shape != param.shape:
- unmatched_keys.append(key)
- print("| Unmatched keys: ", key, new_param.shape, param.shape)
- for key in unmatched_keys:
- del state_dict[key]
- cur_model.load_state_dict(state_dict, strict=strict)
- print(f"| load '{prefix_in_ckpt}' from '{checkpoint_path}'.")
- else:
- e_msg = f"| ckpt not found in {base_dir}."
- if force:
- assert False, e_msg
- else:
- print(e_msg)
-
-
-def remove_padding(x, padding_idx=0):
- if x is None:
- return None
- assert len(x.shape) in [1, 2]
- if len(x.shape) == 2: # [T, H]
- return x[np.abs(x).sum(-1) != padding_idx]
- elif len(x.shape) == 1: # [T]
- return x[x != padding_idx]
-
-
-class Timer:
- timer_map = {}
-
- def __init__(self, name, print_time=False):
- if name not in Timer.timer_map:
- Timer.timer_map[name] = 0
- self.name = name
- self.print_time = print_time
-
- def __enter__(self):
- self.t = time.time()
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- Timer.timer_map[self.name] += time.time() - self.t
- if self.print_time:
- print(self.name, Timer.timer_map[self.name])
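-
-
-# Usage sketch (hypothetical `model` and `x`): accumulate wall-clock time under a named key.
-#   with Timer('forward', print_time=True):
-#       y = model(x)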
-
-
-def print_arch(model, model_name='model'):
- print(f"| {model_name} Arch: ", model)
- num_params(model, model_name=model_name)
-
-
-def num_params(model, print_out=True, model_name="model"):
- parameters = filter(lambda p: p.requires_grad, model.parameters())
- parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
- if print_out:
- print(f'| {model_name} Trainable Parameters: %.3fM' % parameters)
- return parameters
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/networks.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/networks.py
deleted file mode 100644
index ead9cdcb8720b845c233de79dc8a8d1668492108..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/networks.py
+++ /dev/null
@@ -1,521 +0,0 @@
-"""This script defines deep neural networks for Deep3DFaceRecon_pytorch
-"""
-
-import os
-import numpy as np
-import torch.nn.functional as F
-from torch.nn import init
-import functools
-from torch.optim import lr_scheduler
-import torch
-from torch import Tensor
-import torch.nn as nn
-try:
- from torch.hub import load_state_dict_from_url
-except ImportError:
- from torch.utils.model_zoo import load_url as load_state_dict_from_url
-from typing import Type, Any, Callable, Union, List, Optional
-from .arcface_torch.backbones import get_model
-from kornia.geometry import warp_affine
-
-def resize_n_crop(image, M, dsize=112):
- # image: (b, c, h, w)
- # M : (b, 2, 3)
- return warp_affine(image, M, dsize=(dsize, dsize), align_corners=True)
-
-def filter_state_dict(state_dict, remove_name='fc'):
- new_state_dict = {}
- for key in state_dict:
- if remove_name in key:
- continue
- new_state_dict[key] = state_dict[key]
- return new_state_dict
-
-def get_scheduler(optimizer, opt):
- """Return a learning rate scheduler
-
- Parameters:
- optimizer -- the optimizer of the network
- opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
- opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
-
- For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
- See https://pytorch.org/docs/stable/optim.html for more details.
- """
- if opt.lr_policy == 'linear':
- def lambda_rule(epoch):
- lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs + 1)
- return lr_l
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
- elif opt.lr_policy == 'step':
- scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_epochs, gamma=0.2)
- elif opt.lr_policy == 'plateau':
- scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
- elif opt.lr_policy == 'cosine':
- scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
- else:
-        raise NotImplementedError(f'learning rate policy [{opt.lr_policy}] is not implemented')
- return scheduler
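-
-
-# Usage sketch (hypothetical `opt` and `model`; only the fields read by the chosen policy are set):
-#   from argparse import Namespace
-#   opt = Namespace(lr_policy='step', lr_decay_epochs=20)
-#   scheduler = get_scheduler(torch.optim.Adam(model.parameters(), lr=1e-4), opt)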
-
-
-def define_net_recon(net_recon, use_last_fc=False, init_path=None):
- return ReconNetWrapper(net_recon, use_last_fc=use_last_fc, init_path=init_path)
-
-def define_net_recog(net_recog, pretrained_path=None):
- net = RecogNetWrapper(net_recog=net_recog, pretrained_path=pretrained_path)
- net.eval()
- return net
-
-class ReconNetWrapper(nn.Module):
- fc_dim=257
- def __init__(self, net_recon, use_last_fc=False, init_path=None):
- super(ReconNetWrapper, self).__init__()
- self.use_last_fc = use_last_fc
- if net_recon not in func_dict:
-            raise NotImplementedError(f'network [{net_recon}] is not implemented')
- func, last_dim = func_dict[net_recon]
- backbone = func(use_last_fc=use_last_fc, num_classes=self.fc_dim)
- if init_path and os.path.isfile(init_path):
- state_dict = filter_state_dict(torch.load(init_path, map_location='cpu'))
- backbone.load_state_dict(state_dict)
- print("loading init net_recon %s from %s" %(net_recon, init_path))
- self.backbone = backbone
- if not use_last_fc:
- self.final_layers = nn.ModuleList([
- conv1x1(last_dim, 80, bias=True), # id layer
- conv1x1(last_dim, 64, bias=True), # exp layer
- conv1x1(last_dim, 80, bias=True), # tex layer
- conv1x1(last_dim, 3, bias=True), # angle layer
- conv1x1(last_dim, 27, bias=True), # gamma layer
- conv1x1(last_dim, 2, bias=True), # tx, ty
- conv1x1(last_dim, 1, bias=True) # tz
- ])
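-            # The seven heads output 80 + 64 + 80 + 3 + 27 + 2 + 1 = 257 values,
-            # matching fc_dim; forward() concatenates and flattens them.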
- for m in self.final_layers:
- nn.init.constant_(m.weight, 0.)
- nn.init.constant_(m.bias, 0.)
-
- def forward(self, x):
- x = self.backbone(x)
- if not self.use_last_fc:
- output = []
- for layer in self.final_layers:
- output.append(layer(x))
- x = torch.flatten(torch.cat(output, dim=1), 1)
- return x
-
-
-class RecogNetWrapper(nn.Module):
- def __init__(self, net_recog, pretrained_path=None, input_size=112):
- super(RecogNetWrapper, self).__init__()
- net = get_model(name=net_recog, fp16=False)
- if pretrained_path:
- state_dict = torch.load(pretrained_path, map_location='cpu')
- net.load_state_dict(state_dict)
- print("loading pretrained net_recog %s from %s" %(net_recog, pretrained_path))
- for param in net.parameters():
- param.requires_grad = False
- self.net = net
- self.preprocess = lambda x: 2 * x - 1
- self.input_size=input_size
-
- def forward(self, image, M):
- image = self.preprocess(resize_n_crop(image, M, self.input_size))
- id_feature = F.normalize(self.net(image), dim=-1, p=2)
- return id_feature
-
-
-# adapted from https://github.com/pytorch/vision/edit/master/torchvision/models/resnet.py
-__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
- 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d',
- 'wide_resnet50_2', 'wide_resnet101_2']
-
-
-model_urls = {
- 'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth',
- 'resnet34': 'https://download.pytorch.org/models/resnet34-b627a593.pth',
- 'resnet50': 'https://download.pytorch.org/models/resnet50-0676ba61.pth',
- 'resnet101': 'https://download.pytorch.org/models/resnet101-63fe2227.pth',
- 'resnet152': 'https://download.pytorch.org/models/resnet152-394f9c45.pth',
- 'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',
- 'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',
- 'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',
- 'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',
-}
-
-
-def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d:
- """3x3 convolution with padding"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=dilation, groups=groups, bias=False, dilation=dilation)
-
-
-def conv1x1(in_planes: int, out_planes: int, stride: int = 1, bias: bool = False) -> nn.Conv2d:
- """1x1 convolution"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=bias)
-
-
-class BasicBlock(nn.Module):
- expansion: int = 1
-
- def __init__(
- self,
- inplanes: int,
- planes: int,
- stride: int = 1,
- downsample: Optional[nn.Module] = None,
- groups: int = 1,
- base_width: int = 64,
- dilation: int = 1,
- norm_layer: Optional[Callable[..., nn.Module]] = None
- ) -> None:
- super(BasicBlock, self).__init__()
- if norm_layer is None:
- norm_layer = nn.BatchNorm2d
- if groups != 1 or base_width != 64:
- raise ValueError('BasicBlock only supports groups=1 and base_width=64')
- if dilation > 1:
- raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
- # Both self.conv1 and self.downsample layers downsample the input when stride != 1
- self.conv1 = conv3x3(inplanes, planes, stride)
- self.bn1 = norm_layer(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = norm_layer(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x: Tensor) -> Tensor:
- identity = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2)
- # while original implementation places the stride at the first 1x1 convolution(self.conv1)
- # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385.
- # This variant is also known as ResNet V1.5 and improves accuracy according to
- # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch.
-
- expansion: int = 4
-
- def __init__(
- self,
- inplanes: int,
- planes: int,
- stride: int = 1,
- downsample: Optional[nn.Module] = None,
- groups: int = 1,
- base_width: int = 64,
- dilation: int = 1,
- norm_layer: Optional[Callable[..., nn.Module]] = None
- ) -> None:
- super(Bottleneck, self).__init__()
- if norm_layer is None:
- norm_layer = nn.BatchNorm2d
- width = int(planes * (base_width / 64.)) * groups
- # Both self.conv2 and self.downsample layers downsample the input when stride != 1
- self.conv1 = conv1x1(inplanes, width)
- self.bn1 = norm_layer(width)
- self.conv2 = conv3x3(width, width, stride, groups, dilation)
- self.bn2 = norm_layer(width)
- self.conv3 = conv1x1(width, planes * self.expansion)
- self.bn3 = norm_layer(planes * self.expansion)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x: Tensor) -> Tensor:
- identity = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(
- self,
- block: Type[Union[BasicBlock, Bottleneck]],
- layers: List[int],
- num_classes: int = 1000,
- zero_init_residual: bool = False,
- use_last_fc: bool = False,
- groups: int = 1,
- width_per_group: int = 64,
- replace_stride_with_dilation: Optional[List[bool]] = None,
- norm_layer: Optional[Callable[..., nn.Module]] = None
- ) -> None:
- super(ResNet, self).__init__()
- if norm_layer is None:
- norm_layer = nn.BatchNorm2d
- self._norm_layer = norm_layer
-
- self.inplanes = 64
- self.dilation = 1
- if replace_stride_with_dilation is None:
- # each element in the tuple indicates if we should replace
- # the 2x2 stride with a dilated convolution instead
- replace_stride_with_dilation = [False, False, False]
- if len(replace_stride_with_dilation) != 3:
- raise ValueError("replace_stride_with_dilation should be None "
- "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
- self.use_last_fc = use_last_fc
- self.groups = groups
- self.base_width = width_per_group
- self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,
- bias=False)
- self.bn1 = norm_layer(self.inplanes)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
- dilate=replace_stride_with_dilation[0])
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
- dilate=replace_stride_with_dilation[1])
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
- dilate=replace_stride_with_dilation[2])
- self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
-
- if self.use_last_fc:
- self.fc = nn.Linear(512 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
-
-
- # Zero-initialize the last BN in each residual branch,
- # so that the residual branch starts with zeros, and each residual block behaves like an identity.
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
- if zero_init_residual:
- for m in self.modules():
- if isinstance(m, Bottleneck):
- nn.init.constant_(m.bn3.weight, 0) # type: ignore[arg-type]
- elif isinstance(m, BasicBlock):
- nn.init.constant_(m.bn2.weight, 0) # type: ignore[arg-type]
-
- def _make_layer(self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int,
- stride: int = 1, dilate: bool = False) -> nn.Sequential:
- norm_layer = self._norm_layer
- downsample = None
- previous_dilation = self.dilation
- if dilate:
- self.dilation *= stride
- stride = 1
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- conv1x1(self.inplanes, planes * block.expansion, stride),
- norm_layer(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
- self.base_width, previous_dilation, norm_layer))
- self.inplanes = planes * block.expansion
- for _ in range(1, blocks):
- layers.append(block(self.inplanes, planes, groups=self.groups,
- base_width=self.base_width, dilation=self.dilation,
- norm_layer=norm_layer))
-
- return nn.Sequential(*layers)
-
- def _forward_impl(self, x: Tensor) -> Tensor:
- # See note [TorchScript super()]
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
-
- x = self.avgpool(x)
- if self.use_last_fc:
- x = torch.flatten(x, 1)
- x = self.fc(x)
- return x
-
- def forward(self, x: Tensor) -> Tensor:
- return self._forward_impl(x)
-
-
-def _resnet(
- arch: str,
- block: Type[Union[BasicBlock, Bottleneck]],
- layers: List[int],
- pretrained: bool,
- progress: bool,
- **kwargs: Any
-) -> ResNet:
- model = ResNet(block, layers, **kwargs)
- if pretrained:
- state_dict = load_state_dict_from_url(model_urls[arch],
- progress=progress)
- model.load_state_dict(state_dict)
- return model
-
-
-def resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-18 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,
- **kwargs)
-
-
-def resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-34 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,
- **kwargs)
-
-
-def resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-50 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,
- **kwargs)
-
-
-def resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-101 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,
- **kwargs)
-
-
-def resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNet-152 model from
-    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/abs/1512.03385>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,
- **kwargs)
-
-
-def resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNeXt-50 32x4d model from
-    `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/abs/1611.05431>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['groups'] = 32
- kwargs['width_per_group'] = 4
- return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],
- pretrained, progress, **kwargs)
-
-
-def resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""ResNeXt-101 32x8d model from
-    `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/abs/1611.05431>`_.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['groups'] = 32
- kwargs['width_per_group'] = 8
- return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],
- pretrained, progress, **kwargs)
-
-
-def wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""Wide ResNet-50-2 model from
-    `"Wide Residual Networks" <https://arxiv.org/abs/1605.07146>`_.
-
- The model is the same as ResNet except for the bottleneck number of channels
- which is twice larger in every block. The number of channels in outer 1x1
- convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
- channels, and in Wide ResNet-50-2 has 2048-1024-2048.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['width_per_group'] = 64 * 2
- return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3],
- pretrained, progress, **kwargs)
-
-
-def wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
- r"""Wide ResNet-101-2 model from
-    `"Wide Residual Networks" <https://arxiv.org/abs/1605.07146>`_.
-
- The model is the same as ResNet except for the bottleneck number of channels
- which is twice larger in every block. The number of channels in outer 1x1
- convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
- channels, and in Wide ResNet-50-2 has 2048-1024-2048.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- progress (bool): If True, displays a progress bar of the download to stderr
- """
- kwargs['width_per_group'] = 64 * 2
- return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3],
- pretrained, progress, **kwargs)
-
-
-func_dict = {
- 'resnet18': (resnet18, 512),
- 'resnet50': (resnet50, 2048)
-}
diff --git a/spaces/kingabzpro/savtadepth/README.md b/spaces/kingabzpro/savtadepth/README.md
deleted file mode 100644
index ede0a78ff22971cf41541b440b2fb9136b4e331e..0000000000000000000000000000000000000000
--- a/spaces/kingabzpro/savtadepth/README.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: Monocular Depth Estimation
-emoji: 🛋
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.8.12
-app_file: app/app_savta.py
-pinned: false
-license: mit
----
-
-
-# Savta Depth - Monocular Depth Estimation OSDS Project
-Savta Depth is a collaborative *O*pen *S*ource *D*ata *S*cience project for monocular depth estimation.
-
-Here you will find the code for the project, but also the data, models, pipelines and experiments. This means that the project is easily reproducible on any machine, but also that you can contribute data, models, and code to it.
-
-Have a great idea for how to improve the model? Want to add data and metrics to make it more explainable/fair? We'd love to get your help.
-## Demo
-[Open In Colab](https://colab.research.google.com/drive/1XU4DgQ217_hUMU1dllppeQNw3pTRlHy1?usp=sharing)
-
-**You can use [this notebook](https://colab.research.google.com/drive/1XU4DgQ217_hUMU1dllppeQNw3pTRlHy1?usp=sharing) to load a model from the project and run it on an image you uploaded, to get the depth map. Once it has been saved, you can download it to use on platforms that support it (e.g. Facebook) to create 3d photos.**
-
-
-
-## Contributing Guide
-Here we'll list things we want to work on in the project as well as ways to start contributing.
-If you'd like to take part, please follow the guide.
-
-### Setting up your environment to contribute
-* To get started, fork the repository on DAGsHub
-* Now, you have three ways to set up your environment: Google Colab, local, or Docker. If you're not sure which one to go with, we recommend using Colab.
-
-#### Google Colab
-Google Colab can be thought of as your web-connected, GPU-powered IDE. Below is a link to a well-documented Colab notebook that will load the code and data from this repository, enabling you to modify the code and re-run training. Note that you still need to modify the code within the `src/code/` folder; adding cells should be used only for testing things out.
-
-**You can also use this notebook to load a model from the project and run it on an image you uploaded, to get the depth map. Once it has been saved, you can download it to use on platforms that support it (e.g. Facebook) to create 3d photos.**
-
-
-In order to edit code files, you must save the notebook to your drive. You can do this by typing `ctrl+s` or `cmd+s` on mac.
-
-\>\> **[SavtaDepth Colab Environment](https://colab.research.google.com/drive/1XU4DgQ217_hUMU1dllppeQNw3pTRlHy1?usp=sharing)** \<\<
-
-**_NOTE: The downside of this method (if you are not familiar with Colab) is that Google Colab will limit the amount of time an instance can be live, so you might be limited in your ability to train models for longer periods of time._**
-
-This notebook is also a part of this project, in case it needs modification, in the `Notebooks` folder. You should not commit your version unless your contribution is an improvement to the environment.
-
-#### Local
-* Clone the repository you just forked by typing the following command in your terminal:
-
- ```bash
-  $ git clone https://dagshub.com/<your-username>/SavtaDepth.git
- ```
-
-* Create a virtual environment or Conda environment and activate it
- ```bash
- # Create the virtual environment
- $ make env
-
- # Activate the virtual environment
- # VENV
- $ source env/bin/activate .
-
- # or Conda
- $ source activate savta_depth
- ```
-* Install the required libraries
- ```bash
- $ make load_requirements
- ```
- **_NOTE: Here I assume a setup without GPU. Otherwise, you might need to modify requirements, which is outside the scope of this readme (feel free to contribute to this)._**
-* Pull the dvc files to your workspace by typing:
-
- ```bash
- $ dvc pull -r origin
- $ dvc checkout #use this to get the data, models etc
- ```
-
-* After you have finished your modifications, make sure to do the following:
- * If you modified packages, make sure to update the `requirements.txt` file accordingly.
-
- * Push your code to DAGsHub, and your dvc managed files to your dvc remote. For reference on the commands needed, please refer to the Google Colab notebook section – [Commiting Your Work and Pushing Back to DAGsHub](https://colab.research.google.com/drive/1XU4DgQ217_hUMU1dllppeQNw3pTRlHy1?authuser=1#scrollTo=PAxz-29WhN12&line=1&uniqifier=1).
-
-#### Docker
-* Clone the repository you just forked by typing the following command in your terminal:
-
- ```bash
-  $ git clone https://dagshub.com/<your-username>/SavtaDepth.git
- ```
-
-* To get your environment up and running docker is the best way to go. We use an instance of [MLWorkspace](https://github.com/ml-tooling/ml-workspace).
- * You can Just run the following commands to get it started.
-
- ```bash
- $ chmod +x run_dev_env.sh
- $ ./run_dev_env.sh
- ```
-
- * Open localhost:8080 to see the workspace you have created. You will be asked for a token – enter `dagshub_savta`
- * In the top right you have a menu called `Open Tool`. Click that button and choose terminal (alternatively open VSCode and open terminal there) and type in the following commands to install a virtualenv and dependencies:
-
- ```bash
- $ make env
- $ source activate savta_depth
- ```
-
- Now when we have an environment, let's install all of the required libraries.
-
- **Note**: If you don't have a GPU you will need to install pytorch separately and then run make requirements. You can install pytorch for computers without a gpu with the following command:
-
- ```bash
- $ conda install pytorch torchvision cpuonly -c pytorch
- ```
-
- To install the required libraries run the following command:
-
- ```bash
- $ make load_requirements
- ```
-
-
-* Pull the dvc files to your workspace by typing:
-
- ```bash
- $ dvc pull -r dvc-remote
- $ dvc checkout #use this to get the data, models etc
- ```
-
-* After you have finished your modifications, make sure to do the following:
- * If you modified packages, make sure to update the `requirements.txt` file accordingly.
-
- * Push your code to DAGsHub, and your dvc managed files to your dvc remote. For reference on the commands needed, please refer to the Google Colab notebook section – [Commiting Your Work and Pushing Back to DAGsHub](https://colab.research.google.com/drive/1XU4DgQ217_hUMU1dllppeQNw3pTRlHy1?authuser=1#scrollTo=PAxz-29WhN12&line=1&uniqifier=1).
-
----
-### After pushing code and data to DAGsHub
-* Create a Pull Request on DAGsHub!
- * Explain what changes you are making.
- * If your changes affect data or models, make sure they are pushed to your DAGsHub dvc remote, and are included in the PR.
- * We will review your contribution ASAP, and merge it or start a discussion if needed.
-* 🐶
-
-### TODO:
-- [x] Web UI
-- [ ] Testing various datasets as basis for training
-- [ ] Testing various models for the data
-- [ ] Adding qualitative tests for model performance (visually comparing 3d image outputs)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/events.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/events.py
deleted file mode 100644
index 8ea098ea902fa696c1b94b4ffafa1a00cd07d341..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/events.py
+++ /dev/null
@@ -1,318 +0,0 @@
-"""Contains all of the events that can be triggered in a gr.Blocks() app, with the exception
-of the on-page-load event, which is defined in gr.Blocks().load()."""
-
-from __future__ import annotations
-
-import warnings
-from typing import TYPE_CHECKING, Any, Callable
-
-from gradio_client.documentation import document, set_documentation_group
-
-from gradio.blocks import Block
-from gradio.helpers import EventData
-from gradio.utils import get_cancel_function
-
-if TYPE_CHECKING: # Only import for type checking (is False at runtime).
- from gradio.components import Component, StatusTracker
-
-set_documentation_group("events")
-
-
-def set_cancel_events(
- block: Block, event_name: str, cancels: None | dict[str, Any] | list[dict[str, Any]]
-):
- if cancels:
- if not isinstance(cancels, list):
- cancels = [cancels]
- cancel_fn, fn_indices_to_cancel = get_cancel_function(cancels)
- block.set_event_trigger(
- event_name,
- cancel_fn,
- inputs=None,
- outputs=None,
- queue=False,
- preprocess=False,
- cancels=fn_indices_to_cancel,
- )
-
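-# Sketch of how this is typically used from the Blocks API (placeholder names,
-# not taken from this file): passing `cancels=` to a listener registers the
-# cancelling trigger via set_cancel_events() above.
-#   run_event = run_btn.click(slow_fn, inputs=None, outputs=out_box)
-#   stop_btn.click(None, None, None, cancels=[run_event])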
-
-class EventListener(Block):
- def __init__(self: Any):
- for event_listener_class in EventListener.__subclasses__():
- if isinstance(self, event_listener_class):
- event_listener_class.__init__(self)
-
-
-class Dependency(dict):
- def __init__(self, trigger, key_vals, dep_index):
- super().__init__(key_vals)
- self.trigger = trigger
- self.then = EventListenerMethod(
- self.trigger,
- "then",
- trigger_after=dep_index,
- trigger_only_on_success=False,
- )
- """
- Triggered after directly preceding event is completed, regardless of success or failure.
- """
- self.success = EventListenerMethod(
- self.trigger,
- "success",
- trigger_after=dep_index,
- trigger_only_on_success=True,
- )
- """
- Triggered after directly preceding event is completed, if it was successful.
- """
-
-
-class EventListenerMethod:
- """
- Triggered on an event deployment.
- """
-
- def __init__(
- self,
- trigger: Block,
- event_name: str,
- show_progress: bool = True,
- callback: Callable | None = None,
- trigger_after: int | None = None,
- trigger_only_on_success: bool = False,
- ):
- self.trigger = trigger
- self.event_name = event_name
- self.show_progress = show_progress
- self.callback = callback
- self.trigger_after = trigger_after
- self.trigger_only_on_success = trigger_only_on_success
-
- def __call__(
- self,
- fn: Callable | None,
- inputs: Component | list[Component] | set[Component] | None = None,
- outputs: Component | list[Component] | None = None,
- api_name: str | None = None,
- status_tracker: StatusTracker | None = None,
- scroll_to_output: bool = False,
- show_progress: bool | None = None,
- queue: bool | None = None,
- batch: bool = False,
- max_batch_size: int = 4,
- preprocess: bool = True,
- postprocess: bool = True,
- cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
- every: float | None = None,
- _js: str | None = None,
- ) -> Dependency:
- """
- Parameters:
- fn: the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
- inputs: List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
- outputs: List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
- api_name: Defining this parameter exposes the endpoint in the api docs
- scroll_to_output: If True, will scroll to output component on completion
- show_progress: If True, will show progress animation while pending
- queue: If True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
- batch: If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
- max_batch_size: Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
- preprocess: If False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
- postprocess: If False, will not run postprocessing of component data before returning 'fn' output to the browser.
-            cancels: A list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
- every: Run this event 'every' number of seconds while the client connection is open. Interpreted in seconds. Queue must be enabled.
- """
- if status_tracker:
- warnings.warn(
- "The 'status_tracker' parameter has been deprecated and has no effect."
- )
- if isinstance(self, Streamable):
- self.check_streamable()
-
- dep, dep_index = self.trigger.set_event_trigger(
- self.event_name,
- fn,
- inputs,
- outputs,
- preprocess=preprocess,
- postprocess=postprocess,
- scroll_to_output=scroll_to_output,
- show_progress=show_progress
- if show_progress is not None
- else self.show_progress,
- api_name=api_name,
- js=_js,
- queue=queue,
- batch=batch,
- max_batch_size=max_batch_size,
- every=every,
- trigger_after=self.trigger_after,
- trigger_only_on_success=self.trigger_only_on_success,
- )
- set_cancel_events(self.trigger, self.event_name, cancels)
- if self.callback:
- self.callback()
- return Dependency(self.trigger, dep, dep_index)
-
-
-@document("*change", inherit=True)
-class Changeable(EventListener):
- def __init__(self):
- self.change = EventListenerMethod(self, "change")
- """
- This listener is triggered when the component's value changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger).
- See `.input()` for a listener that is only triggered by user input.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*input", inherit=True)
-class Inputable(EventListener):
- def __init__(self):
- self.input = EventListenerMethod(self, "input")
- """
- This listener is triggered when the user changes the value of the component.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*click", inherit=True)
-class Clickable(EventListener):
- def __init__(self):
- self.click = EventListenerMethod(self, "click")
- """
- This listener is triggered when the component (e.g. a button) is clicked.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*submit", inherit=True)
-class Submittable(EventListener):
- def __init__(self):
- self.submit = EventListenerMethod(self, "submit")
- """
- This listener is triggered when the user presses the Enter key while the component (e.g. a textbox) is focused.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*edit", inherit=True)
-class Editable(EventListener):
- def __init__(self):
- self.edit = EventListenerMethod(self, "edit")
- """
- This listener is triggered when the user edits the component (e.g. image) using the
- built-in editor. This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*clear", inherit=True)
-class Clearable(EventListener):
- def __init__(self):
- self.clear = EventListenerMethod(self, "clear")
- """
- This listener is triggered when the user clears the component (e.g. image or audio)
- using the X button for the component. This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*play", "*pause", "*stop", inherit=True)
-class Playable(EventListener):
- def __init__(self):
- self.play = EventListenerMethod(self, "play")
- """
- This listener is triggered when the user plays the component (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
- self.pause = EventListenerMethod(self, "pause")
- """
- This listener is triggered when the user pauses the component (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
- self.stop = EventListenerMethod(self, "stop")
- """
- This listener is triggered when the user stops the component (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*stream", inherit=True)
-class Streamable(EventListener):
- def __init__(self):
- self.streaming: bool
- self.stream = EventListenerMethod(
- self,
- "stream",
- show_progress=False,
- callback=lambda: setattr(self, "streaming", True),
- )
- """
- This listener is triggered when the user streams the component (e.g. a live webcam
- component). This method can be used when this component is in a Gradio Blocks.
- """
-
- def check_streamable(self):
- pass
-
-
-@document("*blur", inherit=True)
-class Blurrable(EventListener):
- def __init__(self):
- self.blur = EventListenerMethod(self, "blur")
- """
-        This listener is triggered when the component is unfocused/blurred (e.g. when the user clicks outside of a textbox).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*upload", inherit=True)
-class Uploadable(EventListener):
- def __init__(self):
- self.upload = EventListenerMethod(self, "upload")
- """
- This listener is triggered when the user uploads a file into the component (e.g. when the user uploads a video into a video component).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*release", inherit=True)
-class Releaseable(EventListener):
- def __init__(self):
- self.release = EventListenerMethod(self, "release")
- """
- This listener is triggered when the user releases the mouse on this component (e.g. when the user releases the slider).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*select", inherit=True)
-class Selectable(EventListener):
- def __init__(self):
- self.selectable: bool = False
- self.select = EventListenerMethod(
- self, "select", callback=lambda: setattr(self, "selectable", True)
- )
- """
- This listener is triggered when the user selects from within the Component.
- This event has EventData of type gradio.SelectData that carries information, accessible through SelectData.index and SelectData.value.
- See EventData documentation on how to use this event data.
- """
-
-
-class SelectData(EventData):
- def __init__(self, target: Block | None, data: Any):
- super().__init__(target, data)
- self.index: int | tuple[int, int] = data["index"]
- """
- The index of the selected item. Is a tuple if the component is two dimensional or selection is a range.
- """
- self.value: Any = data["value"]
- """
- The value of the selected item.
- """
- self.selected: bool = data.get("selected", True)
- """
- True if the item was selected, False if deselected.
- """
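The mixin classes above are what give Gradio components their event methods (`.click()`, `.change()`, `.submit()`, and so on), each returning a `Dependency` whose `.then()`/`.success()` chain follow-up steps. A minimal usage sketch (the component layout and the `greet` function are illustrative, not part of the deleted file):

```python
import gradio as gr

def greet(name):
    # Plain function wired to the events below; one input, one output.
    return f"Hello, {name}!"

with gr.Blocks() as demo:
    name = gr.Textbox(label="Name")
    out = gr.Textbox(label="Greeting")
    btn = gr.Button("Greet")

    # Clickable/Submittable provide .click()/.submit(); both return a Dependency.
    click_event = btn.click(greet, inputs=name, outputs=out)
    name.submit(greet, inputs=name, outputs=out)

    # Dependency.then() runs after the click finishes, success or failure.
    click_event.then(lambda: gr.update(interactive=True), outputs=btn)

    # Changeable provides .change(); cancels= aborts the still-pending click event.
    name.change(lambda x: x, inputs=name, outputs=out, cancels=[click_event])

demo.queue().launch()
```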
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/debug.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/debug.py
deleted file mode 100644
index 7ed7e9297e01b87c4e999d19d48a4265b38b574f..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/debug.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import sys
-import typing as t
-from types import CodeType
-from types import TracebackType
-
-from .exceptions import TemplateSyntaxError
-from .utils import internal_code
-from .utils import missing
-
-if t.TYPE_CHECKING:
- from .runtime import Context
-
-
-def rewrite_traceback_stack(source: t.Optional[str] = None) -> BaseException:
- """Rewrite the current exception to replace any tracebacks from
- within compiled template code with tracebacks that look like they
- came from the template source.
-
- This must be called within an ``except`` block.
-
- :param source: For ``TemplateSyntaxError``, the original source if
- known.
- :return: The original exception with the rewritten traceback.
- """
- _, exc_value, tb = sys.exc_info()
- exc_value = t.cast(BaseException, exc_value)
- tb = t.cast(TracebackType, tb)
-
- if isinstance(exc_value, TemplateSyntaxError) and not exc_value.translated:
- exc_value.translated = True
- exc_value.source = source
- # Remove the old traceback, otherwise the frames from the
- # compiler still show up.
- exc_value.with_traceback(None)
- # Outside of runtime, so the frame isn't executing template
- # code, but it still needs to point at the template.
- tb = fake_traceback(
- exc_value, None, exc_value.filename or "", exc_value.lineno
- )
- else:
- # Skip the frame for the render function.
- tb = tb.tb_next
-
- stack = []
-
- # Build the stack of traceback object, replacing any in template
- # code with the source file and line information.
- while tb is not None:
- # Skip frames decorated with @internalcode. These are internal
- # calls that aren't useful in template debugging output.
- if tb.tb_frame.f_code in internal_code:
- tb = tb.tb_next
- continue
-
- template = tb.tb_frame.f_globals.get("__jinja_template__")
-
- if template is not None:
- lineno = template.get_corresponding_lineno(tb.tb_lineno)
- fake_tb = fake_traceback(exc_value, tb, template.filename, lineno)
- stack.append(fake_tb)
- else:
- stack.append(tb)
-
- tb = tb.tb_next
-
- tb_next = None
-
- # Assign tb_next in reverse to avoid circular references.
- for tb in reversed(stack):
- tb.tb_next = tb_next
- tb_next = tb
-
- return exc_value.with_traceback(tb_next)
-
-
-def fake_traceback( # type: ignore
- exc_value: BaseException, tb: t.Optional[TracebackType], filename: str, lineno: int
-) -> TracebackType:
- """Produce a new traceback object that looks like it came from the
- template source instead of the compiled code. The filename, line
- number, and location name will point to the template, and the local
- variables will be the current template context.
-
- :param exc_value: The original exception to be re-raised to create
- the new traceback.
- :param tb: The original traceback to get the local variables and
- code info from.
- :param filename: The template filename.
- :param lineno: The line number in the template source.
- """
- if tb is not None:
- # Replace the real locals with the context that would be
- # available at that point in the template.
- locals = get_template_locals(tb.tb_frame.f_locals)
- locals.pop("__jinja_exception__", None)
- else:
- locals = {}
-
- globals = {
- "__name__": filename,
- "__file__": filename,
- "__jinja_exception__": exc_value,
- }
- # Raise an exception at the correct line number.
- code: CodeType = compile(
- "\n" * (lineno - 1) + "raise __jinja_exception__", filename, "exec"
- )
-
- # Build a new code object that points to the template file and
- # replaces the location with a block name.
- location = "template"
-
- if tb is not None:
- function = tb.tb_frame.f_code.co_name
-
- if function == "root":
- location = "top-level template code"
- elif function.startswith("block_"):
- location = f"block {function[6:]!r}"
-
- if sys.version_info >= (3, 8):
- code = code.replace(co_name=location)
- else:
- code = CodeType(
- code.co_argcount,
- code.co_kwonlyargcount,
- code.co_nlocals,
- code.co_stacksize,
- code.co_flags,
- code.co_code,
- code.co_consts,
- code.co_names,
- code.co_varnames,
- code.co_filename,
- location,
- code.co_firstlineno,
- code.co_lnotab,
- code.co_freevars,
- code.co_cellvars,
- )
-
- # Execute the new code, which is guaranteed to raise, and return
- # the new traceback without this frame.
- try:
- exec(code, globals, locals)
- except BaseException:
- return sys.exc_info()[2].tb_next # type: ignore
-
-
-def get_template_locals(real_locals: t.Mapping[str, t.Any]) -> t.Dict[str, t.Any]:
- """Based on the runtime locals, get the context that would be
- available at that point in the template.
- """
- # Start with the current template context.
- ctx: "t.Optional[Context]" = real_locals.get("context")
-
- if ctx is not None:
- data: t.Dict[str, t.Any] = ctx.get_all().copy()
- else:
- data = {}
-
- # Might be in a derived context that only sets local variables
- # rather than pushing a context. Local variables follow the scheme
- # l_depth_name. Find the highest-depth local that has a value for
- # each name.
- local_overrides: t.Dict[str, t.Tuple[int, t.Any]] = {}
-
- for name, value in real_locals.items():
- if not name.startswith("l_") or value is missing:
- # Not a template variable, or no longer relevant.
- continue
-
- try:
- _, depth_str, name = name.split("_", 2)
- depth = int(depth_str)
- except ValueError:
- continue
-
- cur_depth = local_overrides.get(name, (-1,))[0]
-
- if cur_depth < depth:
- local_overrides[name] = (depth, value)
-
- # Modify the context with any derived context.
- for name, (_, value) in local_overrides.items():
- if value is missing:
- data.pop(name, None)
- else:
- data[name] = value
-
- return data
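The depth-resolution rule in `get_template_locals` is easiest to see with a hand-built locals mapping. A small illustrative sketch (the dictionary contents are invented; the function is the private helper defined above):

```python
from jinja2.debug import get_template_locals

# Compiled template code stores template variables as l_<depth>_<name>;
# for each name, the helper keeps the value from the deepest scope.
fake_locals = {
    "l_0_item": "outer value",
    "l_1_item": "inner value",   # deeper scope wins
    "l_0_user": "alice",
    "not_a_template_var": 42,    # ignored: no "l_" prefix
}

print(get_template_locals(fake_locals))
# -> {'item': 'inner value', 'user': 'alice'}
```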
diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_faceenhancer.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_faceenhancer.py
deleted file mode 100644
index 44df0eece0b219caef85e1c2a2c87f606332e273..0000000000000000000000000000000000000000
--- a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_faceenhancer.py
+++ /dev/null
@@ -1,687 +0,0 @@
-'''
-@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021)
-@author: yangxy (yangtao9009@gmail.com)
-# 2021-06-03, modified by Kai
-'''
-import sys
-op_path = 'models'
-if op_path not in sys.path:
- sys.path.insert(0, op_path)
-from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-import math
-import random
-import numpy as np
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-isconcat = True
-sss = 2 if isconcat else 1
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(input, self.weight * self.scale, bias=self.bias * self.lr_mul)
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
-
- if noise is not None:
- #print(image.shape, noise.shape)
- if isconcat: return torch.cat((image, self.weight * noise), dim=1) # concat
- return image + self.weight * noise
-
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
- #return torch.cat((image, self.weight * noise), dim=1)
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- #self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- #self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel*sss)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
- self.n_mlp = n_mlp
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4]*sss, style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
-
- in_channel = self.channels[4]
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel*sss,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel*sss, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel*sss, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- '''
- noise = [None] * (2 * (self.log_size - 2) + 1)
- '''
- noise = []
- batch = styles[0].shape[0]
- for i in range(self.n_mlp + 1):
- size = 2 ** (i+2)
- noise.append(torch.randn(batch, self.channels[size], size, size, device=styles[0].device))
- #print(self.channels[size], size)
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- noise_i = 1
-
- outs = []
- for conv1, conv2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], self.to_rgbs
- ):
- #print(out.shape, noise[(noise_i)//2].shape, noise[(noise_i + 1)//2].shape)
- out = conv1(out, latent[:, i], noise=noise[(noise_i + 1)//2]) ### 1 for 2
- out = conv2(out, latent[:, i + 1], noise=noise[(noise_i + 2)//2]) ### 1 for 2
- skip = to_rgb(out, latent[:, i + 2], skip)
- #outs.append(skip.clone())
-
- i += 2
- noise_i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
-
- else:
- return image, None
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-# -----------------------------
-# Main model
-# -----------------------------
-class FullGenerator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.log_size = int(math.log(size, 2))
- self.generator = Generator(size, style_dim, n_mlp, channel_multiplier=channel_multiplier, blur_kernel=blur_kernel, lr_mlp=lr_mlp)
-
- conv = [ConvLayer(3, channels[size], 1)]
- self.ecd0 = nn.Sequential(*conv)
- in_channel = channels[size]
-
- self.names = ['ecd%d'%i for i in range(self.log_size-1)]
- for i in range(self.log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
- #conv = [ResBlock(in_channel, out_channel, blur_kernel)]
- conv = [ConvLayer(in_channel, out_channel, 3, downsample=True)]
- setattr(self, self.names[self.log_size-i+1], nn.Sequential(*conv))
- in_channel = out_channel
- self.final_linear = nn.Sequential(EqualLinear(channels[4] * 4 * 4, style_dim, activation='fused_lrelu'))
-
- def forward(self,
- inputs,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- ):
- noise = []
- for i in range(self.log_size-1):
- ecd = getattr(self, self.names[i])
- inputs = ecd(inputs)
- noise.append(inputs)
- #print(inputs.shape)
- inputs = inputs.view(inputs.shape[0], -1)
- outs = self.final_linear(inputs)
- #print(outs.shape)
- outs = self.generator([outs], return_latents, inject_index, truncation, truncation_latent, input_is_latent, noise=noise[::-1])
- return outs
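The heart of `ModulatedConv2d` above is the StyleGAN2-style modulate/demodulate step: a per-sample style vector scales the shared convolution weight along its input channels, each output filter is renormalized to unit expected magnitude, and the batch is folded into the channel dimension for a grouped convolution. A minimal standalone sketch of just that weight computation (shapes are illustrative, and it avoids the custom `op` CUDA kernels used by the full model):

```python
import torch
import torch.nn.functional as F

batch, in_ch, out_ch, k = 2, 8, 16, 3
scale = 1.0 / (in_ch * k * k) ** 0.5          # same fan-in scaling as the module

weight = torch.randn(1, out_ch, in_ch, k, k)  # shared base weight
style = torch.randn(batch, 1, in_ch, 1, 1)    # per-sample modulation (EqualLinear output)

# Modulate: scale the weight's input channels per sample.
w = scale * weight * style                    # -> (batch, out_ch, in_ch, k, k)

# Demodulate: renormalize each output filter to unit norm.
demod = torch.rsqrt(w.pow(2).sum([2, 3, 4]) + 1e-8)
w = w * demod.view(batch, out_ch, 1, 1, 1)

# Grouped-conv trick: fold the batch into channels so each sample gets its own weights.
x = torch.randn(batch, in_ch, 32, 32)
out = F.conv2d(
    x.view(1, batch * in_ch, 32, 32),
    w.view(batch * out_ch, in_ch, k, k),
    padding=k // 2,
    groups=batch,
)
print(out.view(batch, out_ch, 32, 32).shape)  # torch.Size([2, 16, 32, 32])
```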
diff --git a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/util/shard.py b/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/util/shard.py
deleted file mode 100644
index 5b29db0f1eac5f5127d7bda3a6fc06b6fc1b6d02..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/util/shard.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import argparse, json, math, os
-from safetensors import safe_open
-from safetensors.torch import save_file
-
-parser = argparse.ArgumentParser(description = "Split .safetensors file into shards")
-parser.add_argument("input_file", type = str, help = "Path to input file")
-parser.add_argument("shard_size", type = int, help = "Shard size in megabytes")
-args = parser.parse_args()
-
-input_file = args.input_file
-input_base, _ = os.path.splitext(input_file)
-shard_size = args.shard_size * 1024**2
-
-# Create tensor map
-
-def _tsize(st, key):
-
- tslice = st.get_slice(key)
- shape = tslice.get_shape()
- numel = 1
- for x in shape: numel *= x
- dtype = tslice.get_dtype()
- del tslice
- if dtype == "I32": return numel * 4
- elif dtype == "I16": return numel * 2
- elif dtype == "F16": return numel * 2
- elif dtype == "F32": return numel * 4
- else: raise ValueError("Unexpected datatype: " + key)
-
-num_files = 0
-current_size = shard_size + 1
-total_size = 0
-tensor_map = []
-
-print(f" -- Scanning tensors in {input_file}")
-
-with safe_open(input_file, framework = "pt", device = "cpu") as f:
-
- for key in f.keys():
-
- tensor_size = _tsize(f, key)
- total_size += tensor_size
-
- if current_size + tensor_size > shard_size:
-
- num_files += 1
- current_size = 0
- current_list = []
- tensor_map.append(current_list)
-
- current_size += tensor_size
- current_list.append(key)
-
-# Split into output files
-
-weight_map = {}
-
-for file_index, keys in enumerate(tensor_map):
-
- shard = {}
- shard_filename = f"{input_base}-{file_index + 1:05}-of-{num_files:05}.safetensors"
-
- with safe_open(input_file, framework = "pt", device = "cpu") as f:
- for key in keys:
- print(f" -- Reading: {key}")
- shard[key] = f.get_tensor(key)
- weight_map[key] = shard_filename
-
- print(f" -- Writing: {shard_filename}")
- save_file(shard, shard_filename)
-
-# Compile index
-
-index = { "metadata": { "total_size": total_size }, "weight_map": weight_map }
-index_filename = f"{input_file}.index.json"
-
-print(f" -- Writing: {index_filename}")
-
-with open(index_filename, 'w') as f:
- json.dump(index, f, indent = 2)
-
-# Done
-
-print(f" -- Done")
\ No newline at end of file
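The script writes a `<input_file>.index.json` alongside the shards, so reading them back is symmetric. A small sketch of how a consumer might reassemble the tensors from that index (the index file name below is just an example of what the script would produce for `model.safetensors`):

```python
import json
from collections import defaultdict

from safetensors import safe_open

# Produced by the script above for an input file "model.safetensors".
index_path = "model.safetensors.index.json"

with open(index_path) as f:
    index = json.load(f)

print("total size (bytes):", index["metadata"]["total_size"])

# weight_map maps each tensor key to the shard file that stores it;
# group keys by shard so every shard is opened only once.
by_shard = defaultdict(list)
for key, shard_file in index["weight_map"].items():
    by_shard[shard_file].append(key)

tensors = {}
for shard_file, keys in by_shard.items():
    with safe_open(shard_file, framework="pt", device="cpu") as shard:
        for key in keys:
            tensors[key] = shard.get_tensor(key)

print(f"loaded {len(tensors)} tensors from {len(by_shard)} shards")
```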
diff --git a/spaces/lhkhiem28/A-segmentation-system/source/api.py b/spaces/lhkhiem28/A-segmentation-system/source/api.py
deleted file mode 100644
index f7af39ba17fd915090c08b35381dbde27db6aaed..0000000000000000000000000000000000000000
--- a/spaces/lhkhiem28/A-segmentation-system/source/api.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import os, sys
-from libs import *
-
-class Seg():
- def __init__(self,
- ckp_dir,
- ):
- self.transform = A.Compose(
- [
- A.Normalize(), AT.ToTensorV2(),
- ]
- )
-
- self.model = torch.load(
- ckp_dir,
- map_location = "cpu",
- ).eval()
- self.assigned_colors = {
- "R1" :np.random.rand(3),
- "R2" :np.random.rand(3),
- "R3" :np.random.rand(3),
- "R4" :np.random.rand(3),
- "R5" :np.random.rand(3),
- "R6" :np.random.rand(3),
- "R7" :np.random.rand(3),
- "R8" :np.random.rand(3),
- "R9" :np.random.rand(3),
- "R10":np.random.rand(3),
- "L1" :np.random.rand(3),
- "L2" :np.random.rand(3),
- "L3" :np.random.rand(3),
- "L4" :np.random.rand(3),
- "L5" :np.random.rand(3),
- "L6" :np.random.rand(3),
- "L7" :np.random.rand(3),
- "L8" :np.random.rand(3),
- "L9" :np.random.rand(3),
- "L10":np.random.rand(3),
- }
-
- def seg_predict(self,
- image,
- ):
- image = np.asarray(image, dtype = np.uint8)
- image = A.Resize(
- height = 512, width = 512,
- )(image = image)["image"]
- pred = self.model(self.transform(image = image)["image"].unsqueeze(0)) > 0.5
- pred = pred.int().squeeze(0).permute(1, 2, 0).numpy()
-
- from detectron2.utils.visualizer import Visualizer, ColorMode
- visualizer = Visualizer(
- image, instance_mode = ColorMode.SEGMENTATION,
- )
- output = visualizer.overlay_instances(
- masks = pred.transpose(2, 0, 1),
- labels = list(self.assigned_colors.keys()), assigned_colors = list(self.assigned_colors.values()),
- ).get_image()
-
- return output
\ No newline at end of file
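As a usage sketch, the `Seg` class above can be driven with a few lines of client code. The checkpoint and image paths below are placeholders, the heavy imports (albumentations, torch, detectron2) are pulled in through the space's `libs` module, and this assumes `source/api.py` is importable as `api`:

```python
from PIL import Image

from api import Seg  # the class defined above (source/api.py)

# Placeholder paths -- substitute your own checkpoint and input image.
segmenter = Seg(ckp_dir="ckps/model.pt")
image = Image.open("samples/input.jpg").convert("RGB")

# Returns the 512x512 input with the 20 labelled masks (R1-R10, L1-L10) overlaid.
overlay = segmenter.seg_predict(image)
Image.fromarray(overlay).save("overlay.png")
```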
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Magic DVD Ripper 5.5.1 Serial Download Pc !FULL!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Magic DVD Ripper 5.5.1 Serial Download Pc !FULL!.md
deleted file mode 100644
index 91e470fda357735d87dfc86fd005f0d77494cd5d..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Magic DVD Ripper 5.5.1 Serial Download Pc !FULL!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
- """,
- unsafe_allow_html=True,
-)
-st.title("Stable Fashion Huggingface Spaces")
-file_name = st.file_uploader("Upload a clear full length picture of yourself, preferably in a less noisy background")
-stable_fashion_args = StableFashionCLIArgs()
-stable_fashion_args.image = file_name
-body_part = st.radio("Would you like to try clothes on your upper body (such as shirts, kurtas etc) or lower (Jeans, Pants etc)? ", ('Upper', 'Lower'))
-stable_fashion_args.part = body_part
-resolution = st.radio("Which resolution would you like to get the resulting picture in? (Keep in mind, higher the resolution, higher the queue times)", (128, 256, 512), index=2)
-stable_fashion_args.resolution = resolution
-rembg_status = st.radio("Would you like to remove background in your image before putting new clothes on you? (Sometimes it results in better images)", ("Yes", "No"), index=0)
-stable_fashion_args.rembg = (rembg_status == "Yes")
-guidance_scale = st.slider("Select a guidance scale. 7.5 gives the best results.", 1.0, 15.0, value=7.5)
-stable_fashion_args.guidance_scale = guidance_scale
-prompt = st.text_input('Write the description of cloth you want to try', 'a bright yellow t shirt')
-stable_fashion_args.prompt = prompt
-
-num_steps = st.slider("No. of inference steps for the diffusion process", 5, 50, value=25)
-stable_fashion_args.num_steps = num_steps
-
-
-if file_name is not None:
- result_image, mask_PIL = process_image(stable_fashion_args, inpainting_pipeline, net)
- print(np.unique(np.asarray(mask_PIL)))
- st.image(result_image, caption='Result')
- st.image(mask_PIL, caption='Mask')
-else:
- stock_image = Image.open('assets/abhishek_yellow.jpg')
- st.image(stock_image, caption='Result')
-
-
-
diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/SECURITY.md b/spaces/manhkhanhUIT/Image_Restoration_Colorization/SECURITY.md
deleted file mode 100644
index f7b89984f0fb5dd204028bc525e19eefc0859f4f..0000000000000000000000000000000000000000
--- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/SECURITY.md
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-## Security
-
-Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
-
-If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc751383(v=technet.10)), please report it to us as described below.
-
-## Reporting Security Issues
-
-**Please do not report security vulnerabilities through public GitHub issues.**
-
-Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
-
-If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
-
-You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
-
-Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
-
- * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
- * Full paths of source file(s) related to the manifestation of the issue
- * The location of the affected source code (tag/branch/commit or direct URL)
- * Any special configuration required to reproduce the issue
- * Step-by-step instructions to reproduce the issue
- * Proof-of-concept or exploit code (if possible)
- * Impact of the issue, including how an attacker might exploit the issue
-
-This information will help us triage your report more quickly.
-
-If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs.
-
-## Preferred Languages
-
-We prefer all communications to be in English.
-
-## Policy
-
-Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd).
-
-
\ No newline at end of file
diff --git a/spaces/matthoffner/open-codetree/graphql/generated/graphql.d.ts b/spaces/matthoffner/open-codetree/graphql/generated/graphql.d.ts
deleted file mode 100644
index ceb2070f23ea6a263abf61e1408d86085b5a9376..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/open-codetree/graphql/generated/graphql.d.ts
+++ /dev/null
@@ -1,563 +0,0 @@
-import { gql } from '@apollo/client';
-export type Maybe<T> = T | null;
-export type InputMaybe<T> = Maybe<T>;
-export type Exact<T extends { [key: string]: unknown }> = { [K in keyof T]: T[K] };
-export type MakeOptional<T, K extends keyof T> = Omit<T, K> & { [SubKey in K]?: Maybe<T[SubKey]> };
-export type MakeMaybe<T, K extends keyof T> = Omit<T, K> & { [SubKey in K]: Maybe<T[SubKey]> };
-/** All built-in and custom scalars, mapped to their actual values */
-export type Scalars = {
- ID: string;
- String: string;
- Boolean: boolean;
- Int: number;
- Float: number;
- DateTime: any;
- JSON: any;
- Upload: any;
-};
-
-export type Account = {
- __typename?: 'Account';
- createdAt: Scalars['DateTime'];
- email: Scalars['String'];
- id: Scalars['String'];
- token: Scalars['String'];
- updatedAt: Scalars['DateTime'];
-};
-
-export type AuthResponse = Response & {
- __typename?: 'AuthResponse';
- data?: Maybe;
- exp?: Maybe;
- message?: Maybe;
- status: Scalars['Boolean'];
- token?: Maybe