diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bangla Hasir Natok Script Pdf Free 120 Get Ready for Some Serious Fun with These Comedy Scripts.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bangla Hasir Natok Script Pdf Free 120 Get Ready for Some Serious Fun with These Comedy Scripts.md
deleted file mode 100644
index 230f3b9757b80beb3ff37b493c82ed01b696bf59..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bangla Hasir Natok Script Pdf Free 120 Get Ready for Some Serious Fun with These Comedy Scripts.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
Bangla Hasir Natok Script Pdf Free 120: A Collection of Hilarious Plays for All Ages
-
If you are looking for some fun and laughter, you might want to check out Bangla hasir natok, or Bengali comedy plays. These are short plays written and performed in the Bengali language, often featuring witty dialogue, humorous situations, and social satire. They are a popular form of entertainment in Bangladesh and West Bengal, where they are staged in theatres and at festivals and broadcast on TV channels and online platforms.
-
In this article, we will tell you what Bangla hasir natok or Bengali comedy plays are and why they are popular. We will also show you how to download Bangla hasir natok script pdf free 120 from various sources. This is a collection of 120 hilarious plays for all ages that you can read and enjoy anytime, anywhere. We will also discuss some of the features and benefits of Bangla hasir natok script pdf free 120 and how you can use it for entertainment, education, and inspiration.
What are Bangla hasir natok or Bengali comedy plays and why are they popular?
-
Bangla hasir natok or Bengali comedy plays are a type of drama that originated in Bengal in the late 19th century. They were influenced by the British colonial rule, the Bengali Renaissance, and the folk theatre traditions of Bengal. They often deal with social issues, political satire, family conflicts, romantic comedy, and absurd humor.
-
Some of the famous writers of Bangla hasir natok or Bengali comedy plays include Rabindranath Tagore, Sukumar Ray, Manoj Mitra, Parimal Tribedi, Mamata Mitra, Amalendu Chatterjee, Rupak Saha, etc. Some of their popular plays include Chotushkone (Four Corners), Jhalapala (Water Splash), Bharate Chai (I Want a Bride), Obak (Surprised), Rater Rajanigandha (Night Jasmine), etc.
-
Bangla hasir natok or Bengali comedy plays are popular among different audiences because they are entertaining, engaging, and enlightening. They make people laugh and think at the same time. They reflect the culture, values, and problems of Bengal and its people. They also showcase the creativity, talent, and diversity of Bengali writers and actors.
-
-
How to download Bangla hasir natok script pdf free 120 from various sources?
-
If you want to read Bangla hasir natok or Bengali comedy plays on your computer or mobile device, you can download Bangla hasir natok script pdf free 120 from various sources. This is a collection of 120 hilarious plays for all ages that you can access without any cost or registration.
-
Here is a list of websites that offer Bangla hasir natok script pdf free 120 for download:
-
-
Scribd: This is a digital library that hosts millions of books, documents, audiobooks, podcasts, etc. You can find Bangla hasir natok script pdf free 120 by searching for it on the website or clicking on this link. You can download it as a PDF file by clicking on the Download button on the top right corner.
-
Wixsite: This is a website builder that allows users to create their own websites for free. You can find Bangla hasir natok script pdf free 120 by visiting this link. You can download it as a PDF file by clicking on the Download button on the bottom right corner.
-
Docker Hub: This is a platform that hosts docker images and containers for various applications. You can find Bangla hasir natok script pdf free 120 by visiting this link. You can download it as a PDF file by clicking on the Download button on the top right corner.
-
-
To download Bangla hasir natok script pdf free 120 from each website, you need to follow these steps:
-
-
Click on the link that takes you to the website that offers Bangla hasir natok script pdf free 120 for download.
-
On the website, look for the file name or title that says "Bangla Hasir Natok Script Pdf Free Downloadinstmank" or something similar.
-
Click on the file name or title to open it in a new tab or window.
-
On the new tab or window, look for the Download button that is usually located on the top right corner or bottom right corner.
-
Click on the Download button to start downloading the file to your computer or mobile device.
-
Wait for the download to finish and then open the file with a PDF reader application.
-
-
You can compare the quality and quantity of Bangla hasir natok script pdf free 120 available on each website by looking at these factors:
-
-
The size of the file: The larger the file size, the more pages and content it contains.
-
The number of views: The higher the number of views, the more popular and reliable it is.
-
The date of upload: The newer the date of upload, the more updated and relevant it is.
-
The ratings and reviews: The higher the ratings and reviews, the more positive feedback it received from other users.
-
-
What are some of the features and benefits of Bangla hasir natok script pdf free 120?
-
Bangla hasir natok script pdf free 120 is a collection of 120 hilarious plays for all ages that you can read and enjoy anytime, anywhere. It has some features and benefits that make it a valuable and enjoyable resource for anyone who loves Bangla hasir natok or Bengali comedy plays.
-
Some of the features and benefits of Bangla hasir natok script pdf free 120 are:
-
-
It is free and easy to download from various sources. You don't need to pay any money or register any account to access it.
-
It is in PDF format, which means you can read it on any device that supports PDF files, such as computers, laptops, tablets, smartphones, etc.
-
It is in Bengali language, which means you can read it in your native language and appreciate the nuances and expressions of the writers and actors.
-
It contains 120 plays that cover different genres, themes, and lengths. You can find plays that suit your mood, preference, and time availability.
-
It is a collection of hilarious plays that will make you laugh out loud and forget your worries. You can also share it with your friends and family and have a good time together.
-
It is a source of entertainment, education, and inspiration. You can learn about the culture, values, and problems of Bengal and its people. You can also get inspired by the creativity, talent, and diversity of Bengali writers and actors.
-
-
To give you an idea of what Bangla hasir natok script pdf free 120 contains, here is a table that shows the titles, genres, themes, and lengths of some of the plays included in it:
| Title | Genre | Theme | Length |
| --- | --- | --- | --- |
| Chotushkone (Four Corners) | Mystery | A murder mystery involving four suspects who are trapped in a room. | 40 minutes |
| Jhalapala (Water Splash) | Comedy | A comedy of errors involving a water tank, a plumber, a landlord, and a tenant. | 35 minutes |
| Bharate Chai (I Want a Bride) | Romance | A romantic comedy involving a young man who wants to marry a girl he met online. | 45 minutes |
| Obak (Surprised) | Satire | A satire on the political and social situation of Bangladesh. | 30 minutes |
| Rater Rajanigandha (Night Jasmine) | Drama | A drama about a married couple who face a crisis in their relationship. | 50 minutes |
Conclusion
-
In conclusion, Bangla hasir natok or Bengali comedy plays are a popular form of entertainment in Bangladesh and West Bengal. They are short plays that are written and performed in Bengali language, often with witty dialogues, humorous situations, and social satire. They reflect the culture, values, and problems of Bengal and its people. They also showcase the creativity, talent, and diversity of Bengali writers and actors.
-
If you want to read Bangla hasir natok or Bengali comedy plays on your computer or mobile device, you can download Bangla hasir natok script pdf free 120 from various sources. This is a collection of 120 hilarious plays for all ages that you can read and enjoy anytime, anywhere. It has some features and benefits that make it a valuable and enjoyable resource for anyone who loves Bangla hasir natok or Bengali comedy plays.
-
We hope you enjoyed this article and learned something new about Bangla hasir natok or Bengali comedy plays. We also hope you will download Bangla hasir natok script pdf free 120 and read it for yourself. You will surely have a lot of fun and laughter with it. You can also share it with your friends and family and have a good time together.
-
Thank you for reading this article. If you have any questions or comments, please feel free to leave them below. We would love to hear from you.
-
Frequently Asked Questions
-
-
What is the difference between Bangla hasir natok and Bangla sruti natok? Bangla hasir natok are Bengali comedy plays that are staged in theatres, at festivals, on TV channels, or on online platforms. Bangla sruti natok are Bengali audio plays that are broadcast on radio stations or online platforms.
-
Who are some of the famous actors of Bangla hasir natok or Bengali comedy plays? Some of the famous actors of Bangla hasir natok or Bengali comedy plays include Mosharraf Karim, Chanchal Chowdhury, Zahid Hasan, Nusrat Imrose Tisha, Mir Sabbir, etc.
-
-
Where can I watch Bangla hasir natok or Bengali comedy plays online? You can watch Bangla hasir natok or Bengali comedy plays online on various platforms such as YouTube, Facebook, BongoBD, Bioscope, etc.
-
How can I write my own Bangla hasir natok or Bengali comedy play? You can write your own Bangla hasir natok or Bengali comedy play by following these steps:
- Choose a genre, theme, and title for your play.
- Create a plot outline with a beginning, middle, and end.
- Develop your characters and their personalities, motivations, and relationships.
- Write the dialogues and actions for each scene.
- Use humor, irony, sarcasm, and exaggeration to make your play funny and engaging.
- Edit and revise your play until you are satisfied with it.
-
What are some of the benefits of reading and watching Bangla hasir natok or Bengali comedy plays? Some of the benefits of reading and watching Bangla hasir natok or Bengali comedy plays are:
- They can improve your mood and reduce stress.
- They can enhance your language and communication skills.
- They can increase your knowledge and awareness of social and cultural issues.
- They can stimulate your creativity and imagination.
- They can inspire you to express yourself and have fun.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CRACK Adobe Acrobat XI Pro 11.0.22 Multilingual Crack [SadeemPC].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CRACK Adobe Acrobat XI Pro 11.0.22 Multilingual Crack [SadeemPC].md
deleted file mode 100644
index e172bb7319ad0965914402305a30e16521dfb78b..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CRACK Adobe Acrobat XI Pro 11.0.22 Multilingual Crack [SadeemPC].md
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
How to Install and Update Adobe Acrobat XI Pro 11.0.22 Multilingual [SadeemPC]
-
-
Adobe Acrobat XI Pro is a powerful and versatile software that allows you to create, edit, convert, sign, and share PDF files. It also lets you fill, save, and send forms electronically. With Adobe Acrobat XI Pro, you can work with PDFs anywhere, anytime, and on any device.
-
-
If you have purchased Adobe Acrobat XI Pro from a disc or a download link, you may need to install and update it to the latest version (11.0.22) to enjoy its full features and security patches. In this article, we will show you how to do that step by step.
-
How to Install Adobe Acrobat XI Pro 11.0.22 Multilingual [SadeemPC]
-
To install Adobe Acrobat XI Pro, follow these steps once you have downloaded the installer file:
Double-click the downloaded file (AcrobatPro_11_Web_WWMUI.exe for Windows or AcrobatPro_11_Web_WWMUI.dmg for Mac) to start the installation process.
-
Follow the on-screen instructions to complete the installation. You may need to enter your serial number and sign in with your Adobe ID.
-
When the installation is finished, launch Adobe Acrobat XI Pro from your desktop or applications folder.
-
-
-
How to Update Adobe Acrobat XI Pro 11.0.22 Multilingual [SadeemPC]
-
-
To keep your Adobe Acrobat XI Pro up to date and secure, you should check for and install updates regularly. You can do this manually or automatically.
-
-
To check for updates manually, follow these steps:
-
-
-
Open Adobe Acrobat XI Pro and go to Help > Check for Updates.
-
If there are any available updates, click Download.
-
When the download is complete, click Install.
-
Follow the on-screen instructions to complete the update process. You may need to restart your computer.
-
-
-
To check for updates automatically, follow these steps:
-
-
-
Open Adobe Acrobat XI Pro and go to Edit > Preferences (Windows) or Acrobat > Preferences (Mac).
-
Select Updater from the left pane.
-
Choose one of the following options: Automatically install updates (recommended), Automatically download updates but let me choose when to install them, or Do not download or install updates automatically.
-
Click OK to save your settings.
-
-
-
Congratulations! You have successfully installed and updated Adobe Acrobat XI Pro 11.0.22 Multilingual [SadeemPC]. Now you can enjoy working with PDFs like a pro!
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe.acrobat.pro.x.v10.0.multilingual.incl.keymaker-core 121.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe.acrobat.pro.x.v10.0.multilingual.incl.keymaker-core 121.md
deleted file mode 100644
index 29cfbc863f7588158053a242aa3c0f978fc97ce9..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe.acrobat.pro.x.v10.0.multilingual.incl.keymaker-core 121.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Allavsoft Video Converter 3.17.9.7206 with license key: Allavsoft Video Converter is an all-in-one video converter, video downloader, and video resizer (it can convert DVDs to AVI, MOV, WMV, MP4, 3GP, MP3, and more) and movie converter.
-It can convert all common video formats, including MPG, MPEG, DivX, Xvid, AVI, ASF, WMV, DV, F4V, RM, RMVB, MOV, 3GPP, WebM, MP3, and MP4.
-It also contains two tools that can enhance your video with effects such as quantization, pitch, fades, hue, brightness, sharpening, and many more.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Contpaq 2005 Gratis.md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Contpaq 2005 Gratis.md
deleted file mode 100644
index 6a5ce07a43c88585dc1b9d8a47deb4c11b32ea99..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Contpaq 2005 Gratis.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
How to download Contpaq 2005 for free?
-
-
Contpaq 2005 is accounting software that lets you manage your bookkeeping easily and efficiently. With Contpaq 2005 you can record and review your accounting entries, generate reports and statements, and integrate your accounting with other systems such as banking, payroll, and invoicing. In addition, Contpaq 2005 is compatible with Windows 98, Me, 2000, and XP, and supports both 32-bit and 64-bit architectures.
-
-
If you want to download Contpaq 2005 for free, you have several options available. One of them is to visit the official website of Computación en Acción, the company that develops Contpaq 2005. There you can find the Contpaq 2005 installation file, as well as manuals, tutorials, and courses to learn how to use the software. However, to use Contpaq 2005 you will need a valid license, which you can obtain from that same website or from an authorized distributor.
Another option for downloading Contpaq 2005 for free is to turn to a third-party source, such as Google Drive, Trello, or Netlify. These sites offer the Contpaq 2005 installation file along with a program called Xforce Keygen 64 Bit, which generates serial numbers and activation codes for several Autodesk products, including Contpaq 2005. This way, you can activate Contpaq 2005 without buying a license. However, you should be careful when downloading Contpaq 2005 and Xforce Keygen 64 Bit from these sites, since some files may contain viruses or malware that can damage your device or compromise your data.
-
-
In this article we will show you how to download Contpaq 2005 for free from a third-party source and how to activate it with Xforce Keygen 64 Bit. We will also give you some tips and tricks to optimize your experience with Contpaq 2005 and avoid potential problems. Follow these steps to download Contpaq 2005 for free:
-
-
-
Visit a trustworthy website that offers Contpaq 2005 and Xforce Keygen 64 Bit. For example, you can visit Google Drive, Trello, or Netlify. Make sure to scan the website with antivirus software before proceeding.
-
Click the download link or button to download Contpaq 2005 and Xforce Keygen 64 Bit. The file size may vary depending on the website and the version of Contpaq 2005 and Xforce Keygen 64 Bit.
-
Save the file to a folder on your device. You may need software such as WinRAR or 7-Zip to extract the file if it is compressed.
-
Scan the file with antivirus software before opening it. If the file is infected or corrupted, delete it immediately and try another website.
-
Run Xforce Keygen 64 Bit as administrator. You may need to temporarily disable your firewall or antivirus to do this.
-
Select Contpaq 2005 from the list of products in the Xforce Keygen interface.
-
Click the Generate button to generate a serial number and an activation code for Contpaq 2005.
-
Copy the serial number and the activation code to a text file or to the clipboard.
-
Install Contpaq 2005 on your device. You can download the installation file from the official Computación en Acción website or from a third-party source.
-
When prompted, enter the serial number and the activation code that you generated with Xforce Keygen.
-
Follow the on-screen instructions to complete the installation and activation process.
-
Enjoy using Contpaq 2005 on your device.
-
-
-
Tips and tricks for using Contpaq 2005
-
-
Here are some tips and tricks for using Contpaq 2005:
-
-
-
Make sure you have enough space on your device to install and run Contpaq 2005. The software requires at least 750 MB of free disk space and 512 MB of RAM.
-
Make sure you have a stable internet connection when you download and install Contpaq 2005 and Xforce Keygen. A slow or interrupted connection can cause errors or file corruption.
-
Make sure you back up your data before using Xforce Keygen. Although Xforce Keygen is generally considered safe and reliable, there is always a risk
-
How to use Contpaq 2005?
-
-
Contpaq 2005 is accounting software that offers a range of functions and tools to manage your bookkeeping easily and efficiently. With Contpaq 2005 you can:
-
-
-
Record and review your accounting entries, such as income, expenses, accounts receivable, and accounts payable.
-
Generate accounting reports and statements, such as balance sheets, income statements, and cash flow statements.
-
Integrate your accounting with other systems, such as banking, payroll, and invoicing, to automate processes and avoid errors.
-
Comply with tax rules and obligations, such as calculating and filing taxes and issuing digital tax receipts.
-
Customize your accounting to your needs and preferences, including the chart of accounts, journal entry types, cost centers, and more.
-
-
-
To use Contpaq 2005, follow these steps:
-
-
-
-
Launch Contpaq 2005 on your device. You can use the desktop icon, the Start menu, or the taskbar to launch Contpaq 2005.
-
Create or open a company in Contpaq 2005. You can use the File menu, the toolbar, or the command line to create or open a company in Contpaq 2005.
-
Configure your company in Contpaq 2005. You can use the Configuration menu, the toolbar, or the command line to configure your company in Contpaq 2005. You must enter your company's general details, such as its name, RFC (tax ID), and registered address. You must also set up your company's accounting parameters, such as the chart of accounts, journal entry types, and cost centers.
-
Record and review your accounting entries in Contpaq 2005. You can use the Movements menu, the toolbar, or the command line to record and review your accounting entries in Contpaq 2005. You must enter the details of each entry, such as the entry type, date, description, affected accounts, and amounts. You can also look up your entries by different criteria, such as period, entry type, or cost center.
-
Generate accounting reports and statements in Contpaq 2005. You can use the Reports menu
-
What advantages does Contpaq 2005 have?
-
-
Contpaq 2005 is accounting software with several advantages that make it stand out among similar programs. Some of the advantages of Contpaq 2005 are:
-
-
-
It is easy to use and learn. Contpaq 2005 has a friendly, simple graphical interface that gives you access to all of its functions and tools in just a few clicks. In addition, Contpaq 2005 comes with manuals, tutorials, and courses that teach you how to use the software step by step.
-
It is flexible and adaptable. Contpaq 2005 lets you customize your accounting to your needs and preferences, including the chart of accounts, journal entry types, and cost centers. It also lets you integrate your accounting with other systems, such as banking, payroll, and invoicing, to automate processes and avoid errors.
-
It is secure and reliable. Contpaq 2005 offers a high level of security and reliability in handling your accounting information. Contpaq 2005 includes a data backup and restore system that lets you recover your information in case of loss or damage. It also includes an audit and control system that lets you verify the integrity and consistency of your data.
-
It is compatible and kept up to date. Contpaq 2005 is compatible with Windows 98, Me, 2000, and XP, and supports both 32-bit and 64-bit architectures. In addition, Contpaq 2005 is updated regularly to keep pace with current tax rules and obligations, such as calculating and filing taxes and issuing digital tax receipts.
-
-
-
What disadvantages does Contpaq 2005 have?
-
-
Contpaq 2005 has few disadvantages compared with its advantages. However, some of the disadvantages of Contpaq 2005 are:
-
-
-
It is expensive and limited. Contpaq 2005 requires a valid license, which you can obtain from the official Computación en Acción website or from an authorized distributor. However, the license has a high cost and a limited duration, which can represent a considerable investment for some users.
-
It can be vulnerable and risky. Contpaq 2005 can be vulnerable and risky if it is used improperly or illegally. For example, downloading Contpaq 2005 from a third-party source or activating it with Xforce Keygen 64 Bit can expose your device or your data to viruses or malware that can damage or compromise them. You can also commit offenses or incur penalties for violating copyright or tax laws.
-
-
What do users think of Contpaq 2005?
-
-
Contpaq 2005 has received many positive and negative opinions from users who have used it or still use it. Some of the opinions of Contpaq 2005 users are:
-
-
-
Positive opinions: Users who have had a good experience with Contpaq 2005 highlight that the software is easy to use and learn, that it has a friendly and simple graphical interface, that it offers a range of functions and tools for managing accounting, that it integrates with other systems such as banking, payroll, and invoicing, that it complies with current tax rules and obligations, and that it is compatible and up to date.
-
Negative opinions: Users who have had a bad experience with Contpaq 2005 point out that the software is expensive and limited, that it requires a valid license, that it can be vulnerable and risky if used improperly or illegally, that it can have bugs or technical failures, that it can be slow or heavy on some devices, and that its customer service is poor or nonexistent.
-
-
-
These are just some of the opinions about Contpaq 2005 that you can find online. Each user has their own opinion based on their personal experience with the software. That is why we recommend reading several opinions before deciding whether Contpaq 2005 is the right accounting software for you.
-
-
Where can I download Contpaq 2005 for free?
-
-
If you want to download Contpaq 2005 for free, you have several options available. One of them is to visit the official website of Computación en Acción, the company that develops Contpaq 2005. There you can find the Contpaq 2005 installation file, as well as manuals, tutorials, and courses to learn how to use the software. However, to use Contpaq 2005 you will need a valid license, which you can obtain from that same website or from an authorized distributor.
-
-
Another option for downloading Contpaq 2005 for free is to turn to a third-party source, such as Google Drive, Trello, or Netlify. These sites offer the Contpaq 2005 installation file along with a program called Xforce Keygen 64 Bit, which generates serial numbers and activation codes for several Autodesk products, including Contpaq 2005. This way, you can activate Contpaq 2005 without buying a license. However, you should be careful when downloading Contpaq 2005 and Xforce Keygen 64 Bit from these sites, since some files may contain viruses or malware that can damage your device or compromise your data.
-
-
In this article we have shown you how to download Contpaq 2005 for free from a third-party source and how to activate it with Xforce Keygen 64 Bit. We have also given you some tips and tricks to optimize your experience with Contpaq 2005 and avoid potential problems. We have also compared Contpaq 2005 with other Contpaq versions and products
-
Conclusion
-
-
Contpaq 2005 is accounting software that lets you manage your bookkeeping easily and efficiently. With Contpaq 2005 you can record and review your accounting entries, generate reports and statements, and integrate your accounting with other systems such as banking, payroll, and invoicing. In addition, Contpaq 2005 is compatible with Windows 98, Me, 2000, and XP, and supports both 32-bit and 64-bit architectures. However, to use Contpaq 2005 you need a valid license, which you can obtain from the official Computación en Acción website or from a third-party source such as Xforce Keygen 64 Bit.
-
-
In this article we have shown you how to download Contpaq 2005 for free from a third-party source and how to activate it with Xforce Keygen 64 Bit. We have also given you some tips and tricks to optimize your experience with Contpaq 2005 and avoid potential problems. We have also compared Contpaq 2005 with other Contpaq versions and products, and we have shown you some alternatives and user opinions of Contpaq 2005.
-
-
We hope this article has helped you learn how to download Contpaq 2005 for free and how to use it. If you have any questions or comments, please leave us a message below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Free Download [BETTER] Hindi Movie Kala Patthar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Free Download [BETTER] Hindi Movie Kala Patthar.md
deleted file mode 100644
index 479cb39a90eed8192e9ae703b0299957d0491905..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Free Download [BETTER] Hindi Movie Kala Patthar.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Media Encoder 2020 How to Download and Install for Free.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Media Encoder 2020 How to Download and Install for Free.md
deleted file mode 100644
index 355b4ac8a6e7af2b2587cbf20455bf5d950c7752..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Media Encoder 2020 How to Download and Install for Free.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
Adobe Media Encoder 2020 Free Download
-
If you are looking for a reliable and powerful video encoding software, you might want to check out Adobe Media Encoder 2020. This software allows you to ingest, transcode, create proxies, and output to almost any format you can imagine. It also integrates seamlessly with other Adobe applications, such as Premiere Pro, After Effects, and Audition. In this article, we will show you how to download and install Adobe Media Encoder 2020 for free, how to use it, what are its features and system requirements, and what are some best practices and alternatives.
-
What is Adobe Media Encoder 2020?
-
Adobe Media Encoder 2020 is a software that enables you to encode and export video files for various platforms and devices. It supports a wide range of formats, codecs, resolutions, frame rates, aspect ratios, and profiles. You can also apply presets, watch folders, destination publishing, time tuner, LUTs, loudness corrections, and other settings to automate your workflows and enhance your output quality. Adobe Media Encoder 2020 is part of the Adobe Creative Cloud suite, which means you can access it with a subscription plan or a free trial.
Adobe Media Encoder 2020 is a useful tool for anyone who works with video production, editing, or distribution. Here are some reasons why you might need it:
-
-
You want to convert your video files into different formats for various purposes, such as web streaming, social media sharing, DVD authoring, or archiving.
-
You want to create proxies or lower-resolution versions of your video files for faster editing or previewing.
-
You want to export your video projects from Premiere Pro or After Effects without opening them.
-
You want to adjust the duration, color, or audio of your video files without re-opening them.
-
You want to publish your video files directly to YouTube, Vimeo, Facebook, Twitter, or other platforms.
-
-
How to download and install Adobe Media Encoder 2020?
-
To download and install Adobe Media Encoder 2020, follow these steps:
-
-
Click here to go to the official website of Adobe Media Encoder.
-
Click on the Free Trial button at the top right corner of the page.
-
Sign in with your Adobe ID or create one if you don't have one.
-
Select your plan from the options available. You can choose between a single app plan or a Creative Cloud plan that includes other Adobe apps.
-
Click on Start free trial and follow the onscreen instructions to download the installer file.
-
Run the installer file and follow the prompts to complete the installation process.
-
Launch Adobe Media Encoder 2020 from your desktop or start menu.
-
-
How to use Adobe Media Encoder 2020?
-
To use Adobe Media Encoder 2020, follow these steps:
-
-
Add your source video files to the queue by clicking on the Add Source button at the top left corner of the window. You can also drag and drop your files from your file explorer or import them from Premiere Pro or After Effects.
-
Select your output format and preset from the drop-down menus at the right side of the window. You can also customize your settings by clicking on the Edit Preset button.
-
Choose your output destination by clicking on the Output File link at the right side of the window. You can also specify a watch folder or a destination publishing option.
-
Click on the Start Queue button at the top right corner of the window to begin encoding your video files. You can monitor the progress and status of your encoding jobs in the queue panel.
-
Once your encoding is done, you can preview your output files by clicking on the Output tab at the bottom of the window. You can also open them in your default media player or folder by right-clicking on them and selecting the appropriate option.
-
-
What are the system requirements for Adobe Media Encoder 2020?
-
To run Adobe Media Encoder 2020 smoothly, you need to meet the following system requirements:
-
-
-
| Operating System | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| Windows 10 (64-bit) | Intel 6th Gen or newer CPU; 8 GB of RAM; 4 GB of GPU VRAM; 1920 x 1080 display resolution; sound card compatible with the ASIO protocol or Microsoft Windows Driver Model; fast internal SSD for app installation and cache; 10 GB of available hard-disk space for installation, with additional free space required during installation (cannot install on removable flash storage devices); optional: Adobe-recommended GPU card for GPU-accelerated performance (see the Premiere Pro system requirements) | Intel 7th Gen or newer CPU; 16 GB of RAM for HD media; 32 GB or more of RAM for 4K media; 4 GB of GPU VRAM; fast internal SSD (recommended) for app installation and cache, plus provisional space for media; additional high-speed drive(s) for media |
| macOS v10.13 or later | Intel 6th Gen or newer CPU; 8 GB of RAM; 4 GB of GPU VRAM; 1920 x 1080 display resolution; sound card compatible with Apple Core Audio; fast internal SSD for app installation and cache; 10 GB of available hard-disk space for installation, with additional free space required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash storage devices); optional: Adobe-recommended GPU card for GPU-accelerated performance (see the Premiere Pro system requirements) | Intel 7th Gen or newer CPU; 16 GB of RAM for HD media; 32 GB or more of RAM for 4K media; 4 GB of GPU VRAM; fast internal SSD (recommended) for app installation and cache, plus provisional space for media; additional high-speed drive(s) for media |
-
-
-
What are the new features in Adobe Media Encoder 2020?
-
Adobe Media Encoder 2020 comes with several new features and improvements that enhance your encoding experience. Here are some of them:
-
-
New file format support: You can now encode and export video files in AV1, HEIF, Canon XF-HEVC, and Sony VENICE V4 formats.
-
New hardware-accelerated encoding: You can now use hardware encoding for H.264 and HEVC formats on Windows with Intel and NVIDIA GPUs, and on macOS with AMD and Intel GPUs.
-
New encoding presets: You can now use new presets for social media platforms, such as TikTok, Reddit, and Snapchat.
-
New destination publishing: You can now publish your video files directly to Behance, along with YouTube, Vimeo, Facebook, and Twitter.
-
New time tuner effect: You can now use the time tuner effect to automatically adjust the duration of your video files by adding or removing frames.
-
New HDR support: You can now encode and export HDR video files with HDR10 metadata.
-
New performance improvements: You can now enjoy faster encoding and decoding with the latest Adobe Media Encoder engine and improved GPU support.
-
-
What are the best practices for using Adobe Media Encoder 2020?
-
To get the most out of Adobe Media Encoder 2020, here are some best practices you can follow:
-
-
Choose the right format and preset for your output: Depending on your purpose and platform, you should select the appropriate format and preset for your video files. For example, if you want to upload your video to YouTube, you should use the H.264 format and the YouTube preset. You can also customize your settings to suit your needs and preferences.
-
Use proxies for faster editing: If you have high-resolution or high-bitrate video files, you might experience lagging or crashing when editing them. To avoid this, you can create proxies or lower-resolution versions of your video files with Adobe Media Encoder 2020 and use them for editing in Premiere Pro or After Effects. You can then switch back to the original files when exporting.
-
Use watch folders for batch processing: If you have multiple video files that need the same encoding settings, you can use watch folders to automate your workflows. Watch folders are folders that Adobe Media Encoder 2020 monitors for new files and applies a preset to them automatically. You can create watch folders by clicking on the Add Watch Folder button at the top left corner of the window and selecting a folder and a preset.
-
Use destination publishing for easy sharing: If you want to share your video files online, you can use destination publishing to upload them directly to your preferred platform. Destination publishing allows you to enter your account credentials and metadata for YouTube, Vimeo, Facebook, Twitter, or Behance, and publish your video files with one click. You can enable destination publishing by clicking on the Add Destination button at the right side of the window and selecting a platform.
-
Use time tuner and loudness correction for fine-tuning: If you want to adjust the duration or audio of your video files without re-opening them, you can use time tuner and loudness correction effects in Adobe Media Encoder 2020. Time tuner allows you to add or remove frames from your video files to match a specific duration. Loudness correction allows you to normalize the audio levels of your video files to meet broadcast standards. You can apply these effects by clicking on the Add Effect button at the right side of the window and selecting an effect.
-
-
What are the alternatives to Adobe Media Encoder 2020?
-
If you are looking for other video encoding software, here are some alternatives to Adobe Media Encoder 2020:
-
-
-
HandBrake: This is a free, open-source tool that converts video files from almost any format to MP4 or MKV. It supports a variety of codecs, presets, filters, subtitles, and chapters, and it has a simple, user-friendly interface.
-
VLC Media Player: This is a free, cross-platform media player that can also convert, stream, and record video files. It supports a wide range of formats, codecs, protocols, devices, and features, and it has a customizable, versatile interface.
-
FFmpeg: This is a free, command-line tool that allows you to encode, decode, transcode, mux, demux, stream, filter, and play video files. It supports almost every format, codec, filter, and feature imaginable, and it delivers high performance and quality. Because it has no graphical interface, it is usually run from a terminal or a script; a short example follows this list.
-
-
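For readers who would rather script their conversions than click through a GUI, here is a minimal sketch of how FFmpeg could be driven from Python to batch-transcode files. It is only an illustration: it assumes the ffmpeg binary is already installed and on the PATH, and the folder name, file pattern, and encoding settings are placeholder assumptions rather than settings taken from Adobe or the FFmpeg project.

```python
# Minimal sketch: batch-transcode videos to H.264/AAC MP4 by calling FFmpeg from Python.
# Assumes the ffmpeg binary is installed and on the PATH; paths and settings are placeholders.
import subprocess
from pathlib import Path

def transcode_to_mp4(src: Path, dst: Path) -> None:
    """Re-encode one video to H.264 video and AAC audio in an MP4 container."""
    cmd = [
        "ffmpeg",
        "-i", str(src),       # input file
        "-c:v", "libx264",    # H.264 video codec
        "-crf", "23",         # constant-quality target (lower = higher quality, larger file)
        "-preset", "medium",  # speed vs. compression trade-off
        "-c:a", "aac",        # AAC audio codec
        "-b:a", "128k",       # audio bitrate
        str(dst),
    ]
    subprocess.run(cmd, check=True)  # raises CalledProcessError if ffmpeg reports an error

if __name__ == "__main__":
    # Convert every .mov file in a hypothetical "videos" folder to an .mp4 next to it.
    for source in Path("videos").glob("*.mov"):
        transcode_to_mp4(source, source.with_suffix(".mp4"))
```

A run of this sketch leaves the original .mov files untouched and writes one .mp4 per input; the -crf value can be lowered for higher quality or raised for smaller files.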
Conclusion
-
In conclusion, Adobe Media Encoder 2020 is a powerful and reliable video encoding software that allows you to ingest, transcode, create proxies, and output to almost any format you can imagine. It also integrates seamlessly with other Adobe applications, such as Premiere Pro, After Effects, and Audition. You can download and install Adobe Media Encoder 2020 for free with a trial or a subscription plan from the official website. You can also use it to encode and export video files with various settings, presets, effects, watch folders, and destination publishing options. You can also use it to adjust the duration, color, or audio of your video files without re-opening them. Adobe Media Encoder 2020 also comes with new features and improvements, such as new file format support, new hardware-accelerated encoding, new encoding presets, new destination publishing, new time tuner effect, new HDR support, and new performance improvements. You can also follow some best practices to optimize your workflow and output quality, such as choosing the right format and preset, using proxies, using watch folders, using destination publishing, and using time tuner and loudness correction. If you are looking for other video encoding software, you can also try some alternatives, such as HandBrake, VLC Media Player, or FFmpeg.
-
FAQs
-
Here are some frequently asked questions and answers about Adobe Media Encoder 2020:
-
-
Q: How long is the free trial for Adobe Media Encoder 2020? A: The free trial for Adobe Media Encoder 2020 lasts for seven days from the day you start it. You can cancel it anytime before the trial ends and you won't be charged.
-
Q: How much does Adobe Media Encoder 2020 cost? A: Adobe Media Encoder 2020 costs $20.99 per month for a single app plan or $52.99 per month for a Creative Cloud plan that includes other Adobe apps. You can also save money by choosing an annual plan or a student or teacher plan.
-
Q: Can I use Adobe Media Encoder 2020 offline? A: Yes, you can use Adobe Media Encoder 2020 offline once you have installed it and signed in with your Adobe ID. However, you will need an internet connection to activate your software, update it, access online services, or sync your settings.
-
Q: Can I use Adobe Media Encoder 2020 on multiple computers? A: Yes, you can use Adobe Media Encoder 2020 on up to two computers at a time with a single license. However, you cannot use it on both computers at the same time.
-
Q: How can I get help or support for Adobe Media Encoder 2020? A: You can get help or support for Adobe Media Encoder 2020 by visiting the official website of Adobe Media Encoder 2020 and accessing the user guide, tutorials, forums, community help, or contact options.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/1945 Air Force APK Mod A Classic Shooting Game with Endless Possibilities.md b/spaces/1phancelerku/anime-remove-background/1945 Air Force APK Mod A Classic Shooting Game with Endless Possibilities.md
deleted file mode 100644
index 7f73109be7265c1af55689d030a03443216faffc..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/1945 Air Force APK Mod A Classic Shooting Game with Endless Possibilities.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
Download 1945 Air Force APK Mod: A Guide for Android Users
-
If you are a fan of classic arcade shooting games, you might want to try out 1945 Air Force. This is a game that lets you experience the thrill of aerial combat in various historical scenarios. You can choose from over 200 different planes, each with their own unique features and abilities. You can also upgrade your planes, customize your weapons, and challenge yourself with different modes and missions.
-
But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited coins, gems, energy, and other resources? Well, in that case, you might want to download 1945 Air Force APK Mod. This is a modified version of the game that gives you access to all the features and content that the original game does not. In this article, we will tell you everything you need to know about 1945 Air Force APK Mod, including what it is, why you should download it, and how to download it. Let's get started!
1945 Air Force is a free-to-play arcade shooting game developed by ONESOFT. The game is inspired by the classic games of the genre, such as 1942, 1943, and Raiden. The game features stunning graphics, realistic sound effects, and smooth gameplay. You can immerse yourself in the epic battles of World War II, the Cold War, the Vietnam War, and more. You can also join forces with other players online and compete for the highest scores.
-
Features of 1945 Air Force
-
Some of the features that make 1945 Air Force an amazing game are:
-
-
Over 200 planes to choose from, each with their own characteristics and special skills.
-
Over 100 missions to complete, each with different objectives and challenges.
-
Over 10 game modes to enjoy, such as Campaign, Endless, Boss Battle, PvP, and more.
-
Over 30 support items to help you in your missions, such as bombs, missiles, shields, and more.
-
Daily rewards, events, achievements, and leaderboards to keep you engaged and motivated.
-
-
How to play 1945 Air Force
-
The gameplay of 1945 Air Force is simple and intuitive. You just need to swipe your finger on the screen to move your plane and avoid enemy fire. You can also tap the screen to fire your weapons and use your special skills. You can also collect coins, gems, energy, and other items along the way. You can use these resources to upgrade your planes, weapons, and items. You can also unlock new planes and modes as you progress in the game.
-
Why download 1945 Air Force APK Mod?
-
While 1945 Air Force is a fun and addictive game, it also has some drawbacks. For example, the game requires an internet connection to play. The game also has ads that can interrupt your gameplay. Moreover, the game has some in-app purchases that can make the game easier or more enjoyable. However, these purchases can be expensive and not everyone can afford them.
-
This is where 1945 Air Force APK Mod comes in handy. This is a modified version of the game that removes all the limitations and restrictions that the original game has. With this version, you can enjoy the following benefits:
-
Benefits of 1945 Air Force APK Mod
-
-
You can play the game offline, without any internet connection.
-
You can get rid of all the ads that can annoy you or distract you from the game.
-
You can get unlimited coins, gems, energy, and other resources that you can use to upgrade your planes, weapons, and items.
-
You can unlock all the planes and modes that are otherwise locked or require real money to access.
-
You can have more fun and excitement with the game, without any worries or hassles.
-
-
Risks of 1945 Air Force APK Mod
-
However, downloading 1945 Air Force APK Mod also comes with some risks that you should be aware of. These are:
-
-
You might face some compatibility issues with your device or the game version.
-
You might encounter some bugs or glitches that can affect your gameplay or performance.
-
You might lose your progress or data if you uninstall the game or switch to the original version.
-
You might violate the terms and conditions of the game developer and get banned from the game or their services.
-
You might expose your device to malware or viruses that can harm your device or compromise your security.
-
-
Therefore, you should download 1945 Air Force APK Mod at your own risk and discretion. You should also make sure that you download it from a reliable and trustworthy source. You should also scan the file for any malicious content before installing it on your device.
-
How to download 1945 Air Force APK Mod?
-
If you have decided to download 1945 Air Force APK Mod, you will need to follow some simple steps to do so. These are:
-
-
Step 1: Enable unknown sources
-
The first thing you need to do is to enable unknown sources on your device. This will allow you to install apps that are not from the official Google Play Store. To do this, you need to go to your device settings and look for the security or privacy option. Then, you need to find the unknown sources option and toggle it on. You might see a warning message, but you can ignore it and proceed.
-
Step 2: Find a reliable source
-
The next thing you need to do is to find a reliable source that offers 1945 Air Force APK Mod. You can search online for various websites or blogs that provide this file. However, you need to be careful and avoid any suspicious or fake links that can harm your device or steal your information. You can also check the reviews and ratings of the source to see if it is trustworthy and reputable.
-
Step 3: Download and install the APK file
-
Once you have found a reliable source, you can download the APK file by clicking on the download button or link. You might need to wait for a few seconds or minutes for the download to complete. After that, you can locate the file on your device storage and tap on it to install it. You might see some prompts or permissions that you need to accept or allow. Once the installation is done, you will see the game icon on your home screen or app drawer.
-
Step 4: Launch the game and enjoy
-
The final step is to launch the game and enjoy it. You can open the game by tapping on its icon and start playing it. You will notice that you have unlimited resources and access to all the features and content that the original game does not have. You can also customize your settings and preferences according to your liking. You can also invite your friends and play with them online.
-
Conclusion
-
1945 Air Force is a great arcade shooting game that will keep you entertained and challenged for hours. However, if you want to have more fun and excitement with the game, you can download 1945 Air Force APK Mod. This is a modified version of the game that gives you unlimited resources and access to all the features and content that the original game does not have. However, you should also be aware of the risks involved in downloading this version and take precautions accordingly. You should also follow the steps mentioned above to download and install this version safely and easily on your device.
-
FAQs
-
Here are some frequently asked questions about 1945 Air Force APK Mod:
-
-
What is the difference between 1945 Air Force APK Mod and 1945 Air Force Hack?
-
1945 Air Force APK Mod is a modified version of the game that comes with unlimited resources and all the locked features and content already unlocked. 1945 Air Force Hack, by contrast, is a separate tool or piece of software that lets you cheat in the game to obtain the same resources and content. The APK Mod is easier and safer to use than the Hack, because you do not need to download or install any additional software or tool on your device.
-
Is 1945 Air Force APK Mod free to download and use?
-
Yes, 1945 Air Force APK Mod is free to download and use. You do not need to pay any money or subscription fee to enjoy this version of the game. However, you should also be careful of any source that asks you for any personal or financial information or requires you to complete any survey or verification before downloading the file. These are likely to be scams or frauds that can harm your device or steal your information.
-
Is 1945 Air Force APK Mod compatible with all Android devices?
-
Not necessarily. 1945 Air Force APK Mod may not work on some Android devices due to various factors, such as the device model, the Android version, the game version, or the file quality. Therefore, you should check the compatibility of the file with your device before downloading it. You should also make sure that your device has enough storage space and meets the minimum requirements of the game.
-
Can I switch between 1945 Air Force APK Mod and 1945 Air Force original version?
-
Yes, you can switch between 1945 Air Force APK Mod and 1945 Air Force original version anytime you want. However, you should be aware that you might lose your progress or data if you do so. Therefore, you should back up your data before switching versions. You should also uninstall one version before installing another one to avoid any conflicts or errors.
-
Can I play 1945 Air Force APK Mod with my friends online?
-
Yes, you can play 1945 Air Force APK Mod with your friends online. However, you should also be aware that you might face some issues or difficulties in doing so. For example, you might not be able to join the same server or room as your friends who are using the original version of the game. You might also encounter some lag or delay in your gameplay or communication. Moreover, you might get banned from the game or their services if the game developer detects that you are using a modified version of the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Challenge Yourself with Car Parking Multiplayer Levels on Play Store.md b/spaces/1phancelerku/anime-remove-background/Challenge Yourself with Car Parking Multiplayer Levels on Play Store.md
deleted file mode 100644
index c2a7ae28091fa70c9f5c7c2f0d6af00d925a9aff..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Challenge Yourself with Car Parking Multiplayer Levels on Play Store.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
Play Store Car Parking Multiplayer: A Review
-
Are you looking for a realistic and fun car parking game that you can play with your friends? If so, you might want to check out Car Parking Multiplayer, a popular game on the Play Store that offers more than just parking. In this article, we will review Car Parking Multiplayer, its features, pros and cons, how to download and install it, and some alternatives that you can try.
Car Parking Multiplayer is a simulation game developed by olzhass, a studio that specializes in car games. It was released in 2017 and has since gained over 100 million downloads and 2.13 million reviews on the Play Store. The game is rated 4.4 out of 5 stars by the users, who praise its graphics, gameplay, and variety.
-
Car Parking Multiplayer is more than just a parking game. It also features an open-world multiplayer mode, car tuning, free walking, voice chat, police mode, and more. You can choose from over 100 cars with real interiors, 16 player skins, and various environments to explore. You can also compete against real players in the multiplayer racing, exchange cars with them, or join them in free roaming.
-
Features of Car Parking Multiplayer
-
Car Parking Multiplayer has many features that make it stand out from other parking games. Here are some of them:
-
Multiplayer open world mode
-
This mode allows you to interact with thousands of real players every day. You can chat with them using voice or text, make friends, join clans, or challenge them to races. You can also free walk around the map, visit gas stations and car services, or role play as a police officer or a criminal.
-
Car customization
-
You can customize your car with different options, such as suspension, wheel angle, engine, turbo, gearbox, exhaust, and more. You can also change the color, vinyls, and body parts of your car. You can tune your car to suit your driving style and preferences.
-
High-quality open world
-
The game has highly-detailed environments that you can explore with your car or on foot. You can drive in cities, highways, mountains, deserts, and more. The game also has realistic physics, weather effects, day and night cycles, and traffic.
-
-
Interesting gameplay
-
The game has 82 real-life parking and driving challenges that you can complete to improve your skills. You can also try different vehicles, such as tow trucks, pickups, trucks, sports cars, and classic cars. The game has realistic controls and camera views that make the gameplay immersive and enjoyable.
-
Pros and Cons of Car Parking Multiplayer
-
Like any other game, Car Parking Multiplayer has its advantages and disadvantages. Here are some of them:
-
Pros
-
-
The game has amazing graphics and sound effects that create a realistic atmosphere.
-
The game has a lot of variety and content that keep the players entertained and engaged.
-
The game has a friendly and active community that makes the multiplayer mode fun and social.
-
The game is free to download and play, although it contains ads and in-app purchases.
-
-
Cons
-
-
The game may have some bugs and glitches that affect the performance and gameplay.
-
The game may have some hackers and cheaters that ruin the multiplayer mode for others.
-
The game may have some inappropriate content or language that may not be suitable for younger players.
-
The game may consume a lot of battery and storage space on your device.
-
-
How to Download and Install Car Parking Multiplayer
-
If you want to try Car Parking Multiplayer, you can download and install it easily from the Play Store. Here are the requirements and steps to do so:
-
Requirements
-
-
Your device must have Android 5.0 or higher.
-
Your device must have at least 1 GB of RAM and 500 MB of free storage space.
-
Your device must have a stable internet connection to play the multiplayer mode.
-
-
Steps
-
-
Open the Play Store app on your device and search for Car Parking Multiplayer.
-
Select the game from the search results and tap on Install.
-
Wait for the game to download and install on your device.
-
Once the installation is complete, tap on Open to launch the game.
-
Enjoy playing Car Parking Multiplayer!
-
-
Alternatives to Car Parking Multiplayer
-
If you like Car Parking Multiplayer, you might also like some other parking games that are similar or better. Here are two alternatives that you can try:
-
Parking Master Multiplayer 2
-
This is a sequel to the popular Parking Master Multiplayer game that offers more features and challenges. You can play with real players online, customize your car, explore different maps, and complete various parking missions. The game has realistic graphics, physics, and controls that make it fun and addictive. You can download it from the Play Store for free.
-
Parking Master Multiplayer
-
This is the original version of Parking Master Multiplayer that started it all. It offers the same core experience as its sequel: online multiplayer, car customization, multiple maps, and a range of parking missions, with realistic graphics, physics, and controls. You can download it from the Play Store for free.
-
Conclusion
-
Car Parking Multiplayer is a great game for anyone who loves cars and parking. It has many features, pros and cons, and ways to download and install it. It also has some alternatives that you can try if you want more variety. If you are looking for a realistic and fun car parking game that you can play with your friends, you should give Car Parking Multiplayer a try. You might be surprised by how much you enjoy it!
-
FAQs
-
-
Q: How do I join a multiplayer server in Car Parking Multiplayer?
-
A: To join a multiplayer server, you need to tap on the multiplayer button on the main menu, select a region, and choose a server from the list. You can also create your own server by tapping on the create button.
-
Q: How do I earn money in Car Parking Multiplayer?
-
A: You can earn money in Car Parking Multiplayer by completing parking missions, racing against other players, selling or exchanging cars, or watching ads.
-
Q: How do I chat with other players in Car Parking Multiplayer?
-
A: You can chat with other players in Car Parking Multiplayer by tapping on the chat button on the top right corner of the screen. You can use voice or text chat, as well as emojis and stickers.
-
Q: How do I report a hacker or cheater in Car Parking Multiplayer?
-
A: You can report a hacker or cheater in Car Parking Multiplayer by tapping on their name on the player list, and then tapping on the report button. You can also block them from chatting with you or joining your server.
-
Q: How do I update Car Parking Multiplayer?
-
A: You can update Car Parking Multiplayer by opening the Play Store app on your device, searching for Car Parking Multiplayer, and tapping on Update. You can also enable automatic updates for the game in the settings of the Play Store app.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Como Conseguir Robux Infinito no ROBLOX 2022 Download Grtis e Seguro.md b/spaces/1phancelerku/anime-remove-background/Como Conseguir Robux Infinito no ROBLOX 2022 Download Grtis e Seguro.md
deleted file mode 100644
index 03cca51eb521adfa10900c4be6cfcea77fd690f7..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Como Conseguir Robux Infinito no ROBLOX 2022 Download Grtis e Seguro.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
How to Download Roblox Robux Infinito 2022
-
Roblox is one of the most popular online gaming platforms in the world, with over 115 million active players. It allows you to create, play, and share your own games and experiences with others. However, to enjoy the full potential of Roblox, you need Robux, the premium currency of the game. Robux can be used to buy games, items, accessories, and more. But how can you get more Robux without spending real money? One way is to download Roblox Robux Infinito 2022, a modded version of Roblox that gives you unlimited Robux and other features. In this article, we will explain what Roblox Robux Infinito 2022 is, why you might want to download it, and how to do it safely and easily.
Roblox Robux Infinito 2022 is a modified version of the original Roblox app that gives you access to unlimited Robux and other hacks. Before we explain more about it, let's first understand what Robux and modded apps are.
-
Robux: The Premium Currency of Roblox
-
Robux is the in-game currency that you can use to buy games, items, accessories, and more on Roblox. You can earn Robux by creating games, selling items, or joining the Premium subscription. You can also buy Robux with real money through the official website or app. However, buying Robux can be expensive, especially if you want to get a lot of them. For example, 10,000 Robux cost $99.99. That's why some players look for alternative ways to get more Robux for free.
-
Robux Infinito: A Modded Version of Roblox
-
A modded app is an app that has been modified by someone other than the original developer. Modded apps usually have features that are not available in the official app, such as cheats, hacks, or unlimited resources. For example, Robux Infinito is a modded app that gives you unlimited Robux and other features such as:
-
-
Fly hack: You can fly in any game.
-
Teleport hack: You can teleport to any location in any game.
-
Wallhack: You can walk through walls in any game.
-
Immortality hack: You cannot die in any game.
-
And more!
-
-
Robux Infinito is not an official app from Roblox Corporation. It is created by third-party developers who are not affiliated with or endorsed by Roblox Corporation. Therefore, it is not available on the official website or app store. You have to download it from other sources online.
-
-
Why Download Roblox Robux Infinito 2022?
-
Downloading Roblox Robux Infinito 2022 can have some benefits and risks. Here are some of them:
-
Benefits of Robux Infinito
-
The main benefit of downloading Robux Infinito is that you can get unlimited Robux for free. This means that you can buy any game, item, accessory, or feature that you want on Roblox without spending real money. You can also enjoy the hacks and cheats that make the game more fun and easy. For example, you can fly around the map, teleport to different places, walk through walls, and be
invincible in any game. You can also create and customize your own games and items with unlimited Robux. You can share them with other players and earn more Robux from them. You can also join any game or group that requires Robux without paying anything. In short, you can have more fun and freedom on Roblox with Robux Infinito.
-
Risks of Robux Infinito
-
However, downloading Robux Infinito also has some risks that you should be aware of. The main risk is that it is not safe or legal to use. Since it is a modded app, it is not approved by Roblox Corporation or the app store. It may contain viruses, malware, or spyware that can harm your device or steal your personal information. It may also have bugs, glitches, or errors that can affect the performance of the app or the game. Moreover, using Robux Infinito is against the terms of service and community guidelines of Roblox. It is considered cheating and hacking, which can result in your account being banned or suspended. You may also lose all your progress, items, and Robux that you have earned legitimately. Furthermore, using Robux Infinito can ruin the game experience for other players who are playing fairly and honestly. It can make the game unfair, boring, or frustrating for them. It can also damage the reputation and quality of the game and the platform.
-
How to Download Roblox Robux Infinito 2022?
-
If you still want to download Roblox Robux Infinito 2022 despite the risks, you need to follow these steps:
-
Step 1: Find a Reliable Source
-
The first step is to find a reliable source that offers the latest version of Robux Infinito 2022. You cannot download it from the official website or app store, so you have to search for it online. However, not all sources are trustworthy or safe. Some may provide fake or outdated versions of the app that do not work or have viruses. Some may also ask you to complete surveys, download other apps, or enter your personal information before giving you the download link. To avoid these scams, you should look for sources that have positive reviews, ratings, comments, and feedback from other users. You should also check the file size, name, and extension of the app before downloading it. The file should be an APK file with a size of around 100 MB and a name similar to "Roblox_Robux_Infinito_2022.apk".
-
Step 2: Download and Install the APK File
-
The second step is to download and install the APK file on your device. An APK file is an Android application package file that contains all the files and data needed to run an app on an Android device. To download and install an APK file, you need to enable the "Unknown Sources" option on your device settings. This option allows you to install apps from sources other than the official app store. To enable this option, go to Settings > Security > Unknown Sources and toggle it on. Then, go to the source where you downloaded the APK file and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-
Step 3: Enjoy the Unlimited Robux and Features
-
The third step is to enjoy the unlimited Robux and features that Robux Infinito 2022 offers. Once you have installed the app, you can open it and log in with your existing Roblox account or create a new one. You will see that you have unlimited Robux in your account balance. You can use them to buy anything you want on Roblox without spending real money. You can also access the hacks and cheats that are available in the app settings. You can enable or disable them as you wish. You can also create and customize your own games and items with unlimited Robux. You can share them with other players and earn more Robux from them. You can also join any game or group that requires Robux without paying anything.
-
Conclusion
-
Roblox is a great platform for creating, playing, and sharing games and experiences with others. However, if you want to get more out of it, you need Robux, the premium currency of the game. One way to get more Robux for free is to download Roblox Robux Infinito 2022, a modded version of Roblox that gives you unlimited Robux and other features such as hacks and cheats. However, downloading Robux Infinito 2022 also has some risks such as viruses, malware, account bans, and game quality issues. Therefore, you should be careful when downloading and using it. If you
decide to download Robux Infinito 2022, you should do it at your own risk and responsibility. We hope this article has helped you understand what Robux Infinito 2022 is, why you might want to download it, and how to do it safely and easily. Happy gaming!
-
FAQs
-
Here are some frequently asked questions about Robux Infinito 2022:
-
| Question | Answer |
| --- | --- |
| Is Robux Infinito 2022 free? | Yes, Robux Infinito 2022 is free to download and use. However, you may have to complete some tasks or surveys to get the download link from some sources. |
| Is Robux Infinito 2022 safe? | No, Robux Infinito 2022 is not safe to use. It may contain viruses, malware, or spyware that can harm your device or steal your personal information. It may also have bugs, glitches, or errors that can affect the performance of the app or the game. Moreover, using Robux Infinito 2022 is against the terms of service and community guidelines of Roblox. It is considered cheating and hacking, which can result in your account being banned or suspended. |
| Is Robux Infinito 2022 legal? | No, Robux Infinito 2022 is not legal to use. It is a modded app that violates the intellectual property rights of Roblox Corporation and the app store. It also violates the laws and regulations of some countries that prohibit the use of modded apps or games. |
| Can I use Robux Infinito 2022 on iOS devices? | No, Robux Infinito 2022 is only compatible with Android devices. You cannot use it on iOS devices such as iPhones or iPads. |
| Can I use Robux Infinito 2022 on PC? | Yes, you can use Robux Infinito 2022 on PC if you have an Android emulator installed on your computer. An Android emulator is a software that allows you to run Android apps or games on your PC. Some examples of Android emulators are BlueStacks, NoxPlayer, and LDPlayer. |
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Arceus X V3.1.0 Beta and Enjoy Roblox Like Never Before.md b/spaces/1phancelerku/anime-remove-background/Download Arceus X V3.1.0 Beta and Enjoy Roblox Like Never Before.md
deleted file mode 100644
index c3fc442741228a2df772f6b96d9dbc486293437a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Arceus X V3.1.0 Beta and Enjoy Roblox Like Never Before.md
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
Arceus X: How to Download and Play the Ultimate Roblox Mod Menu on iOS
-
If you are a fan of Roblox, you might have heard of Arceus X, a mod menu that allows you to exploit your favorite games with features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, and more. Arceus X is one of the first and most widely used Roblox mod menus/exploits, developed specifically for Android. But what if you want to play it on your iOS device? Is it possible to download and install Arceus X on iOS? The answer is yes, and in this article, we will show you how to do it step by step.
Arceus X is the first Android Roblox mod menu/exploit built to improve the gameplay. The Arceus X APK is developed using Node.js, C++, and Java. It is an Android application with a floating menu that you can use to execute scripts while you are in the game.
-
Features of Arceus X
-
Some of the features that make Arceus X stand out from other Roblox mod menus are:
-
-
Android LuaU Execution: You can run any Lua script on your Android device without any limitations.
-
Infinite Jump: You can jump as high as you want in any game.
-
Super Speed: You can move faster than normal in any game.
-
Btools: You can delete or modify any object in any game.
-
Script Hub: You can access a collection of scripts for various games from the mod menu.
-
More!: You can also use features such as Fly, Noclip, ESP, Aimbot, God Mode, and more.
-
-
Requirements for Arceus X
-
To download and play Arceus X on your iOS device, you will need:
-
-
An iOS device with iOS 10 or later.
-
An Android device or an emulator to get the Arceus X APK file.
-
A file manager app on your iOS device to transfer the APK file.
-
An iOS emulator app on your iOS device to run the APK file.
-
A Roblox account to play the games.
-
-
How to Download Arceus X on iOS
-
Now that you know what Arceus X is and what you need to play it on your iOS device, let's get started with the download process. Here are the steps you need to follow:
-
Step 1: Get the Arceus X APK file
-
The first step is to get the Arceus X APK file from a reliable source. You can either use an Android device or an emulator on your PC to do this. Here are some options for getting the APK file:
-
-
You can download it from the official website of Arceus X. Just click on the download button and complete the verification process. The APK file will be downloaded automatically.
-
You can watch a tutorial video on YouTube that shows you how to download and install Arceus X on your Android device. Just follow the instructions in the video and get the APK file.
You can join the Discord server of Arceus X and ask for the APK file from the developers or other users. You might need to verify your identity and follow some rules to get access to the file.
-
-
Once you have the APK file, you need to transfer it to your iOS device. You can use a USB cable, Bluetooth, Wi-Fi, or any other method that works for you. Just make sure you have a file manager app on your iOS device to locate the APK file.
-
-
Step 2: Install an iOS emulator
-
The next step is to install an iOS emulator app on your iOS device that can run Android apps. An emulator is a software that mimics the behavior of another device or platform. There are many iOS emulators available on the App Store, but not all of them can run Arceus X smoothly. Here are some of the best iOS emulators that we recommend for Arceus X:
-
-
iAndroid: This is one of the most popular and reliable iOS emulators that can run Android apps without any hassle. It has a simple interface and supports most of the Android features. You can download it from the App Store for free.
-
Cider: This is another iOS emulator that can run Android apps with ease. It has a fast performance and supports many Android games. You can download it from the official website for free.
-
Appetize.io: This is an online iOS emulator that can run Android apps on your browser. You don't need to install anything on your device, just upload the APK file and start playing. It has a high compatibility and supports many Android features. You can use it for free for 100 minutes per month, or upgrade to a paid plan for more time.
-
-
Once you have installed an iOS emulator of your choice, you need to launch it and grant it the necessary permissions to access your device's storage, camera, microphone, etc.
-
Step 3: Run the Arceus X APK file on the emulator
-
The final step is to run the Arceus X APK file on the emulator and start playing. Here are the steps you need to follow:
-
-
Open the file manager app on your iOS device and locate the Arceus X APK file that you transferred earlier.
-
Tap on the APK file and select the option to open it with the emulator app that you installed.
-
The emulator will launch and install the Arceus X app on its virtual environment.
-
Once the installation is complete, you will see the Arceus X icon on the emulator's home screen.
-
Tap on the icon and log in with your Roblox account credentials.
-
You will see a floating mod menu on your screen with various options to exploit your favorite games.
-
-
Step 4: Enjoy the game
-
Congratulations! You have successfully downloaded and installed Arceus X on your iOS device. Now you can enjoy playing Roblox with unlimited features and fun. You can access the mod menu anytime by tapping on it and selecting the options you want to use. You can also use the script hub to find and execute scripts for different games. Just be careful not to abuse the mod menu or get reported by other players, as you might get banned by Roblox.
-
Tips and Tricks for Arceus X
-
To make the most out of Arceus X, here are some tips and tricks that you should know:
-
How to use the script hub
-
The script hub is a feature that allows you to access a collection of scripts for various games from the mod menu. You can use these scripts to enhance your gameplay or perform certain actions that are not possible otherwise. Here are some steps to use the script hub:
-
-
Tap on the mod menu and select the script hub option.
-
You will see a list of games that have scripts available for them.
-
Select the game that you want to play and tap on it.
-
You will see a list of scripts that you can use for that game.
-
Select the script that you want to use and tap on it.
-
The script will be executed automatically and you will see its effects in the game.
-
-
How to customize the mod menu
-
The mod menu is a feature that allows you to customize various aspects of Arceus X, such as its appearance, position, size, transparency, etc. You can also enable or disable certain features or change their settings according to your preference. Here are some steps to customize the mod menu:
-
-
Tap on the mod menu and select the settings option.
-
You will see a list of options that you can change, such as color, size, position, transparency, etc.
-
Select the option that you want to change and adjust it according to your liking.
-
You can also enable or disable certain features or change their settings by tapping on them.
-
Once you are done, tap on the save button to apply the changes.
-
-
How to avoid getting banned
-
While Arceus X is a fun and powerful mod menu, it is also a risky one. If you use it too much or too blatantly, you might get detected and banned by Roblox. To avoid this, here are some tips that you should follow:
-
-
Use the mod menu sparingly and discreetly. Don't use it in every game or every round. Don't use it in front of other players or moderators. Don't use it to ruin the game for others.
-
Use the anti-ban feature. This feature is designed to prevent Roblox from detecting your mod menu and banning you. It does this by changing your device ID, IP address, and other information that Roblox uses to identify you. You can enable this feature from the mod menu settings.
-
Use a VPN service. A VPN service is a tool that encrypts your internet traffic and hides your IP address and location. This can help you avoid getting banned by Roblox, as they won't be able to trace your activity or location. You can use any VPN service that works for you, but make sure it is reliable and secure.
-
-
Conclusion
-
In this article, we have shown you how to download and play Arceus X on your iOS device. Arceus X is a first Android Roblox Mod Menu/Exploit that allows you to exploit your favorite games with features such as Android LuaU Execution, Infinite Jump, Super Speed, Btools, Script Hub, More!. To play it on your iOS device, you need to get the Arceus X APK file from a reliable source, install an iOS emulator app on your device, run the APK file on the emulator, and enjoy the game. We have also given you some tips and tricks for using Arceus X, such as how to use the script hub, how to customize the mod menu, and how to avoid getting banned. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs
-
Here are some of the frequently asked questions about Arceus X:
-
-
Is Arceus X safe to use?
-
Arceus X is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, there is always a risk of getting banned by Roblox if you use it too much or too blatantly. To minimize this risk, use the anti-ban feature and a VPN service.
-
Is Arceus X free to use?
-
Yes, Arceus X is free to use and does not require any payment or subscription. However, you might need to complete some verification steps or watch some ads before downloading it.
-
Does Arceus X work on all games?
-
No, Arceus X does not work on all games. Some games have anti-cheat systems or scripts that prevent Arceus X from working properly. You can check the script hub for the list of games that have scripts available for them.
-
Can I use Arceus X on other devices?
-
Yes, you can use Arceus X on other devices besides iOS. You can use it on Android devices directly without any emulator. You can also use it on PC devices with an Android emulator such as BlueStacks or Nox Player.
-
Where can I get more information about Arceus X?
-
You can get more information about Arceus X from its official website, its YouTube channel, or its Discord server. You can also contact the developers or other users for support or feedback.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Play PS3 Games on iOS with RetroArch and RPCS3.md b/spaces/1phancelerku/anime-remove-background/Download and Play PS3 Games on iOS with RetroArch and RPCS3.md
deleted file mode 100644
index 559ad12d41082d3e60499af864e0603af8778f76..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Play PS3 Games on iOS with RetroArch and RPCS3.md
+++ /dev/null
@@ -1,183 +0,0 @@
-
-
How to Download and Install PS3 Emulator for iOS
-
Do you want to play your favorite PS3 games on your iPhone or iPad? If yes, then you need a PS3 emulator for iOS. A PS3 emulator is a software that mimics the hardware and software of the PlayStation 3 console, allowing you to run PS3 games on your iOS device. In this article, we will show you how to download and install a PS3 emulator for iOS, as well as how to play PS3 games on your iOS device.
-
What Is PS3 Emulator for iOS?
-
Definition and features of PS3 emulator for iOS
-
A PS3 emulator for iOS is a program that simulates the functionality of the PlayStation 3 console on an iOS device. It does this by translating the instructions and data of the PS3 games into a format that the iOS device can understand and execute. A PS3 emulator for iOS has several features, such as:
Supporting multiple gaming systems, such as PlayStation 1-3, Nintendo 64, DS, Game Boy, PSP, and more.
-
Enhancing the graphical quality of the games by rendering them at higher resolutions than the original console.
-
Offering save states, which allow you to save your progress at any point in the game and resume it later.
-
Enabling online multiplayer mode for compatible games.
-
Allowing you to customize the controls, audio, video, and other settings of the emulator.
-
-
Benefits and drawbacks of PS3 emulator for iOS
-
A PS3 emulator for iOS has several benefits, such as:
-
-
It lets you play PS3 games that are not available or compatible with your iOS device.
-
It saves you money and space by not requiring you to buy or own a PS3 console and its accessories.
-
It gives you access to a large library of retro and classic games that you can enjoy on your iOS device.
-
It allows you to experience the nostalgia and fun of playing old-school games on a modern device.
-
-
However, a PS3 emulator for iOS also has some drawbacks, such as:
-
-
It may not run all PS3 games smoothly or accurately, depending on the performance and compatibility of your iOS device and the emulator.
-
It may consume a lot of battery power and storage space on your iOS device.
-
It may expose your iOS device to security risks if you download or install an emulator or ROMs from untrusted sources.
-
It may violate the intellectual property rights of the game developers and publishers if you download or use ROMs without their permission.
-
-
How to Download PS3 Emulator for iOS?
-
Requirements and compatibility of PS3 emulator for iOS
-
Before you download a PS3 emulator for iOS, you need to make sure that your iOS device meets the minimum requirements and compatibility of the emulator. Here are some general requirements and compatibility of a PS3 emulator for iOS:
-
-
Your iOS device should have at least iOS 10 or newer installed.
-
Your iOS device should have at least 64 MB of free storage space available.
-
Your iOS device should have a jailbreak or an alternative app store installed, such as AltStore.
-
Your iOS device should support external controllers, such as PS4, PS
Steps to download PS3 emulator for iOS using AltStore
-
One of the best PS3 emulators for iOS is RetroArch, which is a multi-system emulator that supports PlayStation 1-3, Nintendo 64, DS, Game Boy, PSP, and more. RetroArch is available on AltStore, which is an alternative app store that lets you install apps that are not available on the official App Store. To download RetroArch using AltStore, you need to follow these steps:
-
-
-
Download and install AltStore on your iOS device and your computer. You can find the instructions and the download links on the official website of AltStore: https://altstore.io/
-
Launch AltStore on your computer and connect your iOS device to your computer using a USB cable.
-
Trust your computer on your iOS device and trust your iOS device on your computer.
-
Enter your Apple ID and password on AltStore on your computer. This is required to sign the apps that you install using AltStore.
-
Open the AltStore app on your iOS device and tap on the Browse tab.
-
Search for RetroArch and tap on the Install button next to it.
-
Wait for the installation to complete. You may need to enter your Apple ID and password again.
-
Go to Settings > General > Device Management on your iOS device and trust the developer profile of RetroArch.
-
-
Congratulations! You have successfully downloaded RetroArch using AltStore. Now you can proceed to install it on your iOS device.
-
How to Install PS3 Emulator for iOS?
-
Steps to install PS3 emulator for iOS using AltStore
-
To install RetroArch on your iOS device using AltStore, you need to follow these steps:
-
-
Launch the AltStore app on your iOS device and tap on the My Apps tab.
-
Tap on the RetroArch icon and then tap on the Open button.
-
Allow RetroArch to access your photos, media, and files on your iOS device.
-
Accept the terms and conditions of RetroArch.
-
Select your preferred language for RetroArch.
-
-
Congratulations! You have successfully installed RetroArch on your iOS device using AltStore. Now you can proceed to install PS3 firmware and ROMs for RetroArch.
-
How to install PS3 firmware and ROMs for PS3 emulator for iOS
-
To play PS3 games on RetroArch, you need to install PS3 firmware and ROMs for RetroArch. PS3 firmware is the software that runs the PS3 console, while ROMs are the files that contain the games. To install PS3 firmware and ROMs for RetroArch, you need to follow these steps:
-
-
Download the PS3 firmware from a trusted source. You can find it online by searching for "PS3 firmware download". Make sure you download the latest version of the firmware.
-
Extract the firmware file using a file manager app or a computer. You should get a file named "PS3UPDAT.PUP".
-
Rename the file to "PS3UPDAT.PUP.bak" and copy it to a folder named "firmware" in the Documents folder of your iOS device.
-
Download the ROMs of the PS3 games that you want to play from a trusted source. You can find them online by searching for "PS3 ROMs download". Make sure you download the ROMs that are compatible with RetroArch.
-
Extract the ROMs files using a file manager app or a computer. You should get files with extensions such as ".iso", ".bin", ".cue", ".mdf", ".mds", or ".pbp".
-
Copy the ROMs files to a folder named "roms" in the Documents folder of your iOS device.
-
-
Congratulations! You have successfully installed PS3 firmware and ROMs for RetroArch. Now you can proceed to play PS3 games on your iOS device.
-
How to Play PS3 Games on iOS?
-
Tips and tricks for playing PS3 games on iOS
-
To play PS3 games on RetroArch, you need to follow these tips and tricks:
-
-
Launch RetroArch on your iOS device and tap on the Load Core option.
-
Select PlayStation 3 (Beetle PSX HW) as the core that you want to load.
-
Tap on the Load Content option and navigate to the folder where you stored your ROMs files.
Select the ROM that you want to play and tap on the Run option.
-
Wait for the game to load and enjoy playing it on your iOS device.
-
You can use the on-screen buttons or an external controller to control the game.
-
You can access the RetroArch menu by tapping on the RetroArch icon on the top left corner of the screen.
-
You can save and load your game progress by using the Save State and Load State options in the Quick Menu.
-
You can adjust the settings of the emulator, such as video, audio, input, and cheats, by using the Options and Settings options in the Main Menu.
-
-
Best PS3 games to play on iOS
-
There are many PS3 games that you can play on your iOS device using RetroArch, but some of them are more suitable and enjoyable than others. Here are some of the best PS3 games to play on iOS:
-
| Game | Genre | Description |
| --- | --- | --- |
| The Last of Us | Action-adventure, survival horror | A post-apocalyptic game that follows the journey of Joel and Ellie, two survivors of a fungal outbreak that turned most of humanity into zombie-like creatures. |
| God of War III | Action-adventure, hack and slash | A mythological game that follows the revenge of Kratos, a former Spartan warrior, against the gods of Olympus for betraying him. |
| Uncharted 2: Among Thieves | Action-adventure, third-person shooter | A treasure-hunting game that follows the adventures of Nathan Drake, a charismatic explorer, as he searches for the lost city of Shambhala. |
| Grand Theft Auto V | Action-adventure, open world | A crime game that follows the lives of three protagonists, Michael, Franklin, and Trevor, as they commit heists and other illegal activities in Los Santos. |
| Metal Gear Solid 4: Guns of the Patriots | Action-adventure, stealth | A spy game that follows the final mission of Solid Snake, an aging soldier, as he tries to stop a global war caused by a rogue AI system. |
Conclusion
-
In this article, we have shown you how to download and install a PS3 emulator for iOS, as well as how to play PS3 games on your iOS device. A PS3 emulator for iOS is a great way to enjoy your favorite PS3 games on your iPhone or iPad without having to buy or own a PS3 console. However, you should also be aware of the potential drawbacks and risks of using a PS3 emulator for iOS, such as performance issues, battery drain, security threats, and legal implications. Therefore, you should use a PS3 emulator for iOS at your own discretion and responsibility. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave them in the comments section below. Happy gaming!
-
FAQs
-
Q: Is PS3 emulator for iOS legal?
-
A: The legality of PS3 emulator for iOS depends on several factors, such as where you live, what games you play, and how you obtain them. Generally speaking, emulators themselves are not illegal, but downloading or using ROMs without owning the original games or having their permission may be illegal. Therefore, you should always check your local laws and regulations before using a PS3 emulator for iOS.
-
Q: Is PS3 emulator for iOS safe?
-
A: The safety of PS3 emulator for iOS depends on several factors, such as where you download or install it from, what games you play, and how you protect your iOS device. Generally speaking, emulators themselves are not harmful, but downloading or installing them from untrusted sources may expose your iOS device to malware or viruses. Therefore, you should always use a reputable app store or website to download or install a PS3 emulator for iOS.
-
Q: Is PS3 emulator for iOS free?
-
A: The cost of PS3 emulator for iOS depends on several factors, such as which app store or website you use to download or install it from, what features or services it offers, and whether it has ads or in-app purchases. Generally speaking, most emulators are free or have a free version available, but some may charge a fee or require a subscription to access certain features or services. Therefore, you should always check the price and terms of use before downloading or installing a PS3 emulator for iOS.
-
Q: What are the alternatives to PS3 emulator for iOS?
-
A: If you are looking for alternatives to PS3 emulator for iOS, you may consider the following options:
-
-
Use a PS3 console and connect it to your iOS device using a capture card or a streaming app.
-
Use a cloud gaming service that lets you play PS3 games on your iOS device using a remote server.
-
Use a different emulator that supports PS3 games on your iOS device, such as PPSSPP or Play!
-
-
Q: How to update PS3 emulator for iOS?
-
A: To update PS3 emulator for iOS, you need to follow these steps:
-
-
Launch the AltStore app on your iOS device and tap on the My Apps tab.
-
Tap on the Update button next to RetroArch if there is a new version available.
-
Wait for the update to complete and launch RetroArch again.
-
-> Entering new AgentExecutor chain...
-We can calculate this using the Calculator tool.
-
-Action:
-```
-{
- "action": "Calculator",
- "action_input": "0.03 * 300 * 30"
-}
-```
-
-
-Observation: Answer: 270.0
-Thought:Could not parse LLM output: This is the correct answer to the question.
-Observation: Invalid or incomplete response
-Thought:Let me try the same action again.
-
-Action:
-```
-{
- "action": "Calculator",
- "action_input": "0.03 * 300 * 30"
-}
-```
-
-
-Observation: Answer: 270.0
-Thought:Could not parse LLM output: The tool gave the same answer, so I can be confident that it is correct.
-
-Observation: Invalid or incomplete response
-Thought:There seems to be an issue with the LLM response. Let me try a different way to calculate the answer.
-
-Action:
-```
-{
- "action": "Calculator",
- "action_input": "300 * 30 * 0.03"
-}
-```
-
-
-Observation: Answer: 270.0
-Thought:I have successfully calculated the answer to the question using the calculator tool.
-
-Final Answer: 270.0
-
-> Finished chain.
-
-
-
{'input': 'What is the 3% of of 300 * 30?', 'output': '270.0'}
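The verbose trace above only shows the run, not how the agent was constructed. As a rough sketch, and purely as an assumption (the setup cell is not part of this excerpt), a structured-chat agent with LangChain's built-in `llm-math` Calculator tool produces this kind of output:

```python
# Hedged sketch, not the notebook's actual setup: wire the built-in "llm-math"
# Calculator tool into a structured-chat agent so it can answer the question above.
# Assumes OPENAI_API_KEY is set in the environment.
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)  # exposes a tool named "Calculator"
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent("What is the 3% of of 300 * 30?")  # query copied verbatim from the trace
# -> {'input': 'What is the 3% of of 300 * 30?', 'output': '270.0'}
```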
To use, you should have the google-search-results python package installed, and the environment variable SERPAPI_API_KEY set with your API key, or pass serpapi_api_key as a named parameter to the constructor.

Example:

    from langchain import SerpAPIWrapper
    serpapi = SerpAPIWrapper()
Args:
    tool_names: name of tools to load.
    llm: Optional language model, may be needed to initialize certain tools.
    callbacks: Optional callback manager or list of callback handlers.
        If not provided, default global callback manager will be used.

Returns:
    List of tools.
-
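As a hedged example of how this loader is typically called (the model choice and tool names below are illustrative assumptions, not taken from this notebook; `SERPAPI_API_KEY` and `OPENAI_API_KEY` must be set in the environment):

```python
# Illustrative sketch of load_tools: builds the stock "Search" (SerpAPI) and
# "Calculator" tools for use by an agent.
from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
print([tool.name for tool in tools])  # e.g. ['Search', 'Calculator']
```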
Here is the SerpAPIWrapper tool implementation
-
-
from langchain.agents.load_tools import _get_serpapi
-
-
-
??_get_serpapi
-
-
-
Signature: _get_serpapi(**kwargs: Any) -> langchain.tools.base.BaseTool
Docstring: <no docstring>
Source:
def _get_serpapi(**kwargs: Any) -> BaseTool:
    return Tool(
        name="Search",
        description="A search engine. Useful for when you need to answer questions about current events. Input should be a search query.",
        func=SerpAPIWrapper(**kwargs).run,
        coroutine=SerpAPIWrapper(**kwargs).arun,
    )
File: ~/AnimalEquality/lv-recipe-chatbot/env/lib/python3.10/site-packages/langchain/agents/load_tools.py
Type: function
-
-
-
-
Let’s use that for inspiration for our recipe version of the tool
-
-
params = {
    "location": "United States",
    "hl": "en",
    "gl": "us",
}
search = RecipeSerpAPIWrapper(params=params)
serpapi_recipe_tool = Tool(
    name="Vegan Recipe Search",
    description="A search engine. Useful for when you need to fetch existing vetted vegan recipes. Input should be a vegan recipe search query.",
    func=search.run,
)
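As a quick usage sketch for the block above (assuming `RecipeSerpAPIWrapper` is the project's SerpAPIWrapper subclass defined earlier in the notebook, which this excerpt does not show):

```python
# The Tool wrapper exposes .run, so the recipe search can be exercised directly.
results = serpapi_recipe_tool.run("vegan pad thai")
print(results)
```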
-
-
-
@tool
def time(text: str) -> str:
    """Returns today's date; use this for any
    questions related to knowing today's date.
    The input should always be an empty string,
    and this function will always return today's
    date - any date mathematics should occur
    outside this function."""
    return str(date.today())
@tool
def vegan_recipe_serpapi_search(text: str) -> str:
    """Returns a JSON/Python list of dictionaries of recipe data with keys in format:
    ```
    'title': str,
    'link': str,
    'source': str,
    'rating': int,
    'reviews': int,
    'total_time': str,
    'ingredients': [
        str,
        str,
    ```
    The input must be the name of a vegan recipe \
    or query parameters such as ingredients to include, prep time, cuisine region. \
    Only execute the search for vegan recipes and ingredients. \
    If the SerpAPI request errors or recipes are not found, \
    an explanation message will be returned instead of the recipe JSON."""
    params = {
        "q": text,
        "location": "United States",
        "hl": "en",
        "gl": "us",
        "api_key": os.environ["SERPAPI_API_KEY"],
    }

    search = GoogleSearch(params)
    results = search.get_dict()
    if "error" in results.keys():
        return f"Received an error from SerpAPI: {results['error']}\n Query: {text}"

    if "recipes_results" in results.keys():
        return str(results["recipes_results"])

    return "No recipes found for that query"
-
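How these `@tool` functions end up inside the agent is not shown in this excerpt; the trace below is the kind of output a structured-chat agent built from them would produce. A minimal sketch, under the assumption that the standard `initialize_agent` helper is used:

```python
# Hedged sketch: hand the decorated tools above to a structured-chat agent.
# Assumes OPENAI_API_KEY and SERPAPI_API_KEY are set in the environment.
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
agent = initialize_agent(
    [vegan_recipe_serpapi_search, time],
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent("Find me a well-reviewed vegan pad thai recipe")
```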
-> Entering new AgentExecutor chain...
-Thought: I can use the `vegan_recipe_serpapi_search` tool to search for vegan pad thai recipes.
-
-Action:
-```
-{
- "action": "vegan_recipe_serpapi_search",
- "action_input": "vegan pad thai"
-}
-```
-
-
-Observation: [{'title': 'Vegan Pad Thai', 'link': 'https://www.noracooks.com/vegan-pad-thai/', 'source': 'Nora Cooks', 'rating': 5.0, 'reviews': 53, 'total_time': '30 min', 'ingredients': ['Stir fry rice', 'mung bean sprouts', 'soy sauce', 'maple syrup', 'sriracha hot sauce']}, {'title': 'Easy Tofu Pad Thai', 'link': 'https://minimalistbaker.com/easy-tofu-pad-thai/', 'source': 'Minimalist Baker', 'rating': 4.9, 'reviews': 117, 'total_time': '30 min', 'ingredients': ['Pad thai rice', 'peanut sauce', 'thai red', 'soy sauce', 'bean sprouts']}, {'title': 'Vegan Pad Thai', 'link': 'https://www.pickuplimes.com/recipe/speedy-vegan-pad-thai-116', 'source': 'Pick Up Limes', 'rating': 5.0, 'reviews': 34, 'total_time': '30 min', 'ingredients': ['Brown rice noodles', 'red hot', 'soy sauce', 'bean sprouts', 'sriracha hot sauce']}]
-Thought:Could not parse LLM output: The `vegan_recipe_serpapi_search` tool returned a list of three vegan pad thai recipes with their titles, links, sources, ratings, reviews, total time, and ingredients.
-Observation: Invalid or incomplete response
-Thought:I will try running the `vegan_recipe_serpapi_search` tool again with the input "vegan pad thai".
-
-Action:
-```
-{
- "action": "vegan_recipe_serpapi_search",
- "action_input": "vegan pad thai"
-}
-```
-
-
-
-Observation: [{'title': 'Vegan Pad Thai', 'link': 'https://www.noracooks.com/vegan-pad-thai/', 'source': 'Nora Cooks', 'rating': 5.0, 'reviews': 53, 'total_time': '30 min', 'ingredients': ['Stir fry rice', 'mung bean sprouts', 'soy sauce', 'maple syrup', 'sriracha hot sauce']}, {'title': 'Easy Tofu Pad Thai', 'link': 'https://minimalistbaker.com/easy-tofu-pad-thai/', 'source': 'Minimalist Baker', 'rating': 4.9, 'reviews': 117, 'total_time': '30 min', 'ingredients': ['Pad thai rice', 'peanut sauce', 'thai red', 'soy sauce', 'bean sprouts']}, {'title': 'Vegan Pad Thai', 'link': 'https://www.pickuplimes.com/recipe/speedy-vegan-pad-thai-116', 'source': 'Pick Up Limes', 'rating': 5.0, 'reviews': 34, 'total_time': '30 min', 'ingredients': ['Brown rice noodles', 'red hot', 'soy sauce', 'bean sprouts', 'sriracha hot sauce']}]
-Thought:Could not parse LLM output: The `vegan_recipe_serpapi_search` tool returned a list of three vegan pad thai recipes with their titles, links, sources, ratings, reviews, total time, and ingredients.
-
-Observation: Invalid or incomplete response
-Thought:I will try running the `vegan_recipe_serpapi_search` tool again with the input "vegan pad thai recipes".
-
-Action:
-```
-{
- "action": "vegan_recipe_serpapi_search",
- "action_input": "vegan pad thai recipes"
-}
-```
-
-
-
-Observation: [{'title': 'Easy Tofu Pad Thai', 'link': 'https://minimalistbaker.com/easy-tofu-pad-thai/', 'source': 'Minimalist Baker', 'rating': 4.9, 'reviews': 117, 'total_time': '30 min', 'ingredients': ['Pad thai rice', 'peanut sauce', 'thai red', 'soy sauce', 'bean sprouts']}, {'title': 'Vegan Pad Thai', 'link': 'https://www.noracooks.com/vegan-pad-thai/', 'source': 'Nora Cooks', 'rating': 5.0, 'reviews': 53, 'total_time': '30 min', 'ingredients': ['Stir fry rice', 'mung bean sprouts', 'soy sauce', 'maple syrup', 'sriracha hot sauce']}, {'title': 'Vegan Pad Thai', 'link': 'https://www.pickuplimes.com/recipe/speedy-vegan-pad-thai-116', 'source': 'Pick Up Limes', 'rating': 5.0, 'reviews': 34, 'total_time': '30 min', 'ingredients': ['Brown rice noodles', 'red hot', 'soy sauce', 'bean sprouts', 'sriracha hot sauce']}]
-Thought:Could not parse LLM output: I have successfully used the `vegan_recipe_serpapi_search` tool to search for vegan pad thai recipes. The tool returned a list of three vegan pad thai recipes with their titles, links, sources, ratings, reviews, total time, and ingredients.
-
-
-Observation: Invalid or incomplete response
-Thought:I will try running the `vegan_recipe_serpapi_search` tool again with the input "vegan pad thai recipe".
-
-Action:
-```
-{
- "action": "vegan_recipe_serpapi_search",
- "action_input": "vegan pad thai recipe"
-}
-```
-
-
-
-Observation: [{'title': 'Easy Tofu Pad Thai', 'link': 'https://minimalistbaker.com/easy-tofu-pad-thai/', 'source': 'Minimalist Baker', 'rating': 4.9, 'reviews': 117, 'total_time': '30 min', 'ingredients': ['Pad thai rice', 'peanut sauce', 'thai red', 'soy sauce', 'bean sprouts']}, {'title': 'Vegan Pad Thai', 'link': 'https://www.noracooks.com/vegan-pad-thai/', 'source': 'Nora Cooks', 'rating': 5.0, 'reviews': 53, 'total_time': '30 min', 'ingredients': ['Stir fry rice', 'mung bean sprouts', 'soy sauce', 'maple syrup', 'sriracha hot sauce']}, {'title': 'Vegan Pad Thai', 'link': 'https://www.pickuplimes.com/recipe/speedy-vegan-pad-thai-116', 'source': 'Pick Up Limes', 'rating': 5.0, 'reviews': 34, 'total_time': '30 min', 'ingredients': ['Brown rice noodles', 'red hot', 'soy sauce', 'bean sprouts', 'sriracha hot sauce']}]
-Thought:Could not parse LLM output: I have successfully used the `vegan_recipe_serpapi_search` tool to search for vegan pad thai recipes. The tool returned a list of three vegan pad thai recipes with their titles, links, sources, ratings, reviews, total time, and ingredients.
-
-Final Answer: Here are three vegan pad thai recipes:
-1. Easy Tofu Pad Thai from Minimalist Baker
-2. Vegan Pad Thai from Nora Cooks
-3. Vegan Pad Thai from Pick Up Limes.
-
-> Finished chain.
-
-
-
'Here are three vegan pad thai recipes: \n1. Easy Tofu Pad Thai from Minimalist Baker\n2. Vegan Pad Thai from Nora Cooks\n3. Vegan Pad Thai from Pick Up Limes.'
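The repeated `Could not parse LLM output` observations in the trace above mean the model's intermediate text did not follow the expected Action/Final Answer format, so the executor kept re-running the same search before eventually answering. A hedged sketch of one common mitigation, assuming the installed LangChain version supports `handle_parsing_errors` and that `llm` is the same chat model used for the chain above:

```python
# Sketch under stated assumptions; not the project's actual agent setup.
from langchain.agents import AgentType, initialize_agent

agent_executor = initialize_agent(
    tools=[vegan_recipe_serpapi_search],
    llm=llm,  # placeholder for the chat model driving the agent
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,  # feed format errors back to the model instead of retrying blindly
)
```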
Arena Breakout: How to Download and Play the Next-Generation Immersive Tactical FPS on iOS
-
Are you looking for an exciting new mobile game that will challenge your skills and thrill your senses? If so, you may want to take a look at Arena Breakout, a next-generation immersive tactical FPS coming soon to iOS devices. Arena Breakout is a survival shooter that combines realistic graphics and gameplay with high stakes and rewards. In this game, you enter a lawless arena where you scavenge for loot, fight enemies, and try to make it out alive with your haul. You can also customize your weapons and strategies to suit your playstyle and preferences. If you are interested in playing this game, here is how you can download and play Arena Breakout on iOS.
-
What is Arena Breakout?
-
Arena Breakout is a new mobile game developed by Morefun Studios and published by Tencent Games. It is a free-to-play shooter built around three core pillars: survival, looting, and tactics.
A survival shooter with realistic graphics and gameplay
-
Arena Breakout is not your typical shooter where you just point and shoot. It is a survival shooter that requires you to pay attention to several factors that affect your performance and survival. For example, you will have to deal with a full-body injury system that affects your movement, stamina, and accuracy depending on which body part is hit. You will also have to handle realistic recoil, weapon animations, and sound effects that immerse you in the battlefield. In addition, you will face dynamic weather conditions, such as rain, fog, snow, and wind, which can change the game's visibility and difficulty.
-
A looter shooter with high stakes and rewards
-
-
A tactical shooter with customizable weapons and strategies
-
Arena Breakout is not a run-and-gun game. It is a tactical shooter that requires you to plan your moves and coordinate with your team. Before entering the battlefield, you choose your entry gear and loadout, which determine your combat power. You can also customize your weapons with more than 700 gun parts that fit into more than 10 modification slots, mixing and matching parts to build a unique weapon that suits your playstyle. Beyond that, you will have to adapt your strategy to the situation: you can engage enemies head-on, sneak past them, or avoid them entirely, and you can use items such as grenades, flashbangs, smoke bombs, or drones to gain an advantage over your opponents.
- How to download Arena Breakout on iOS?
-
Arena Breakout has not yet officially launched on the App Store, but you can pre-register and be notified when it becomes available. You can also join the beta test and get early access to the game before the official release. Here are the steps to download Arena Breakout on iOS:
-
Pre-register on the App Store or the Tap Tap website
-
The first step is to pre-register for Arena Breakout on the App Store or the Tap Tap website. You can do this by following these links:
By registering, you will receive updates and news about the game, as well as exclusive rewards and perks when the game launches.
-
Wait for the official release date or join the beta test
-
-
Install the game and log in with your account
-
The third step is to install the game and log in with your account. You will need at least 4 GB of free space on your device and a stable internet connection to play. You will also need to create an account or log in with an existing account from other Tencent games, such as PUBG Mobile or Call of Duty Mobile. After logging in, you can access the game and start playing.
-
-
How do you play Arena Breakout on iOS?
-
Arena Breakout is a multiplayer game that supports up to 60 players per match. You can play solo or with friends in different modes, such as team deathmatch, capture the flag, or battle royale. The basic gameplay loop of Arena Breakout is as follows:
-
Choose your entry gear and loadout
-
Before entering the battlefield, you choose your entry gear and loadout, which determine your combat power. You can pick from different weapon categories, such as assault rifles, sniper rifles, shotguns, pistols, or melee weapons, and different attachments, such as scopes, suppressors, grips, or magazines. You can also equip items such as armor plates, helmets, backpacks, or vests, and customize your character's appearance and skills.
-
Scavenge for loot and fight enemies on the open map
-
After choosing your gear and loadout, you enter a large open map scattered with weapons, attachments, ammunition, food, drinks, medicine, and more. You will have to scavenge for loot and fight to survive and win: other players and AI enemies will try to steal your loot or kill you, and dynamic weather conditions can change the game's visibility and difficulty.
-
Get out of the combat zone alive with your loot
-
-
Arena Breakout is a challenging and rewarding game that will test your skills and strategies. To help you improve your gameplay and enjoy the game more, here are some tips and tricks you can use:
-
Use the injury system to your advantage
-
Arena Breakout has a realistic injury system that affects your movement, stamina, and accuracy depending on which body part is hit. If you are shot in the leg, you limp and run more slowly. If you are shot in the arm, you lose stability and recoil control. If you are shot in the head, your vision blurs and your hearing is reduced. To heal, you use bandages, med kits, or syringes that restore health and remove injury effects. You can also turn the injury system against your enemies by targeting specific body parts: a leg shot slows a fleeing enemy, an arm shot reduces their accuracy and damage, and a headshot can stun them long enough to finish them off.
-
Manage your ammunition and supplies wisely
-
Arena Breakout has a realistic ammunition and supply system that requires you to manage resources carefully. You reload manually and have to keep track of how many bullets you have left, and different weapons need different ammunition types, such as 5.56 mm, 7.62 mm, or 9 mm. You also carry supplies such as food, drinks, medicine, or grenades, but inventory space is limited by the size of your backpack, so you have to decide what to keep and what to discard or trade with other players. You will constantly weigh looting more items and risking exposure against staying low and conserving resources.
-
Coordinate with your team and communicate effectively
-
-
Conclusion
-
Arena Breakout is a next-generation immersive tactical FPS coming soon to iOS devices. It is a survival shooter that combines realistic graphics and gameplay with high stakes and rewards: you enter a lawless arena, scavenge for loot, fight enemies, and try to make it out alive with your haul, customizing your weapons and strategies along the way. If you are interested in playing, you can pre-register on the App Store or the Tap Tap website, or join the beta test for early access before the official release.
-
Frequently asked questions
-
-
Q: What are the minimum requirements to play Arena Breakout on iOS?
-
A: You will need an iOS device running iOS 13 or later with at least 4 GB of free space.
-
Q: How can I get more rewards and perks in Arena Breakout?
-
A: You can earn more rewards and perks by pre-registering for the game, joining the beta test, completing missions and achievements, ranking on the leaderboards, or taking part in events and promotions.
-
Q: How can I play Arena Breakout with my friends?
-
A: You can play Arena Breakout with your friends by inviting them to your squad, or joining theirs, through the in-game menu or social media platforms.
-
Q: How can I report bugs or give feedback about Arena Breakout?
-
A: You can report bugs or give feedback by contacting the customer service team through the in-game settings or by emailing support@morefun.com.
-
Q: How can I learn more about Arena Breakout?
-
A: You can learn more about Arena Breakout by visiting the official website or following the official social media accounts on Facebook, Twitter, Instagram, or YouTube.
-
-
-
\ No newline at end of file
diff --git a/spaces/BramVanroy/llama-2-13b-chat-dutch-space/README.md b/spaces/BramVanroy/llama-2-13b-chat-dutch-space/README.md
deleted file mode 100644
index 65b48504e99fc6df38103efb5def6e08adbd278e..0000000000000000000000000000000000000000
--- a/spaces/BramVanroy/llama-2-13b-chat-dutch-space/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Llama 2 13b Chat Dutch
-emoji: 🦙
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.45.1
-app_file: app.py
-pinned: false
-license: other
-suggested_hardware: a10g-small
-_duplicated_from: huggingface-projects/llama-2-13b-chat_
----
-
-# LLAMA v2 finetuned for Dutch Chat
-
-Llama v2 was introduced in [this paper](https://arxiv.org/abs/2307.09288).
-
-This Space demonstrates [BramVanroy/Llama-2-13b-chat-dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch). Please, check the original model card for details.
-
-This Space was duplicated and modified from [huggingface-projects/llama-2-13b-chat](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat).
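As a rough sketch (not part of the original Space code), the underlying model can also be loaded directly with 🤗 Transformers; the generation settings below are assumptions:

```python
# Sketch, assuming a GPU with enough memory and `accelerate` installed;
# this is not the Space's actual app.py.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/Llama-2-13b-chat-dutch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Schrijf een kort gedicht over de zee."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```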
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/detail/execution_policy.h
deleted file mode 100644
index ec554b689016f0482ccfeccf9c6c81bcc528db8d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/execution_policy.h
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-namespace thrust
-{
-namespace detail
-{
-
-struct execution_policy_marker {};
-
-// execution_policy_base serves as a guard against
-// infinite recursion in thrust entry points:
-//
-// template<typename DerivedPolicy>
-// void foo(const thrust::detail::execution_policy_base<DerivedPolicy> &s)
-// {
-//   using thrust::system::detail::generic::foo;
-//
-//   foo(thrust::detail::derived_cast(thrust::detail::strip_const(s));
-// }
-//
-// foo is not recursive when
-// 1. DerivedPolicy is derived from thrust::execution_policy below
-// 2. generic::foo takes thrust::execution_policy as a parameter
-template<typename DerivedPolicy>
-struct execution_policy_base : execution_policy_marker {};
-
-
-template<typename DerivedPolicy>
-THRUST_CONSTEXPR __host__ __device__
-execution_policy_base<DerivedPolicy> &strip_const(const execution_policy_base<DerivedPolicy> &x)
-{
-  return const_cast<execution_policy_base<DerivedPolicy>&>(x);
-}
-
-
-template<typename DerivedPolicy>
-THRUST_CONSTEXPR __host__ __device__
-DerivedPolicy &derived_cast(execution_policy_base<DerivedPolicy> &x)
-{
-  return static_cast<DerivedPolicy&>(x);
-}
-
-
-template<typename DerivedPolicy>
-THRUST_CONSTEXPR __host__ __device__
-const DerivedPolicy &derived_cast(const execution_policy_base<DerivedPolicy> &x)
-{
-  return static_cast<const DerivedPolicy&>(x);
-}
-
-} // end detail
-
-template<typename DerivedPolicy>
-  struct execution_policy
-    : thrust::detail::execution_policy_base<DerivedPolicy>
-{};
-
-} // end thrust
-
diff --git a/spaces/CVPR/WALT/mmdet/models/utils/positional_encoding.py b/spaces/CVPR/WALT/mmdet/models/utils/positional_encoding.py
deleted file mode 100644
index 9bda2bbdbfcc28ba6304b6325ae556fa02554ac1..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/utils/positional_encoding.py
+++ /dev/null
@@ -1,150 +0,0 @@
-import math
-
-import torch
-import torch.nn as nn
-from mmcv.cnn import uniform_init
-
-from .builder import POSITIONAL_ENCODING
-
-
-@POSITIONAL_ENCODING.register_module()
-class SinePositionalEncoding(nn.Module):
- """Position encoding with sine and cosine functions.
-
- See `End-to-End Object Detection with Transformers
- <https://arxiv.org/abs/2005.12872>`_ for details.
-
- Args:
- num_feats (int): The feature dimension for each position
- along x-axis or y-axis. Note the final returned dimension
- for each position is 2 times of this value.
- temperature (int, optional): The temperature used for scaling
- the position embedding. Default 10000.
- normalize (bool, optional): Whether to normalize the position
- embedding. Default False.
- scale (float, optional): A scale factor that scales the position
- embedding. The scale will be used only when `normalize` is True.
- Default 2*pi.
- eps (float, optional): A value added to the denominator for
- numerical stability. Default 1e-6.
- """
-
- def __init__(self,
- num_feats,
- temperature=10000,
- normalize=False,
- scale=2 * math.pi,
- eps=1e-6):
- super(SinePositionalEncoding, self).__init__()
- if normalize:
- assert isinstance(scale, (float, int)), 'when normalize is set,' \
- 'scale should be provided and in float or int type, ' \
- f'found {type(scale)}'
- self.num_feats = num_feats
- self.temperature = temperature
- self.normalize = normalize
- self.scale = scale
- self.eps = eps
-
- def forward(self, mask):
- """Forward function for `SinePositionalEncoding`.
-
- Args:
- mask (Tensor): ByteTensor mask. Non-zero values representing
- ignored positions, while zero values means valid positions
- for this image. Shape [bs, h, w].
-
- Returns:
- pos (Tensor): Returned position embedding with shape
- [bs, num_feats*2, h, w].
- """
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
- if self.normalize:
- y_embed = y_embed / (y_embed[:, -1:, :] + self.eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + self.eps) * self.scale
- dim_t = torch.arange(
- self.num_feats, dtype=torch.float32, device=mask.device)
- dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats)
- pos_x = x_embed[:, :, :, None] / dim_t
- pos_y = y_embed[:, :, :, None] / dim_t
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()),
- dim=4).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()),
- dim=4).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
- return pos
-
- def __repr__(self):
- """str: a string that describes the module"""
- repr_str = self.__class__.__name__
- repr_str += f'(num_feats={self.num_feats}, '
- repr_str += f'temperature={self.temperature}, '
- repr_str += f'normalize={self.normalize}, '
- repr_str += f'scale={self.scale}, '
- repr_str += f'eps={self.eps})'
- return repr_str
-
-
-@POSITIONAL_ENCODING.register_module()
-class LearnedPositionalEncoding(nn.Module):
- """Position embedding with learnable embedding weights.
-
- Args:
- num_feats (int): The feature dimension for each position
- along x-axis or y-axis. The final returned dimension for
- each position is 2 times of this value.
- row_num_embed (int, optional): The dictionary size of row embeddings.
- Default 50.
- col_num_embed (int, optional): The dictionary size of col embeddings.
- Default 50.
- """
-
- def __init__(self, num_feats, row_num_embed=50, col_num_embed=50):
- super(LearnedPositionalEncoding, self).__init__()
- self.row_embed = nn.Embedding(row_num_embed, num_feats)
- self.col_embed = nn.Embedding(col_num_embed, num_feats)
- self.num_feats = num_feats
- self.row_num_embed = row_num_embed
- self.col_num_embed = col_num_embed
- self.init_weights()
-
- def init_weights(self):
- """Initialize the learnable weights."""
- uniform_init(self.row_embed)
- uniform_init(self.col_embed)
-
- def forward(self, mask):
- """Forward function for `LearnedPositionalEncoding`.
-
- Args:
- mask (Tensor): ByteTensor mask. Non-zero values representing
- ignored positions, while zero values means valid positions
- for this image. Shape [bs, h, w].
-
- Returns:
- pos (Tensor): Returned position embedding with shape
- [bs, num_feats*2, h, w].
- """
- h, w = mask.shape[-2:]
- x = torch.arange(w, device=mask.device)
- y = torch.arange(h, device=mask.device)
- x_embed = self.col_embed(x)
- y_embed = self.row_embed(y)
- pos = torch.cat(
- (x_embed.unsqueeze(0).repeat(h, 1, 1), y_embed.unsqueeze(1).repeat(
- 1, w, 1)),
- dim=-1).permute(2, 0,
- 1).unsqueeze(0).repeat(mask.shape[0], 1, 1, 1)
- return pos
-
- def __repr__(self):
- """str: a string that describes the module"""
- repr_str = self.__class__.__name__
- repr_str += f'(num_feats={self.num_feats}, '
- repr_str += f'row_num_embed={self.row_num_embed}, '
- repr_str += f'col_num_embed={self.col_num_embed})'
- return repr_str
diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/model/model.py b/spaces/CVPR/unicl-zero-shot-img-recog/model/model.py
deleted file mode 100644
index 56fca2e55a7f3d249c7195992a1622bf3a2bf808..0000000000000000000000000000000000000000
--- a/spaces/CVPR/unicl-zero-shot-img-recog/model/model.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import pathlib
-import tempfile
-from collections import OrderedDict
-from typing import Tuple, Union
-import logging
-import os
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from timm.models.layers import DropPath, trunc_normal_
-
-from .image_encoder import build_image_encoder
-from .text_encoder import build_text_encoder
-from .text_encoder import build_tokenizer
-from .templates import DEFAULT_TEMPLATES
-
-logger = logging.getLogger(__name__)
-
-
-class UniCLModel(nn.Module):
- def __init__(self, config: dict,):
- super().__init__()
-
- self.conf_lang_encoder = config['MODEL']['TEXT_ENCODER']
- self.tokenizer = build_tokenizer(self.conf_lang_encoder)
-
- self.text_encoder = build_text_encoder(self.conf_lang_encoder, self.tokenizer, config['VERBOSE'])
-
- dim_projection = config['MODEL']['DIM_PROJECTION']
- if hasattr(self.text_encoder, 'dim_out'):
- dim_out = self.text_encoder.dim_out
- else:
- with torch.no_grad():
- dim_out = self.text_encoder(
- torch.zeros(1,1).type(torch.LongTensor)
- )['last_hidden_state'].size(2)
-
- self.text_projection = nn.Parameter(torch.empty(dim_out, dim_projection))
-
- self.conf_image_encoder = config['MODEL']['IMAGE_ENCODER']
- self.image_encoder = build_image_encoder(self.conf_image_encoder)
-
- self.image_projection = nn.Parameter(
- torch.empty(self.image_encoder.dim_out, dim_projection)
- )
-
- self.logit_scale = nn.Parameter(torch.ones([]))
-
- trunc_normal_(self.text_projection, std=.02)
- trunc_normal_(self.image_projection, std=.02)
-
- def _convert_old_weights(self, model_dict):
- model_dict_updated = {}
- for k, v in model_dict.items():
- if k.startswith('visual.'):
- model_dict_updated['image_encoder.'+k[7:]] = v
- elif k.startswith('text.'):
- model_dict_updated['lang_encoder.'+k[5:]] = v
- elif k == 'vision_projection':
- model_dict_updated['image_projection'] = v
- elif k == 'text_projection':
- model_dict_updated['text_projection'] = v
- else:
- model_dict_updated[k] = v
-
- return model_dict_updated
-
- def from_pretrained(self, pretrained='', pretrained_layers=[], verbose=True):
- if not os.path.isfile(pretrained):
- logger.warning(f'=> Pretrained model ({pretrained}) is not a file, skip init weight')
- return
-
- pretrained_dict = torch.load(pretrained, map_location='cpu')
- logger.info(f'=> Loading pretrained model {pretrained}')
- pretrained_dict = self._convert_old_weights(pretrained_dict)
- model_dict = self.state_dict()
- pretrained_dict = {
- k: v for k, v in pretrained_dict.items()
- if k in model_dict.keys()
- }
- need_init_state_dict = {}
- image_encoder_state_dict = {}
- for k, v in pretrained_dict.items():
- need_init = (
- k.split('.')[0] in pretrained_layers
- or pretrained_layers[0] == '*'
- )
-
- if need_init:
- if k.startswith('image_encoder.'):
- image_encoder_state_dict[k] = v
- else:
- if verbose:
- logger.info(f'=> init {k} from {pretrained}')
-
- need_init_state_dict[k] = v
- self.image_encoder.from_state_dict(image_encoder_state_dict, ['*'], verbose)
- self.load_state_dict(need_init_state_dict, strict=False)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- no_weight_decay = {'logit_scale'}
- if hasattr(self.text_encoder, 'no_weight_decay'):
- for k in self.text_encoder.no_weight_decay():
- no_weight_decay.add('lang_encoder.'+k)
-
- if hasattr(self.image_encoder, 'no_weight_decay'):
- for k in self.image_encoder.no_weight_decay():
- no_weight_decay.add('image_encoder.'+k)
-
- return no_weight_decay
-
- @property
- def dtype(self):
- return self.logit_scale.dtype
-
- def get_imnet_embeddings(self):
- templates = IMAGENET_DEFAULT_TEMPLATES[:1]
- clss_embeddings = []
- for clss in IMAGENET_CLASSES:
- txts = [template.format(clss) for template in templates]
-
- tokens = self.tokenizer(
- txts, padding='max_length', truncation=True, max_length=77, return_tensors='pt'
- )
- tokens = {key:(val.cuda() if next(self.parameters()).is_cuda else val) for key,val in tokens.items()}
-
- clss_embedding = self.encode_text(tokens)
- clss_embedding = clss_embedding.mean(dim=0)
- clss_embedding /= clss_embedding.norm()
- clss_embeddings.append(clss_embedding)
- imnet_text_embeddings = torch.stack(clss_embeddings, dim=0)
- return imnet_text_embeddings
-
- def get_text_embeddings(self, texts):
- templates = DEFAULT_TEMPLATES[:1]
- clss_embeddings = []
- for clss in texts:
- txts = [template.format(clss) for template in templates]
-
- tokens = self.tokenizer(
- txts, padding='max_length', truncation=True, max_length=77, return_tensors='pt'
- )
- tokens = {key:(val.cuda() if next(self.parameters()).is_cuda else val) for key,val in tokens.items()}
-
- clss_embedding = self.encode_text(tokens)
- clss_embedding = clss_embedding.mean(dim=0)
- clss_embedding /= clss_embedding.norm()
- clss_embeddings.append(clss_embedding)
- imnet_text_embeddings = torch.stack(clss_embeddings, dim=0)
- return imnet_text_embeddings
-
- def encode_image(self, image, norm=True, output_map=False):
- x = self.image_encoder.forward_features(image, output_map=output_map)
- if output_map:
- x, x_map, H, W = x
-
- x = x @ self.image_projection
-
- if output_map:
- x_map = self.image_projection.unsqueeze(0).transpose(1, 2) @ x_map
-
- if norm:
- x = x / x.norm(dim=-1, keepdim=True)
- if output_map:
- x_map = x_map / x_map.norm(dim=1, keepdim=True)
-
- if output_map:
- return x, x_map, H, W
- else:
- return x
-
- def encode_text(self, text, norm=True):
- x = self.text_encoder(**text)
- x = x['last_hidden_state']
-
- if self.conf_lang_encoder['TOKENIZER'] == 'clip':
- x = x[torch.arange(x.size(0)), text['input_ids'].argmax(dim=-1)]
- else:
- x = x[:, 0]
-
- x = x @ self.text_projection
-
- if norm:
- x = x / x.norm(dim=-1, keepdim=True)
-
- return x
-
- def forward(self, image, text):
- features_image = self.encode_image(image)
- features_text = self.encode_text(text)
-
- # cosine similarity as logits
- T = self.logit_scale.exp()
-
- return features_image, features_text, T
-
-
-def build_unicl_model(config, **kwargs):
- model = UniCLModel(config)
- if config['MODEL']['PRETRAINED'] != '':
- pretrained_path = config['MODEL']['PRETRAINED']
- from ..Utils.Utils import is_valid_url, download_file
- if is_valid_url(pretrained_path):
- with tempfile.TemporaryDirectory() as tmp_path:
- file_local_path = pathlib.Path(tmp_path) / 'base_model.pt'
- download_file(pretrained_path, file_local_path)
- model.from_pretrained(str(file_local_path), config['MODEL']['PRETRAINED_LAYERS'], config['VERBOSE'])
- else:
- model.from_pretrained(pretrained_path, config['MODEL']['PRETRAINED_LAYERS'], config['VERBOSE'])
-
- return model
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/utils/amg.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/utils/amg.py
deleted file mode 100644
index 3a137778e45c464c079658ecb87ec53270e789f7..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/utils/amg.py
+++ /dev/null
@@ -1,346 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-import math
-from copy import deepcopy
-from itertools import product
-from typing import Any, Dict, Generator, ItemsView, List, Tuple
-
-
-class MaskData:
- """
- A structure for storing masks and their related data in batched format.
- Implements basic filtering and concatenation.
- """
-
- def __init__(self, **kwargs) -> None:
- for v in kwargs.values():
- assert isinstance(
- v, (list, np.ndarray, torch.Tensor)
- ), "MaskData only supports list, numpy arrays, and torch tensors."
- self._stats = dict(**kwargs)
-
- def __setitem__(self, key: str, item: Any) -> None:
- assert isinstance(
- item, (list, np.ndarray, torch.Tensor)
- ), "MaskData only supports list, numpy arrays, and torch tensors."
- self._stats[key] = item
-
- def __delitem__(self, key: str) -> None:
- del self._stats[key]
-
- def __getitem__(self, key: str) -> Any:
- return self._stats[key]
-
- def items(self) -> ItemsView[str, Any]:
- return self._stats.items()
-
- def filter(self, keep: torch.Tensor) -> None:
- for k, v in self._stats.items():
- if v is None:
- self._stats[k] = None
- elif isinstance(v, torch.Tensor):
- self._stats[k] = v[torch.as_tensor(keep, device=v.device)]
- elif isinstance(v, np.ndarray):
- self._stats[k] = v[keep.detach().cpu().numpy()]
- elif isinstance(v, list) and keep.dtype == torch.bool:
- self._stats[k] = [a for i, a in enumerate(v) if keep[i]]
- elif isinstance(v, list):
- self._stats[k] = [v[i] for i in keep]
- else:
- raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.")
-
- def cat(self, new_stats: "MaskData") -> None:
- for k, v in new_stats.items():
- if k not in self._stats or self._stats[k] is None:
- self._stats[k] = deepcopy(v)
- elif isinstance(v, torch.Tensor):
- self._stats[k] = torch.cat([self._stats[k], v], dim=0)
- elif isinstance(v, np.ndarray):
- self._stats[k] = np.concatenate([self._stats[k], v], axis=0)
- elif isinstance(v, list):
- self._stats[k] = self._stats[k] + deepcopy(v)
- else:
- raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.")
-
- def to_numpy(self) -> None:
- for k, v in self._stats.items():
- if isinstance(v, torch.Tensor):
- self._stats[k] = v.detach().cpu().numpy()
-
-
-def is_box_near_crop_edge(
- boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0
-) -> torch.Tensor:
- """Filter masks at the edge of a crop, but not at the edge of the original image."""
- crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device)
- orig_box_torch = torch.as_tensor(orig_box, dtype=torch.float, device=boxes.device)
- boxes = uncrop_boxes_xyxy(boxes, crop_box).float()
- near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0)
- near_image_edge = torch.isclose(boxes, orig_box_torch[None, :], atol=atol, rtol=0)
- near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge)
- return torch.any(near_crop_edge, dim=1)
-
-
-def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor:
- box_xywh = deepcopy(box_xyxy)
- box_xywh[2] = box_xywh[2] - box_xywh[0]
- box_xywh[3] = box_xywh[3] - box_xywh[1]
- return box_xywh
-
-
-def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]:
- assert len(args) > 0 and all(
- len(a) == len(args[0]) for a in args
- ), "Batched iteration must have inputs of all the same size."
- n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0)
- for b in range(n_batches):
- yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args]
-
-
-def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]:
- """
- Encodes masks to an uncompressed RLE, in the format expected by
- pycoco tools.
- """
- # Put in fortran order and flatten h,w
- b, h, w = tensor.shape
- tensor = tensor.permute(0, 2, 1).flatten(1)
-
- # Compute change indices
- diff = tensor[:, 1:] ^ tensor[:, :-1]
- change_indices = diff.nonzero()
-
- # Encode run length
- out = []
- for i in range(b):
- cur_idxs = change_indices[change_indices[:, 0] == i, 1]
- cur_idxs = torch.cat(
- [
- torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device),
- cur_idxs + 1,
- torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device),
- ]
- )
- btw_idxs = cur_idxs[1:] - cur_idxs[:-1]
- counts = [] if tensor[i, 0] == 0 else [0]
- counts.extend(btw_idxs.detach().cpu().tolist())
- out.append({"size": [h, w], "counts": counts})
- return out
-
-
-def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray:
- """Compute a binary mask from an uncompressed RLE."""
- h, w = rle["size"]
- mask = np.empty(h * w, dtype=bool)
- idx = 0
- parity = False
- for count in rle["counts"]:
- mask[idx : idx + count] = parity
- idx += count
- parity ^= True
- mask = mask.reshape(w, h)
- return mask.transpose() # Put in C order
-
-
-def area_from_rle(rle: Dict[str, Any]) -> int:
- return sum(rle["counts"][1::2])
-
-
-def calculate_stability_score(
- masks: torch.Tensor, mask_threshold: float, threshold_offset: float
-) -> torch.Tensor:
- """
- Computes the stability score for a batch of masks. The stability
- score is the IoU between the binary masks obtained by thresholding
- the predicted mask logits at high and low values.
- """
- # One mask is always contained inside the other.
-    # Save memory by preventing unnecessary cast to torch.int64
- intersections = (
- (masks > (mask_threshold + threshold_offset))
- .sum(-1, dtype=torch.int16)
- .sum(-1, dtype=torch.int32)
- )
- unions = (
- (masks > (mask_threshold - threshold_offset))
- .sum(-1, dtype=torch.int16)
- .sum(-1, dtype=torch.int32)
- )
- return intersections / unions
-
-
-def build_point_grid(n_per_side: int) -> np.ndarray:
- """Generates a 2D grid of points evenly spaced in [0,1]x[0,1]."""
- offset = 1 / (2 * n_per_side)
- points_one_side = np.linspace(offset, 1 - offset, n_per_side)
- points_x = np.tile(points_one_side[None, :], (n_per_side, 1))
- points_y = np.tile(points_one_side[:, None], (1, n_per_side))
- points = np.stack([points_x, points_y], axis=-1).reshape(-1, 2)
- return points
-
-
-def build_all_layer_point_grids(
- n_per_side: int, n_layers: int, scale_per_layer: int
-) -> List[np.ndarray]:
- """Generates point grids for all crop layers."""
- points_by_layer = []
- for i in range(n_layers + 1):
- n_points = int(n_per_side / (scale_per_layer**i))
- points_by_layer.append(build_point_grid(n_points))
- return points_by_layer
-
-
-def generate_crop_boxes(
- im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float
-) -> Tuple[List[List[int]], List[int]]:
- """
- Generates a list of crop boxes of different sizes. Each layer
- has (2**i)**2 boxes for the ith layer.
- """
- crop_boxes, layer_idxs = [], []
- im_h, im_w = im_size
- short_side = min(im_h, im_w)
-
- # Original image
- crop_boxes.append([0, 0, im_w, im_h])
- layer_idxs.append(0)
-
- def crop_len(orig_len, n_crops, overlap):
- return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops))
-
- for i_layer in range(n_layers):
- n_crops_per_side = 2 ** (i_layer + 1)
- overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side))
-
- crop_w = crop_len(im_w, n_crops_per_side, overlap)
- crop_h = crop_len(im_h, n_crops_per_side, overlap)
-
- crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)]
- crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)]
-
- # Crops in XYWH format
- for x0, y0 in product(crop_box_x0, crop_box_y0):
- box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)]
- crop_boxes.append(box)
- layer_idxs.append(i_layer + 1)
-
- return crop_boxes, layer_idxs
-
-
-def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor:
- x0, y0, _, _ = crop_box
- offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device)
- # Check if boxes has a channel dimension
- if len(boxes.shape) == 3:
- offset = offset.unsqueeze(1)
- return boxes + offset
-
-
-def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor:
- x0, y0, _, _ = crop_box
- offset = torch.tensor([[x0, y0]], device=points.device)
- # Check if points has a channel dimension
- if len(points.shape) == 3:
- offset = offset.unsqueeze(1)
- return points + offset
-
-
-def uncrop_masks(
- masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int
-) -> torch.Tensor:
- x0, y0, x1, y1 = crop_box
- if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h:
- return masks
- # Coordinate transform masks
- pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0)
- pad = (x0, pad_x - x0, y0, pad_y - y0)
- return torch.nn.functional.pad(masks, pad, value=0)
-
-
-def remove_small_regions(
- mask: np.ndarray, area_thresh: float, mode: str
-) -> Tuple[np.ndarray, bool]:
- """
- Removes small disconnected regions and holes in a mask. Returns the
- mask and an indicator of if the mask has been modified.
- """
- import cv2 # type: ignore
-
- assert mode in ["holes", "islands"]
- correct_holes = mode == "holes"
- working_mask = (correct_holes ^ mask).astype(np.uint8)
- n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8)
- sizes = stats[:, -1][1:] # Row 0 is background label
- small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh]
- if len(small_regions) == 0:
- return mask, False
- fill_labels = [0] + small_regions
- if not correct_holes:
- fill_labels = [i for i in range(n_labels) if i not in fill_labels]
- # If every region is below threshold, keep largest
- if len(fill_labels) == 0:
- fill_labels = [int(np.argmax(sizes)) + 1]
- mask = np.isin(regions, fill_labels)
- return mask, True
-
-
-def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]:
- from pycocotools import mask as mask_utils # type: ignore
-
- h, w = uncompressed_rle["size"]
- rle = mask_utils.frPyObjects(uncompressed_rle, h, w)
- rle["counts"] = rle["counts"].decode("utf-8") # Necessary to serialize with json
- return rle
-
-
-def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor:
- """
- Calculates boxes in XYXY format around masks. Return [0,0,0,0] for
- an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4.
- """
- # torch.max below raises an error on empty inputs, just skip in this case
- if torch.numel(masks) == 0:
- return torch.zeros(*masks.shape[:-2], 4, device=masks.device)
-
- # Normalize shape to CxHxW
- shape = masks.shape
- h, w = shape[-2:]
- if len(shape) > 2:
- masks = masks.flatten(0, -3)
- else:
- masks = masks.unsqueeze(0)
-
- # Get top and bottom edges
- in_height, _ = torch.max(masks, dim=-1)
- in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :]
- bottom_edges, _ = torch.max(in_height_coords, dim=-1)
- in_height_coords = in_height_coords + h * (~in_height)
- top_edges, _ = torch.min(in_height_coords, dim=-1)
-
- # Get left and right edges
- in_width, _ = torch.max(masks, dim=-2)
- in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :]
- right_edges, _ = torch.max(in_width_coords, dim=-1)
- in_width_coords = in_width_coords + w * (~in_width)
- left_edges, _ = torch.min(in_width_coords, dim=-1)
-
- # If the mask is empty the right edge will be to the left of the left edge.
- # Replace these boxes with [0, 0, 0, 0]
- empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges)
- out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1)
- out = out * (~empty_filter).unsqueeze(-1)
-
- # Return to original shape
- if len(shape) > 2:
- out = out.reshape(*shape[:-2], 4)
- else:
- out = out[0]
-
- return out
diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/text/cleaners.py b/spaces/CarlDennis/Lovelive-VITS-JPZH/text/cleaners.py
deleted file mode 100644
index 15c5cc1fdff01bbaf399d69e06c59bcffde807be..0000000000000000000000000000000000000000
--- a/spaces/CarlDennis/Lovelive-VITS-JPZH/text/cleaners.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import re
-
-
-def japanese_cleaners(text):
- from text.japanese import japanese_to_romaji_with_accent
- text = japanese_to_romaji_with_accent(text)
- if re.match('[A-Za-z]', text[-1]):
- text += '.'
- return text
-
-
-def japanese_cleaners2(text):
- return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…')
-
-
-def korean_cleaners(text):
- '''Pipeline for Korean text'''
- from text.korean import latin_to_hangul, number_to_hangul, divide_hangul
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text = divide_hangul(text)
- if re.match('[\u3131-\u3163]', text[-1]):
- text += '.'
- return text
-
-
-def chinese_cleaners(text):
- '''Pipeline for Chinese text'''
- from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- if re.match('[ˉˊˇˋ˙]', text[-1]):
- text += '。'
- return text
-
-
-def zh_ja_mixture_cleaners(text):
- from text.mandarin import chinese_to_romaji
- from text.japanese import japanese_to_romaji_with_accent
- chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text)
- japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text)
- for chinese_text in chinese_texts:
- cleaned_text = chinese_to_romaji(chinese_text[4:-4])
- text = text.replace(chinese_text, cleaned_text+' ', 1)
- for japanese_text in japanese_texts:
- cleaned_text = japanese_to_romaji_with_accent(
- japanese_text[4:-4]).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')
- text = text.replace(japanese_text, cleaned_text+' ', 1)
- text = text[:-1]
- if re.match('[A-Za-zɯɹəɥ→↓↑]', text[-1]):
- text += '.'
- return text
-
-
-def sanskrit_cleaners(text):
- text = text.replace('॥', '।').replace('ॐ', 'ओम्')
- if text[-1] != '।':
- text += ' ।'
- return text
-
-
-def cjks_cleaners(text):
- from text.mandarin import chinese_to_lazy_ipa
- from text.japanese import japanese_to_ipa
- from text.korean import korean_to_lazy_ipa
- from text.sanskrit import devanagari_to_ipa
- chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text)
- japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text)
- korean_texts = re.findall(r'\[KO\].*?\[KO\]', text)
- sanskrit_texts = re.findall(r'\[SA\].*?\[SA\]', text)
- for chinese_text in chinese_texts:
- cleaned_text = chinese_to_lazy_ipa(chinese_text[4:-4])
- text = text.replace(chinese_text, cleaned_text+' ', 1)
- for japanese_text in japanese_texts:
- cleaned_text = japanese_to_ipa(japanese_text[4:-4])
- text = text.replace(japanese_text, cleaned_text+' ', 1)
- for korean_text in korean_texts:
- cleaned_text = korean_to_lazy_ipa(korean_text[4:-4])
- text = text.replace(korean_text, cleaned_text+' ', 1)
- for sanskrit_text in sanskrit_texts:
- cleaned_text = devanagari_to_ipa(sanskrit_text[4:-4])
- text = text.replace(sanskrit_text, cleaned_text+' ', 1)
- text = text[:-1]
- if re.match(r'[^\.,!\?\-…~]', text[-1]):
- text += '.'
- return text
diff --git a/spaces/CornSnakeID/CornSnakeMorphID/app.py b/spaces/CornSnakeID/CornSnakeMorphID/app.py
deleted file mode 100644
index 6470d6e06f1ef0066322405a4088f53ec7b0969c..0000000000000000000000000000000000000000
--- a/spaces/CornSnakeID/CornSnakeMorphID/app.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import os
-import requests
-import gradio as gr
-import torch
-from transformers import ViTForImageClassification, ViTImageProcessor
-from dotenv import load_dotenv
-from bs4 import BeautifulSoup
-from PIL import Image
-
-load_dotenv()
-
-# Backup
-classes = ['amel', 'charcoal', 'diffused', 'cinder', 'sunkissed', 'kastanie', 'motley', 'anery', 'bloodred',
- 'tessera', 'caramel', 'ghost', 'hypo', 'stripe', 'lava', 'miami', 'honey', 'snow', 'ultramel', 'wild-type']
-
-classes = ['amelanistic', 'charcoal', 'diffused', 'cinder', 'sunkissed', 'kastanie', 'motley', 'anerythistic',
- 'bloodred',
- 'tessera', 'caramel', 'ghost', 'hypomelanistic', 'stripe', 'lava', 'miami', 'honey', 'snow', 'ultra',
- 'wildtype']
-
-selection = list(classes)
-selection.append("None")
-
-model = ViTForImageClassification.from_pretrained("CornSnakeID/CornSnakes", num_labels=len(classes),
- problem_type="multi_label_classification",
- use_auth_token=os.getenv("HUGGINGFACE_TOKEN"))
-
-feature_extractor = ViTImageProcessor.from_pretrained("CornSnakeID/CornSnakes",
- use_auth_token=os.getenv("HUGGINGFACE_TOKEN"))
-
-prediction = {}
-
-
-def classify(img):
- # Center crop to largest square
- width, height = img.size
- if width > height:
- left = (width - height) / 2
- right = width - left
- top = 0
- bottom = height
- else:
- top = (height - width) / 2
- bottom = height - top
- left = 0
- right = width
- img = img.crop((left, top, right, bottom))
- model.eval()
- with torch.inference_mode():
- outputs = model(**feature_extractor(img, return_tensors="pt"))
- probs = torch.sigmoid(outputs.logits)
- predictions = {}
- max_prob = 0
- max_class = ""
- for i, prob in enumerate(probs[0]):
- predictions[classes[i]] = prob.item()
- if prob > max_prob:
- max_prob = prob
- max_class = classes[i]
-
- global prediction
- prediction = dict(predictions)
-
- sum_text = "Looks like the estimation isn't very confident. Try another picture or change the crop."
- if max_prob > 0.5:
- sum_text = f"This snake might be a {max_class}, but you should try with more pictures or use the explore tab " \
- f"to find out more."
-
- if max_prob > 0.8:
- sum_text = f"This snake is probably a {max_class}."
-
- if max_prob > 0.9:
- sum_text = f"This snake is very likely a {max_class}."
-
- return "### " + sum_text, predictions
-
-
-def search(morphs):
- for i, morph in enumerate(morphs):
- morphs[i] = morphs[i].lower()
-
- images = []
- links = []
-
- html = requests.get("https://iansvivarium.com/morphs/").text
- soup = BeautifulSoup(html, "html.parser")
- topic_list = soup.find("ul", {"class": "topiclist"})
-
- found = 0
-
- for li in topic_list.find_all("li"):
- if li.find("span") is not None:
- combo = li.find("span").text.lower()
- if all(morph in combo for morph in morphs):
- src = "https://iansvivarium.com/morphs/" + li.find("img")["src"]
- src = src.replace("tiny", "large")
- # Download image
- img = requests.get(src, stream=True).raw
- img = Image.open(img).convert("RGB")
-
- href = f"[{combo}]({'https://iansvivarium.com/morphs/' + li.find('a')['href'].split('&')[0]})"
-
- images.append(img)
- links.append(href)
-
- found += 1
-
- if found >= 6:
- break
-
- if len(images) > 0:
- return images, "\n".join(links)
-
- return None, "## No morph found."
-
-
-def explore(number):
- if len(prediction) == 0 or prediction is None:
- return None, "## No morph found. Did you classify a snake?"
-
- max_prob = 0
- max_classes = []
- for i, p in enumerate(prediction.keys()):
- if prediction[p] > max_prob:
- max_prob = prediction[p]
- max_classes.append(p)
- if len(max_classes) > number:
- max_classes.pop(0)
-
- return search(max_classes)
-
-
-def man_explore(m1, m2, m3):
- morphs = []
- if m1 and m1 != "None":
- morphs.append(m1)
- if m2 and m2 != "None":
- morphs.append(m2)
- if m3 and m3 != "None":
- morphs.append(m3)
-
- if len(morphs) == 0:
- return None, "No morph selected. Please select at least one morph to explore."
- return search(morphs)
-
-
-# TODO: make this look better
-css = """
-.gradio-container {
-background: linear-gradient(243deg, #e49f2c, #b47d21, #dfc7a0, #e4772d, #f2e1ce, #ee7f41);
-background-size: 1200% 1200%;
-
--webkit-animation: AnimationName 0s ease infinite;
--moz-animation: AnimationName 0s ease infinite;
-animation: AnimationName 0s ease infinite;
-}
-
-@-webkit-keyframes AnimationName {
- 0%{background-position:0% 50%}
- 50%{background-position:100% 50%}
- 100%{background-position:0% 50%}
-}
-@-moz-keyframes AnimationName {
- 0%{background-position:0% 50%}
- 50%{background-position:100% 50%}
- 100%{background-position:0% 50%}
-}
-@keyframes AnimationName {
- 0%{background-position:0% 50%}
- 50%{background-position:100% 50%}
- 100%{background-position:0% 50%}
-}"""
-
-with gr.Blocks(analytics_enabled=True, title="Corn Snake Morph ID") as demo:
- gr.Markdown("## Corn Snake Morph Type Classifier")
- with gr.Row():
- with gr.Column():
- gr.Markdown("### Upload your photo of a corn snake")
- inp = gr.Image(shape=(224, 224), source="upload", type="pil")
- submit = gr.Button("Submit", variant="primary")
- gr.Markdown(
- "For best results, crop to a square covering the snake's body. Upload a photo taken in a cage or on a "
- "hand. Please run this model multiple times from different angles and with different lighting "
- "conditions. Correct predictions usually have a score of 85% or higher. Note that the image will be "
- "center cropped if it is not square. You may use the 'select' tool to crop the image to a square.")
-
- with gr.Column():
- gr.Markdown("### Results")
- summary = gr.Markdown("")
- classification = gr.Label(num_top_classes=8)
- submit.click(fn=classify, inputs=inp, outputs=[summary, classification])
-
- gr.Markdown("### Find out more")
- with gr.Row():
- with gr.Accordion("Explore morph images (from iansvivarium)", open=False):
- with gr.Tab("From prediction"):
- num_morphs = gr.Slider(minimum=1, maximum=3, value=1, step=1,
- label="Number of top morphs to include")
- with gr.Column():
- comb_text = gr.Markdown("")
- combination = gr.Gallery()
- images = gr.Button("Find images", variant="primary")
- images.click(fn=explore, inputs=[num_morphs], outputs=[combination, comb_text])
- combination_close = gr.Button("Close")
- with gr.Tab("From morphs"):
- with gr.Column():
- morph1 = gr.Dropdown(choices=selection, label="Morph 1")
- morph2 = gr.Dropdown(choices=selection, label="Morph 2")
- morph3 = gr.Dropdown(choices=selection, label="Morph 3")
- man_comb_text = gr.Markdown("")
- man_combination = gr.Gallery()
- morphs_submit = gr.Button("Find images", variant="primary")
- morphs_submit.click(fn=man_explore, inputs=[morph1, morph2, morph3],
- outputs=[man_combination, man_comb_text])
- man_combination_close = gr.Button("Close")
- gr.Markdown("## Donate to the project")
- gr.Markdown("If you found this model useful, please consider "
- "[donating to the project](https://ko-fi.com/ethanporcaro)."
- " This will let me pay for faster servers with more computing power and allow me to add more features"
- " to the model. ")
- demo.launch()
diff --git a/spaces/Cpp4App/Cpp4App/SEM/find_subtitle.py b/spaces/Cpp4App/Cpp4App/SEM/find_subtitle.py
deleted file mode 100644
index fe6a9dd3929e9ca90ef68ee5f545134991d96a6c..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/SEM/find_subtitle.py
+++ /dev/null
@@ -1,180 +0,0 @@
-import csv
-import os
-import bs4
-
-
-def find_title_Label(path):
- a = 0
- soup = bs4.BeautifulSoup(open(path,encoding='utf-8'), features="html.parser")
- all_list = ["","","","","","",""]
- list_index = ['h1','h2','h3','h4','h5','strong','b']
- h1_list = soup.find_all('h1')
- if len(h1_list) <= 2:
- h1_list = None
- try:
- for h1 in h1_list:
- all_list[0] += h1.text
-
- except Exception:
- a = 1
- h2_list = soup.find_all('h2')
- if len(h2_list) <= 2:
- h2_list = None
- try:
- for h2 in h2_list:
- all_list[1] += h2.text
- except Exception:
- a = 1
- h3_list = soup.find_all('h3')
- if len(h3_list) <= 2:
- h3_list = None
- try:
- for h3 in h3_list:
- all_list[2] += h3.text
- except Exception:
- a = 1
- h4_list = soup.find_all('h4')
- if len(h4_list) <= 2:
- h4_list = None
- try:
- for h4 in h4_list:
- all_list[3] += h4.text
- except Exception:
- a = 1
- h5_list = soup.find_all('h5')
- if len(h5_list) <= 2:
- h5_list = None
- try:
- for h5 in h5_list:
- all_list[4] += h5.text
- except Exception:
- a = 1
- strong_list = soup.find_all('strong')
- if len(strong_list) <= 2:
- strong_list = None
- try:
- for st in strong_list:
- all_list[5] += st.text
- except Exception:
- a = 1
- b_list = soup.find_all('b')
- if len(b_list) <= 2:
- b_list = None
- try:
- for b in b_list:
- all_list[6] += b.text
- except Exception:
- a = 1
- long = 0
- maxLongList = None
- for list in all_list:
- if list == None:
- continue
- clean_list = list.lower()
-
- if "information" in clean_list and "collect" in clean_list:
-
- return list_index[all_list.index(list)]
- if "information" in clean_list and "use" in clean_list:
-
- return list_index[all_list.index(list)]
- if "change" in clean_list and "data" in clean_list:
-
- return list_index[all_list.index(list)]
- if len(list) > long:
- long = len(list)
- maxLongList = list
- if maxLongList == None:
- return "TitleError"
-
- return list_index[all_list.index(maxLongList)]
-
-def find_title_Label_with_html(file):
- a = 0
- soup = bs4.BeautifulSoup(file, features="html.parser")
- all_list = ["","","","","","",""]
- list_index = ['h1','h2','h3','h4','h5','strong','b']
- h1_list = soup.find_all('h1')
- if len(h1_list) <= 2:
- h1_list = None
- try:
- for h1 in h1_list:
- all_list[0] += h1.text
-
- except Exception:
- a = 1
- h2_list = soup.find_all('h2')
- if len(h2_list) <= 2:
- h2_list = None
- try:
- for h2 in h2_list:
- all_list[1] += h2.text
- except Exception:
- a = 1
- h3_list = soup.find_all('h3')
- if len(h3_list) <= 2:
- h3_list = None
- try:
- for h3 in h3_list:
- all_list[2] += h3.text
- except Exception:
- a = 1
- h4_list = soup.find_all('h4')
- if len(h4_list) <= 2:
- h4_list = None
- try:
- for h4 in h4_list:
- all_list[3] += h4.text
- except Exception:
- a = 1
- h5_list = soup.find_all('h5')
- if len(h5_list) <= 2:
- h5_list = None
- try:
- for h5 in h5_list:
- all_list[4] += h5.text
- except Exception:
- a = 1
- strong_list = soup.find_all('strong')
- if len(strong_list) <= 2:
- strong_list = None
- try:
- for st in strong_list:
- all_list[5] += st.text
- except Exception:
- a = 1
- b_list = soup.find_all('b')
- if len(b_list) <= 2:
- b_list = None
- try:
- for b in b_list:
- all_list[6] += b.text
- except Exception:
- a = 1
- long = 0
- maxLongList = None
- for list in all_list:
- if list == None:
- continue
- clean_list = list.lower()
-
- if "information" in clean_list and "collect" in clean_list:
-
- return list_index[all_list.index(list)]
- if "information" in clean_list and "use" in clean_list:
-
- return list_index[all_list.index(list)]
- if "change" in clean_list and "data" in clean_list:
-
- return list_index[all_list.index(list)]
- if len(list) > long:
- long = len(list)
- maxLongList = list
- if maxLongList == None:
- return "TitleError"
-
- return list_index[all_list.index(maxLongList)]
-
-
-
-
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/statistic.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/statistic.py
deleted file mode 100644
index 69dab91c46cd93c0e666dca9aa067a7cbe384ac5..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/statistic.py
+++ /dev/null
@@ -1,16 +0,0 @@
-#coding=utf-8
-'''
-Created on 2016-10-08
-
-@author: dengdan
-'''
-import numpy as np
-import util.np
-
-def D(x):
- x = util.np.flatten(x)
- return np.var(x)
-
-def E(x):
- x = util.np.flatten(x)
- return np.average(x)
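`D` and `E` above simply flatten an array and take its variance and mean. A two-line NumPy equivalent for reference, assuming `util.np.flatten` behaves like `np.ravel`:

    import numpy as np

    x = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(np.var(x.ravel()))      # 1.25, what D(x) would return
    print(np.average(x.ravel()))  # 2.5, what E(x) would return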
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DJQmUKV/rvc-inference/infer_pack/transforms.py b/spaces/DJQmUKV/rvc-inference/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/DJQmUKV/rvc-inference/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
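A minimal sketch of calling `piecewise_rational_quadratic_transform` above with linear tails, roughly how these splines are used inside flow layers; the batch size, bin count, and tail bound are illustrative assumptions, and the functions are assumed to be importable from the module above:

    import torch

    batch, num_bins = 4, 10
    inputs = torch.rand(batch) * 2 - 1                           # values inside the tail bound [-1, 1]
    unnormalized_widths = torch.randn(batch, num_bins)
    unnormalized_heights = torch.randn(batch, num_bins)
    unnormalized_derivatives = torch.randn(batch, num_bins - 1)  # padded to num_bins + 1 for linear tails

    outputs, logabsdet = piecewise_rational_quadratic_transform(
        inputs,
        unnormalized_widths,
        unnormalized_heights,
        unnormalized_derivatives,
        inverse=False,
        tails="linear",
        tail_bound=1.0,
    )
    print(outputs.shape, logabsdet.shape)  # torch.Size([4]) torch.Size([4])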
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/errors.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/errors.py
deleted file mode 100644
index 18cbebbaf91ff7d5a515321a006be3eb1d83faaf..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/errors.py
+++ /dev/null
@@ -1,2 +0,0 @@
-class ColorLibError(Exception):
- pass
diff --git a/spaces/Dagfinn1962/stablediffusion-members/README.md b/spaces/Dagfinn1962/stablediffusion-members/README.md
deleted file mode 100644
index c62bd0fcee01c01753384bdd61a4f466fd97aba9..0000000000000000000000000000000000000000
--- a/spaces/Dagfinn1962/stablediffusion-members/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Maximum Multiplier
-emoji: 🛕🛕
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: Dagfinn1962/stablediffusion-models
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Daniel947/stabilityai-stable-diffusion-2-1/README.md b/spaces/Daniel947/stabilityai-stable-diffusion-2-1/README.md
deleted file mode 100644
index 197de1f7acf51de2e5c0e7a85291dae065f72a35..0000000000000000000000000000000000000000
--- a/spaces/Daniel947/stabilityai-stable-diffusion-2-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2 1
-emoji: ⚡
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/visualization/boundary.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/visualization/boundary.py
deleted file mode 100644
index 8a87a5c5d2edb73ffb79ea08fec1d50c31fd8498..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/visualization/boundary.py
+++ /dev/null
@@ -1,161 +0,0 @@
-"""
-@date: 2021/06/19
-@description:
-"""
-
-import matplotlib.pyplot as plt
-import cv2
-import numpy as np
-from utils.conversion import uv2pixel
-from utils.boundary import corners2boundary, corners2boundaries, find_peaks, connect_corners_uv, get_object_cor, \
- visibility_corners
-
-
-def draw_boundary(pano_img, corners: np.ndarray = None, boundary: np.ndarray = None, draw_corners=True, show=False,
- step=0.01, length=None, boundary_color=None, marker_color=None, title=None, visible=True):
- if marker_color is None:
- marker_color = [0, 0, 1]
- if boundary_color is None:
- boundary_color = [0, 1, 0]
-
- assert corners is not None or boundary is not None, "corners or boundary error"
-
- shape = sorted(pano_img.shape)
- assert len(shape) > 1, "pano_img shape error"
- w = shape[-1]
- h = shape[-2]
-
- pano_img = pano_img.copy()
- if (corners is not None and len(corners) > 2) or \
- (boundary is not None and len(boundary) > 2):
- if isinstance(boundary_color, list) or isinstance(boundary_color, np.ndarray):
- if boundary is None:
- boundary = corners2boundary(corners, step, length, visible)
-
- boundary = uv2pixel(boundary, w, h)
- pano_img[boundary[:, 1], boundary[:, 0]] = boundary_color
- pano_img[np.clip(boundary[:, 1] + 1, 0, h - 1), boundary[:, 0]] = boundary_color
- pano_img[np.clip(boundary[:, 1] - 1, 0, h - 1), boundary[:, 0]] = boundary_color
-
- if pano_img.shape[1] > 512:
- pano_img[np.clip(boundary[:, 1] + 1, 0, h - 1), np.clip(boundary[:, 0] + 1, 0, w - 1)] = boundary_color
- pano_img[np.clip(boundary[:, 1] + 1, 0, h - 1), np.clip(boundary[:, 0] - 1, 0, w - 1)] = boundary_color
- pano_img[np.clip(boundary[:, 1] - 1, 0, h - 1), np.clip(boundary[:, 0] + 1, 0, w - 1)] = boundary_color
- pano_img[np.clip(boundary[:, 1] - 1, 0, h - 1), np.clip(boundary[:, 0] - 1, 0, w - 1)] = boundary_color
-
- pano_img[boundary[:, 1], np.clip(boundary[:, 0] + 1, 0, w - 1)] = boundary_color
- pano_img[boundary[:, 1], np.clip(boundary[:, 0] - 1, 0, w - 1)] = boundary_color
-
- if corners is not None and draw_corners:
- if visible:
- corners = visibility_corners(corners)
- corners = uv2pixel(corners, w, h)
- for corner in corners:
- cv2.drawMarker(pano_img, tuple(corner), marker_color, markerType=0, markerSize=10, thickness=2)
-
- if show:
- plt.figure(figsize=(10, 5))
- if title is not None:
- plt.title(title)
-
- plt.axis('off')
- plt.imshow(pano_img)
- plt.show()
-
- return pano_img
-
-
-def draw_boundaries(pano_img, corners_list: list = None, boundary_list: list = None, draw_corners=True, show=False,
- step=0.01, length=None, boundary_color=None, marker_color=None, title=None, ratio=None, visible=True):
- """
-
- :param visible:
- :param pano_img:
- :param corners_list:
- :param boundary_list:
- :param draw_corners:
- :param show:
- :param step:
- :param length:
- :param boundary_color: RGB color
- :param marker_color: RGB color
- :param title:
- :param ratio: ceil_height/camera_height
- :return:
- """
- assert corners_list is not None or boundary_list is not None, "corners_list or boundary_list error"
-
- if corners_list is not None:
- if ratio is not None and len(corners_list) == 1:
- corners_list = corners2boundaries(ratio, corners_uv=corners_list[0], step=None, visible=visible)
-
- for i, corners in enumerate(corners_list):
- pano_img = draw_boundary(pano_img, corners=corners, draw_corners=draw_corners,
- show=show if i == len(corners_list) - 1 else False,
- step=step, length=length, boundary_color=boundary_color, marker_color=marker_color,
- title=title, visible=visible)
- elif boundary_list is not None:
- if ratio is not None and len(boundary_list) == 1:
- boundary_list = corners2boundaries(ratio, corners_uv=boundary_list[0], step=None, visible=visible)
-
- for i, boundary in enumerate(boundary_list):
- pano_img = draw_boundary(pano_img, boundary=boundary, draw_corners=draw_corners,
- show=show if i == len(boundary_list) - 1 else False,
- step=step, length=length, boundary_color=boundary_color, marker_color=marker_color,
- title=title, visible=visible)
-
- return pano_img
-
-
-def draw_object(pano_img, heat_maps, size, depth, window_width=15, show=False):
- # window, door, opening
- colors = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
- for i, heat_map in enumerate(heat_maps):
- pk_u_s, _ = find_peaks(heat_map, size=window_width*2+1)
- for pk_u in pk_u_s:
- uv, xyz = get_object_cor(depth, size, center_u=pk_u, patch_num=len(heat_map))
-
- bottom_poly = connect_corners_uv(uv[0], uv[1], length=pano_img.shape[1])
- top_poly = connect_corners_uv(uv[2], uv[3], length=pano_img.shape[1])[::-1]
-
- bottom_max_index = bottom_poly[..., 0].argmax()
- if bottom_max_index != len(bottom_poly)-1:
- top_max_index = top_poly[..., 0].argmax()
- poly1 = np.concatenate([bottom_poly[:bottom_max_index+1], top_poly[top_max_index:]])
- poly1 = uv2pixel(poly1, w=pano_img.shape[1], h=pano_img.shape[0])
- poly1 = poly1[:, None, :]
-
- poly2 = np.concatenate([bottom_poly[bottom_max_index+1:], top_poly[:top_max_index]])
- poly2 = uv2pixel(poly2, w=pano_img.shape[1], h=pano_img.shape[0])
- poly2 = poly2[:, None, :]
-
- poly = [poly1, poly2]
- else:
- poly = np.concatenate([bottom_poly, top_poly])
- poly = uv2pixel(poly, w=pano_img.shape[1], h=pano_img.shape[0])
- poly = poly[:, None, :]
- poly = [poly]
-
- cv2.drawContours(pano_img, poly, -1, colors[i], 1)
- #
- # boundary_center_xyz = uv2xyz(np.array([pk_u, pk_v]))
- #
- # l_b_xyz =
- if show:
- plt.imshow(pano_img)
- plt.show()
-
-
-if __name__ == '__main__':
- from visualization.floorplan import draw_floorplan
- from utils.conversion import uv2xyz
-
- pano_img = np.zeros([512, 1024, 3])
- corners = np.array([[0.2, 0.7],
- [0.4, 0.7],
- [0.3, 0.6],
- [0.6, 0.6],
- [0.8, 0.7]])
- # draw_boundary(pano_img, corners, show=True)
- draw_boundaries(pano_img, corners_list=[corners], show=True, length=1024, ratio=1.2)
- draw_floorplan(uv2xyz(corners)[..., ::2], show=True, marker_color=None, center_color=0.8)
\ No newline at end of file
diff --git a/spaces/Dinoking/Flower-Classification-v1/app.py b/spaces/Dinoking/Flower-Classification-v1/app.py
deleted file mode 100644
index 4ed7c204ae14ed51c61735618fdd4f7d425acf03..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Flower-Classification-v1/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import gradio as gr
-import matplotlib.pyplot as plt
-import numpy as np
-import PIL
-import tensorflow as tf
-
-from tensorflow import keras
-from tensorflow.keras import layers
-from tensorflow.keras.models import Sequential
-
-from keras.models import load_model
-model1 = load_model('model1.h5')
-
-class_names = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
-def predict_image(img):
- img_4d=img.reshape(-1,180,180,3)
- prediction=model1.predict(img_4d)[0]
- return {class_names[i]: float(prediction[i]) for i in range(5)}
-
-image = gr.inputs.Image(shape=(180,180))
-label = gr.outputs.Label(num_top_classes=3)
-enable_queue=True
-description="This is a Flower Classification Model made using a CNN.Deployed to Hugging Faces using Gradio."
-examples = ['dandelion.jpg','sunflower.jpeg','tulip.jpg']
-article="
" + " \n".join([f"{html.escape(x)}" for x in text.split("\n")]) + "
"
- )
- return text
-
-
-def predict(
- image: PIL.Image.Image,
- model_name: str,
- general_threshold: float,
- character_threshold: float,
- tag_names: list[str],
- rating_indexes: list[np.int64],
- general_indexes: list[np.int64],
- character_indexes: list[np.int64],
-):
- global loaded_models
-
- rawimage = image
-
- model = loaded_models[model_name]
- if model is None:
- model = change_model(model_name)
-
- _, height, width, _ = model.get_inputs()[0].shape
-
- # Alpha to white
- image = image.convert("RGBA")
- new_image = PIL.Image.new("RGBA", image.size, "WHITE")
- new_image.paste(image, mask=image)
- image = new_image.convert("RGB")
- image = np.asarray(image)
-
- # PIL RGB to OpenCV BGR
- image = image[:, :, ::-1]
-
- image = dbimutils.make_square(image, height)
- image = dbimutils.smart_resize(image, height)
- image = image.astype(np.float32)
- image = np.expand_dims(image, 0)
-
- input_name = model.get_inputs()[0].name
- label_name = model.get_outputs()[0].name
- probs = model.run([label_name], {input_name: image})[0]
-
- labels = list(zip(tag_names, probs[0].astype(float)))
-
- # First 4 labels are actually ratings: pick one with argmax
- ratings_names = [labels[i] for i in rating_indexes]
- rating = dict(ratings_names)
-
- # Then we have general tags: pick any where prediction confidence > threshold
- general_names = [labels[i] for i in general_indexes]
- general_res = [x for x in general_names if x[1] > general_threshold]
- general_res = dict(general_res)
-
- # Everything else is characters: pick any where prediction confidence > threshold
- character_names = [labels[i] for i in character_indexes]
- character_res = [x for x in character_names if x[1] > character_threshold]
- character_res = dict(character_res)
-
- b = dict(sorted(general_res.items(), key=lambda item: item[1], reverse=True))
- a = (
- ", ".join(list(b.keys()))
- .replace("_", " ")
- .replace("(", "\(")
- .replace(")", "\)")
- )
- c = ", ".join(list(b.keys()))
-
- items = rawimage.info
- geninfo = ""
-
- if "exif" in rawimage.info:
- exif = piexif.load(rawimage.info["exif"])
- exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b"")
- try:
- exif_comment = piexif.helper.UserComment.load(exif_comment)
- except ValueError:
- exif_comment = exif_comment.decode("utf8", errors="ignore")
-
- items["exif comment"] = exif_comment
- geninfo = exif_comment
-
- for field in [
- "jfif",
- "jfif_version",
- "jfif_unit",
- "jfif_density",
- "dpi",
- "exif",
- "loop",
- "background",
- "timestamp",
- "duration",
- ]:
- items.pop(field, None)
-
- geninfo = items.get("parameters", geninfo)
-
- info = f"""
-<p><h4>PNG Info</h4></p>
-"""
- for key, text in items.items():
- info += (
- f"""
-<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()
- + "\n"
- )
-
- if len(info) == 0:
- message = "Nothing found in the image."
- info = f"
{message}
"
-
- return (a, c, rating, character_res, general_res, info)
-
-
-def main():
- global loaded_models
- loaded_models = {"SwinV2": None, "ConvNext": None, "ViT": None}
-
- args = parse_args()
-
- change_model("SwinV2")
-
- tag_names, rating_indexes, general_indexes, character_indexes = load_labels()
-
- func = functools.partial(
- predict,
- tag_names=tag_names,
- rating_indexes=rating_indexes,
- general_indexes=general_indexes,
- character_indexes=character_indexes,
- )
-
- gr.Interface(
- fn=func,
- inputs=[
- gr.Image(type="pil", label="Input"),
- gr.Radio(["SwinV2", "ConvNext", "ViT"], value="SwinV2", label="Model"),
- gr.Slider(
- 0,
- 1,
- step=args.score_slider_step,
- value=args.score_general_threshold,
- label="General Tags Threshold",
- ),
- gr.Slider(
- 0,
- 1,
- step=args.score_slider_step,
- value=args.score_character_threshold,
- label="Character Tags Threshold",
- ),
- ],
- outputs=[
- gr.Textbox(label="Output (string)"),
- gr.Textbox(label="Output (raw string)"),
- gr.Label(label="Rating"),
- gr.Label(label="Output (characters)"),
- gr.Label(label="Output (tags)"),
- gr.HTML(),
- ],
- examples=[["power.jpg", "SwinV2", 0.35, 0.85]],
- title=TITLE,
- description=DESCRIPTION,
- allow_flagging="never",
- ).launch(
- enable_queue=True,
- share=args.share,
- )
-
-
-if __name__ == "__main__":
- main()
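The heart of `predict` above is the thresholding of per-tag probabilities into ratings, general tags, and character tags. A self-contained sketch of just that step; the tag names, index lists, probabilities, and thresholds are made up for illustration (in the app they come from `load_labels()` and the ONNX model):

    import numpy as np

    tag_names = ["general", "sensitive", "1girl", "smile", "hatsune_miku"]
    rating_indexes, general_indexes, character_indexes = [0, 1], [2, 3], [4]
    probs = np.array([0.92, 0.05, 0.81, 0.40, 0.95])

    labels = list(zip(tag_names, probs.astype(float)))
    rating = dict(labels[i] for i in rating_indexes)
    general_res = {n: p for n, p in (labels[i] for i in general_indexes) if p > 0.5}
    character_res = {n: p for n, p in (labels[i] for i in character_indexes) if p > 0.85}

    print(max(rating, key=rating.get))  # 'general'
    print(general_res)                  # {'1girl': 0.81}
    print(character_res)                # {'hatsune_miku': 0.95}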
diff --git a/spaces/EMS-TU-Ilmenau/deepest-demo/app.py b/spaces/EMS-TU-Ilmenau/deepest-demo/app.py
deleted file mode 100644
index 8667ceaa020c1fc66e5ce40c45db04c7ae5e8b28..0000000000000000000000000000000000000000
--- a/spaces/EMS-TU-Ilmenau/deepest-demo/app.py
+++ /dev/null
@@ -1,178 +0,0 @@
-# https://huggingface.co/St0nedB/deepest-public
-import os
-import sys
-import subprocess
-import toml
-from argparse import Namespace
-import numpy as np
-import logging
-import gradio as gr
-import matplotlib
-import matplotlib.pyplot as plt
-from huggingface_hub import hf_hub_download
-
-matplotlib.use("Agg")
-logger = logging.basicConfig(level=logging.ERROR)
-
-# define global variable demos
-DATA_SHAPE = (64,64)
-ETA_SHAPE = (2, 20)
-DATASET = "./data"
-N = 1000
-BS = 256
-WORKER = 2
-SNRS = {
- "0": 1.0,
- "10": 0.1,
- "20": 0.01,
- "30": 0.001,
-}
-
-# download model from huggingface hub
-MODEL_PATH = hf_hub_download("St0nedB/deepest-demo", "2022.07.03.2338.param2d.model", use_auth_token=os.environ["MODEL_TOKEN"])
-RUNNER = None
-
-# preallocated result arrays
-DATA = np.empty((len(SNRS), N, *DATA_SHAPE), dtype=np.complex128)
-TRUTH = np.empty((len(SNRS), N, *ETA_SHAPE))
-ESTIM = np.empty((len(SNRS), N, *ETA_SHAPE))
-
-# load texts
-TEXTS = Namespace(**toml.load("texts.toml"))
-
-def install_deepest():
- git_token = os.environ["GIT_TOKEN"]
- git_url = os.environ["GIT_URL"]
- git_commit = os.environ["GIT_COMMIT"]
- subprocess.check_call([sys.executable, "-m", "pip", "install", f"git+https://hggn:{git_token}@{git_url}@{git_commit}"])
- return
-
-
-def make_plots(snr: float, idx: int):
- idx -= 1
- data, truth, estim = DATA[snr][idx], TRUTH[snr][idx], ESTIM[snr][idx]
-
- fig_data = make_dataplot(data)
- fig_param = make_parameterplot(estim, truth)
-
- return fig_data, fig_param
-
-def make_dataplot(x: np.ndarray):
- plt.close()
- x = np.rot90(10*np.log10(np.abs(np.fft.fftn(x))), k=-1)
- fig, ax = plt.subplots(1,1)
- ax.imshow(x, extent=[0,1,0,1], cmap="viridis")
- ax.set_xlabel("Norm. Delay")
- ax.set_ylabel("Norm. Doppler")
-
- return fig
-
-def make_parameterplot(estim: np.ndarray, truth: np.ndarray = None, **kwargs):
- plt.close()
- fig, ax = plt.subplots(1,1)
- ax = plot_parameters(ax, es=estim, gt=truth, **kwargs)
- ax.set_xlim(0,1)
- ax.set_ylim(0,1)
-
- return fig
-
-def load_numpy(file_obj) -> None | np.ndarray:
- if file_obj is None:
- # no file given
- return None
-
- file = file_obj.name
- if not(os.path.splitext(file)[1] in [".npy", ".npz"]):
- # no numpy file
- return None
-
- data = np.load(file)
- if len(data.shape) != 3:
- # not in proper shape
- return None
-
- return data
-
-def process_user_input(file_obj):
- data = load_numpy(file_obj)
- if data is None:
- return None
-
- return gr.update(minimum=1, step=1, maximum=len(data), visible=True, value=1)
-
-def make_user_plot(file_obj, idx: int):
- idx -= 1
- data = load_numpy(file_obj)
-
- estim = RUNNER.fit(data[idx][None,])
- bg_data = np.rot90(10*np.log10(np.abs(np.fft.fftn(data[idx], norm="ortho"))), k=-1)
- fig_estim = make_parameterplot(estim=estim[0], bg=bg_data, extent=[0,1,0,1], cmap="viridis")
-
- return fig_estim
-
-
-def demo():
- with gr.Blocks() as demo:
- gr.Markdown(
- TEXTS.introduction
- )
-
- with gr.Column():
- snr = gr.Radio(choices=["0", "10", "20", "30"], type="index", value="0", label="SNR [dB]")
-
- with gr.Row():
- data = gr.Plot(label="Data")
- result = gr.Plot(label="Results")
-
- with gr.Row():
- slider = gr.Slider(1, N, 1, label="Sample Index")
-
- # update callbacks
- slider.change(make_plots, [snr, slider], [data, result])
- snr.change(make_plots, [snr, slider], [data, result])
-
- with gr.Column():
- gr.Markdown(
- TEXTS.try_your_own
- )
-
- with gr.Row():
- with gr.Column():
- user_file = gr.File(file_count="single", type="file", interactive=True)
- run_btn = gr.Button("Run Inference")
-
- user_plot = gr.Plot(label="Results")
-
- with gr.Column():
- user_slider = gr.Slider(visible=False, label="Sample Index")
-
- run_btn.click(process_user_input, [user_file], [user_slider], show_progress=True)
- user_slider.change(make_user_plot, [user_file, user_slider], [user_plot])
-
- gr.Markdown(
- TEXTS.acknowledgements
- )
-
- gr.Markdown(
- TEXTS.contact
- )
-
- demo.launch()
-
-def main():
- for dd, snr in enumerate(SNRS.values()):
- DATA[dd], TRUTH[dd], ESTIM[dd] = RUNNER.run(snr=snr)
-
- demo()
-
-
-if __name__ == "__main__":
- try:
- import deepest
- except ModuleNotFoundError:
- install_deepest()
-
- from deepest.utils import plot_parameters
- from helper import Runner
- RUNNER = Runner(MODEL_PATH, DATASET, BS, WORKER)
- main()
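The "try your own" path above only accepts a `.npy`/`.npz` file holding a 3-D array of samples matching `DATA_SHAPE` (sample index × 64 × 64, complex-valued like the demo data). A small sketch of producing such a file for upload; the file name and sample count are arbitrary:

    import numpy as np

    samples = np.random.randn(10, 64, 64) + 1j * np.random.randn(10, 64, 64)
    np.save("my_measurements.npy", samples)  # passes load_numpy(): .npy extension, 3 dimensions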
diff --git a/spaces/Egrt/MaskGAN/models/resnest/resnest.py b/spaces/Egrt/MaskGAN/models/resnest/resnest.py
deleted file mode 100644
index a9fe5f79d347349aeb1db8ed5af5f2f8415a8b4d..0000000000000000000000000000000000000000
--- a/spaces/Egrt/MaskGAN/models/resnest/resnest.py
+++ /dev/null
@@ -1,60 +0,0 @@
-"""
-@author: Jun Wang
-@date: 20210301
-@contact: jun21wangustc@gmail.com
-"""
-
-# based on:
-# https://github.com/zhanghang1989/ResNeSt/blob/master/resnest/torch/resnest.py
-
-import torch
-import torch.nn as nn
-from .resnet import ResNet, Bottleneck
-
-class Flatten(nn.Module):
- def forward(self, input):
- return input.view(input.size(0), -1)
-
-def l2_norm(input,axis=1):
- norm = torch.norm(input,2,axis,True)
- output = torch.div(input, norm)
- return output
-
-class ResNeSt(nn.Module):
- def __init__(self, num_layers=50, drop_ratio=0.4, feat_dim=512, out_h=7, out_w=7):
- super(ResNeSt, self).__init__()
- self.input_layer = nn.Sequential(nn.Conv2d(3, 64, (3, 3), 1, 1 ,bias=False),
- nn.BatchNorm2d(64),
- nn.PReLU(64))
- self.output_layer = nn.Sequential(nn.BatchNorm2d(2048),
- nn.Dropout(drop_ratio),
- Flatten(),
- nn.Linear(2048 * out_h * out_w, feat_dim),
- nn.BatchNorm1d(feat_dim))
- if num_layers == 50:
- self.body = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=2, groups=1, bottleneck_width=64,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=False)
- elif num_layers == 101:
- self.body = ResNet(Bottleneck, [3, 4, 23, 3],
- radix=2, groups=1, bottleneck_width=64,
- deep_stem=True, stem_width=64, avg_down=True,
- avd=True, avd_first=False)
- elif num_layers == 200:
- self.body = ResNet(Bottleneck, [3, 24, 36, 3],
- radix=2, groups=1, bottleneck_width=64,
- deep_stem=True, stem_width=64, avg_down=True,
- avd=True, avd_first=False)
- elif num_layers == 269:
- self.body = ResNet(Bottleneck, [3, 30, 48, 8],
- radix=2, groups=1, bottleneck_width=64,
- deep_stem=True, stem_width=64, avg_down=True,
- avd=True, avd_first=False)
- else:
- pass
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer(x)
- return l2_norm(x)
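A usage sketch for the `ResNeSt` wrapper above, assuming the accompanying `resnet.py` provides `ResNet` and `Bottleneck` and that, as in the original face-recognition setup, a 112×112 crop comes out of the body as a 7×7×2048 feature map (matching the default `out_h = out_w = 7`); the input size is therefore an assumption, not something enforced by this file:

    import torch

    model = ResNeSt(num_layers=50, feat_dim=512, out_h=7, out_w=7)
    model.eval()

    with torch.no_grad():
        faces = torch.randn(2, 3, 112, 112)  # batch of 2 RGB face crops
        embeddings = model(faces)            # conv stem -> ResNeSt body -> flatten -> linear head

    print(embeddings.shape)        # torch.Size([2, 512])
    print(embeddings.norm(dim=1))  # ~1.0 per row, because l2_norm() normalizes the output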
diff --git a/spaces/Epoching/3D_Photo_Inpainting/MiDaS/MiDaS_utils.py b/spaces/Epoching/3D_Photo_Inpainting/MiDaS/MiDaS_utils.py
deleted file mode 100644
index f961acdd797624ee802fdddc3d69344094009887..0000000000000000000000000000000000000000
--- a/spaces/Epoching/3D_Photo_Inpainting/MiDaS/MiDaS_utils.py
+++ /dev/null
@@ -1,192 +0,0 @@
-"""Utils for monoDepth.
-"""
-import sys
-import re
-import numpy as np
-import cv2
-import torch
-import imageio
-
-
-def read_pfm(path):
- """Read pfm file.
-
- Args:
- path (str): path to file
-
- Returns:
- tuple: (data, scale)
- """
- with open(path, "rb") as file:
-
- color = None
- width = None
- height = None
- scale = None
- endian = None
-
- header = file.readline().rstrip()
- if header.decode("ascii") == "PF":
- color = True
- elif header.decode("ascii") == "Pf":
- color = False
- else:
- raise Exception("Not a PFM file: " + path)
-
- dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii"))
- if dim_match:
- width, height = list(map(int, dim_match.groups()))
- else:
- raise Exception("Malformed PFM header.")
-
- scale = float(file.readline().decode("ascii").rstrip())
- if scale < 0:
- # little-endian
- endian = "<"
- scale = -scale
- else:
- # big-endian
- endian = ">"
-
- data = np.fromfile(file, endian + "f")
- shape = (height, width, 3) if color else (height, width)
-
- data = np.reshape(data, shape)
- data = np.flipud(data)
-
- return data, scale
-
-
-def write_pfm(path, image, scale=1):
- """Write pfm file.
-
- Args:
- path (str): path to file
- image (array): data
- scale (int, optional): Scale. Defaults to 1.
- """
-
- with open(path, "wb") as file:
- color = None
-
- if image.dtype.name != "float32":
- raise Exception("Image dtype must be float32.")
-
- image = np.flipud(image)
-
- if len(image.shape) == 3 and image.shape[2] == 3: # color image
- color = True
- elif (
- len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1
- ): # greyscale
- color = False
- else:
- raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
-
- file.write("PF\n" if color else "Pf\n".encode())
- file.write("%d %d\n".encode() % (image.shape[1], image.shape[0]))
-
- endian = image.dtype.byteorder
-
- if endian == "<" or endian == "=" and sys.byteorder == "little":
- scale = -scale
-
- file.write("%f\n".encode() % scale)
-
- image.tofile(file)
-
-
-def read_image(path):
- """Read image and output RGB image (0-1).
-
- Args:
- path (str): path to file
-
- Returns:
- array: RGB image (0-1)
- """
- img = cv2.imread(path)
-
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
-
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
-
- return img
-
-
-def resize_image(img):
- """Resize image and make it fit for network.
-
- Args:
- img (array): image
-
- Returns:
- tensor: data ready for network
- """
- height_orig = img.shape[0]
- width_orig = img.shape[1]
- unit_scale = 384.
-
- if width_orig > height_orig:
- scale = width_orig / unit_scale
- else:
- scale = height_orig / unit_scale
-
- height = (np.ceil(height_orig / scale / 32) * 32).astype(int)
- width = (np.ceil(width_orig / scale / 32) * 32).astype(int)
-
- img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
-
- img_resized = (
- torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float()
- )
- img_resized = img_resized.unsqueeze(0)
-
- return img_resized
-
-
-def resize_depth(depth, width, height):
- """Resize depth map and bring to CPU (numpy).
-
- Args:
- depth (tensor): depth
- width (int): image width
- height (int): image height
-
- Returns:
- array: processed depth
- """
- depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
- depth = cv2.blur(depth.numpy(), (3, 3))
- depth_resized = cv2.resize(
- depth, (width, height), interpolation=cv2.INTER_AREA
- )
-
- return depth_resized
-
-def write_depth(path, depth, bits=1):
- """Write depth map to pfm and png file.
-
- Args:
- path (str): filepath without extension
- depth (array): depth
- """
- # write_pfm(path + ".pfm", depth.astype(np.float32))
-
- depth_min = depth.min()
- depth_max = depth.max()
-
- max_val = (2**(8*bits))-1
-
- if depth_max - depth_min > np.finfo("float").eps:
- out = max_val * (depth - depth_min) / (depth_max - depth_min)
- else:
- out = np.zeros(depth.shape)
-
- if bits == 1:
- cv2.imwrite(path + ".png", out.astype("uint8"))
- elif bits == 2:
- cv2.imwrite(path + ".png", out.astype("uint16"))
-
- return
\ No newline at end of file
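A short sketch tying the helpers above together: read an image, resize it for the network, and write a depth map. The input path is a placeholder, and the model call is faked with random values because MiDaS itself is loaded elsewhere:

    import numpy as np

    img = read_image("input.jpg")   # HxWx3 RGB in [0, 1]
    sample = resize_image(img)      # 1x3xH'xW' float tensor, sides rounded to multiples of 32

    # depth = model.forward(sample)  # placeholder: run the MiDaS network here
    depth = np.random.rand(img.shape[0], img.shape[1]).astype(np.float32)  # fake prediction for the sketch

    write_depth("output_depth", depth, bits=2)  # writes output_depth.png as a 16-bit PNG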
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/icdar2015.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/icdar2015.py
deleted file mode 100644
index f711c06dce76d53b8737288c8de318e6f90ce585..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/icdar2015.py
+++ /dev/null
@@ -1,18 +0,0 @@
-dataset_type = 'IcdarDataset'
-data_root = 'data/icdar2015'
-
-train = dict(
- type=dataset_type,
- ann_file=f'{data_root}/instances_training.json',
- img_prefix=f'{data_root}/imgs',
- pipeline=None)
-
-test = dict(
- type=dataset_type,
- ann_file=f'{data_root}/instances_test.json',
- img_prefix=f'{data_root}/imgs',
- pipeline=None)
-
-train_list = [train]
-
-test_list = [test]
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py
deleted file mode 100644
index e22571e74511bab4303138f0e4816687fadac69e..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py
+++ /dev/null
@@ -1,33 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py',
- '../../_base_/schedules/schedule_sgd_160e.py',
- '../../_base_/det_datasets/icdar2017.py',
- '../../_base_/det_pipelines/maskrcnn_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}}
-
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=4,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/embedding/base.py b/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/embedding/base.py
deleted file mode 100644
index dd4815ff3cb9c89bc0787cb5bd1142660e41c6ef..0000000000000000000000000000000000000000
--- a/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/embedding/base.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from abc import ABC, abstractmethod
-
-from PIL import Image
-
-
-class BaseEncoder(ABC):
- """Base class for encoders."""
-
- @abstractmethod
- def __call__(self, image: Image) -> None:
- """Get embeddings from an image.
-
- Args:
- image: Image to encode
-
- Returns:
- Embedding
- """
diff --git a/spaces/Fernando22/freegpt-webui/client/css/theme-toggler.css b/spaces/Fernando22/freegpt-webui/client/css/theme-toggler.css
deleted file mode 100644
index b673b5920a24693e7ea15b873e46731b388ec527..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/client/css/theme-toggler.css
+++ /dev/null
@@ -1,33 +0,0 @@
-.theme-toggler-container {
- margin: 24px 0px 8px 0px;
- justify-content: center;
-}
-
-.theme-toggler-container.checkbox input + label,
-.theme-toggler-container.checkbox input:checked + label:after {
- background: var(--colour-1);
-}
-
-.theme-toggler-container.checkbox input + label:after,
-.theme-toggler-container.checkbox input:checked + label {
- background: var(--colour-3);
-}
-
-.theme-toggler-container.checkbox span {
- font-size: 0.75rem;
-}
-
-.theme-toggler-container.checkbox label {
- width: 24px;
- height: 16px;
-}
-
-.theme-toggler-container.checkbox label:after {
- left: 2px;
- width: 10px;
- height: 10px;
-}
-
-.theme-toggler-container.checkbox input:checked + label:after {
- left: calc(100% - 2px - 10px);
-}
\ No newline at end of file
diff --git a/spaces/Flux9665/IMS-Toucan/Layers/VariancePredictor.py b/spaces/Flux9665/IMS-Toucan/Layers/VariancePredictor.py
deleted file mode 100644
index cd059bd7e5d103b68e65249c1af2ee12f4ac061c..0000000000000000000000000000000000000000
--- a/spaces/Flux9665/IMS-Toucan/Layers/VariancePredictor.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright 2019 Tomoki Hayashi
-# MIT License (https://opensource.org/licenses/MIT)
-# Adapted by Florian Lux 2021
-
-from abc import ABC
-
-import torch
-
-from Layers.LayerNorm import LayerNorm
-
-
-class VariancePredictor(torch.nn.Module, ABC):
- """
- Variance predictor module.
-
- This is a module of variance predictor described in `FastSpeech 2:
- Fast and High-Quality End-to-End Text to Speech`_.
-
- .. _`FastSpeech 2: Fast and High-Quality End-to-End Text to Speech`:
- https://arxiv.org/abs/2006.04558
-
- """
-
- def __init__(self, idim, n_layers=2, n_chans=384, kernel_size=3, bias=True, dropout_rate=0.5, ):
- """
- Initialize variance predictor module.
-
- Args:
- idim (int): Input dimension.
- n_layers (int, optional): Number of convolutional layers.
- n_chans (int, optional): Number of channels of convolutional layers.
- kernel_size (int, optional): Kernel size of convolutional layers.
- dropout_rate (float, optional): Dropout rate.
- """
- super().__init__()
- self.conv = torch.nn.ModuleList()
- for idx in range(n_layers):
- in_chans = idim if idx == 0 else n_chans
- self.conv += [
- torch.nn.Sequential(torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=(kernel_size - 1) // 2, bias=bias, ), torch.nn.ReLU(),
- LayerNorm(n_chans, dim=1), torch.nn.Dropout(dropout_rate), )]
- self.linear = torch.nn.Linear(n_chans, 1)
-
- def forward(self, xs, x_masks=None):
- """
- Calculate forward propagation.
-
- Args:
- xs (Tensor): Batch of input sequences (B, Tmax, idim).
- x_masks (ByteTensor, optional):
- Batch of masks indicating padded part (B, Tmax).
-
- Returns:
- Tensor: Batch of predicted sequences (B, Tmax, 1).
- """
- xs = xs.transpose(1, -1) # (B, idim, Tmax)
- for f in self.conv:
- xs = f(xs) # (B, C, Tmax)
-
- xs = self.linear(xs.transpose(1, 2)) # (B, Tmax, 1)
-
- if x_masks is not None:
- xs = xs.masked_fill(x_masks, 0.0)
-
- return xs
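A quick usage sketch for `VariancePredictor` above; the batch size, sequence length, and input dimension are illustrative (FastSpeech 2 commonly uses 384-dimensional encoder states):

    import torch

    predictor = VariancePredictor(idim=384, n_layers=2, n_chans=384, kernel_size=3)

    xs = torch.randn(8, 100, 384)                     # (B, Tmax, idim) encoder outputs
    masks = torch.zeros(8, 100, 1, dtype=torch.bool)  # padding mask; True positions are zeroed out

    out = predictor(xs, x_masks=masks)
    print(out.shape)  # torch.Size([8, 100, 1])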
diff --git a/spaces/Gabesantos1007/NewsAgora/README.md b/spaces/Gabesantos1007/NewsAgora/README.md
deleted file mode 100644
index 8df0ec8b86bb430547852109c20aeed368bad466..0000000000000000000000000000000000000000
--- a/spaces/Gabesantos1007/NewsAgora/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NewsAgora
-emoji: 🐨
-colorFrom: purple
-colorTo: red
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/utils/alignment.py b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/utils/alignment.py
deleted file mode 100644
index a02798f0f7c9fdcc319f7884a491b9e6580cc8aa..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/utils/alignment.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import numpy as np
-import PIL
-import PIL.Image
-import scipy
-import scipy.ndimage
-import dlib
-
-
-def get_landmark(filepath, predictor):
- """get landmark with dlib
- :return: np.array shape=(68, 2)
- """
- detector = dlib.get_frontal_face_detector()
-
- img = dlib.load_rgb_image(filepath)
- dets = detector(img, 1)
-
- for k, d in enumerate(dets):
- shape = predictor(img, d)
-
- t = list(shape.parts())
- a = []
- for tt in t:
- a.append([tt.x, tt.y])
- lm = np.array(a)
- return lm
-
-
-def align_face(filepath, predictor):
- """
- :param filepath: str
- :return: PIL Image
- """
-
- lm = get_landmark(filepath, predictor)
-
- lm_chin = lm[0: 17] # left-right
- lm_eyebrow_left = lm[17: 22] # left-right
- lm_eyebrow_right = lm[22: 27] # left-right
- lm_nose = lm[27: 31] # top-down
- lm_nostrils = lm[31: 36] # top-down
- lm_eye_left = lm[36: 42] # left-clockwise
- lm_eye_right = lm[42: 48] # left-clockwise
- lm_mouth_outer = lm[48: 60] # left-clockwise
- lm_mouth_inner = lm[60: 68] # left-clockwise
-
- # Calculate auxiliary vectors.
- eye_left = np.mean(lm_eye_left, axis=0)
- eye_right = np.mean(lm_eye_right, axis=0)
- eye_avg = (eye_left + eye_right) * 0.5
- eye_to_eye = eye_right - eye_left
- mouth_left = lm_mouth_outer[0]
- mouth_right = lm_mouth_outer[6]
- mouth_avg = (mouth_left + mouth_right) * 0.5
- eye_to_mouth = mouth_avg - eye_avg
-
- # Choose oriented crop rectangle.
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
- x /= np.hypot(*x)
- x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
- y = np.flipud(x) * [-1, 1]
- c = eye_avg + eye_to_mouth * 0.1
- quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
- qsize = np.hypot(*x) * 2
-
- # read image
- img = PIL.Image.open(filepath)
-
- output_size = 256
- transform_size = 256
- enable_padding = True
-
- # Shrink.
- shrink = int(np.floor(qsize / output_size * 0.5))
- if shrink > 1:
- rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
- img = img.resize(rsize, PIL.Image.ANTIALIAS)
- quad /= shrink
- qsize /= shrink
-
- # Crop.
- border = max(int(np.rint(qsize * 0.1)), 3)
- crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
- min(crop[3] + border, img.size[1]))
- if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
- img = img.crop(crop)
- quad -= crop[0:2]
-
- # Pad.
- pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
- max(pad[3] - img.size[1] + border, 0))
- if enable_padding and max(pad) > border - 4:
- pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
- img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
- h, w, _ = img.shape
- y, x, _ = np.ogrid[:h, :w, :1]
- mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
- 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
- blur = qsize * 0.02
- img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
- img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
- img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
- quad += pad[:2]
-
- # Transform.
- img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
- if output_size < transform_size:
- img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)
-
- # Return aligned image.
- return img
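A sketch of calling `align_face` above. It assumes dlib's standard 68-point landmark model has been downloaded under its usual file name, and the photo path is a placeholder:

    import dlib

    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    aligned = align_face("photo.jpg", predictor)  # 256x256 PIL image with eyes/mouth normalized
    aligned.save("photo_aligned.png")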
diff --git a/spaces/Gradio-Blocks/multilingual-asr/README.md b/spaces/Gradio-Blocks/multilingual-asr/README.md
deleted file mode 100644
index 7eebbfdab981b2b0ffb40b6ac7a7d845ce67cdd1..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/multilingual-asr/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Multilingual ASR
-emoji: 🌍
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.0.5
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py
deleted file mode 100644
index 53fd3a909585367ca59eb827c2fbbab4cdf234ea..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_512x512_80k_ade20k.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/compression/_explorers.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/compression/_explorers.py
deleted file mode 100644
index eed30d5b8a1c14676503148ddf133c79ed2e33bf..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/compression/_explorers.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import treetable as tt
-
-from .._base_explorers import BaseExplorer
-
-
-class CompressionExplorer(BaseExplorer):
- eval_metrics = ["sisnr", "visqol"]
-
- def stages(self):
- return ["train", "valid", "evaluate"]
-
- def get_grid_meta(self):
- """Returns the list of Meta information to display for each XP/job.
- """
- return [
- tt.leaf("index", align=">"),
- tt.leaf("name", wrap=140),
- tt.leaf("state"),
- tt.leaf("sig", align=">"),
- ]
-
- def get_grid_metrics(self):
- """Return the metrics that should be displayed in the tracking table.
- """
- return [
- tt.group(
- "train",
- [
- tt.leaf("epoch"),
- tt.leaf("bandwidth", ".2f"),
- tt.leaf("adv", ".4f"),
- tt.leaf("d_loss", ".4f"),
- ],
- align=">",
- ),
- tt.group(
- "valid",
- [
- tt.leaf("bandwidth", ".2f"),
- tt.leaf("adv", ".4f"),
- tt.leaf("msspec", ".4f"),
- tt.leaf("sisnr", ".2f"),
- ],
- align=">",
- ),
- tt.group(
- "evaluate", [tt.leaf(name, ".3f") for name in self.eval_metrics], align=">"
- ),
- ]
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_audiogen.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_audiogen.py
deleted file mode 100644
index 3850af066cedd5ea38bd9aead9634d6aaf938218..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_audiogen.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.models import AudioGen
-
-
-class TestAudioGenModel:
- def get_audiogen(self):
- ag = AudioGen.get_pretrained(name='debug', device='cpu')
- ag.set_generation_params(duration=2.0, extend_stride=2.)
- return ag
-
- def test_base(self):
- ag = self.get_audiogen()
- assert ag.frame_rate == 25
- assert ag.sample_rate == 16000
- assert ag.audio_channels == 1
-
- def test_generate_continuation(self):
- ag = self.get_audiogen()
- prompt = torch.randn(3, 1, 16000)
- wav = ag.generate_continuation(prompt, 16000)
- assert list(wav.shape) == [3, 1, 32000]
-
- prompt = torch.randn(2, 1, 16000)
- wav = ag.generate_continuation(
- prompt, 16000, ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 32000]
-
- prompt = torch.randn(2, 1, 16000)
- with pytest.raises(AssertionError):
- wav = ag.generate_continuation(
- prompt, 16000, ['youpi', 'lapin dort', 'one too many'])
-
- def test_generate(self):
- ag = self.get_audiogen()
- wav = ag.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 32000]
-
- def test_generate_long(self):
- ag = self.get_audiogen()
- ag.max_duration = 3.
- ag.set_generation_params(duration=4., extend_stride=2.)
- wav = ag.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 16000 * 4]
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/modules/activations.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/modules/activations.py
deleted file mode 100644
index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/modules/activations.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from typing import Union, Callable
-
-
-class CustomGLU(nn.Module):
- """Custom Gated Linear Unit activation.
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half
- of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation
- function (i.e. sigmoid, swish, etc.).
-
- Args:
- activation (nn.Module): The custom activation to apply in the Gated Linear Unit
- dim (int): the dimension on which to split the input. Default: -1
-
- Shape:
- - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional
- dimensions
- - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2`
-
- Examples::
- >>> m = CustomGLU(nn.Sigmoid())
- >>> input = torch.randn(4, 2)
- >>> output = m(input)
- """
- def __init__(self, activation: nn.Module, dim: int = -1):
- super(CustomGLU, self).__init__()
- self.dim = dim
- self.activation = activation
-
- def forward(self, x: Tensor):
- assert x.shape[self.dim] % 2 == 0 # M = N / 2
- a, b = torch.chunk(x, 2, dim=self.dim)
- return a * self.activation(b)
-
-
-class SwiGLU(CustomGLU):
- """SiLU Gated Linear Unit activation.
- Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(SwiGLU, self).__init__(nn.SiLU(), dim)
-
-
-class GeGLU(CustomGLU):
- """GeLU Gated Linear Unit activation.
- Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(GeGLU, self).__init__(nn.GELU(), dim)
-
-
-class ReGLU(CustomGLU):
- """ReLU Gated Linear Unit activation.
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(ReGLU, self).__init__(nn.ReLU(), dim)
-
-
-def get_activation_fn(
- activation: Union[str, Callable[[Tensor], Tensor]]
-) -> Union[str, Callable[[Tensor], Tensor]]:
- """Helper function to map an activation string to the activation class.
- If the supplied activation is not a string that is recognized, the activation is passed back.
-
- Args:
- activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check
- """
- if isinstance(activation, str):
- if activation == "reglu":
- return ReGLU()
- elif activation == "geglu":
- return GeGLU()
- elif activation == "swiglu":
- return SwiGLU()
- return activation
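A small sketch of the gated activations above: `get_activation_fn` maps the string names to the GLU variants, each of which splits its input in half along `dim`, so the output's last dimension is half the input's; unrecognized names are passed back unchanged:

    import torch

    act = get_activation_fn("swiglu")   # returns a SwiGLU() module
    x = torch.randn(4, 8)               # last dim must be even
    y = act(x)
    print(type(act).__name__, y.shape)  # SwiGLU torch.Size([4, 4])

    print(get_activation_fn("gelu"))    # not a recognized GLU name, so the string comes back: gelu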
diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/whisper/tokenizer.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/whisper/tokenizer.py
deleted file mode 100644
index a27cb359ee891590d3f793624f9f8ec768a26cc3..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/vencoder/whisper/tokenizer.py
+++ /dev/null
@@ -1,331 +0,0 @@
-import os
-from dataclasses import dataclass
-from functools import lru_cache
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-from transformers import GPT2TokenizerFast
-
-LANGUAGES = {
- "en": "english",
- "zh": "chinese",
- "de": "german",
- "es": "spanish",
- "ru": "russian",
- "ko": "korean",
- "fr": "french",
- "ja": "japanese",
- "pt": "portuguese",
- "tr": "turkish",
- "pl": "polish",
- "ca": "catalan",
- "nl": "dutch",
- "ar": "arabic",
- "sv": "swedish",
- "it": "italian",
- "id": "indonesian",
- "hi": "hindi",
- "fi": "finnish",
- "vi": "vietnamese",
- "he": "hebrew",
- "uk": "ukrainian",
- "el": "greek",
- "ms": "malay",
- "cs": "czech",
- "ro": "romanian",
- "da": "danish",
- "hu": "hungarian",
- "ta": "tamil",
- "no": "norwegian",
- "th": "thai",
- "ur": "urdu",
- "hr": "croatian",
- "bg": "bulgarian",
- "lt": "lithuanian",
- "la": "latin",
- "mi": "maori",
- "ml": "malayalam",
- "cy": "welsh",
- "sk": "slovak",
- "te": "telugu",
- "fa": "persian",
- "lv": "latvian",
- "bn": "bengali",
- "sr": "serbian",
- "az": "azerbaijani",
- "sl": "slovenian",
- "kn": "kannada",
- "et": "estonian",
- "mk": "macedonian",
- "br": "breton",
- "eu": "basque",
- "is": "icelandic",
- "hy": "armenian",
- "ne": "nepali",
- "mn": "mongolian",
- "bs": "bosnian",
- "kk": "kazakh",
- "sq": "albanian",
- "sw": "swahili",
- "gl": "galician",
- "mr": "marathi",
- "pa": "punjabi",
- "si": "sinhala",
- "km": "khmer",
- "sn": "shona",
- "yo": "yoruba",
- "so": "somali",
- "af": "afrikaans",
- "oc": "occitan",
- "ka": "georgian",
- "be": "belarusian",
- "tg": "tajik",
- "sd": "sindhi",
- "gu": "gujarati",
- "am": "amharic",
- "yi": "yiddish",
- "lo": "lao",
- "uz": "uzbek",
- "fo": "faroese",
- "ht": "haitian creole",
- "ps": "pashto",
- "tk": "turkmen",
- "nn": "nynorsk",
- "mt": "maltese",
- "sa": "sanskrit",
- "lb": "luxembourgish",
- "my": "myanmar",
- "bo": "tibetan",
- "tl": "tagalog",
- "mg": "malagasy",
- "as": "assamese",
- "tt": "tatar",
- "haw": "hawaiian",
- "ln": "lingala",
- "ha": "hausa",
- "ba": "bashkir",
- "jw": "javanese",
- "su": "sundanese",
-}
-
-# language code lookup by name, with a few language aliases
-TO_LANGUAGE_CODE = {
- **{language: code for code, language in LANGUAGES.items()},
- "burmese": "my",
- "valencian": "ca",
- "flemish": "nl",
- "haitian": "ht",
- "letzeburgesch": "lb",
- "pushto": "ps",
- "panjabi": "pa",
- "moldavian": "ro",
- "moldovan": "ro",
- "sinhalese": "si",
- "castilian": "es",
-}
-
-
-@dataclass(frozen=True)
-class Tokenizer:
- """A thin wrapper around `GPT2TokenizerFast` providing quick access to special tokens"""
-
- tokenizer: "GPT2TokenizerFast"
- language: Optional[str]
- sot_sequence: Tuple[int]
-
- def encode(self, text, **kwargs):
- return self.tokenizer.encode(text, **kwargs)
-
- def decode(self, token_ids: Union[int, List[int], np.ndarray, torch.Tensor], **kwargs):
- return self.tokenizer.decode(token_ids, **kwargs)
-
- def decode_with_timestamps(self, tokens) -> str:
- """
- Timestamp tokens are above the special tokens' id range and are ignored by `decode()`.
- This method decodes given tokens with timestamps tokens annotated, e.g. "<|1.08|>".
- """
- outputs = [[]]
- for token in tokens:
- if token >= self.timestamp_begin:
- timestamp = f"<|{(token - self.timestamp_begin) * 0.02:.2f}|>"
- outputs.append(timestamp)
- outputs.append([])
- else:
- outputs[-1].append(token)
- outputs = [s if isinstance(s, str) else self.tokenizer.decode(s) for s in outputs]
- return "".join(outputs)
-
- @property
- @lru_cache()
- def eot(self) -> int:
- return self.tokenizer.eos_token_id
-
- @property
- @lru_cache()
- def sot(self) -> int:
- return self._get_single_token_id("<|startoftranscript|>")
-
- @property
- @lru_cache()
- def sot_lm(self) -> int:
- return self._get_single_token_id("<|startoflm|>")
-
- @property
- @lru_cache()
- def sot_prev(self) -> int:
- return self._get_single_token_id("<|startofprev|>")
-
- @property
- @lru_cache()
- def no_speech(self) -> int:
- return self._get_single_token_id("<|nospeech|>")
-
- @property
- @lru_cache()
- def no_timestamps(self) -> int:
- return self._get_single_token_id("<|notimestamps|>")
-
- @property
- @lru_cache()
- def timestamp_begin(self) -> int:
- return self.tokenizer.all_special_ids[-1] + 1
-
- @property
- @lru_cache()
- def language_token(self) -> int:
- """Returns the token id corresponding to the value of the `language` field"""
- if self.language is None:
- raise ValueError(f"This tokenizer does not have language token configured")
-
- additional_tokens = dict(
- zip(
- self.tokenizer.additional_special_tokens,
- self.tokenizer.additional_special_tokens_ids,
- )
- )
- candidate = f"<|{self.language}|>"
- if candidate in additional_tokens:
- return additional_tokens[candidate]
-
- raise KeyError(f"Language {self.language} not found in tokenizer.")
-
- @property
- @lru_cache()
- def all_language_tokens(self) -> Tuple[int]:
- result = []
- for token, token_id in zip(
- self.tokenizer.additional_special_tokens,
- self.tokenizer.additional_special_tokens_ids,
- ):
- if token.strip("<|>") in LANGUAGES:
- result.append(token_id)
- return tuple(result)
-
- @property
- @lru_cache()
- def all_language_codes(self) -> Tuple[str]:
- return tuple(self.decode([l]).strip("<|>") for l in self.all_language_tokens)
-
- @property
- @lru_cache()
- def sot_sequence_including_notimestamps(self) -> Tuple[int]:
- return tuple(list(self.sot_sequence) + [self.no_timestamps])
-
- @property
- @lru_cache()
- def non_speech_tokens(self) -> Tuple[int]:
- """
- Returns the list of tokens to suppress in order to avoid any speaker tags or non-speech
- annotations, to prevent sampling texts that are not actually spoken in the audio, e.g.
-
- - ♪♪♪
- - ( SPEAKING FOREIGN LANGUAGE )
- - [DAVID] Hey there,
-
-        keeping basic punctuation like commas, periods, question marks, exclamation points, etc.
- """
- symbols = list("\"#()*+/:;<=>@[\\]^_`{|}~「」『』")
- symbols += "<< >> <<< >>> -- --- -( -[ (' (\" (( )) ((( ))) [[ ]] {{ }} ♪♪ ♪♪♪".split()
-
- # symbols that may be a single token or multiple tokens depending on the tokenizer.
- # In case they're multiple tokens, suppress the first token, which is safe because:
- # These are between U+2640 and U+267F miscellaneous symbols that are okay to suppress
- # in generations, and in the 3-byte UTF-8 representation they share the first two bytes.
- miscellaneous = set("♩♪♫♬♭♮♯")
- assert all(0x2640 <= ord(c) <= 0x267F for c in miscellaneous)
-
- # allow hyphens "-" and single quotes "'" between words, but not at the beginning of a word
- result = {self.tokenizer.encode(" -")[0], self.tokenizer.encode(" '")[0]}
- for symbol in symbols + list(miscellaneous):
- for tokens in [self.tokenizer.encode(symbol), self.tokenizer.encode(" " + symbol)]:
- if len(tokens) == 1 or symbol in miscellaneous:
- result.add(tokens[0])
-
- return tuple(sorted(result))
-
- def _get_single_token_id(self, text) -> int:
- tokens = self.tokenizer.encode(text)
- assert len(tokens) == 1, f"{text} is not encoded as a single token"
- return tokens[0]
-
-
-@lru_cache(maxsize=None)
-def build_tokenizer(name: str = "gpt2"):
- os.environ["TOKENIZERS_PARALLELISM"] = "false"
- path = os.path.join(os.path.dirname(__file__), "assets", name)
- tokenizer = GPT2TokenizerFast.from_pretrained(path)
-
- specials = [
- "<|startoftranscript|>",
- *[f"<|{lang}|>" for lang in LANGUAGES.keys()],
- "<|translate|>",
- "<|transcribe|>",
- "<|startoflm|>",
- "<|startofprev|>",
- "<|nospeech|>",
- "<|notimestamps|>",
- ]
-
- tokenizer.add_special_tokens(dict(additional_special_tokens=specials))
- return tokenizer
-
-
-@lru_cache(maxsize=None)
-def get_tokenizer(
- multilingual: bool,
- *,
- task: Optional[str] = None, # Literal["transcribe", "translate", None]
- language: Optional[str] = None,
-) -> Tokenizer:
- if language is not None:
- language = language.lower()
- if language not in LANGUAGES:
- if language in TO_LANGUAGE_CODE:
- language = TO_LANGUAGE_CODE[language]
- else:
- raise ValueError(f"Unsupported language: {language}")
-
- if multilingual:
- tokenizer_name = "multilingual"
- task = task or "transcribe"
- language = language or "en"
- else:
- tokenizer_name = "gpt2"
- task = None
- language = None
-
- tokenizer = build_tokenizer(name=tokenizer_name)
- all_special_ids: List[int] = tokenizer.all_special_ids
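-    # Note: the index arithmetic below assumes the order in which
-    # build_tokenizer() registered the specials: <|startoftranscript|> comes
-    # right after <|endoftext|>, followed by the language tags, with the six
-    # task/timestamp specials occupying the last positions of all_special_ids.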
- sot: int = all_special_ids[1]
- translate: int = all_special_ids[-6]
- transcribe: int = all_special_ids[-5]
-
- langs = tuple(LANGUAGES.keys())
- sot_sequence = [sot]
- if language is not None:
- sot_sequence.append(sot + 1 + langs.index(language))
- if task is not None:
- sot_sequence.append(transcribe if task == "transcribe" else translate)
-
- return Tokenizer(tokenizer=tokenizer, language=language, sot_sequence=tuple(sot_sequence))
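-
-
-# Rough usage sketch (not part of the original file; the text and language are
-# arbitrary examples):
-#   tokenizer = get_tokenizer(multilingual=True, task="transcribe", language="hindi")
-#   ids = tokenizer.encode(" hello world")
-#   text = tokenizer.decode(ids)
-#   sot = tokenizer.sot_sequence  # ids for <|startoftranscript|><|hi|><|transcribe|>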
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/adaptive_span_attention.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/adaptive_span_attention.py
deleted file mode 100644
index 07f757bb8e1a8a67b1124175ee338c8735aa8d65..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/adaptive_span_attention.py
+++ /dev/null
@@ -1,160 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class AdaptiveMask(nn.Module):
- """Soft masking function for adaptive size.
- It masks out the last K values of an input. The masking value
- goes from 1 to 0 gradually, so K can be learned with
- back-propagation.
- Args:
- max_size: maximum size (i.e. input dimension)
- ramp_size: size of the ramp going from 0 to 1
- init_val: initial size proportion not to be masked out
- shape: learn multiple sizes independent of each other
- """
-
- def __init__(self, max_size, ramp_size, init_val=0, shape=(1,)):
- nn.Module.__init__(self)
- self._max_size = max_size
- self._ramp_size = ramp_size
- self.current_val = nn.Parameter(torch.zeros(*shape) + init_val)
- mask_template = torch.linspace(1 - max_size, 0, steps=max_size)
- self.register_buffer("mask_template", mask_template)
-
- def forward(self, x):
- mask = self.mask_template.float() + self.current_val.float() * self._max_size
- mask = mask / self._ramp_size + 1
- mask = mask.clamp(0, 1)
- if x.size(-1) < self._max_size:
- # the input could have been trimmed beforehand to save computation
- mask = mask.narrow(-1, self._max_size - x.size(-1), x.size(-1))
- x = (x * mask).type_as(x)
- return x
-
- def get_current_max_size(self, include_ramp=True):
- current_size = math.ceil(self.current_val.max().item() * self._max_size)
- if include_ramp:
- current_size += self._ramp_size
- current_size = max(0, min(self._max_size, current_size))
- return current_size
-
- def get_current_avg_size(self, include_ramp=True):
- current_size = math.ceil(
- self.current_val.float().mean().item() * self._max_size
- )
- if include_ramp:
- current_size += self._ramp_size
- current_size = max(0, min(self._max_size, current_size))
- return current_size
-
- def clamp_param(self):
- """this need to be called after each update"""
- self.current_val.data.clamp_(0, 1)
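-
-# Rough usage sketch (illustrative): the mask multiplies the trailing dimension
-# of its input, and `current_val` holds the learnable span fraction.
-#   mask = AdaptiveMask(max_size=256, ramp_size=32, init_val=0.5)
-#   y = mask(torch.randn(4, 256))   # values beyond the learned span fade to 0
-#   mask.clamp_param()              # call after each optimizer step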
-
-
-class AdaptiveSpan(nn.Module):
- """Adaptive attention span for Transformerself.
- This module learns an attention span length from data for each
- self-attention head.
- Args:
- attn_span: maximum attention span
- adapt_span_loss: loss coefficient for the span length
- adapt_span_ramp: length of the masking ramp
- adapt_span_init: initial size ratio
- adapt_span_cache: adapt cache size to reduce memory usage
- """
-
- def __init__(
- self,
- attn_span,
- adapt_span_ramp,
- adapt_span_init,
- n_head,
- adapt_span_layer,
- **kargs
- ):
- nn.Module.__init__(self)
- self._max_span = attn_span
- self._n_head = n_head
- self._adapt_span_layer = adapt_span_layer
- if self._adapt_span_layer:
- self._mask = AdaptiveMask(
- max_size=self._max_span,
- ramp_size=adapt_span_ramp,
- init_val=adapt_span_init,
- )
- else:
- self._mask = AdaptiveMask(
- max_size=self._max_span,
- ramp_size=adapt_span_ramp,
- init_val=adapt_span_init,
- shape=(n_head, 1, 1),
- )
-
- def forward(self, attn, normalize=True):
- """mask attention with the right span"""
- # batch and head dimensions are merged together, so separate them first
- self.clamp_param()
- if self._adapt_span_layer:
- attn = self._mask(attn)
- else:
- B = attn.size(0) # batch size
- M = attn.size(1) # block size
- attn = attn.reshape(B // self._n_head, self._n_head, M, -1)
- attn = self._mask(attn)
- attn = attn.view(B, M, -1)
- return attn
-
- def get_trim_len(self):
- """how much of memory can be trimmed to reduce computation"""
- L = self._max_span
- trim_len = min(L - 1, L - self._mask.get_current_max_size())
- # too fine granularity might be bad for the memory management
- trim_len = math.floor(trim_len / 64) * 64
- return trim_len
-
- def trim_memory(self, query, key, value, key_pe):
- """trim out unnecessary memory beforehand to reduce computation"""
- trim_len = self.get_trim_len()
- cache_size = key.size(1) - query.size(1)
- trim_len_cache = trim_len - (self._max_span - cache_size)
- if trim_len_cache > 0:
- key = key[:, trim_len_cache:, :]
- value = value[:, trim_len_cache:, :]
- elif trim_len_cache < 0:
- # cache is too short! this happens when validation resumes
- # after a lot of updates.
- key = F.pad(key, [0, 0, -trim_len_cache, 0])
- value = F.pad(value, [0, 0, -trim_len_cache, 0])
- if trim_len > 0:
- if key_pe is not None:
- key_pe = key_pe[:, :, trim_len:]
- return key, value, key_pe
-
- def get_cache_size(self):
- """determine how long the cache should be"""
- trim_len = self.get_trim_len()
- # give a buffer of 64 steps since a span might increase
- # in future updates
- return min(self._max_span, self._max_span - trim_len + 64)
-
- def get_loss(self):
- """a loss term for regularizing the span length"""
- return self._max_span * self._mask.current_val.float().mean()
-
- def get_current_max_span(self):
- return self._mask.get_current_max_size()
-
- def get_current_avg_span(self):
- return self._mask.get_current_avg_size()
-
- def clamp_param(self):
- self._mask.clamp_param()
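-
-
-# Rough usage sketch (illustrative; shapes follow the comments in forward()):
-#   span = AdaptiveSpan(attn_span=1024, adapt_span_ramp=32, adapt_span_init=0.0,
-#                       n_head=8, adapt_span_layer=False)
-#   attn = torch.softmax(torch.randn(2 * 8, 16, 1024), dim=-1)  # (B * n_head, M, span)
-#   attn = span(attn)              # soft-masks weights beyond the learned span
-#   loss = loss + span.get_loss()  # regularizer that encourages shorter spans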
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py
deleted file mode 100644
index 6d2a2a4b6b809ba1106f9a57cb6f241dc083e670..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py
+++ /dev/null
@@ -1,698 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-import hydra
-from hydra.core.config_store import ConfigStore
-import logging
-from omegaconf import MISSING, OmegaConf
-import os
-import os.path as osp
-from pathlib import Path
-import subprocess
-from typing import Optional
-
-from fairseq.data.dictionary import Dictionary
-from fairseq.dataclass import FairseqDataclass
-
-script_dir = Path(__file__).resolve().parent
-config_path = script_dir / "config"
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class KaldiInitializerConfig(FairseqDataclass):
- data_dir: str = MISSING
- fst_dir: Optional[str] = None
- in_labels: str = MISSING
- out_labels: Optional[str] = None
- wav2letter_lexicon: Optional[str] = None
- lm_arpa: str = MISSING
- kaldi_root: str = MISSING
-    blank_symbol: str = "<s>"
- silence_symbol: Optional[str] = None
-
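-# Example invocation (illustrative; all paths and the label suffix are
-# placeholders). The fields above are exposed through hydra, so the script can
-# be driven from the command line roughly like:
-#   python kaldi_initializer.py kaldi_root=/path/to/kaldi data_dir=/path/to/data \
-#       in_labels=ltr lm_arpa=/path/to/lm.arpa
-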
-
-def create_units(fst_dir: Path, in_labels: str, vocab: Dictionary) -> Path:
- in_units_file = fst_dir / f"kaldi_dict.{in_labels}.txt"
- if not in_units_file.exists():
-
- logger.info(f"Creating {in_units_file}")
-
- with open(in_units_file, "w") as f:
- print(" 0", file=f)
- i = 1
- for symb in vocab.symbols[vocab.nspecial :]:
- if not symb.startswith("madeupword"):
- print(f"{symb} {i}", file=f)
- i += 1
- return in_units_file
-
-
-def create_lexicon(
- cfg: KaldiInitializerConfig,
- fst_dir: Path,
- unique_label: str,
- in_units_file: Path,
- out_words_file: Path,
-) -> (Path, Path):
-
- disambig_in_units_file = fst_dir / f"kaldi_dict.{cfg.in_labels}_disambig.txt"
- lexicon_file = fst_dir / f"kaldi_lexicon.{unique_label}.txt"
- disambig_lexicon_file = fst_dir / f"kaldi_lexicon.{unique_label}_disambig.txt"
- if (
- not lexicon_file.exists()
- or not disambig_lexicon_file.exists()
- or not disambig_in_units_file.exists()
- ):
- logger.info(f"Creating {lexicon_file} (in units file: {in_units_file})")
-
- assert cfg.wav2letter_lexicon is not None or cfg.in_labels == cfg.out_labels
-
- if cfg.wav2letter_lexicon is not None:
- lm_words = set()
- with open(out_words_file, "r") as lm_dict_f:
- for line in lm_dict_f:
- lm_words.add(line.split()[0])
-
- num_skipped = 0
- total = 0
- with open(cfg.wav2letter_lexicon, "r") as w2l_lex_f, open(
- lexicon_file, "w"
- ) as out_f:
- for line in w2l_lex_f:
- items = line.rstrip().split("\t")
- assert len(items) == 2, items
- if items[0] in lm_words:
- print(items[0], items[1], file=out_f)
- else:
- num_skipped += 1
- logger.debug(
- f"Skipping word {items[0]} as it was not found in LM"
- )
- total += 1
- if num_skipped > 0:
- logger.warning(
- f"Skipped {num_skipped} out of {total} words as they were not found in LM"
- )
- else:
- with open(in_units_file, "r") as in_f, open(lexicon_file, "w") as out_f:
- for line in in_f:
- symb = line.split()[0]
- if symb != "" and symb != "" and symb != "":
- print(symb, symb, file=out_f)
-
- lex_disambig_path = (
- Path(cfg.kaldi_root) / "egs/wsj/s5/utils/add_lex_disambig.pl"
- )
- res = subprocess.run(
- [lex_disambig_path, lexicon_file, disambig_lexicon_file],
- check=True,
- capture_output=True,
- )
- ndisambig = int(res.stdout)
-        disambig_path = Path(cfg.kaldi_root) / "egs/wsj/s5/utils/add_disambig.pl"
-        res = subprocess.run(
-            [disambig_path, "--include-zero", in_units_file, str(ndisambig)],
- check=True,
- capture_output=True,
- )
- with open(disambig_in_units_file, "wb") as f:
- f.write(res.stdout)
-
- return disambig_lexicon_file, disambig_in_units_file
-
-
-def create_G(
- kaldi_root: Path, fst_dir: Path, lm_arpa: Path, arpa_base: str
-) -> (Path, Path):
-
- out_words_file = fst_dir / f"kaldi_dict.{arpa_base}.txt"
- grammar_graph = fst_dir / f"G_{arpa_base}.fst"
- if not grammar_graph.exists() or not out_words_file.exists():
- logger.info(f"Creating {grammar_graph}")
- arpa2fst = kaldi_root / "src/lmbin/arpa2fst"
- subprocess.run(
- [
- arpa2fst,
- "--disambig-symbol=#0",
- f"--write-symbol-table={out_words_file}",
- lm_arpa,
- grammar_graph,
- ],
- check=True,
- )
- return grammar_graph, out_words_file
-
-
-def create_L(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- lexicon_file: Path,
- in_units_file: Path,
- out_words_file: Path,
-) -> Path:
- lexicon_graph = fst_dir / f"L.{unique_label}.fst"
-
- if not lexicon_graph.exists():
- logger.info(f"Creating {lexicon_graph} (in units: {in_units_file})")
- make_lex = kaldi_root / "egs/wsj/s5/utils/make_lexicon_fst.pl"
- fstcompile = kaldi_root / "tools/openfst-1.6.7/bin/fstcompile"
- fstaddselfloops = kaldi_root / "src/fstbin/fstaddselfloops"
- fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort"
-
- def write_disambig_symbol(file):
- with open(file, "r") as f:
- for line in f:
- items = line.rstrip().split()
- if items[0] == "#0":
-                        out_path = str(file) + "_disambig"
- with open(out_path, "w") as out_f:
- print(items[1], file=out_f)
- return out_path
-
- return None
-
- in_disambig_sym = write_disambig_symbol(in_units_file)
- assert in_disambig_sym is not None
- out_disambig_sym = write_disambig_symbol(out_words_file)
- assert out_disambig_sym is not None
-
- try:
- with open(lexicon_graph, "wb") as out_f:
- res = subprocess.run(
- [make_lex, lexicon_file], capture_output=True, check=True
- )
- assert len(res.stderr) == 0, res.stderr.decode("utf-8")
- res = subprocess.run(
- [
- fstcompile,
- f"--isymbols={in_units_file}",
- f"--osymbols={out_words_file}",
- "--keep_isymbols=false",
- "--keep_osymbols=false",
- ],
- input=res.stdout,
- capture_output=True,
- )
- assert len(res.stderr) == 0, res.stderr.decode("utf-8")
- res = subprocess.run(
- [fstaddselfloops, in_disambig_sym, out_disambig_sym],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstarcsort, "--sort_type=olabel"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(lexicon_graph)
- raise
- except AssertionError:
- os.remove(lexicon_graph)
- raise
-
- return lexicon_graph
-
-
-def create_LG(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- lexicon_graph: Path,
- grammar_graph: Path,
-) -> Path:
- lg_graph = fst_dir / f"LG.{unique_label}.fst"
-
- if not lg_graph.exists():
- logger.info(f"Creating {lg_graph}")
-
- fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose"
- fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar"
- fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded"
- fstpushspecial = kaldi_root / "src/fstbin/fstpushspecial"
- fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort"
-
- try:
- with open(lg_graph, "wb") as out_f:
- res = subprocess.run(
- [fsttablecompose, lexicon_graph, grammar_graph],
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [
- fstdeterminizestar,
- "--use-log=true",
- ],
- input=res.stdout,
- capture_output=True,
- )
- res = subprocess.run(
- [fstminimizeencoded],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstpushspecial],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstarcsort, "--sort_type=ilabel"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(lg_graph)
- raise
-
- return lg_graph
-
-
-def create_H(
- kaldi_root: Path,
- fst_dir: Path,
- disambig_out_units_file: Path,
- in_labels: str,
- vocab: Dictionary,
- blk_sym: str,
- silence_symbol: Optional[str],
-) -> (Path, Path, Path):
- h_graph = (
- fst_dir / f"H.{in_labels}{'_' + silence_symbol if silence_symbol else ''}.fst"
- )
- h_out_units_file = fst_dir / f"kaldi_dict.h_out.{in_labels}.txt"
- disambig_in_units_file_int = Path(str(h_graph) + "isym_disambig.int")
- disambig_out_units_file_int = Path(str(disambig_out_units_file) + ".int")
- if (
- not h_graph.exists()
- or not h_out_units_file.exists()
- or not disambig_in_units_file_int.exists()
- ):
- logger.info(f"Creating {h_graph}")
- eps_sym = ""
-
- num_disambig = 0
- osymbols = []
-
- with open(disambig_out_units_file, "r") as f, open(
- disambig_out_units_file_int, "w"
- ) as out_f:
- for line in f:
- symb, id = line.rstrip().split()
- if line.startswith("#"):
- num_disambig += 1
- print(id, file=out_f)
- else:
- if len(osymbols) == 0:
- assert symb == eps_sym, symb
- osymbols.append((symb, id))
-
- i_idx = 0
- isymbols = [(eps_sym, 0)]
-
- imap = {}
-
- for i, s in enumerate(vocab.symbols):
- i_idx += 1
- isymbols.append((s, i_idx))
- imap[s] = i_idx
-
- fst_str = []
-
- node_idx = 0
- root_node = node_idx
-
- special_symbols = [blk_sym]
- if silence_symbol is not None:
- special_symbols.append(silence_symbol)
-
- for ss in special_symbols:
- fst_str.append("{} {} {} {}".format(root_node, root_node, ss, eps_sym))
-
- for symbol, _ in osymbols:
- if symbol == eps_sym or symbol.startswith("#"):
- continue
-
- node_idx += 1
- # 1. from root to emitting state
- fst_str.append("{} {} {} {}".format(root_node, node_idx, symbol, symbol))
- # 2. from emitting state back to root
- fst_str.append("{} {} {} {}".format(node_idx, root_node, eps_sym, eps_sym))
- # 3. from emitting state to optional blank state
- pre_node = node_idx
- node_idx += 1
- for ss in special_symbols:
- fst_str.append("{} {} {} {}".format(pre_node, node_idx, ss, eps_sym))
- # 4. from blank state back to root
- fst_str.append("{} {} {} {}".format(node_idx, root_node, eps_sym, eps_sym))
-
- fst_str.append("{}".format(root_node))
-
- fst_str = "\n".join(fst_str)
- h_str = str(h_graph)
- isym_file = h_str + ".isym"
-
- with open(isym_file, "w") as f:
- for sym, id in isymbols:
- f.write("{} {}\n".format(sym, id))
-
- with open(h_out_units_file, "w") as f:
- for sym, id in osymbols:
- f.write("{} {}\n".format(sym, id))
-
- with open(disambig_in_units_file_int, "w") as f:
- disam_sym_id = len(isymbols)
- for _ in range(num_disambig):
- f.write("{}\n".format(disam_sym_id))
- disam_sym_id += 1
-
- fstcompile = kaldi_root / "tools/openfst-1.6.7/bin/fstcompile"
- fstaddselfloops = kaldi_root / "src/fstbin/fstaddselfloops"
- fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort"
-
- try:
- with open(h_graph, "wb") as out_f:
- res = subprocess.run(
- [
- fstcompile,
- f"--isymbols={isym_file}",
- f"--osymbols={h_out_units_file}",
- "--keep_isymbols=false",
- "--keep_osymbols=false",
- ],
- input=str.encode(fst_str),
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [
- fstaddselfloops,
- disambig_in_units_file_int,
- disambig_out_units_file_int,
- ],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstarcsort, "--sort_type=olabel"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(h_graph)
- raise
- return h_graph, h_out_units_file, disambig_in_units_file_int
-
-
-def create_HLGa(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- h_graph: Path,
- lg_graph: Path,
- disambig_in_words_file_int: Path,
-) -> Path:
- hlga_graph = fst_dir / f"HLGa.{unique_label}.fst"
-
- if not hlga_graph.exists():
- logger.info(f"Creating {hlga_graph}")
-
- fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose"
- fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar"
- fstrmsymbols = kaldi_root / "src/fstbin/fstrmsymbols"
- fstrmepslocal = kaldi_root / "src/fstbin/fstrmepslocal"
- fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded"
-
- try:
- with open(hlga_graph, "wb") as out_f:
- res = subprocess.run(
- [
- fsttablecompose,
- h_graph,
- lg_graph,
- ],
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstdeterminizestar, "--use-log=true"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmsymbols, disambig_in_words_file_int],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmepslocal],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstminimizeencoded],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(hlga_graph)
- raise
-
- return hlga_graph
-
-
-def create_HLa(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- h_graph: Path,
- l_graph: Path,
- disambig_in_words_file_int: Path,
-) -> Path:
- hla_graph = fst_dir / f"HLa.{unique_label}.fst"
-
- if not hla_graph.exists():
- logger.info(f"Creating {hla_graph}")
-
- fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose"
- fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar"
- fstrmsymbols = kaldi_root / "src/fstbin/fstrmsymbols"
- fstrmepslocal = kaldi_root / "src/fstbin/fstrmepslocal"
- fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded"
-
- try:
- with open(hla_graph, "wb") as out_f:
- res = subprocess.run(
- [
- fsttablecompose,
- h_graph,
- l_graph,
- ],
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstdeterminizestar, "--use-log=true"],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmsymbols, disambig_in_words_file_int],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstrmepslocal],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- res = subprocess.run(
- [fstminimizeencoded],
- input=res.stdout,
- capture_output=True,
- check=True,
- )
- out_f.write(res.stdout)
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- os.remove(hla_graph)
- raise
-
- return hla_graph
-
-
-def create_HLG(
- kaldi_root: Path,
- fst_dir: Path,
- unique_label: str,
- hlga_graph: Path,
- prefix: str = "HLG",
-) -> Path:
- hlg_graph = fst_dir / f"{prefix}.{unique_label}.fst"
-
- if not hlg_graph.exists():
- logger.info(f"Creating {hlg_graph}")
-
- add_self_loop = script_dir / "add-self-loop-simple"
- kaldi_src = kaldi_root / "src"
- kaldi_lib = kaldi_src / "lib"
-
- try:
- if not add_self_loop.exists():
- fst_include = kaldi_root / "tools/openfst-1.6.7/include"
- add_self_loop_src = script_dir / "add-self-loop-simple.cc"
-
- subprocess.run(
- [
- "c++",
- f"-I{kaldi_src}",
- f"-I{fst_include}",
- f"-L{kaldi_lib}",
- add_self_loop_src,
- "-lkaldi-base",
- "-lkaldi-fstext",
- "-o",
- add_self_loop,
- ],
- check=True,
- )
-
- my_env = os.environ.copy()
- my_env["LD_LIBRARY_PATH"] = f"{kaldi_lib}:{my_env['LD_LIBRARY_PATH']}"
-
- subprocess.run(
- [
- add_self_loop,
- hlga_graph,
- hlg_graph,
- ],
- check=True,
- capture_output=True,
- env=my_env,
- )
- except subprocess.CalledProcessError as e:
- logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}")
- raise
-
- return hlg_graph
-
-
-def initalize_kaldi(cfg: KaldiInitializerConfig) -> Path:
- if cfg.fst_dir is None:
- cfg.fst_dir = osp.join(cfg.data_dir, "kaldi")
- if cfg.out_labels is None:
- cfg.out_labels = cfg.in_labels
-
- kaldi_root = Path(cfg.kaldi_root)
- data_dir = Path(cfg.data_dir)
- fst_dir = Path(cfg.fst_dir)
- fst_dir.mkdir(parents=True, exist_ok=True)
-
- arpa_base = osp.splitext(osp.basename(cfg.lm_arpa))[0]
- unique_label = f"{cfg.in_labels}.{arpa_base}"
-
- with open(data_dir / f"dict.{cfg.in_labels}.txt", "r") as f:
- vocab = Dictionary.load(f)
-
- in_units_file = create_units(fst_dir, cfg.in_labels, vocab)
-
- grammar_graph, out_words_file = create_G(
- kaldi_root, fst_dir, Path(cfg.lm_arpa), arpa_base
- )
-
- disambig_lexicon_file, disambig_L_in_units_file = create_lexicon(
- cfg, fst_dir, unique_label, in_units_file, out_words_file
- )
-
- h_graph, h_out_units_file, disambig_in_units_file_int = create_H(
- kaldi_root,
- fst_dir,
- disambig_L_in_units_file,
- cfg.in_labels,
- vocab,
- cfg.blank_symbol,
- cfg.silence_symbol,
- )
- lexicon_graph = create_L(
- kaldi_root,
- fst_dir,
- unique_label,
- disambig_lexicon_file,
- disambig_L_in_units_file,
- out_words_file,
- )
- lg_graph = create_LG(
- kaldi_root, fst_dir, unique_label, lexicon_graph, grammar_graph
- )
- hlga_graph = create_HLGa(
- kaldi_root, fst_dir, unique_label, h_graph, lg_graph, disambig_in_units_file_int
- )
- hlg_graph = create_HLG(kaldi_root, fst_dir, unique_label, hlga_graph)
-
- # for debugging
- # hla_graph = create_HLa(kaldi_root, fst_dir, unique_label, h_graph, lexicon_graph, disambig_in_units_file_int)
- # hl_graph = create_HLG(kaldi_root, fst_dir, unique_label, hla_graph, prefix="HL_looped")
- # create_HLG(kaldi_root, fst_dir, "phnc", h_graph, prefix="H_looped")
-
- return hlg_graph
-
-
-@hydra.main(config_path=config_path, config_name="kaldi_initializer")
-def cli_main(cfg: KaldiInitializerConfig) -> None:
- container = OmegaConf.to_container(cfg, resolve=True, enum_to_str=True)
- cfg = OmegaConf.create(container)
- OmegaConf.set_struct(cfg, True)
- initalize_kaldi(cfg)
-
-
-if __name__ == "__main__":
-
- logging.root.setLevel(logging.INFO)
- logging.basicConfig(level=logging.INFO)
-
- try:
- from hydra._internal.utils import (
- get_args,
- ) # pylint: disable=import-outside-toplevel
-
- cfg_name = get_args().config_name or "kaldi_initializer"
- except ImportError:
- logger.warning("Failed to get config name from hydra args")
- cfg_name = "kaldi_initializer"
-
- cs = ConfigStore.instance()
- cs.store(name=cfg_name, node=KaldiInitializerConfig)
-
- cli_main()
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/model.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/model.py
deleted file mode 100644
index bb205b910daaecd55effd1e77e77d0b43952624f..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/model.py
+++ /dev/null
@@ -1,594 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-RoBERTa: A Robustly Optimized BERT Pretraining Approach.
-"""
-
-import logging
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.models.transformer import DEFAULT_MIN_PARAMS_TO_WRAP, TransformerEncoder
-from fairseq.modules import LayerNorm
-from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_
-from fairseq.modules.transformer_sentence_encoder import init_bert_params
-from fairseq.utils import safe_getattr, safe_hasattr
-
-from .hub_interface import RobertaHubInterface
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("roberta")
-class RobertaModel(FairseqEncoderModel):
- @classmethod
- def hub_models(cls):
- return {
- "roberta.base": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz",
- "roberta.large": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz",
- "roberta.large.mnli": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.mnli.tar.gz",
- "roberta.large.wsc": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.wsc.tar.gz",
- }
-
- def __init__(self, args, encoder):
- super().__init__(encoder)
- self.args = args
-
- # We follow BERT's random weight initialization
- self.apply(init_bert_params)
-
- self.classification_heads = nn.ModuleDict()
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--encoder-layers", type=int, metavar="L", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="H",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="F",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="A",
- help="num encoder attention heads",
- )
- parser.add_argument(
- "--activation-fn",
- choices=utils.get_available_activation_fns(),
- help="activation function to use",
- )
- parser.add_argument(
- "--pooler-activation-fn",
- choices=utils.get_available_activation_fns(),
- help="activation function to use for pooler layer",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
- parser.add_argument(
- "--layernorm-embedding",
- action="store_true",
- help="add layernorm to embedding",
- )
- parser.add_argument(
- "--dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--activation-dropout",
- type=float,
- metavar="D",
- help="dropout probability after activation in FFN",
- )
- parser.add_argument(
- "--pooler-dropout",
- type=float,
- metavar="D",
- help="dropout probability in the masked_lm pooler layers",
- )
- parser.add_argument(
- "--max-positions", type=int, help="number of positional embeddings to learn"
- )
- parser.add_argument(
- "--load-checkpoint-heads",
- action="store_true",
- help="(re-)register and load heads when loading checkpoints",
- )
- parser.add_argument(
- "--untie-weights-roberta",
- action="store_true",
- help="Untie weights between embeddings and classifiers in RoBERTa",
- )
- # args for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019)
- parser.add_argument(
- "--encoder-layerdrop",
- type=float,
- metavar="D",
- default=0,
- help="LayerDrop probability for encoder",
- )
- parser.add_argument(
- "--encoder-layers-to-keep",
- default=None,
- help="which layers to *keep* when pruning as a comma-separated list",
- )
- # args for Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020)
- parser.add_argument(
- "--quant-noise-pq",
- type=float,
- metavar="D",
- default=0,
- help="iterative PQ quantization noise at training time",
- )
- parser.add_argument(
- "--quant-noise-pq-block-size",
- type=int,
- metavar="D",
- default=8,
- help="block size of quantization noise at training time",
- )
- parser.add_argument(
- "--quant-noise-scalar",
- type=float,
- metavar="D",
- default=0,
- help="scalar quantization noise and scalar quantization at training time",
- )
- # args for "Better Fine-Tuning by Reducing Representational Collapse" (Aghajanyan et al. 2020)
- parser.add_argument(
- "--spectral-norm-classification-head",
- action="store_true",
- default=False,
- help="Apply spectral normalization on the classification head",
- )
- # args for Fully Sharded Data Parallel (FSDP) training
- parser.add_argument(
- "--min-params-to-wrap",
- type=int,
- metavar="D",
- default=DEFAULT_MIN_PARAMS_TO_WRAP,
- help=(
- "minimum number of params for a layer to be wrapped with FSDP() when "
- "training with --ddp-backend=fully_sharded. Smaller values will "
- "improve memory efficiency, but may make torch.distributed "
- "communication less efficient due to smaller input sizes. This option "
- "is set to 0 (i.e., always wrap) when --checkpoint-activations or "
- "--offload-activations are passed."
- )
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- from omegaconf import OmegaConf
-
- if OmegaConf.is_config(args):
- OmegaConf.set_struct(args, False)
-
- # make sure all arguments are present
- base_architecture(args)
-
- if not safe_hasattr(args, "max_positions"):
- if not safe_hasattr(args, "tokens_per_sample"):
- args.tokens_per_sample = task.max_positions()
- args.max_positions = args.tokens_per_sample
-
- encoder = RobertaEncoder(args, task.source_dictionary)
-
- if OmegaConf.is_config(args):
- OmegaConf.set_struct(args, True)
-
- return cls(args, encoder)
-
- def forward(
- self,
- src_tokens,
- features_only=False,
- return_all_hiddens=False,
- classification_head_name=None,
- **kwargs,
- ):
- if classification_head_name is not None:
- features_only = True
-
- x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs)
-
- if classification_head_name is not None:
- x = self.classification_heads[classification_head_name](x)
- return x, extra
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- """Get normalized probabilities (or log probs) from a net's output."""
- logits = net_output[0].float()
- if log_probs:
- return F.log_softmax(logits, dim=-1)
- else:
- return F.softmax(logits, dim=-1)
-
- def register_classification_head(
- self, name, num_classes=None, inner_dim=None, **kwargs
- ):
- """Register a classification head."""
- if name in self.classification_heads:
- prev_num_classes = self.classification_heads[name].out_proj.out_features
- prev_inner_dim = self.classification_heads[name].dense.out_features
- if num_classes != prev_num_classes or inner_dim != prev_inner_dim:
- logger.warning(
- 're-registering head "{}" with num_classes {} (prev: {}) '
- "and inner_dim {} (prev: {})".format(
- name, num_classes, prev_num_classes, inner_dim, prev_inner_dim
- )
- )
- self.classification_heads[name] = RobertaClassificationHead(
- input_dim=self.args.encoder_embed_dim,
- inner_dim=inner_dim or self.args.encoder_embed_dim,
- num_classes=num_classes,
- activation_fn=self.args.pooler_activation_fn,
- pooler_dropout=self.args.pooler_dropout,
- q_noise=self.args.quant_noise_pq,
- qn_block_size=self.args.quant_noise_pq_block_size,
- do_spectral_norm=self.args.spectral_norm_classification_head,
- )
-
- @property
- def supported_targets(self):
- return {"self"}
-
- @classmethod
- def from_pretrained(
- cls,
- model_name_or_path,
- checkpoint_file="model.pt",
- data_name_or_path=".",
- bpe="gpt2",
- **kwargs,
- ):
- from fairseq import hub_utils
-
- x = hub_utils.from_pretrained(
- model_name_or_path,
- checkpoint_file,
- data_name_or_path,
- archive_map=cls.hub_models(),
- bpe=bpe,
- load_checkpoint_heads=True,
- **kwargs,
- )
-
- logger.info(x["args"])
- return RobertaHubInterface(x["args"], x["task"], x["models"][0])
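-
-    # Illustrative usage of the hub interface returned above (the checkpoint
-    # name comes from hub_models(); everything else is a sketch):
-    #   roberta = RobertaModel.from_pretrained("roberta.base")
-    #   tokens = roberta.encode("Hello world!")
-    #   features = roberta.extract_features(tokens)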
-
- def upgrade_state_dict_named(self, state_dict, name):
- prefix = name + "." if name != "" else ""
-
- # rename decoder -> encoder before upgrading children modules
- for k in list(state_dict.keys()):
- if k.startswith(prefix + "decoder"):
- new_k = prefix + "encoder" + k[len(prefix + "decoder") :]
- state_dict[new_k] = state_dict[k]
- del state_dict[k]
-
- # rename emb_layer_norm -> layernorm_embedding
- for k in list(state_dict.keys()):
- if ".emb_layer_norm." in k:
- new_k = k.replace(".emb_layer_norm.", ".layernorm_embedding.")
- state_dict[new_k] = state_dict[k]
- del state_dict[k]
-
- # upgrade children modules
- super().upgrade_state_dict_named(state_dict, name)
-
- # Handle new classification heads present in the state dict.
- current_head_names = (
- []
- if not hasattr(self, "classification_heads")
- else self.classification_heads.keys()
- )
- keys_to_delete = []
- for k in state_dict.keys():
- if not k.startswith(prefix + "classification_heads."):
- continue
-
- head_name = k[len(prefix + "classification_heads.") :].split(".")[0]
- num_classes = state_dict[
- prefix + "classification_heads." + head_name + ".out_proj.weight"
- ].size(0)
- inner_dim = state_dict[
- prefix + "classification_heads." + head_name + ".dense.weight"
- ].size(0)
-
- if getattr(self.args, "load_checkpoint_heads", False):
- if head_name not in current_head_names:
- self.register_classification_head(head_name, num_classes, inner_dim)
- else:
- if head_name not in current_head_names:
- logger.warning(
- "deleting classification head ({}) from checkpoint "
- "not present in current model: {}".format(head_name, k)
- )
- keys_to_delete.append(k)
- elif (
- num_classes
- != self.classification_heads[head_name].out_proj.out_features
- or inner_dim
- != self.classification_heads[head_name].dense.out_features
- ):
- logger.warning(
- "deleting classification head ({}) from checkpoint "
- "with different dimensions than current model: {}".format(
- head_name, k
- )
- )
- keys_to_delete.append(k)
- for k in keys_to_delete:
- del state_dict[k]
-
- # Copy any newly-added classification heads into the state dict
- # with their current weights.
- if hasattr(self, "classification_heads"):
- cur_state = self.classification_heads.state_dict()
- for k, v in cur_state.items():
- if prefix + "classification_heads." + k not in state_dict:
- logger.info("Overwriting " + prefix + "classification_heads." + k)
- state_dict[prefix + "classification_heads." + k] = v
-
-
-class RobertaLMHead(nn.Module):
- """Head for masked language modeling."""
-
- def __init__(self, embed_dim, output_dim, activation_fn, weight=None):
- super().__init__()
- self.dense = nn.Linear(embed_dim, embed_dim)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.layer_norm = LayerNorm(embed_dim)
-
- if weight is None:
- weight = nn.Linear(embed_dim, output_dim, bias=False).weight
- self.weight = weight
- self.bias = nn.Parameter(torch.zeros(output_dim))
-
- def forward(self, features, masked_tokens=None, **kwargs):
- # Only project the masked tokens while training,
- # saves both memory and computation
- if masked_tokens is not None:
- features = features[masked_tokens, :]
-
- x = self.dense(features)
- x = self.activation_fn(x)
- x = self.layer_norm(x)
- # project back to size of vocabulary with bias
- x = F.linear(x, self.weight) + self.bias
- return x
-
-
-class RobertaClassificationHead(nn.Module):
- """Head for sentence-level classification tasks."""
-
- def __init__(
- self,
- input_dim,
- inner_dim,
- num_classes,
- activation_fn,
- pooler_dropout,
- q_noise=0,
- qn_block_size=8,
- do_spectral_norm=False,
- ):
- super().__init__()
- self.dense = nn.Linear(input_dim, inner_dim)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.dropout = nn.Dropout(p=pooler_dropout)
- self.out_proj = apply_quant_noise_(
- nn.Linear(inner_dim, num_classes), q_noise, qn_block_size
- )
- if do_spectral_norm:
- if q_noise != 0:
- raise NotImplementedError(
- "Attempting to use Spectral Normalization with Quant Noise. This is not officially supported"
- )
- self.out_proj = torch.nn.utils.spectral_norm(self.out_proj)
-
- def forward(self, features, **kwargs):
-        x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
- x = self.dropout(x)
- x = self.dense(x)
- x = self.activation_fn(x)
- x = self.dropout(x)
- x = self.out_proj(x)
- return x
-
-
-class RobertaEncoder(FairseqEncoder):
- """RoBERTa encoder."""
-
- def __init__(self, args, dictionary):
- super().__init__(dictionary)
-
- # set any missing default values
- base_architecture(args)
- self.args = args
-
- if args.encoder_layers_to_keep:
- args.encoder_layers = len(args.encoder_layers_to_keep.split(","))
-
- embed_tokens = self.build_embedding(
- len(dictionary), args.encoder_embed_dim, dictionary.pad()
- )
-
- self.sentence_encoder = self.build_encoder(args, dictionary, embed_tokens)
-
- self.lm_head = self.build_lm_head(
- embed_dim=args.encoder_embed_dim,
- output_dim=len(dictionary),
- activation_fn=args.activation_fn,
- weight=(
- self.sentence_encoder.embed_tokens.weight
- if not args.untie_weights_roberta
- else None
- ),
- )
-
- def build_embedding(self, vocab_size, embedding_dim, padding_idx):
- return nn.Embedding(vocab_size, embedding_dim, padding_idx)
-
- def build_encoder(self, args, dictionary, embed_tokens):
- encoder = TransformerEncoder(args, dictionary, embed_tokens)
- encoder.apply(init_bert_params)
- return encoder
-
- def build_lm_head(self, embed_dim, output_dim, activation_fn, weight):
- return RobertaLMHead(embed_dim, output_dim, activation_fn, weight)
-
- def forward(
- self,
- src_tokens,
- features_only=False,
- return_all_hiddens=False,
- masked_tokens=None,
- **unused,
- ):
- """
- Args:
- src_tokens (LongTensor): input tokens of shape `(batch, src_len)`
- features_only (bool, optional): skip LM head and just return
- features. If True, the output will be of shape
- `(batch, src_len, embed_dim)`.
- return_all_hiddens (bool, optional): also return all of the
- intermediate hidden states (default: False).
-
- Returns:
- tuple:
- - the LM output of shape `(batch, src_len, vocab)`
- - a dictionary of additional data, where 'inner_states'
- is a list of hidden states. Note that the hidden
-                  states have shape `(src_len, batch, embed_dim)`.
- """
- x, extra = self.extract_features(
- src_tokens, return_all_hiddens=return_all_hiddens
- )
- if not features_only:
- x = self.output_layer(x, masked_tokens=masked_tokens)
- return x, extra
-
- def extract_features(self, src_tokens, return_all_hiddens=False, **kwargs):
- encoder_out = self.sentence_encoder(
- src_tokens,
- return_all_hiddens=return_all_hiddens,
- token_embeddings=kwargs.get("token_embeddings", None),
- )
- # T x B x C -> B x T x C
- features = encoder_out["encoder_out"][0].transpose(0, 1)
- inner_states = encoder_out["encoder_states"] if return_all_hiddens else None
- return features, {"inner_states": inner_states}
-
- def output_layer(self, features, masked_tokens=None, **unused):
- return self.lm_head(features, masked_tokens)
-
- def max_positions(self):
- """Maximum output length supported by the encoder."""
- return self.args.max_positions
-
-
-@register_model_architecture("roberta", "roberta")
-def base_architecture(args):
- args.encoder_layers = safe_getattr(args, "encoder_layers", 12)
- args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 768)
- args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 3072)
- args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 12)
-
- args.dropout = safe_getattr(args, "dropout", 0.1)
- args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1)
- args.activation_dropout = safe_getattr(args, "activation_dropout", 0.0)
- args.pooler_dropout = safe_getattr(args, "pooler_dropout", 0.0)
-
- args.max_source_positions = safe_getattr(args, "max_positions", 512)
- args.no_token_positional_embeddings = safe_getattr(
- args, "no_token_positional_embeddings", False
- )
-
- # BERT has a few structural differences compared to the original Transformer
- args.encoder_learned_pos = safe_getattr(args, "encoder_learned_pos", True)
- args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", True)
- args.no_scale_embedding = safe_getattr(args, "no_scale_embedding", True)
- args.activation_fn = safe_getattr(args, "activation_fn", "gelu")
- args.encoder_normalize_before = safe_getattr(args, "encoder_normalize_before", False)
- args.pooler_activation_fn = safe_getattr(args, "pooler_activation_fn", "tanh")
- args.untie_weights_roberta = safe_getattr(args, "untie_weights_roberta", False)
-
- # Adaptive input config
- args.adaptive_input = safe_getattr(args, "adaptive_input", False)
-
- # LayerDrop config
- args.encoder_layerdrop = safe_getattr(args, "encoder_layerdrop", 0.0)
- args.encoder_layers_to_keep = safe_getattr(args, "encoder_layers_to_keep", None)
-
- # Quantization noise config
- args.quant_noise_pq = safe_getattr(args, "quant_noise_pq", 0)
- args.quant_noise_pq_block_size = safe_getattr(args, "quant_noise_pq_block_size", 8)
- args.quant_noise_scalar = safe_getattr(args, "quant_noise_scalar", 0)
-
- # R4F config
- args.spectral_norm_classification_head = safe_getattr(
- args, "spectral_norm_classification_head", False
- )
-
-
-@register_model_architecture("roberta", "roberta_prenorm")
-def roberta_prenorm_architecture(args):
- args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", False)
- args.encoder_normalize_before = safe_getattr(args, "encoder_normalize_before", True)
- base_architecture(args)
-
-
-@register_model_architecture("roberta", "roberta_base")
-def roberta_base_architecture(args):
- base_architecture(args)
-
-
-@register_model_architecture("roberta", "roberta_large")
-def roberta_large_architecture(args):
- args.encoder_layers = safe_getattr(args, "encoder_layers", 24)
- args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 16)
- base_architecture(args)
-
-
-@register_model_architecture("roberta", "xlm")
-def xlm_architecture(args):
- args.encoder_layers = safe_getattr(args, "encoder_layers", 16)
- args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 1280)
- args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 1280 * 4)
- args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 16)
- base_architecture(args)
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/inference/api.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/inference/api.py
deleted file mode 100644
index d6bcabd194a4531801941d5e1d248dc134ce255f..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/inference/api.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from starlette.responses import StreamingResponse
-from tts import MelToWav, TextToMel
-from advanced_tts import load_all_models, run_tts_paragraph
-from typing import Optional
-from pydantic import BaseModel
-from fastapi import FastAPI, HTTPException
-import uvicorn
-import base64
-import argparse
-import json
-import time
-from argparse import Namespace
-
-app = FastAPI()
-
-
-class TextJson(BaseModel):
- text: str
- lang: Optional[str] = "hi"
-    noise_scale: Optional[float] = 0.667
-    length_scale: Optional[float] = 1.0
-    transliteration: Optional[int] = 1
-    number_conversion: Optional[int] = 1
-    split_sentences: Optional[int] = 1
-
-
-@app.post("/TTS/")
-async def tts(input: TextJson):
- text = input.text
- lang = input.lang
-
- args = Namespace(**input.dict())
-
- args.wav = '../../results/api/'+str(int(time.time())) + '.wav'
-
- if text:
- sr, audio = run_tts_paragraph(args)
- else:
- raise HTTPException(status_code=400, detail={"error": "No text"})
-
-    ## to return output as a file
- audio = open(args.wav, mode='rb')
- return StreamingResponse(audio, media_type="audio/wav")
-
- # with open(args.wav, "rb") as audio_file:
- # encoded_bytes = base64.b64encode(audio_file.read())
- # encoded_string = encoded_bytes.decode()
- # return {"encoding": "base64", "data": encoded_string, "sr": sr}
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("-a", "--acoustic", required=True, type=str)
- parser.add_argument("-v", "--vocoder", required=True, type=str)
- parser.add_argument("-d", "--device", type=str, default="cpu")
- parser.add_argument("-L", "--lang", type=str, required=True)
-
- args = parser.parse_args()
-
- load_all_models(args)
-
- uvicorn.run(
- "api:app", host="0.0.0.0", port=6006, log_level="debug"
- )
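-
-
-# Illustrative request against the endpoint above (host/port taken from the
-# uvicorn.run call; the text is only an example):
-#   curl -X POST http://localhost:6006/TTS/ \
-#        -H "Content-Type: application/json" \
-#        -d '{"text": "example sentence", "lang": "hi"}' \
-#        --output out.wav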
diff --git a/spaces/Hazem/roop/roop/capturer.py b/spaces/Hazem/roop/roop/capturer.py
deleted file mode 100644
index fd49d468dd4cd45832ab9612205968207a6f45cf..0000000000000000000000000000000000000000
--- a/spaces/Hazem/roop/roop/capturer.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from typing import Any
-import cv2
-
-
-def get_video_frame(video_path: str, frame_number: int = 0) -> Any:
- capture = cv2.VideoCapture(video_path)
- frame_total = capture.get(cv2.CAP_PROP_FRAME_COUNT)
-    capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, max(frame_number - 1, 0)))
- has_frame, frame = capture.read()
- capture.release()
- if has_frame:
- return frame
- return None
-
-
-def get_video_frame_total(video_path: str) -> int:
- capture = cv2.VideoCapture(video_path)
- video_frame_total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
- capture.release()
- return video_frame_total
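-
-
-# Minimal usage sketch (the video path is a placeholder, not from this repo):
-#   total = get_video_frame_total("input.mp4")
-#   frame = get_video_frame("input.mp4", frame_number=total // 2)
-#   if frame is not None:
-#       cv2.imwrite("middle_frame.jpg", frame)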
diff --git a/spaces/Hila/RobustViT/imagenet_ablation_gt.py b/spaces/Hila/RobustViT/imagenet_ablation_gt.py
deleted file mode 100644
index bb68a21ec9f10081c19dac0c40623eb13c4b9278..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/imagenet_ablation_gt.py
+++ /dev/null
@@ -1,590 +0,0 @@
-import argparse
-import os
-import random
-import shutil
-import time
-import warnings
-
-import torch
-import torch.nn as nn
-import torch.nn.parallel
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-import torch.optim
-import torch.multiprocessing as mp
-import torch.utils.data
-import torch.utils.data.distributed
-import torchvision.transforms as transforms
-import torchvision.datasets as datasets
-import torchvision.models as models
-from segmentation_dataset import SegmentationDataset, VAL_PARTITION, TRAIN_PARTITION
-
-# Uncomment the expected model below
-
-# ViT
-from ViT.ViT import vit_base_patch16_224 as vit
-# from ViT.ViT import vit_large_patch16_224 as vit
-
-# ViT-AugReg
-# from ViT.ViT_new import vit_small_patch16_224 as vit
-# from ViT.ViT_new import vit_base_patch16_224 as vit
-# from ViT.ViT_new import vit_large_patch16_224 as vit
-
-# DeiT
-# from ViT.ViT import deit_base_patch16_224 as vit
-# from ViT.ViT import deit_small_patch16_224 as vit
-
-from ViT.explainer import generate_relevance, get_image_with_relevance
-import torchvision
-import cv2
-from torch.utils.tensorboard import SummaryWriter
-import json
-
-model_names = sorted(name for name in models.__dict__
- if name.islower() and not name.startswith("__")
- and callable(models.__dict__[name]))
-model_names.append("vit")
-
-parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
-parser.add_argument('--data', metavar='DATA',
- help='path to dataset')
-parser.add_argument('--seg_data', metavar='SEG_DATA',
- help='path to segmentation dataset')
-parser.add_argument('-a', '--arch', metavar='ARCH', default='resnet18',
- choices=model_names,
- help='model architecture: ' +
- ' | '.join(model_names) +
- ' (default: resnet18)')
-parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
- help='number of data loading workers (default: 4)')
-parser.add_argument('--epochs', default=150, type=int, metavar='N',
- help='number of total epochs to run')
-parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
- help='manual epoch number (useful on restarts)')
-parser.add_argument('-b', '--batch-size', default=8, type=int,
- metavar='N',
- help='mini-batch size (default: 256), this is the total '
- 'batch size of all GPUs on the current node when '
- 'using Data Parallel or Distributed Data Parallel')
-parser.add_argument('--lr', '--learning-rate', default=3e-6, type=float,
- metavar='LR', help='initial learning rate', dest='lr')
-parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
- help='momentum')
-parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,
- metavar='W', help='weight decay (default: 1e-4)',
- dest='weight_decay')
-parser.add_argument('-p', '--print-freq', default=10, type=int,
- metavar='N', help='print frequency (default: 10)')
-parser.add_argument('--resume', default='', type=str, metavar='PATH',
- help='path to latest checkpoint (default: none)')
-parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
- help='evaluate model on validation set')
-parser.add_argument('--pretrained', dest='pretrained', action='store_true',
- help='use pre-trained model')
-parser.add_argument('--world-size', default=-1, type=int,
- help='number of nodes for distributed training')
-parser.add_argument('--rank', default=-1, type=int,
- help='node rank for distributed training')
-parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str,
- help='url used to set up distributed training')
-parser.add_argument('--dist-backend', default='nccl', type=str,
- help='distributed backend')
-parser.add_argument('--seed', default=None, type=int,
- help='seed for initializing training. ')
-parser.add_argument('--gpu', default=None, type=int,
- help='GPU id to use.')
-parser.add_argument('--save_interval', default=20, type=int,
- help='interval to save segmentation results.')
-parser.add_argument('--num_samples', default=3, type=int,
- help='number of samples per class for training')
-parser.add_argument('--multiprocessing-distributed', action='store_true',
- help='Use multi-processing distributed training to launch '
- 'N processes per node, which has N GPUs. This is the '
- 'fastest way to use PyTorch for either single node or '
- 'multi node data parallel training')
-parser.add_argument('--lambda_seg', default=0.8, type=float,
- help='influence of segmentation loss.')
-parser.add_argument('--lambda_acc', default=0.2, type=float,
- help='influence of accuracy loss.')
-parser.add_argument('--experiment_folder', default=None, type=str,
- help='path to folder to use for experiment.')
-parser.add_argument('--dilation', default=0, type=float,
- help='Use dilation on the segmentation maps.')
-parser.add_argument('--lambda_background', default=2, type=float,
- help='coefficient of loss for segmentation background.')
-parser.add_argument('--lambda_foreground', default=0.3, type=float,
- help='coefficient of loss for segmentation foreground.')
-parser.add_argument('--num_classes', default=500, type=int,
-                    help='number of classes used for training.')
-parser.add_argument('--temperature', default=1, type=float,
- help='temperature for softmax (mostly for DeiT).')
-
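-# Example launch command (illustrative; the dataset paths are placeholders and
-# all other flags keep the defaults defined above):
-#   python imagenet_ablation_gt.py --data /path/to/imagenet \
-#       --seg_data /path/to/imagenet_segmentation --gpu 0 --batch-size 8
-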
-best_loss = float('inf')
-
-def main():
- args = parser.parse_args()
-
- if args.experiment_folder is None:
- args.experiment_folder = f'experiment/' \
- f'lr_{args.lr}_seg_{args.lambda_seg}_acc_{args.lambda_acc}' \
- f'_bckg_{args.lambda_background}_fgd_{args.lambda_foreground}'
- if args.temperature != 1:
- args.experiment_folder = args.experiment_folder + f'_tempera_{args.temperature}'
- if args.batch_size != 8:
- args.experiment_folder = args.experiment_folder + f'_bs_{args.batch_size}'
- if args.num_classes != 500:
- args.experiment_folder = args.experiment_folder + f'_num_classes_{args.num_classes}'
- if args.num_samples != 3:
- args.experiment_folder = args.experiment_folder + f'_num_samples_{args.num_samples}'
- if args.epochs != 150:
- args.experiment_folder = args.experiment_folder + f'_num_epochs_{args.epochs}'
-
- if os.path.exists(args.experiment_folder):
- raise Exception(f"Experiment path {args.experiment_folder} already exists!")
- os.mkdir(args.experiment_folder)
- os.mkdir(f'{args.experiment_folder}/train_samples')
- os.mkdir(f'{args.experiment_folder}/val_samples')
-
- with open(f'{args.experiment_folder}/commandline_args.txt', 'w') as f:
- json.dump(args.__dict__, f, indent=2)
-
- if args.seed is not None:
- random.seed(args.seed)
- torch.manual_seed(args.seed)
- cudnn.deterministic = True
- warnings.warn('You have chosen to seed training. '
- 'This will turn on the CUDNN deterministic setting, '
- 'which can slow down your training considerably! '
- 'You may see unexpected behavior when restarting '
- 'from checkpoints.')
-
- if args.gpu is not None:
- warnings.warn('You have chosen a specific GPU. This will completely '
- 'disable data parallelism.')
-
- if args.dist_url == "env://" and args.world_size == -1:
- args.world_size = int(os.environ["WORLD_SIZE"])
-
- args.distributed = args.world_size > 1 or args.multiprocessing_distributed
-
- ngpus_per_node = torch.cuda.device_count()
- if args.multiprocessing_distributed:
- # Since we have ngpus_per_node processes per node, the total world_size
- # needs to be adjusted accordingly
- args.world_size = ngpus_per_node * args.world_size
- # Use torch.multiprocessing.spawn to launch distributed processes: the
- # main_worker process function
- mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
- else:
- # Simply call main_worker function
- main_worker(args.gpu, ngpus_per_node, args)
-
-
-def main_worker(gpu, ngpus_per_node, args):
- global best_loss
- args.gpu = gpu
-
- if args.gpu is not None:
- print("Use GPU: {} for training".format(args.gpu))
-
- if args.distributed:
- if args.dist_url == "env://" and args.rank == -1:
- args.rank = int(os.environ["RANK"])
- if args.multiprocessing_distributed:
- # For multiprocessing distributed training, rank needs to be the
- # global rank among all the processes
- args.rank = args.rank * ngpus_per_node + gpu
- dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
- world_size=args.world_size, rank=args.rank)
- # create model
- if args.pretrained:
- print("=> using pre-trained model '{}'".format(args.arch))
- model = models.__dict__[args.arch](pretrained=True)
- else:
- print("=> creating model '{}'".format(args.arch))
- #model = models.__dict__[args.arch]()
- model = vit(pretrained=True).cuda()
- model.train()
- print("done")
-
- if not torch.cuda.is_available():
- print('using CPU, this will be slow')
- elif args.distributed:
- # For multiprocessing distributed, DistributedDataParallel constructor
- # should always set the single device scope, otherwise,
- # DistributedDataParallel will use all available devices.
- if args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model.cuda(args.gpu)
- # When using a single GPU per process and per
- # DistributedDataParallel, we need to divide the batch size
- # ourselves based on the total number of GPUs we have
- args.batch_size = int(args.batch_size / ngpus_per_node)
- args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- else:
- model.cuda()
- # DistributedDataParallel will divide and allocate batch_size to all
- # available GPUs if device_ids are not set
- model = torch.nn.parallel.DistributedDataParallel(model)
- elif args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model = model.cuda(args.gpu)
- else:
- # DataParallel will divide and allocate batch_size to all available GPUs
- if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
- model.features = torch.nn.DataParallel(model.features)
- model.cuda()
- else:
- print("start")
- model = torch.nn.DataParallel(model).cuda()
-
- # define loss function (criterion) and optimizer
- criterion = nn.CrossEntropyLoss().cuda(args.gpu)
- optimizer = torch.optim.AdamW(model.parameters(), args.lr, weight_decay=args.weight_decay)
-
- # optionally resume from a checkpoint
- if args.resume:
- if os.path.isfile(args.resume):
- print("=> loading checkpoint '{}'".format(args.resume))
- if args.gpu is None:
- checkpoint = torch.load(args.resume)
- else:
- # Map model to be loaded to specified single gpu.
- loc = 'cuda:{}'.format(args.gpu)
- checkpoint = torch.load(args.resume, map_location=loc)
- args.start_epoch = checkpoint['epoch']
- best_loss = checkpoint['best_loss']
- if args.gpu is not None:
- # best_loss may be from a checkpoint from a different GPU
- best_loss = best_loss.to(args.gpu)
- model.load_state_dict(checkpoint['state_dict'])
- optimizer.load_state_dict(checkpoint['optimizer'])
- print("=> loaded checkpoint '{}' (epoch {})"
- .format(args.resume, checkpoint['epoch']))
- else:
- print("=> no checkpoint found at '{}'".format(args.resume))
-
- cudnn.benchmark = True
-
- train_dataset = SegmentationDataset(args.seg_data, args.data, partition=TRAIN_PARTITION, train_classes=args.num_classes,
- num_samples=args.num_samples)
-
- if args.distributed:
- train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
- else:
- train_sampler = None
-
- train_loader = torch.utils.data.DataLoader(
- train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None),
- num_workers=args.workers, pin_memory=True, sampler=train_sampler)
-
- val_dataset = SegmentationDataset(args.seg_data, args.data, partition=VAL_PARTITION, train_classes=args.num_classes,
- num_samples=1)
-
- val_loader = torch.utils.data.DataLoader(
- val_dataset, batch_size=10, shuffle=False,
- num_workers=args.workers, pin_memory=True)
-
- if args.evaluate:
- validate(val_loader, model, criterion, 0, args)
- return
-
- for epoch in range(args.start_epoch, args.epochs):
- if args.distributed:
- train_sampler.set_epoch(epoch)
- adjust_learning_rate(optimizer, epoch, args)
-
- log_dir = os.path.join(args.experiment_folder, 'logs')
- logger = SummaryWriter(log_dir=log_dir)
- args.logger = logger
-
- # train for one epoch
- train(train_loader, model, criterion, optimizer, epoch, args)
-
- # evaluate on validation set
- loss1 = validate(val_loader, model, criterion, epoch, args)
-
- # remember the best (lowest) validation loss and save a checkpoint
- is_best = loss1 <= best_loss
- best_loss = min(loss1, best_loss)
-
- if not args.multiprocessing_distributed or (args.multiprocessing_distributed
- and args.rank % ngpus_per_node == 0):
- save_checkpoint({
- 'epoch': epoch + 1,
- 'arch': args.arch,
- 'state_dict': model.state_dict(),
- 'best_loss': best_loss,
- 'optimizer' : optimizer.state_dict(),
- }, is_best, folder=args.experiment_folder)
-
-
-def train(train_loader, model, criterion, optimizer, epoch, args):
- mse_criterion = torch.nn.MSELoss(reduction='mean')
-
- losses = AverageMeter('Loss', ':.4e')
- top1 = AverageMeter('Acc@1', ':6.2f')
- top5 = AverageMeter('Acc@5', ':6.2f')
- orig_top1 = AverageMeter('Acc@1_orig', ':6.2f')
- orig_top5 = AverageMeter('Acc@5_orig', ':6.2f')
- progress = ProgressMeter(
- len(train_loader),
- [losses, top1, top5, orig_top1, orig_top5],
- prefix="Epoch: [{}]".format(epoch))
-
- orig_model = vit(pretrained=True).cuda()
- orig_model.eval()
-
- # switch to train mode
- model.train()
-
- for i, (seg_map, image_ten, class_name) in enumerate(train_loader):
- if torch.cuda.is_available():
- image_ten = image_ten.cuda(args.gpu, non_blocking=True)
- seg_map = seg_map.cuda(args.gpu, non_blocking=True)
- class_name = class_name.cuda(args.gpu, non_blocking=True)
-
- # segmentation loss
- relevance = generate_relevance(model, image_ten, index=class_name)
-
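- # Invert the binary segmentation mask (1 -> 0, 0 -> 1) via a temporary -1 marker,
- # so that background pixels can be selected separately from the foreground.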
- reverse_seg_map = seg_map.clone()
- reverse_seg_map[reverse_seg_map == 1] = -1
- reverse_seg_map[reverse_seg_map == 0] = 1
- reverse_seg_map[reverse_seg_map == -1] = 0
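- # Relevance over background pixels is pushed towards zero, while relevance over
- # foreground pixels is pushed towards the ground-truth mask; the two terms are
- # weighted by --lambda_background and --lambda_foreground.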
- background_loss = mse_criterion(relevance * reverse_seg_map, torch.zeros_like(relevance))
- foreground_loss = mse_criterion(relevance * seg_map, seg_map)
- segmentation_loss = args.lambda_background * background_loss
- segmentation_loss += args.lambda_foreground * foreground_loss
-
- # classification loss
- output = model(image_ten)
- with torch.no_grad():
- output_orig = orig_model(image_ten)
-
- _, pred = output.topk(1, 1, True, True)
- pred = pred.flatten()
-
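- # Optionally rescale the logits by a temperature before the cross-entropy loss
- # (mainly relevant for DeiT-style models, see --temperature).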
- if args.temperature != 1:
- output = output / args.temperature
- classification_loss = criterion(output, class_name.flatten())
-
- loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss
-
- # debugging output
- if i % args.save_interval == 0:
- orig_relevance = generate_relevance(orig_model, image_ten, index=class_name)
- for j in range(image_ten.shape[0]):
- image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j]))
- new_vis = get_image_with_relevance(image_ten[j], relevance[j])
- old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j])
- gt = get_image_with_relevance(image_ten[j], seg_map[j])
- h_img = cv2.hconcat([image, gt, old_vis, new_vis])
- cv2.imwrite(f'{args.experiment_folder}/train_samples/res_{i}_{j}.jpg', h_img)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, class_name, topk=(1, 5))
- losses.update(loss.item(), image_ten.size(0))
- top1.update(acc1[0], image_ten.size(0))
- top5.update(acc5[0], image_ten.size(0))
-
- # metrics for original vit
- acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5))
- orig_top1.update(acc1_orig[0], image_ten.size(0))
- orig_top5.update(acc5_orig[0], image_ten.size(0))
-
- # compute gradient and do SGD step
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- if i % args.print_freq == 0:
- progress.display(i)
- args.logger.add_scalar('{}/{}'.format('train', 'segmentation_loss'), segmentation_loss,
- epoch*len(train_loader)+i)
- args.logger.add_scalar('{}/{}'.format('train', 'classification_loss'), classification_loss,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'orig_top1'), acc1_orig,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'top1'), acc1,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'orig_top5'), acc5_orig,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'top5'), acc5,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'tot_loss'), loss,
- epoch * len(train_loader) + i)
-
-
-def validate(val_loader, model, criterion, epoch, args):
- mse_criterion = torch.nn.MSELoss(reduction='mean')
-
- losses = AverageMeter('Loss', ':.4e')
- top1 = AverageMeter('Acc@1', ':6.2f')
- top5 = AverageMeter('Acc@5', ':6.2f')
- orig_top1 = AverageMeter('Acc@1_orig', ':6.2f')
- orig_top5 = AverageMeter('Acc@5_orig', ':6.2f')
- progress = ProgressMeter(
- len(val_loader),
- [losses, top1, top5, orig_top1, orig_top5],
- prefix="Epoch: [{}]".format(epoch))
-
- # switch to evaluate mode
- model.eval()
-
- orig_model = vit(pretrained=True).cuda()
- orig_model.eval()
-
- with torch.no_grad():
- for i, (seg_map, image_ten, class_name) in enumerate(val_loader):
- if args.gpu is not None:
- image_ten = image_ten.cuda(args.gpu, non_blocking=True)
- if torch.cuda.is_available():
- seg_map = seg_map.cuda(args.gpu, non_blocking=True)
- class_name = class_name.cuda(args.gpu, non_blocking=True)
-
- # segmentation loss
- with torch.enable_grad():
- relevance = generate_relevance(model, image_ten, index=class_name)
-
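- # Same mask inversion as in train(): swap foreground and background labels.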
- reverse_seg_map = seg_map.clone()
- reverse_seg_map[reverse_seg_map == 1] = -1
- reverse_seg_map[reverse_seg_map == 0] = 1
- reverse_seg_map[reverse_seg_map == -1] = 0
- background_loss = mse_criterion(relevance * reverse_seg_map, torch.zeros_like(relevance))
- foreground_loss = mse_criterion(relevance * seg_map, seg_map)
- segmentation_loss = args.lambda_background * background_loss
- segmentation_loss += args.lambda_foreground * foreground_loss
-
- # classification loss
- with torch.no_grad():
- output = model(image_ten)
- output_orig = orig_model(image_ten)
-
- _, pred = output.topk(1, 1, True, True)
- pred = pred.flatten()
- if args.temperature != 1:
- output = output / args.temperature
- classification_loss = criterion(output, class_name.flatten())
-
- loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss
-
- # save results
- if i % args.save_interval == 0:
- with torch.enable_grad():
- orig_relevance = generate_relevance(orig_model, image_ten, index=class_name)
- for j in range(image_ten.shape[0]):
- image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j]))
- new_vis = get_image_with_relevance(image_ten[j], relevance[j])
- old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j])
- gt = get_image_with_relevance(image_ten[j], seg_map[j])
- h_img = cv2.hconcat([image, gt, old_vis, new_vis])
- cv2.imwrite(f'{args.experiment_folder}/val_samples/res_{i}_{j}.jpg', h_img)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, class_name, topk=(1, 5))
- losses.update(loss.item(), image_ten.size(0))
- top1.update(acc1[0], image_ten.size(0))
- top5.update(acc5[0], image_ten.size(0))
-
- # metrics for original vit
- acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5))
- orig_top1.update(acc1_orig[0], image_ten.size(0))
- orig_top5.update(acc5_orig[0], image_ten.size(0))
-
- if i % args.print_freq == 0:
- progress.display(i)
- args.logger.add_scalar('{}/{}'.format('val', 'segmentation_loss'), segmentation_loss,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'classification_loss'), classification_loss,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'orig_top1'), acc1_orig,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'top1'), acc1,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'orig_top5'), acc5_orig,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'top5'), acc5,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'tot_loss'), loss,
- epoch * len(val_loader) + i)
-
- # TODO: this should also be done with the ProgressMeter
- print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
- .format(top1=top1, top5=top5))
-
- return losses.avg
-
-
-def save_checkpoint(state, is_best, folder, filename='checkpoint.pth.tar'):
- torch.save(state, f'{folder}/{filename}')
- if is_best:
- shutil.copyfile(f'{folder}/{filename}', f'{folder}/model_best.pth.tar')
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
- def __init__(self, name, fmt=':f'):
- self.name = name
- self.fmt = fmt
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
- return fmtstr.format(**self.__dict__)
-
-
-class ProgressMeter(object):
- def __init__(self, num_batches, meters, prefix=""):
- self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
- self.meters = meters
- self.prefix = prefix
-
- def display(self, batch):
- entries = [self.prefix + self.batch_fmtstr.format(batch)]
- entries += [str(meter) for meter in self.meters]
- print('\t'.join(entries))
-
- def _get_batch_fmtstr(self, num_batches):
- num_digits = len(str(num_batches // 1))
- fmt = '{:' + str(num_digits) + 'd}'
- return '[' + fmt + '/' + fmt.format(num_batches) + ']'
-
-def adjust_learning_rate(optimizer, epoch, args):
- """Decays the learning rate from its initial value by a factor of 0.85 every 2 epochs"""
- lr = args.lr * (0.85 ** (epoch // 2))
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
-
-
-def accuracy(output, target, topk=(1,)):
- """Computes the accuracy over the k top predictions for the specified values of k"""
- with torch.no_grad():
- maxk = max(topk)
- batch_size = target.size(0)
-
- _, pred = output.topk(maxk, 1, True, True)
- pred = pred.t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
-
- res = []
- for k in topk:
- correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
- res.append(correct_k.mul_(100.0 / batch_size))
- return res
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/Hise/rvc-hololive-models/config.py b/spaces/Hise/rvc-hololive-models/config.py
deleted file mode 100644
index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000
--- a/spaces/Hise/rvc-hololive-models/config.py
+++ /dev/null
@@ -1,88 +0,0 @@
-######################## Hardware parameters ########################
-
-# Set to cuda:x, cpu or mps, where x is the GPU index; only NVIDIA GPUs / Apple Silicon are supported for acceleration
-device = "cuda:0"
-
-# Safe to leave True on 9/10/20/30/40-series GPUs; it does not affect quality, and cards from the 20 series onwards get a speedup
-is_half = True
-
-# 0 (default) uses all CPU threads; set a number to limit CPU usage
-n_cpu = 0
-
-######################## Hardware parameters ########################
-
-
-################## Parameter-handling logic below, do not modify ##################
-
-######################## Command-line arguments ########################
-import argparse
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--port", type=int, default=7865, help="Listen port")
-parser.add_argument("--pycmd", type=str, default="python", help="Python command")
-parser.add_argument("--colab", action="store_true", help="Launch in colab")
-parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
-)
-parser.add_argument(
- "--noautoopen", action="store_true", help="Do not open in browser automatically"
-)
-cmd_opts, unknown = parser.parse_known_args()
-
-python_cmd = cmd_opts.pycmd
-listen_port = cmd_opts.port
-iscolab = cmd_opts.colab
-noparallel = cmd_opts.noparallel
-noautoopen = cmd_opts.noautoopen
-######################## Command-line arguments ########################
-
-import sys
-import torch
-
-
-# has_mps is only available in nightly pytorch (for now) and on macOS 12.3+.
-# check with `getattr` and a try/except for compatibility with older builds
-def has_mps() -> bool:
- if sys.platform != "darwin":
- return False
- else:
- if not getattr(torch, "has_mps", False):
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
-
-if not torch.cuda.is_available():
- if has_mps():
- print("No supported NVIDIA GPU found; using MPS for inference")
- device = "mps"
- else:
- print("No supported NVIDIA GPU found; using CPU for inference")
- device = "cpu"
- is_half = False
-
-if device not in ["cpu", "mps"]:
- gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
- if "16" in gpu_name or "MX" in gpu_name:
- print("16-series / MX-series GPUs are forced to use single precision")
- is_half = False
-
-from multiprocessing import cpu_count
-
-if n_cpu == 0:
- n_cpu = cpu_count()
-if is_half:
- # configuration for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
-else:
- # configuration for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
diff --git a/spaces/HuguesdeF/moulinette/code/__init__.py b/spaces/HuguesdeF/moulinette/code/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/IHaBiS/wd-v1-4-tags/Utils/dbimutils.py b/spaces/IHaBiS/wd-v1-4-tags/Utils/dbimutils.py
deleted file mode 100644
index 93ea643fab15794e3fb142c32e72ff0e015f2252..0000000000000000000000000000000000000000
--- a/spaces/IHaBiS/wd-v1-4-tags/Utils/dbimutils.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# DanBooru IMage Utility functions
-
-import cv2
-import numpy as np
-from PIL import Image
-
-
-def smart_imread(img, flag=cv2.IMREAD_UNCHANGED):
- if img.endswith(".gif"):
- img = Image.open(img)
- img = img.convert("RGB")
- img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
- else:
- img = cv2.imread(img, flag)
- return img
-
-
-def smart_24bit(img):
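- # Normalize the input to 3-channel 8-bit BGR: 16-bit images are rescaled to 8 bits
- # (65535 / 255 == 257), grayscale is expanded, and any alpha channel is flattened onto white.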
- if img.dtype is np.dtype(np.uint16):
- img = (img / 257).astype(np.uint8)
-
- if len(img.shape) == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- elif img.shape[2] == 4:
- trans_mask = img[:, :, 3] == 0
- img[trans_mask] = [255, 255, 255, 255]
- img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
- return img
-
-
-def make_square(img, target_size):
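- # Pad the image with white borders into a square whose side is
- # max(height, width, target_size), keeping the original content centered.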
- old_size = img.shape[:2]
- desired_size = max(old_size)
- desired_size = max(desired_size, target_size)
-
- delta_w = desired_size - old_size[1]
- delta_h = desired_size - old_size[0]
- top, bottom = delta_h // 2, delta_h - (delta_h // 2)
- left, right = delta_w // 2, delta_w - (delta_w // 2)
-
- color = [255, 255, 255]
- new_im = cv2.copyMakeBorder(
- img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color
- )
- return new_im
-
-
-def smart_resize(img, size):
- # Assumes the image has already gone through make_square
- if img.shape[0] > size:
- img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
- elif img.shape[0] < size:
- img = cv2.resize(img, (size, size), interpolation=cv2.INTER_CUBIC)
- return img
\ No newline at end of file
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/__init__.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Illumotion/Koboldcpp/convert-starcoder-hf-to-gguf.py b/spaces/Illumotion/Koboldcpp/convert-starcoder-hf-to-gguf.py
deleted file mode 100644
index 48e88a777fea1db20a1f42633203d326578de16e..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/convert-starcoder-hf-to-gguf.py
+++ /dev/null
@@ -1,242 +0,0 @@
-#!/usr/bin/env python3
-# HF starcoder --> gguf conversion
-
-from __future__ import annotations
-
-import argparse
-import json
-import os
-import struct
-import sys
-from pathlib import Path
-from typing import Any
-
-import numpy as np
-import torch
-from transformers import AutoTokenizer # type: ignore[import]
-
-if 'NO_LOCAL_GGUF' not in os.environ:
- sys.path.insert(1, str(Path(__file__).parent / 'gguf-py' / 'gguf'))
-import gguf
-
-
-def bytes_to_unicode():
- # ref: https://github.com/openai/gpt-2/blob/master/src/encoder.py
- """
- Returns list of utf-8 byte and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8+n)
- n += 1
- return dict(zip(bs, (chr(n) for n in cs)))
-
-
-def count_model_parts(dir_model: Path) -> int:
- num_parts = 0
- for filename in os.listdir(dir_model):
- if filename.startswith("pytorch_model-"):
- num_parts += 1
-
- if num_parts > 0:
- print("gguf: found " + str(num_parts) + " model parts")
- return num_parts
-
-
-def parse_args() -> argparse.Namespace:
- parser = argparse.ArgumentParser(description="Convert a StarCoder model to a GGML compatible file")
- parser.add_argument("--vocab-only", action="store_true", help="extract only the vocab")
- parser.add_argument("--outfile", type=Path, help="path to write to; default: based on input")
- parser.add_argument("model", type=Path, help="directory containing model file, or model file itself (*.bin)")
- parser.add_argument("ftype", type=int, help="output format - use 0 for float32, 1 for float16", choices=[0, 1], default = 1)
- return parser.parse_args()
-
-args = parse_args()
-
-dir_model = args.model
-ftype = args.ftype
-if not dir_model.is_dir():
- print(f'Error: {args.model} is not a directory', file = sys.stderr)
- sys.exit(1)
-
-# possible tensor data types
-# ftype == 0 -> float32
-# ftype == 1 -> float16
-
-# map from ftype to string
-ftype_str = ["f32", "f16"]
-
-if args.outfile is not None:
- fname_out = args.outfile
-else:
- # output in the same directory as the model by default
- fname_out = dir_model / f'ggml-model-{ftype_str[ftype]}.gguf'
-
-print("gguf: loading model "+dir_model.name)
-
-with open(dir_model / "config.json", "r", encoding="utf-8") as f:
- hparams = json.load(f)
-
-if hparams["architectures"][0] != "GPTBigCodeForCausalLM":
- print("Model architecture not supported: " + hparams["architectures"][0])
-
- sys.exit(1)
-
-# get number of model parts
-num_parts = count_model_parts(dir_model)
-
-ARCH=gguf.MODEL_ARCH.STARCODER
-gguf_writer = gguf.GGUFWriter(fname_out, gguf.MODEL_ARCH_NAMES[ARCH])
-
-print("gguf: get model metadata")
-
-block_count = hparams["n_layer"]
-
-gguf_writer.add_name("StarCoder")
-gguf_writer.add_context_length(hparams["n_positions"])
-gguf_writer.add_embedding_length(hparams["n_embd"])
-gguf_writer.add_feed_forward_length(4 * hparams["n_embd"])
-gguf_writer.add_block_count(block_count)
-gguf_writer.add_head_count(hparams["n_head"])
-gguf_writer.add_head_count_kv(1)
-gguf_writer.add_layer_norm_eps(hparams["layer_norm_epsilon"])
-gguf_writer.add_file_type(ftype)
-
-# TOKENIZATION
-
-print("gguf: get tokenizer metadata")
-
-tokens: list[bytearray] = []
-
-tokenizer_json_file = dir_model / 'tokenizer.json'
-if not tokenizer_json_file.is_file():
- print(f'Error: Missing {tokenizer_json_file}', file = sys.stderr)
- sys.exit(1)
-
-# gpt2 tokenizer
-gguf_writer.add_tokenizer_model("gpt2")
-
-with open(tokenizer_json_file, "r", encoding="utf-8") as f:
- tokenizer_json = json.load(f)
-
-print("gguf: get gpt2 tokenizer vocab")
-
-# The number of tokens in tokenizer.json can differ from the expected vocab size.
-# This causes downstream issues with mismatched tensor sizes when running the inference
-vocab_size = hparams["vocab_size"] if "vocab_size" in hparams else len(tokenizer_json["model"]["vocab"])
-
-# ref: https://github.com/cmp-nct/ggllm.cpp/blob/master/falcon_convert.py
-tokenizer = AutoTokenizer.from_pretrained(dir_model)
-
-reverse_vocab = {id: encoded_tok for encoded_tok, id in tokenizer.vocab.items()}
-byte_encoder = bytes_to_unicode()
-byte_decoder = {v: k for k, v in byte_encoder.items()}
-
-for i in range(vocab_size):
- if i in reverse_vocab:
- try:
- text = bytearray([byte_decoder[c] for c in reverse_vocab[i]])
- except KeyError:
- text = bytearray()
- for c in reverse_vocab[i]:
- if ord(c) < 256: # single byte character
- text.append(byte_decoder[ord(c)])
- else: # multibyte special token character
- text.extend(c.encode('utf-8'))
- else:
- print(f"Key {i} not in tokenizer vocabulary. Padding with an arbitrary token.")
- pad_token = f"[PAD{i}]".encode("utf8")
- text = bytearray(pad_token)
-
- tokens.append(text)
-
-gguf_writer.add_token_list(tokens)
-
-special_vocab = gguf.SpecialVocab(dir_model, load_merges = True)
-special_vocab.add_to_gguf(gguf_writer)
-
-# TENSORS
-
-tensor_map = gguf.get_tensor_name_map(ARCH,block_count)
-
-# params for qkv transform
-n_head = hparams["n_head"]
-n_head_kv = hparams["n_head_kv"] if "n_head_kv" in hparams else 1
-
-head_dim = hparams["n_embd"] // n_head
-
-# tensor info
-print("gguf: get tensor metadata")
-
-if num_parts == 0:
- part_names = iter(("pytorch_model.bin",))
-else:
- part_names = (
- f"pytorch_model-{n:05}-of-{num_parts:05}.bin" for n in range(1, num_parts + 1)
- )
-
-for part_name in part_names:
- if args.vocab_only:
- break
- print("gguf: loading model part '" + part_name + "'")
- model_part = torch.load(dir_model / part_name, map_location="cpu")
-
- for name in model_part.keys():
- data = model_part[name]
-
- old_dtype = data.dtype
-
- # convert any unsupported data types to float32
- if data.dtype != torch.float16 and data.dtype != torch.float32:
- data = data.to(torch.float32)
-
- data = data.squeeze().numpy()
-
- # map tensor names
- new_name = tensor_map.get_name(name, try_suffixes = (".weight", ".bias"))
- if new_name is None:
- print("Can not map tensor '" + name + "'")
- sys.exit()
-
- n_dims = len(data.shape)
- data_dtype = data.dtype
-
- # if f32 desired, convert any float16 to float32
- if ftype == 0 and data_dtype == np.float16:
- data = data.astype(np.float32)
-
- # TODO: Why can't we use these float16 values as-is? There should be no reason to store float16 as float32
- if ftype == 1 and data_dtype == np.float16 and n_dims == 1:
- data = data.astype(np.float32)
-
- # if f16 desired, convert any float32 2-dim weight tensors to float16
- if ftype == 1 and data_dtype == np.float32 and name.endswith(".weight") and n_dims == 2:
- data = data.astype(np.float16)
-
- print(name, "=>", new_name + ", shape = " + str(data.shape) + ", " + str(old_dtype) + " --> " + str(data.dtype))
-
- gguf_writer.add_tensor(new_name, data)
-
-
-print("gguf: write header")
-gguf_writer.write_header_to_file()
-print("gguf: write metadata")
-gguf_writer.write_kv_data_to_file()
-if not args.vocab_only:
- print("gguf: write tensors")
- gguf_writer.write_tensors_to_file()
-
-gguf_writer.close()
-
-print(f"gguf: model successfully exported to '{fname_out}'")
-print("")
diff --git a/spaces/Imran1/Yelp-reviews/README.md b/spaces/Imran1/Yelp-reviews/README.md
deleted file mode 100644
index fa580e3ff10451419d2e64f5acb55d9e3e2b76d0..0000000000000000000000000000000000000000
--- a/spaces/Imran1/Yelp-reviews/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Yelp Reviews
-emoji: ⚡
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jack1804/stabilityai-stable-diffusion-xl-refiner-1.0/README.md b/spaces/Jack1804/stabilityai-stable-diffusion-xl-refiner-1.0/README.md
deleted file mode 100644
index a481d94c1d52943d84a3bdc6b55f36750a218048..0000000000000000000000000000000000000000
--- a/spaces/Jack1804/stabilityai-stable-diffusion-xl-refiner-1.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stabilityai Stable Diffusion Xl Refiner 1.0
-emoji: 📊
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/ExtraInfo/extraInfo.py b/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/ExtraInfo/extraInfo.py
deleted file mode 100644
index e799f9e598f9f2110c063ef15ab765c356fd509e..0000000000000000000000000000000000000000
--- a/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/ExtraInfo/extraInfo.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from shiny import module, ui
-
-@module.ui
-def extra_info_Python_programming_ui():
- return ui.div(
- ui.tags.h3("APRENDE A PROGRAMAR LOS ALGORITMOS EN PYTHON!", style="padding-top:20px;"),
- ui.div(
- ui.markdown("Si quieres **aprender a programar estos algoritmos en Python** para ser capaz de modelar tus propias soluciones para tus datos, aquí tienes un ejemplo del código utilizado en esta herramienta:")
- , style="padding-right:50px; padding-top:20px; text-align: justify; text-justify: inter-word;"
- ),
- ui.a("Notebook con el código fuente de los algoritmos", href="https://colab.research.google.com/drive/1i8b-MYKIZjVsB92VymxQKM-TuTdE5TM8?usp=sharing", target="_blank"),
- style="padding-bottom:40px;"
- )
-
-@module.ui
-def extra_info_project_info_ui():
- return ui.div(
- ui.tags.h3("INFORMACIÓN SOBRE EL PROYECTO"),
- ui.div(
- ui.markdown("""Esta herramienta es el **resultado del Proyecto de Fin de Grado** titulado "Herramienta interactiva para la enseñanza de técnicas de aprendizaje supervisado en ciencias de la salud", realizado por el alumno **Jorge Ruiz Vázquez** bajo la tutela de Violeta Monasterio Bazán para la consecución del título de Ingeniería Informática en la Universidad San Jorge.
-
-Es una **herramienta de uso libre**, dirigida hacia un público con conocimientos en ciencias de la salud, ya sean trabajadores o estudiantes. Su objetivo final es **ayudar a comprender el funcionamiento de las técnicas y algoritmos de aprendizaje supervisado** utilizados en ciencias de la salud, ofreciendo transparencia, mejorando la educación y por tanto el entendimiento de la tecnología.""")
- , style="padding-right:50px; padding-top:30px; text-align: justify; text-justify: inter-word;"
- ),
- style="padding-bottom:100px;"
- )
\ No newline at end of file
diff --git a/spaces/JosephusCheung/ACertainsStrategyTalk/4.html b/spaces/JosephusCheung/ACertainsStrategyTalk/4.html
deleted file mode 100644
index 067a7214d0689466bebaacec6bc761493f2ed3a5..0000000000000000000000000000000000000000
--- a/spaces/JosephusCheung/ACertainsStrategyTalk/4.html
+++ /dev/null
@@ -1,95 +0,0 @@
-Comparable Analysis
-Certains Certains Certains
-NovelAI AnyV3 Model Thing Certainty
-masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden
-Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low
-quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
-Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 114514, Size: 512x768, ENSD: 31337
diff --git a/spaces/KenjieDec/GPEN/face_model/op/upfirdn2d.cpp b/spaces/KenjieDec/GPEN/face_model/op/upfirdn2d.cpp
deleted file mode 100644
index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/face_model/op/upfirdn2d.cpp
+++ /dev/null
@@ -1,23 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1);
-
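-// Input validation helpers: every tensor passed to the CUDA op must live on the GPU
-// and be contiguous in memory, otherwise a descriptive error is raised.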
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
- CHECK_CUDA(input);
- CHECK_CUDA(kernel);
-
- return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
-}
\ No newline at end of file
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/cbhg.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/cbhg.py
deleted file mode 100644
index 10eb6bb85dd2a1711fe7c92ec77bbaaf786f7a53..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/cbhg.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import torch
-import torch.nn as nn
-from .common.batch_norm_conv import BatchNormConv
-from .common.highway_network import HighwayNetwork
-
-class CBHG(nn.Module):
- def __init__(self, K, in_channels, channels, proj_channels, num_highways):
- super().__init__()
-
- # List of all rnns to call `flatten_parameters()` on
- self._to_flatten = []
-
- self.bank_kernels = [i for i in range(1, K + 1)]
- self.conv1d_bank = nn.ModuleList()
- for k in self.bank_kernels:
- conv = BatchNormConv(in_channels, channels, k)
- self.conv1d_bank.append(conv)
-
- self.maxpool = nn.MaxPool1d(kernel_size=2, stride=1, padding=1)
-
- self.conv_project1 = BatchNormConv(len(self.bank_kernels) * channels, proj_channels[0], 3)
- self.conv_project2 = BatchNormConv(proj_channels[0], proj_channels[1], 3, relu=False)
-
- # Fix the highway input if necessary
- if proj_channels[-1] != channels:
- self.highway_mismatch = True
- self.pre_highway = nn.Linear(proj_channels[-1], channels, bias=False)
- else:
- self.highway_mismatch = False
-
- self.highways = nn.ModuleList()
- for i in range(num_highways):
- hn = HighwayNetwork(channels)
- self.highways.append(hn)
-
- self.rnn = nn.GRU(channels, channels // 2, batch_first=True, bidirectional=True)
- self._to_flatten.append(self.rnn)
-
- # Avoid fragmentation of RNN parameters and associated warning
- self._flatten_parameters()
-
- def forward(self, x):
- # Although we `_flatten_parameters()` on init, when using DataParallel
- # the model gets replicated, making it no longer guaranteed that the
- # weights are contiguous in GPU memory. Hence, we must call it again
- self.rnn.flatten_parameters()
-
- # Save these for later
- residual = x
- seq_len = x.size(-1)
- conv_bank = []
-
- # Convolution Bank
- for conv in self.conv1d_bank:
- c = conv(x) # Convolution
- conv_bank.append(c[:, :, :seq_len])
-
- # Stack along the channel axis
- conv_bank = torch.cat(conv_bank, dim=1)
-
- # dump the last padding to fit residual
- x = self.maxpool(conv_bank)[:, :, :seq_len]
-
- # Conv1d projections
- x = self.conv_project1(x)
- x = self.conv_project2(x)
-
- # Residual Connect
- x = x + residual
-
- # Through the highways
- x = x.transpose(1, 2)
- if self.highway_mismatch is True:
- x = self.pre_highway(x)
- for h in self.highways: x = h(x)
-
- # And then the RNN
- x, _ = self.rnn(x)
- return x
-
- def _flatten_parameters(self):
- """Calls `flatten_parameters` on all the rnns used by the CBHG. Used
- to improve efficiency and avoid PyTorch yelling at us."""
- [m.flatten_parameters() for m in self._to_flatten]
-
diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/synthesize.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/synthesize.py
deleted file mode 100644
index ff7e0023bb04809c9a44f702a5b8e9ed47704d2b..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/synthesize.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import platform
-from functools import partial
-from pathlib import Path
-
-import numpy as np
-import torch
-from torch.utils.data import DataLoader
-from tqdm import tqdm
-
-from synthesizer.hparams import hparams_debug_string
-from synthesizer.models.tacotron import Tacotron
-from synthesizer.synthesizer_dataset import SynthesizerDataset, collate_synthesizer
-from synthesizer.utils import data_parallel_workaround
-from synthesizer.utils.symbols import symbols
-
-
-def run_synthesis(in_dir: Path, out_dir: Path, syn_model_fpath: Path, hparams):
- # This generates ground truth-aligned mels for vocoder training
- synth_dir = out_dir / "mels_gta"
- synth_dir.mkdir(exist_ok=True, parents=True)
- print(hparams_debug_string())
-
- # Check for GPU
- if torch.cuda.is_available():
- device = torch.device("cuda")
- if hparams.synthesis_batch_size % torch.cuda.device_count() != 0:
- raise ValueError("`hparams.synthesis_batch_size` must be evenly divisible by n_gpus!")
- else:
- device = torch.device("cpu")
- print("Synthesizer using device:", device)
-
- # Instantiate Tacotron model
- model = Tacotron(embed_dims=hparams.tts_embed_dims,
- num_chars=len(symbols),
- encoder_dims=hparams.tts_encoder_dims,
- decoder_dims=hparams.tts_decoder_dims,
- n_mels=hparams.num_mels,
- fft_bins=hparams.num_mels,
- postnet_dims=hparams.tts_postnet_dims,
- encoder_K=hparams.tts_encoder_K,
- lstm_dims=hparams.tts_lstm_dims,
- postnet_K=hparams.tts_postnet_K,
- num_highways=hparams.tts_num_highways,
- dropout=0., # Use zero dropout for gta mels
- stop_threshold=hparams.tts_stop_threshold,
- speaker_embedding_size=hparams.speaker_embedding_size).to(device)
-
- # Load the weights
- print("\nLoading weights at %s" % syn_model_fpath)
- model.load(syn_model_fpath)
- print("Tacotron weights loaded from step %d" % model.step)
-
- # Synthesize using same reduction factor as the model is currently trained
- r = np.int32(model.r)
-
- # Set model to eval mode (disable gradient and zoneout)
- model.eval()
-
- # Initialize the dataset
- metadata_fpath = in_dir.joinpath("train.txt")
- mel_dir = in_dir.joinpath("mels")
- embed_dir = in_dir.joinpath("embeds")
-
- dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams)
- collate_fn = partial(collate_synthesizer, r=r, hparams=hparams)
- data_loader = DataLoader(dataset, hparams.synthesis_batch_size, collate_fn=collate_fn, num_workers=2)
-
- # Generate GTA mels
- meta_out_fpath = out_dir / "synthesized.txt"
- with meta_out_fpath.open("w") as file:
- for i, (texts, mels, embeds, idx) in tqdm(enumerate(data_loader), total=len(data_loader)):
- texts, mels, embeds = texts.to(device), mels.to(device), embeds.to(device)
-
- # Parallelize model onto GPUS using workaround due to python bug
- if device.type == "cuda" and torch.cuda.device_count() > 1:
- _, mels_out, _ = data_parallel_workaround(model, texts, mels, embeds)
- else:
- _, mels_out, _, _ = model(texts, mels, embeds)
-
- for j, k in enumerate(idx):
- # Note: outputs mel-spectrogram files and target ones have same names, just different folders
- mel_filename = Path(synth_dir).joinpath(dataset.metadata[k][1])
- mel_out = mels_out[j].detach().cpu().numpy().T
-
- # Use the length of the ground truth mel to remove padding from the generated mels
- mel_out = mel_out[:int(dataset.metadata[k][4])]
-
- # Write the spectrogram to disk
- np.save(mel_filename, mel_out, allow_pickle=False)
-
- # Write metadata into the synthesized file
- file.write("|".join(dataset.metadata[k]))
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/samples/README.md b/spaces/Kevin676/Real-Time-Voice-Cloning/samples/README.md
deleted file mode 100644
index 1a392d86e42f72e83954619f563f4881da327236..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/samples/README.md
+++ /dev/null
@@ -1,22 +0,0 @@
-The audio files in this folder are provided for toolbox testing and
-benchmarking purposes. These are the same reference utterances
-used by the SV2TTS authors to generate the audio samples located at:
-https://google.github.io/tacotron/publications/speaker_adaptation/index.html
-
-The `p240_00000.mp3` and `p260_00000.mp3` files are compressed
-versions of audios from the VCTK corpus available at:
-https://datashare.is.ed.ac.uk/handle/10283/3443
-VCTK.txt contains the copyright notices and licensing information.
-
-The `1320_00000.mp3`, `3575_00000.mp3`, `6829_00000.mp3`
-and `8230_00000.mp3` files are compressed versions of audios
-from the LibriSpeech dataset available at: https://openslr.org/12
-For these files, the following notice applies:
-```
-LibriSpeech (c) 2014 by Vassil Panayotov
-
-LibriSpeech ASR corpus is licensed under a
-Creative Commons Attribution 4.0 International License.
-
-See <https://creativecommons.org/licenses/by/4.0/>.
-```
diff --git a/spaces/Kevin676/Voice-Cloning/README.md b/spaces/Kevin676/Voice-Cloning/README.md
deleted file mode 100644
index 614a9fa7f53e6372e9dffdb061dccf0e674650ae..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Voice-Cloning/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Voice Cloning
-emoji: ⚡
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.11
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: BilalSardar/Voice-Cloning
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KyanChen/RSPrompter/mmdet/evaluation/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/evaluation/__init__.py
deleted file mode 100644
index f70dc226d30f7b8e4ee5a44ca163ad1ae04eabf5..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/evaluation/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .functional import * # noqa: F401,F403
-from .metrics import * # noqa: F401,F403
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/conditional_detr.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/conditional_detr.py
deleted file mode 100644
index d57868e63a2ece085a7e5b67ee93c921ba334830..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/conditional_detr.py
+++ /dev/null
@@ -1,74 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict
-
-import torch.nn as nn
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from ..layers import (ConditionalDetrTransformerDecoder,
- DetrTransformerEncoder, SinePositionalEncoding)
-from .detr import DETR
-
-
-@MODELS.register_module()
-class ConditionalDETR(DETR):
- r"""Implementation of `Conditional DETR for Fast Training Convergence.
- r"""Implementation of `Conditional DETR for Fast Training Convergence
- <https://arxiv.org/abs/2108.06152>`_.
-
- Code is modified from the `official github repo
- <https://github.com/Atten4Vis/ConditionalDETR>`_.
- """
-
- def _init_layers(self) -> None:
- """Initialize layers except for backbone, neck and bbox_head."""
- self.positional_encoding = SinePositionalEncoding(
- **self.positional_encoding)
- self.encoder = DetrTransformerEncoder(**self.encoder)
- self.decoder = ConditionalDetrTransformerDecoder(**self.decoder)
- self.embed_dims = self.encoder.embed_dims
- # NOTE The embed_dims is typically passed from the inside out.
- # For example in DETR, The embed_dims is passed as
- # self_attn -> the first encoder layer -> encoder -> detector.
- self.query_embedding = nn.Embedding(self.num_queries, self.embed_dims)
-
- num_feats = self.positional_encoding.num_feats
- assert num_feats * 2 == self.embed_dims, \
- f'embed_dims should be exactly 2 times of num_feats. ' \
- f'Found {self.embed_dims} and {num_feats}.'
-
- def forward_decoder(self, query: Tensor, query_pos: Tensor, memory: Tensor,
- memory_mask: Tensor, memory_pos: Tensor) -> Dict:
- """Forward with Transformer decoder.
-
- Args:
- query (Tensor): The queries of decoder inputs, has shape
- (bs, num_queries, dim).
- query_pos (Tensor): The positional queries of decoder inputs,
- has shape (bs, num_queries, dim).
- memory (Tensor): The output embeddings of the Transformer encoder,
- has shape (bs, num_feat_points, dim).
- memory_mask (Tensor): ByteTensor, the padding mask of the memory,
- has shape (bs, num_feat_points).
- memory_pos (Tensor): The positional embeddings of memory, has
- shape (bs, num_feat_points, dim).
-
- Returns:
- dict: The dictionary of decoder outputs, which includes the
- `hidden_states` and `references` of the decoder output.
-
- - hidden_states (Tensor): Has shape
- (num_decoder_layers, bs, num_queries, dim)
- - references (Tensor): Has shape
- (bs, num_queries, 2)
- """
-
- hidden_states, references = self.decoder(
- query=query,
- key=memory,
- query_pos=query_pos,
- key_pos=memory_pos,
- key_padding_mask=memory_mask)
- head_inputs_dict = dict(
- hidden_states=hidden_states, references=references)
- return head_inputs_dict
diff --git a/spaces/KyanChen/RSPrompter/mmpl/engine/__init__.py b/spaces/KyanChen/RSPrompter/mmpl/engine/__init__.py
deleted file mode 100644
index b5dfffcfe73ffcf53cb87e50ce6260c189ec2a8e..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpl/engine/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .runner import *
-from .logger import *
-from .hooks import *
-from .visualization import *
-from .strategies import *
\ No newline at end of file
diff --git a/spaces/Latryna/roop/roop/core.py b/spaces/Latryna/roop/roop/core.py
deleted file mode 100644
index 7d9a5001c16fd09f875e506defa3962bc73c5f85..0000000000000000000000000000000000000000
--- a/spaces/Latryna/roop/roop/core.py
+++ /dev/null
@@ -1,217 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import sys
-# single thread doubles cuda performance - needs to be set before torch import
-if any(arg.startswith('--execution-provider') for arg in sys.argv):
- os.environ['OMP_NUM_THREADS'] = '1'
-# reduce tensorflow log level
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
-import warnings
-from typing import List
-import platform
-import signal
-import shutil
-import argparse
-import torch
-import onnxruntime
-import tensorflow
-
-import roop.globals
-import roop.metadata
-import roop.ui as ui
-from roop.predicter import predict_image, predict_video
-from roop.processors.frame.core import get_frame_processors_modules
-from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, normalize_output_path
-
-if 'ROCMExecutionProvider' in roop.globals.execution_providers:
- del torch
-
-warnings.filterwarnings('ignore', category=FutureWarning, module='insightface')
-warnings.filterwarnings('ignore', category=UserWarning, module='torchvision')
-
-
-def parse_args() -> None:
- signal.signal(signal.SIGINT, lambda signal_number, frame: destroy())
- program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100))
- program.add_argument('-s', '--source', help='select a source image', dest='source_path')
- program.add_argument('-t', '--target', help='select a target image or video', dest='target_path')
- program.add_argument('-o', '--output', help='select output file or directory', dest='output_path')
- program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+')
- program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False)
- program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True)
- program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False)
- program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False)
- program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9'])
- program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]')
- program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory())
- program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+')
- program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads())
- program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}')
-
- args = program.parse_args()
-
- roop.globals.source_path = args.source_path
- roop.globals.target_path = args.target_path
- roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path)
- roop.globals.frame_processors = args.frame_processor
- roop.globals.headless = args.source_path or args.target_path or args.output_path
- roop.globals.keep_fps = args.keep_fps
- roop.globals.keep_audio = args.keep_audio
- roop.globals.keep_frames = args.keep_frames
- roop.globals.many_faces = args.many_faces
- roop.globals.video_encoder = args.video_encoder
- roop.globals.video_quality = args.video_quality
- roop.globals.max_memory = args.max_memory
- roop.globals.execution_providers = decode_execution_providers(args.execution_provider)
- roop.globals.execution_threads = args.execution_threads
-
-
-def encode_execution_providers(execution_providers: List[str]) -> List[str]:
- return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers]
-
-
-def decode_execution_providers(execution_providers: List[str]) -> List[str]:
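- # Map the short, lowercase names given on the command line back to the full
- # ONNX Runtime provider names (e.g. 'cuda' -> 'CUDAExecutionProvider').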
- return [provider for provider, encoded_execution_provider in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers()))
- if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)]
-
-
-def suggest_max_memory() -> int:
- if platform.system().lower() == 'darwin':
- return 4
- return 16
-
-
-def suggest_execution_providers() -> List[str]:
- return encode_execution_providers(onnxruntime.get_available_providers())
-
-
-def suggest_execution_threads() -> int:
- if 'DmlExecutionProvider' in roop.globals.execution_providers:
- return 1
- if 'ROCMExecutionProvider' in roop.globals.execution_providers:
- return 1
- return 8
-
-
-def limit_resources() -> None:
- # prevent tensorflow memory leak
- gpus = tensorflow.config.experimental.list_physical_devices('GPU')
- for gpu in gpus:
- tensorflow.config.experimental.set_virtual_device_configuration(gpu, [
- tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)
- ])
- # limit memory usage
- if roop.globals.max_memory:
- memory = roop.globals.max_memory * 1024 ** 3
- if platform.system().lower() == 'darwin':
- memory = roop.globals.max_memory * 1024 ** 6
- if platform.system().lower() == 'windows':
- import ctypes
- kernel32 = ctypes.windll.kernel32
- kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory))
- else:
- import resource
- resource.setrlimit(resource.RLIMIT_DATA, (memory, memory))
-
-
-def release_resources() -> None:
- if 'CUDAExecutionProvider' in roop.globals.execution_providers:
- torch.cuda.empty_cache()
-
-
-def pre_check() -> bool:
- if sys.version_info < (3, 9):
- update_status('Python version is not supported - please upgrade to 3.9 or higher.')
- return False
- if not shutil.which('ffmpeg'):
- update_status('ffmpeg is not installed.')
- return False
- return True
-
-
-def update_status(message: str, scope: str = 'ROOP.CORE') -> None:
- print(f'[{scope}] {message}')
- if not roop.globals.headless:
- ui.update_status(message)
-
-
-def start() -> None:
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- if not frame_processor.pre_start():
- return
- # process image to image
- if has_image_extension(roop.globals.target_path):
- if predict_image(roop.globals.target_path):
- destroy()
- shutil.copy2(roop.globals.target_path, roop.globals.output_path)
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- for frame_processor_name in roop.globals.frame_processors:
- if frame_processor_name == frame_processor.frame_name:
- update_status('Progressing...', frame_processor.NAME)
- frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path)
- frame_processor.post_process()
- release_resources()
- if is_image(roop.globals.target_path):
- update_status('Processing to image succeeded!')
- else:
- update_status('Processing to image failed!')
- return
- # process image to videos
- if predict_video(roop.globals.target_path):
- destroy()
- update_status('Creating temp resources...')
- create_temp(roop.globals.target_path)
- update_status('Extracting frames...')
- extract_frames(roop.globals.target_path)
- temp_frame_paths = get_temp_frame_paths(roop.globals.target_path)
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- update_status('Progressing...', frame_processor.NAME)
- frame_processor.process_video(roop.globals.source_path, temp_frame_paths)
- frame_processor.post_process()
- release_resources()
- # handles fps
- if roop.globals.keep_fps:
- update_status('Detecting fps...')
- fps = detect_fps(roop.globals.target_path)
- update_status(f'Creating video with {fps} fps...')
- create_video(roop.globals.target_path, fps)
- else:
- update_status('Creating video with 30.0 fps...')
- create_video(roop.globals.target_path)
- # handle audio
- if roop.globals.keep_audio:
- if roop.globals.keep_fps:
- update_status('Restoring audio...')
- else:
- update_status('Restoring audio might cause issues as fps are not kept...')
- restore_audio(roop.globals.target_path, roop.globals.output_path)
- else:
- move_temp(roop.globals.target_path, roop.globals.output_path)
- # clean and validate
- clean_temp(roop.globals.target_path)
- if is_video(roop.globals.target_path):
- update_status('Processing to video succeeded!')
- else:
- update_status('Processing to video failed!')
-
-
-def destroy() -> None:
- if roop.globals.target_path:
- clean_temp(roop.globals.target_path)
- quit()
-
-
-def run() -> None:
- parse_args()
- if not pre_check():
- return
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- if not frame_processor.pre_check():
- return
- limit_resources()
- if roop.globals.headless:
- start()
- else:
- window = ui.init(start, destroy)
- window.mainloop()
diff --git a/spaces/LaynzKunz/Model-RCV/app.py b/spaces/LaynzKunz/Model-RCV/app.py
deleted file mode 100644
index 2e3086de9accaf208e4f491f70eb991859a4846b..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Model-RCV/app.py
+++ /dev/null
@@ -1,507 +0,0 @@
-import os
-import glob
-import json
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-import yt_dlp
-import ffmpeg
-import subprocess
-import sys
-import io
-import wave
-from datetime import datetime
-from fairseq import checkpoint_utils
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from vc_infer_pipeline import VC
-from config import Config
-config = Config()
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces"
-
-audio_mode = []
-f0method_mode = []
-f0method_info = ""
-if limitation is True:
- audio_mode = ["Upload audio", "TTS Audio"]
- f0method_mode = ["pm", "crepe", "harvest"]
- f0method_info = "PM is fast, rmvpe is middle, Crepe or harvest is good but it was extremely slow (Default: PM)"
-else:
- audio_mode = ["Upload audio", "Youtube", "TTS Audio"]
- f0method_mode = ["pm", "crepe", "harvest"]
- f0method_info = "PM is fast, rmvpe is middle. Crepe or harvest is good but it was extremely slow (Default: PM))"
-
-if os.path.isfile("rmvpe.pt"):
- f0method_mode.insert(2, "rmvpe")
-
-def create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, file_index):
- def vc_fn(
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- f0_up_key,
- f0_method,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- ):
- try:
- if vc_audio_mode == "Input path" or "Youtube" and vc_input != "":
- audio, sr = librosa.load(vc_input, sr=16000, mono=True)
- elif vc_audio_mode == "Upload audio":
- if vc_upload is None:
- return "You need to upload an audio", None
- sampling_rate, audio = vc_upload
- duration = audio.shape[0] / sampling_rate
- if duration > 360 and limitation:
- return "Please upload an audio file that is less than 1 minute.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- elif vc_audio_mode == "TTS Audio":
- if len(tts_text) > 600 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- vc_input = "tts.mp3"
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- vc_input,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=None,
- )
- info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- print(f"{model_title} | {info}")
- return info, (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
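# A minimal sketch of the normalization the "Upload audio" branch above performs
# before vc.pipeline(): integer PCM from Gradio is scaled to float32, collapsed
# to mono, and resampled to the 16 kHz that the HuBERT feature extractor expects.
import numpy as np
import librosa

def normalize_for_vc(sampling_rate: int, audio: np.ndarray) -> np.ndarray:
    audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)  # int PCM -> [-1, 1]
    if audio.ndim > 1:
        audio = librosa.to_mono(audio.T)                            # stereo -> mono
    if sampling_rate != 16000:
        audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
    return audio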
-def load_model():
- categories = []
- with open("weights/folder_info.json", "r", encoding="utf-8") as f:
- folder_info = json.load(f)
- for category_name, category_info in folder_info.items():
- if not category_info['enable']:
- continue
- category_title = category_info['title']
- category_folder = category_info['folder_path']
- models = []
- with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for character_name, info in models_info.items():
- if not info['enable']:
- continue
- model_title = info['title']
- model_name = info['model_path']
- model_author = info.get("author", None)
- model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}"
- model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}"
- cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- model_version = "V1"
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- model_version = "V2"
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})")
- models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, model_index)))
- categories.append([category_title, category_folder, models])
- return categories
-
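# load_model() above expects two small JSON metadata files under weights/. A sketch
# of their layout; the key names come from the code, while the concrete names and
# paths are placeholders (the .pth, cover and .index files must exist alongside them).
import json
import os

folder_info = {
    "gen1": {"enable": True, "title": "Generation 1", "folder_path": "gen1"},
}
model_info = {
    "example_model": {
        "enable": True,
        "title": "Example Model",
        "model_path": "example_model.pth",
        "author": "someone",
        "cover": "cover.png",
        "feature_retrieval_library": "added.index",
    },
}
os.makedirs("weights/gen1", exist_ok=True)
with open("weights/folder_info.json", "w", encoding="utf-8") as f:
    json.dump(folder_info, f, indent=2)
with open("weights/gen1/model_info.json", "w", encoding="utf-8") as f:
    json.dump(model_info, f, indent=2)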
-def cut_vocal_and_inst(url, audio_provider, split_model):
- if url != "":
- if not os.path.exists("dl_audio"):
- os.mkdir("dl_audio")
- if audio_provider == "Youtube":
- ydl_opts = {
- 'format': 'bestaudio/best',
- 'postprocessors': [{
- 'key': 'FFmpegExtractAudio',
- 'preferredcodec': 'wav',
- }],
- "outtmpl": 'dl_audio/youtube_audio',
- }
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- ydl.download([url])
- audio_path = "dl_audio/youtube_audio.wav"
- else:
- # Spotify doesn't work.
- # Need to find another solution soon.
- '''
- command = f"spotdl download {url} --output dl_audio/.wav"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- audio_path = "dl_audio/spotify_audio.wav"
- '''
- if split_model == "htdemucs":
- command = f"demucs --two-stems=vocals {audio_path} -o output"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav"
- else:
- command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav"
- else:
- raise gr.Error("URL Required!")
- return None, None, None, None
-
-def combine_vocal_and_inst(audio_data, audio_volume, split_model):
- if not os.path.exists("output/result"):
- os.mkdir("output/result")
- vocal_path = "output/result/output.wav"
- output_path = "output/result/combine.mp3"
- if split_model == "htdemucs":
- inst_path = "output/htdemucs/youtube_audio/no_vocals.wav"
- else:
- inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav"
- with wave.open(vocal_path, "w") as wave_file:
- wave_file.setnchannels(1)
- wave_file.setsampwidth(2)
- wave_file.setframerate(audio_data[0])
- wave_file.writeframes(audio_data[1].tobytes())
- command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}'
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return output_path
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_audio_mode(vc_audio_mode):
- if vc_audio_mode == "Input path":
- return (
- # Input & Upload
- gr.Textbox.update(visible=True),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Upload audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=True),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Youtube":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Dropdown.update(visible=True),
- gr.Button.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Slider.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Button.update(visible=True),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "TTS Audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=True),
- gr.Dropdown.update(visible=True)
- )
- else:
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=True),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
-
-if __name__ == '__main__':
- load_hubert()
- categories = load_model()
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with gr.Blocks(theme=gr.themes.Base()) as app:
- gr.Markdown(
- "#
Hololive RVC Models\n"
- "###
will update every hololive ai model that i can find or make.\n"
- "[](https://colab.research.google.com/github/aziib/hololive-rvc-models-v2/blob/main/hololive_rvc_models_v2.ipynb)\n\n"
- "[](https://ko-fi.com/megaaziib)\n\n"
- )
- for (folder_title, folder, models) in categories:
- with gr.TabItem(folder_title):
- with gr.Tabs():
- if not models:
- gr.Markdown("#
No Model Loaded.")
- gr.Markdown("##
Please add model or fix your model path.")
- continue
- for (name, title, author, cover, model_version, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
- f'{title}\n'+
- f'RVC {model_version} Model\n'+
- (f'Model author: {author}' if author else "")+
- (f'![]({cover})' if cover else "")
- )
- with gr.Row():
- with gr.Column():
- vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio")
- # Input and Upload
- vc_input = gr.Textbox(label="Input audio path", visible=False)
- vc_upload = gr.Audio(label="Upload audio file", visible=True, interactive=True)
- # Youtube
- vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)")
- vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...")
- vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)")
- vc_split = gr.Button("Split Audio", variant="primary", visible=False)
- vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False)
- vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False)
- vc_audio_preview = gr.Audio(label="Audio Preview", visible=False)
- # TTS
- tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- with gr.Column():
- vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from a male to a female voice, or "-12" to change from a female to a male voice')
- f0method0 = gr.Radio(
- label="Pitch extraction algorithm",
- info=f0method_info,
- choices=f0method_mode,
- value="pm",
- interactive=True
- )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- info="Accents controling. Too high prob gonna sounds too robotic (Default: 0.4)",
- value=0.4,
- interactive=True,
- )
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label="Apply Median Filtering",
- info="The value represents the filter radius and can reduce breathiness.",
- value=1,
- step=1,
- interactive=True,
- )
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label="Resample the output audio",
- info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling",
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Volume Envelope",
- info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used",
- value=1,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label="Voice Protection",
- info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy",
- value=0.23,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- vc_log = gr.Textbox(label="Output Information", interactive=False)
- vc_output = gr.Audio(label="Output Audio", interactive=False)
- vc_convert = gr.Button("Convert", variant="primary")
- vc_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Vocal volume",
- value=4,
- interactive=True,
- step=1,
- info="Adjust vocal volume (Default: 4}",
- visible=False
- )
- vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False)
- vc_combine = gr.Button("Combine",variant="primary", visible=False)
- vc_convert.click(
- fn=vc_fn,
- inputs=[
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- vc_transform0,
- f0method0,
- index_rate1,
- filter_radius0,
- resample_sr0,
- rms_mix_rate0,
- protect0,
- ],
- outputs=[vc_log, vc_output]
- )
- vc_split.click(
- fn=cut_vocal_and_inst,
- inputs=[vc_link, vc_download_audio, vc_split_model],
- outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input]
- )
- vc_combine.click(
- fn=combine_vocal_and_inst,
- inputs=[vc_output, vc_volume, vc_split_model],
- outputs=[vc_combined_output]
- )
- vc_audio_mode.change(
- fn=change_audio_mode,
- inputs=[vc_audio_mode],
- outputs=[
- vc_input,
- vc_upload,
- vc_download_audio,
- vc_link,
- vc_split_model,
- vc_split,
- vc_vocal_preview,
- vc_inst_preview,
- vc_audio_preview,
- vc_volume,
- vc_combined_output,
- vc_combine,
- tts_text,
- tts_voice
- ]
- )
-if limitation is True:
- app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab)
-else:
- app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=True)
\ No newline at end of file
diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/geometry/geom2d.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/geometry/geom2d.py
deleted file mode 100644
index 2b2389aa9ed6c174367bc2dec62b35ba63c47e18..0000000000000000000000000000000000000000
--- a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/geometry/geom2d.py
+++ /dev/null
@@ -1,212 +0,0 @@
-from __future__ import print_function, unicode_literals, absolute_import, division
-import numpy as np
-import warnings
-
-from skimage.measure import regionprops
-from skimage.draw import polygon
-from csbdeep.utils import _raise
-
-from ..utils import path_absolute, _is_power_of_2, _normalize_grid
-from ..matching import _check_label_array
-from stardist.lib.stardist2d import c_star_dist
-
-
-
-def _ocl_star_dist(lbl, n_rays=32, grid=(1,1)):
- from gputools import OCLProgram, OCLArray, OCLImage
- (np.isscalar(n_rays) and 0 < int(n_rays)) or _raise(ValueError())
- n_rays = int(n_rays)
- # slicing with grid is done with tuple(slice(0, None, g) for g in grid)
- res_shape = tuple((s-1)//g+1 for s, g in zip(lbl.shape, grid))
-
- src = OCLImage.from_array(lbl.astype(np.uint16,copy=False))
- dst = OCLArray.empty(res_shape+(n_rays,), dtype=np.float32)
- program = OCLProgram(path_absolute("kernels/stardist2d.cl"), build_options=['-D', 'N_RAYS=%d' % n_rays])
- program.run_kernel('star_dist', res_shape[::-1], None, dst.data, src, np.int32(grid[0]),np.int32(grid[1]))
- return dst.get()
-
-
-def _cpp_star_dist(lbl, n_rays=32, grid=(1,1)):
- (np.isscalar(n_rays) and 0 < int(n_rays)) or _raise(ValueError())
- return c_star_dist(lbl.astype(np.uint16,copy=False), np.int32(n_rays), np.int32(grid[0]),np.int32(grid[1]))
-
-
-def _py_star_dist(a, n_rays=32, grid=(1,1)):
- (np.isscalar(n_rays) and 0 < int(n_rays)) or _raise(ValueError())
- if grid != (1,1):
- raise NotImplementedError(grid)
-
- n_rays = int(n_rays)
- a = a.astype(np.uint16,copy=False)
- dst = np.empty(a.shape+(n_rays,),np.float32)
-
- for i in range(a.shape[0]):
- for j in range(a.shape[1]):
- value = a[i,j]
- if value == 0:
- dst[i,j] = 0
- else:
- st_rays = np.float32((2*np.pi) / n_rays)
- for k in range(n_rays):
- phi = np.float32(k*st_rays)
- dy = np.cos(phi)
- dx = np.sin(phi)
- x, y = np.float32(0), np.float32(0)
- while True:
- x += dx
- y += dy
- ii = int(round(i+x))
- jj = int(round(j+y))
- if (ii < 0 or ii >= a.shape[0] or
- jj < 0 or jj >= a.shape[1] or
- value != a[ii,jj]):
- # small correction as we overshoot the boundary
- t_corr = 1-.5/max(np.abs(dx),np.abs(dy))
- x -= t_corr*dx
- y -= t_corr*dy
- dist = np.sqrt(x**2+y**2)
- dst[i,j,k] = dist
- break
- return dst
-
-
-def star_dist(a, n_rays=32, grid=(1,1), mode='cpp'):
- """'a' assumbed to be a label image with integer values that encode object ids. id 0 denotes background."""
-
- n_rays >= 3 or _raise(ValueError("need 'n_rays' >= 3"))
-
- if mode == 'python':
- return _py_star_dist(a, n_rays, grid=grid)
- elif mode == 'cpp':
- return _cpp_star_dist(a, n_rays, grid=grid)
- elif mode == 'opencl':
- return _ocl_star_dist(a, n_rays, grid=grid)
- else:
- _raise(ValueError("Unknown mode %s" % mode))
-
-
-def _dist_to_coord_old(rhos, grid=(1,1)):
- """convert from polar to cartesian coordinates for a single image (3-D array) or multiple images (4-D array)"""
-
- grid = _normalize_grid(grid,2)
- is_single_image = rhos.ndim == 3
- if is_single_image:
- rhos = np.expand_dims(rhos,0)
- assert rhos.ndim == 4
-
- n_images,h,w,n_rays = rhos.shape
- coord = np.empty((n_images,h,w,2,n_rays),dtype=rhos.dtype)
-
- start = np.indices((h,w))
- for i in range(2):
- coord[...,i,:] = grid[i] * np.broadcast_to(start[i].reshape(1,h,w,1), (n_images,h,w,n_rays))
-
- phis = ray_angles(n_rays).reshape(1,1,1,n_rays)
-
- coord[...,0,:] += rhos * np.sin(phis) # row coordinate
- coord[...,1,:] += rhos * np.cos(phis) # col coordinate
-
- return coord[0] if is_single_image else coord
-
-
-def _polygons_to_label_old(coord, prob, points, shape=None, thr=-np.inf):
- sh = coord.shape[:2] if shape is None else shape
- lbl = np.zeros(sh,np.int32)
- # sort points with increasing probability
- ind = np.argsort([ prob[p[0],p[1]] for p in points ])
- points = points[ind]
-
- i = 1
- for p in points:
- if prob[p[0],p[1]] < thr:
- continue
- rr,cc = polygon(coord[p[0],p[1],0], coord[p[0],p[1],1], sh)
- lbl[rr,cc] = i
- i += 1
-
- return lbl
-
-
-def dist_to_coord(dist, points, scale_dist=(1,1)):
- """convert from polar to cartesian coordinates for a list of distances and center points
- dist.shape = (n_polys, n_rays)
- points.shape = (n_polys, 2)
- len(scale_dist) = 2
- return coord.shape = (n_polys,2,n_rays)
- """
- dist = np.asarray(dist)
- points = np.asarray(points)
- assert dist.ndim==2 and points.ndim==2 and len(dist)==len(points) \
- and points.shape[1]==2 and len(scale_dist)==2
- n_rays = dist.shape[1]
- phis = ray_angles(n_rays)
- coord = (dist[:,np.newaxis]*np.array([np.sin(phis),np.cos(phis)])).astype(np.float32)
- coord *= np.asarray(scale_dist).reshape(1,2,1)
- coord += points[...,np.newaxis]
- return coord
-
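# Tiny usage sketch of dist_to_coord above (values are made up): one polygon with a
# constant radius of 5 around centre (10, 20) yields the (row, col) ray endpoints.
import numpy as np

dist = np.full((1, 8), 5.0)          # one polygon, 8 rays, radius 5 everywhere
points = np.array([[10, 20]])        # its centre point
coord = dist_to_coord(dist, points)  # -> shape (1, 2, 8)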
-
-def polygons_to_label_coord(coord, shape, labels=None):
- """renders polygons to image of given shape
-
- coord.shape = (n_polys, 2, n_rays)
- """
- coord = np.asarray(coord)
- if labels is None: labels = np.arange(len(coord))
-
- _check_label_array(labels, "labels")
- assert coord.ndim==3 and coord.shape[1]==2 and len(coord)==len(labels)
-
- lbl = np.zeros(shape,np.int32)
-
- for i,c in zip(labels,coord):
- rr,cc = polygon(*c, shape)
- lbl[rr,cc] = i+1
-
- return lbl
-
-
-def polygons_to_label(dist, points, shape, prob=None, thr=-np.inf, scale_dist=(1,1)):
- """converts distances and center points to label image
-
- dist.shape = (n_polys, n_rays)
- points.shape = (n_polys, 2)
-
- label ids will be consecutive and adhere to the order given
- """
- dist = np.asarray(dist)
- points = np.asarray(points)
- prob = np.inf*np.ones(len(points)) if prob is None else np.asarray(prob)
-
- assert dist.ndim==2 and points.ndim==2 and len(dist)==len(points)
- assert len(points)==len(prob) and points.shape[1]==2 and prob.ndim==1
-
- n_rays = dist.shape[1]
-
- ind = prob>thr
- points = points[ind]
- dist = dist[ind]
- prob = prob[ind]
-
- ind = np.argsort(prob, kind='stable')
- points = points[ind]
- dist = dist[ind]
-
- coord = dist_to_coord(dist, points, scale_dist=scale_dist)
-
- return polygons_to_label_coord(coord, shape=shape, labels=ind)
-
-
-def relabel_image_stardist(lbl, n_rays, **kwargs):
- """relabel each label region in `lbl` with its star representation"""
- _check_label_array(lbl, "lbl")
- if not lbl.ndim==2:
- raise ValueError("lbl image should be 2 dimensional")
- dist = star_dist(lbl, n_rays, **kwargs)
- points = np.array(tuple(np.array(r.centroid).astype(int) for r in regionprops(lbl)))
- dist = dist[tuple(points.T)]
- return polygons_to_label(dist, points, shape=lbl.shape)
-
-
-def ray_angles(n_rays=32):
- return np.linspace(0,2*np.pi,n_rays,endpoint=False)
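relabel_image_stardist above is essentially a star-convex round trip: measure radial distances per object, then re-render the polygons. A small sketch of the same idea on a toy label image, using the pure-Python ray casting so no compiled extension is needed:

import numpy as np

lbl = np.zeros((64, 64), np.uint16)
lbl[16:48, 16:48] = 1                                    # one square object with id 1

dist = star_dist(lbl, n_rays=32, mode='python')          # (64, 64, 32) radial distances
center = np.argwhere(lbl == 1).mean(axis=0).astype(int)[None]   # rough centroid, shape (1, 2)
relabeled = polygons_to_label(dist[tuple(center.T)], center, shape=lbl.shape)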
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/resamplerfilter.py b/spaces/Lianjd/stock_dashboard/backtrader/resamplerfilter.py
deleted file mode 100644
index 010a6b6bc321d43b226dc0d245d355c32247738d..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/resamplerfilter.py
+++ /dev/null
@@ -1,752 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-
-from datetime import datetime, date, timedelta
-
-from .dataseries import TimeFrame, _Bar
-from .utils.py3 import with_metaclass
-from . import metabase
-from .utils.date import date2num, num2date
-
-
-class DTFaker(object):
- # This will only be used for data sources which at some point in time
- # return None from _load to indicate that a check of the resampler and/or
- # notification queue is needed
- # This is meant (at least initially) for real-time feeds, because those are
- # the ones in need of events like the ones described above.
- # These data sources should also be producing ``utc`` time directly because
- # the real-time feed is (more often than not) timestamped and utc provides
- # a universal reference
- # That's why below the timestamp is chosen in UTC and passed directly to
- # date2num to avoid a localization. But it is extracted from data.num2date
- # to ensure the returned datetime object is localized according to the
- # expected output by the user (local timezone or any specified)
-
- def __init__(self, data, forcedata=None):
- self.data = data
-
- # Aliases
- self.datetime = self
- self.p = self
-
- if forcedata is None:
- _dtime = datetime.utcnow() + data._timeoffset()
- self._dt = dt = date2num(_dtime) # utc-like time
- self._dtime = data.num2date(dt) # localized time
- else:
- self._dt = forcedata.datetime[0] # utc-like time
- self._dtime = forcedata.datetime.datetime() # localized time
-
- self.sessionend = data.p.sessionend
-
- def __len__(self):
- return len(self.data)
-
- def __call__(self, idx=0):
- return self._dtime # simulates data.datetime.datetime()
-
- def datetime(self, idx=0):
- return self._dtime
-
- def date(self, idx=0):
- return self._dtime.date()
-
- def time(self, idx=0):
- return self._dtime.time()
-
- @property
- def _calendar(self):
- return self.data._calendar
-
- def __getitem__(self, idx):
- return self._dt if idx == 0 else float('-inf')
-
- def num2date(self, *args, **kwargs):
- return self.data.num2date(*args, **kwargs)
-
- def date2num(self, *args, **kwargs):
- return self.data.date2num(*args, **kwargs)
-
- def _getnexteos(self):
- return self.data._getnexteos()
-
-
-class _BaseResampler(with_metaclass(metabase.MetaParams, object)):
- params = (
- ('bar2edge', True),
- ('adjbartime', True),
- ('rightedge', True),
- ('boundoff', 0),
-
- ('timeframe', TimeFrame.Days),
- ('compression', 1),
-
- ('takelate', True),
-
- ('sessionend', True),
- )
-
- def __init__(self, data):
- self.subdays = TimeFrame.Ticks < self.p.timeframe < TimeFrame.Days
- self.subweeks = self.p.timeframe < TimeFrame.Weeks
- self.componly = (not self.subdays and
- data._timeframe == self.p.timeframe and
- not (self.p.compression % data._compression))
-
- self.bar = _Bar(maxdate=True) # bar holder
- self.compcount = 0 # count of produced bars to control compression
- self._firstbar = True
- self.doadjusttime = (self.p.bar2edge and self.p.adjbartime and
- self.subweeks)
-
- self._nexteos = None
-
- # Modify data information according to own parameters
- data.resampling = 1
- data.replaying = self.replaying
- data._timeframe = self.p.timeframe
- data._compression = self.p.compression
-
- self.data = data
-
- def _latedata(self, data):
- # new data at position 0, still untouched from stream
- if not self.subdays:
- return False
-
- # Time already delivered
- return len(data) > 1 and data.datetime[0] <= data.datetime[-1]
-
- def _checkbarover(self, data, fromcheck=False, forcedata=None):
- chkdata = DTFaker(data, forcedata) if fromcheck else data
-
- isover = False
- if not self.componly and not self._barover(chkdata):
- return isover
-
- if self.subdays and self.p.bar2edge:
- isover = True
- elif not fromcheck: # fromcheck doesn't increase compcount
- self.compcount += 1
- if not (self.compcount % self.p.compression):
- # boundary crossed and enough bars for compression ... proceed
- isover = True
-
- return isover
-
- def _barover(self, data):
- tframe = self.p.timeframe
-
- if tframe == TimeFrame.Ticks:
- # Ticks is already the lowest level
- return self.bar.isopen()
-
- elif tframe < TimeFrame.Days:
- return self._barover_subdays(data)
-
- elif tframe == TimeFrame.Days:
- return self._barover_days(data)
-
- elif tframe == TimeFrame.Weeks:
- return self._barover_weeks(data)
-
- elif tframe == TimeFrame.Months:
- return self._barover_months(data)
-
- elif tframe == TimeFrame.Years:
- return self._barover_years(data)
-
- def _eosset(self):
- if self._nexteos is None:
- self._nexteos, self._nextdteos = self.data._getnexteos()
- return
-
- def _eoscheck(self, data, seteos=True, exact=False):
- if seteos:
- self._eosset()
-
- equal = data.datetime[0] == self._nextdteos
- grter = data.datetime[0] > self._nextdteos
-
- if exact:
- ret = equal
- else:
- # if the compared data goes over the endofsession
- # make sure the resampled bar is open and has something before that
- # end of session. It could be a weekend and nothing was delivered
- # until Monday
- if grter:
- ret = (self.bar.isopen() and
- self.bar.datetime <= self._nextdteos)
- else:
- ret = equal
-
- if ret:
- self._lasteos = self._nexteos
- self._lastdteos = self._nextdteos
- self._nexteos = None
- self._nextdteos = float('-inf')
-
- return ret
-
- def _barover_days(self, data):
- return self._eoscheck(data)
-
- def _barover_weeks(self, data):
- if self.data._calendar is None:
- year, week, _ = data.num2date(self.bar.datetime).date().isocalendar()
- yearweek = year * 100 + week
-
- baryear, barweek, _ = data.datetime.date().isocalendar()
- bar_yearweek = baryear * 100 + barweek
-
- return bar_yearweek > yearweek
- else:
- return data._calendar.last_weekday(data.datetime.date())
-
- def _barover_months(self, data):
- dt = data.num2date(self.bar.datetime).date()
- yearmonth = dt.year * 100 + dt.month
-
- bardt = data.datetime.datetime()
- bar_yearmonth = bardt.year * 100 + bardt.month
-
- return bar_yearmonth > yearmonth
-
- def _barover_years(self, data):
- return (data.datetime.datetime().year >
- data.num2date(self.bar.datetime).year)
-
- def _gettmpoint(self, tm):
- '''Returns the point of time intraday for a given time according to the
- timeframe
-
- - Ex 1: 00:05:00 in minutes -> point = 5
- - Ex 2: 00:05:20 in seconds -> point = 5 * 60 + 20 = 320
- '''
- point = tm.hour * 60 + tm.minute
- restpoint = 0
-
- if self.p.timeframe < TimeFrame.Minutes:
- point = point * 60 + tm.second
-
- if self.p.timeframe < TimeFrame.Seconds:
- point = point * 1e6 + tm.microsecond
- else:
- restpoint = tm.microsecond
- else:
- restpoint = tm.second + tm.microsecond
-
- point += self.p.boundoff
-
- return point, restpoint
-
- def _barover_subdays(self, data):
- if self._eoscheck(data):
- return True
-
- if data.datetime[0] < self.bar.datetime:
- return False
-
- # Get time objects for the comparisons - in utc-like format
- tm = num2date(self.bar.datetime).time()
- bartm = num2date(data.datetime[0]).time()
-
- point, _ = self._gettmpoint(tm)
- barpoint, _ = self._gettmpoint(bartm)
-
- ret = False
- if barpoint > point:
- # The data bar has surpassed the internal bar
- if not self.p.bar2edge:
- # Compression done on simple bar basis (like days)
- ret = True
- elif self.p.compression == 1:
- # no bar compression requested -> internal bar done
- ret = True
- else:
- point_comp = point // self.p.compression
- barpoint_comp = barpoint // self.p.compression
-
- # Went over boundary including compression
- if barpoint_comp > point_comp:
- ret = True
-
- return ret
-
- def check(self, data, _forcedata=None):
- '''Called to check if the current stored bar has to be delivered in
- spite of the data not having moved forward. If no ticks from a live
- feed come in, a 5 second resampled bar could be delivered 20 seconds
- later. When this method is called the wall clock (incl data time
- offset) is called to check if the time has gone so far as to have to
- deliver the already stored data
- '''
- if not self.bar.isopen():
- return
-
- return self(data, fromcheck=True, forcedata=_forcedata)
-
- def _dataonedge(self, data):
- if not self.subweeks:
- if data._calendar is None:
- return False, True # nothing can be done
-
- tframe = self.p.timeframe
- ret = False
- if tframe == TimeFrame.Weeks: # Ticks is already the lowest
- ret = data._calendar.last_weekday(data.datetime.date())
- elif tframe == TimeFrame.Months:
- ret = data._calendar.last_monthday(data.datetime.date())
- elif tframe == TimeFrame.Years:
- ret = data._calendar.last_yearday(data.datetime.date())
-
- if ret:
- # Data must be consumed but compression may not be met yet
- # Prevent barcheckover from being called because it could again
- # increase compcount
- docheckover = False
- self.compcount += 1
- ret = not (self.compcount % self.p.compression)
- else:
- docheckover = True
-
- return ret, docheckover
-
- if self._eoscheck(data, exact=True):
- return True, True
-
- if self.subdays:
- point, prest = self._gettmpoint(data.datetime.time())
- if prest:
- return False, True # cannot be on boundary, subunits present
-
- # Pass through compression to get boundary and rest over boundary
- bound, brest = divmod(point, self.p.compression)
-
- # if no extra and decomp bound is point
- return (brest == 0 and point == (bound * self.p.compression), True)
-
- # Code overriden by eoscheck
- if False and self.p.sessionend:
- # Days scenario - get datetime to compare in output timezone
- # because p.sessionend is expected in output timezone
- bdtime = data.datetime.datetime()
- bsend = datetime.combine(bdtime.date(), data.p.sessionend)
- return bdtime == bsend
-
- return False, True # subweeks, not subdays and not sessionend
-
- def _calcadjtime(self, greater=False):
- if self._nexteos is None:
- # Session has been exceeded - end of session is the mark
- return self._lastdteos # utc-like
-
- dt = self.data.num2date(self.bar.datetime)
-
- # Get current time
- tm = dt.time()
- # Get the point of the day in the time frame unit (ex: minute 200)
- point, _ = self._gettmpoint(tm)
-
- # Apply compression to update the point position (comp 5 -> 200 // 5)
- # point = (point // self.p.compression)
- point = point // self.p.compression
-
- # If rightedge (end of boundary is activated) add it unless recursing
- point += self.p.rightedge
-
- # Restore point to the timeframe units by de-applying compression
- point *= self.p.compression
-
- # Get hours, minutes, seconds and microseconds
- extradays = 0
- if self.p.timeframe == TimeFrame.Minutes:
- ph, pm = divmod(point, 60)
- ps = 0
- pus = 0
- elif self.p.timeframe == TimeFrame.Seconds:
- ph, pm = divmod(point, 60 * 60)
- pm, ps = divmod(pm, 60)
- pus = 0
- elif self.p.timeframe <= TimeFrame.MicroSeconds:
- ph, pm = divmod(point, 60 * 60 * 1e6)
- pm, psec = divmod(pm, 60 * 1e6)
- ps, pus = divmod(psec, 1e6)
- elif self.p.timeframe == TimeFrame.Days:
- # last resort
- eost = self._nexteos.time()
- ph = eost.hour
- pm = eost.minute
- ps = eost.second
- pus = eost.microsecond
-
- if ph > 23: # went over midnight:
- extradays = ph // 24
- ph %= 24
-
- # Replace intraday parts with the calculated ones and update it
- dt = dt.replace(hour=int(ph), minute=int(pm),
- second=int(ps), microsecond=int(pus))
- if extradays:
- dt += timedelta(days=extradays)
- dtnum = self.data.date2num(dt)
- return dtnum
-
- def _adjusttime(self, greater=False, forcedata=None):
- '''
- Adjusts the time of calculated bar (from underlying data source) by
- using the timeframe to the appropriate boundary, with compression taken
- into account
-
- Depending on param ``rightedge`` uses the starting boundary or the
- ending one
- '''
- dtnum = self._calcadjtime(greater=greater)
- if greater and dtnum <= self.bar.datetime:
- return False
-
- self.bar.datetime = dtnum
- return True
-
-
-class Resampler(_BaseResampler):
- '''This class resamples data of a given timeframe to a larger timeframe.
-
- Params
-
- - bar2edge (default: True)
-
- resamples using time boundaries as the target. For example with a
- "ticks -> 5 seconds" the resulting 5 seconds bars will be aligned to
- xx:00, xx:05, xx:10 ...
-
- - adjbartime (default: True)
-
- Use the time at the boundary to adjust the time of the delivered
- resampled bar instead of the last seen timestamp. If resampling to "5
- seconds" the time of the bar will be adjusted for example to hh:mm:05
- even if the last seen timestamp was hh:mm:04.33
-
- .. note::
-
- Time will only be adjusted if "bar2edge" is True. It wouldn't make
- sense to adjust the time if the bar has not been aligned to a
- boundary
-
- - rightedge (default: True)
-
- Use the right edge of the time boundaries to set the time.
-
- If False and compressing to 5 seconds the time of a resampled bar for
- seconds between hh:mm:00 and hh:mm:04 will be hh:mm:00 (the starting
- boundary
-
- If True the used boundary for the time will be hh:mm:05 (the ending
- boundary)
- '''
- params = (
- ('bar2edge', True),
- ('adjbartime', True),
- ('rightedge', True),
- )
-
- replaying = False
-
- def last(self, data):
- '''Called when the data is no longer producing bars
-
- Can be called multiple times. It has the chance to (for example)
- produce extra bars which may still be accumulated and have to be
- delivered
- '''
- if self.bar.isopen():
- if self.doadjusttime:
- self._adjusttime()
-
- data._add2stack(self.bar.lvalues())
- self.bar.bstart(maxdate=True) # close the bar to avoid dups
- return True
-
- return False
-
- def __call__(self, data, fromcheck=False, forcedata=None):
- '''Called for each set of values produced by the data source'''
- consumed = False
- onedge = False
- docheckover = True
- if not fromcheck:
- if self._latedata(data):
- if not self.p.takelate:
- data.backwards()
- return True # get a new bar
-
- self.bar.bupdate(data) # update new or existing bar
- # push time beyond reference
- self.bar.datetime = data.datetime[-1] + 0.000001
- data.backwards() # remove used bar
- return True
-
- if self.componly: # only if not subdays
- # Get a session ref before rewinding
- _, self._lastdteos = self.data._getnexteos()
- consumed = True
-
- else:
- onedge, docheckover = self._dataonedge(data) # for subdays
- consumed = onedge
-
- if consumed:
- self.bar.bupdate(data) # update new or existing bar
- data.backwards() # remove used bar
-
- # if self.bar.isopen and (onedge or (docheckover and checkbarover))
- cond = self.bar.isopen()
- if cond: # original is and, the 2nd term must also be true
- if not onedge: # onedge true is sufficient
- if docheckover:
- cond = self._checkbarover(data, fromcheck=fromcheck,
- forcedata=forcedata)
- if cond:
- dodeliver = False
- if forcedata is not None:
- # check our delivery time is not larger than that of forcedata
- tframe = self.p.timeframe
- if tframe == TimeFrame.Ticks: # Ticks is already the lowest
- dodeliver = True
- elif tframe == TimeFrame.Minutes:
- dtnum = self._calcadjtime(greater=True)
- dodeliver = dtnum <= forcedata.datetime[0]
- elif tframe == TimeFrame.Days:
- dtnum = self._calcadjtime(greater=True)
- dodeliver = dtnum <= forcedata.datetime[0]
- else:
- dodeliver = True
-
- if dodeliver:
- if not onedge and self.doadjusttime:
- self._adjusttime(greater=True, forcedata=forcedata)
-
- data._add2stack(self.bar.lvalues())
- self.bar.bstart(maxdate=True) # bar delivered -> restart
-
- if not fromcheck:
- if not consumed:
- self.bar.bupdate(data) # update new or existing bar
- data.backwards() # remove used bar
-
- return True
-
-
-class Replayer(_BaseResampler):
- '''This class replays data of a given timeframe to a larger timeframe.
-
- It simulates the action of the market by slowly building up (for ex.) a
- daily bar from tick/seconds/minutes data
-
- Only when the bar is complete will the "length" of the data be changed
- effectively delivering a closed bar
-
- Params
-
- - bar2edge (default: True)
-
- replays using time boundaries as the target of the closed bar. For
- example with a "ticks -> 5 seconds" the resulting 5 seconds bars will
- be aligned to xx:00, xx:05, xx:10 ...
-
- - adjbartime (default: False)
-
- Use the time at the boundary to adjust the time of the delivered
- resampled bar instead of the last seen timestamp. If resampling to "5
- seconds" the time of the bar will be adjusted for example to hh:mm:05
- even if the last seen timestamp was hh:mm:04.33
-
- .. note::
-
- Time will only be adjusted if "bar2edge" is True. It wouldn't make
- sense to adjust the time if the bar has not been aligned to a
- boundary
-
- .. note:: if this parameter is True an extra tick with the *adjusted*
- time will be introduced at the end of the *replayed* bar
-
- - rightedge (default: True)
-
- Use the right edge of the time boundaries to set the time.
-
- If False and compressing to 5 seconds the time of a resampled bar for
- seconds between hh:mm:00 and hh:mm:04 will be hh:mm:00 (the starting
- boundary)
-
- If True the used boundary for the time will be hh:mm:05 (the ending
- boundary)
- '''
- params = (
- ('bar2edge', True),
- ('adjbartime', False),
- ('rightedge', True),
- )
-
- replaying = True
-
- def __call__(self, data, fromcheck=False, forcedata=None):
- consumed = False
- onedge = False
- takinglate = False
- docheckover = True
-
- if not fromcheck:
- if self._latedata(data):
- if not self.p.takelate:
- data.backwards(force=True)
- return True # get a new bar
-
- consumed = True
- takinglate = True
-
- elif self.componly: # only if not subdays
- consumed = True
-
- else:
- onedge, docheckover = self._dataonedge(data) # for subdays
- consumed = onedge
-
- data._tick_fill(force=True) # update
-
- if consumed:
- self.bar.bupdate(data)
- if takinglate:
- self.bar.datetime = data.datetime[-1] + 0.000001
-
- # if onedge or (checkbarover and self._checkbarover)
- cond = onedge
- if not cond: # original is or, if true it would suffice
- if docheckover:
- cond = self._checkbarover(data, fromcheck=fromcheck)
- if cond:
- if not onedge and self.doadjusttime: # insert tick with adjtime
- adjusted = self._adjusttime(greater=True)
- if adjusted:
- ago = 0 if (consumed or fromcheck) else -1
- # Update to the point right before the new data
- data._updatebar(self.bar.lvalues(), forward=False, ago=ago)
-
- if not fromcheck:
- if not consumed:
- # Reopen bar with real new data and save data to queue
- self.bar.bupdate(data, reopen=True)
- # erase is True, but the tick will not be seen below
- # and therefore no need to mark as 1st
- data._save2stack(erase=True, force=True)
- else:
- self.bar.bstart(maxdate=True)
- self._firstbar = True # next is first
- else: # from check
- # fromcheck or consumed have forced delivery, reopen
- self.bar.bstart(maxdate=True)
- self._firstbar = True # next is first
- if adjusted:
- # after adjusting need to redeliver if this was a check
- data._save2stack(erase=True, force=True)
-
- elif not fromcheck:
- if not consumed:
- # Data already "forwarded" and we replay to new bar
- # No need to go backwards. simply reopen internal cache
- self.bar.bupdate(data, reopen=True)
- else:
- # compression only, used data to update bar, hence remove
- # from stream, update existing data, reopen bar
- if not self._firstbar: # only discard data if not firstbar
- data.backwards(force=True)
- data._updatebar(self.bar.lvalues(), forward=False, ago=0)
- self.bar.bstart(maxdate=True)
- self._firstbar = True # make sure next tick moves forward
-
- elif not fromcheck:
- # not over, update, remove new entry, deliver
- if not consumed:
- self.bar.bupdate(data)
-
- if not self._firstbar: # only discard data if not firstbar
- data.backwards(force=True)
-
- data._updatebar(self.bar.lvalues(), forward=False, ago=0)
- self._firstbar = False
-
- return False # the existing bar can be processed by the system
-
-
-class ResamplerTicks(Resampler):
- params = (('timeframe', TimeFrame.Ticks),)
-
-
-class ResamplerSeconds(Resampler):
- params = (('timeframe', TimeFrame.Seconds),)
-
-
-class ResamplerMinutes(Resampler):
- params = (('timeframe', TimeFrame.Minutes),)
-
-
-class ResamplerDaily(Resampler):
- params = (('timeframe', TimeFrame.Days),)
-
-
-class ResamplerWeekly(Resampler):
- params = (('timeframe', TimeFrame.Weeks),)
-
-
-class ResamplerMonthly(Resampler):
- params = (('timeframe', TimeFrame.Months),)
-
-
-class ResamplerYearly(Resampler):
- params = (('timeframe', TimeFrame.Years),)
-
-
-class ReplayerTicks(Replayer):
- params = (('timeframe', TimeFrame.Ticks),)
-
-
-class ReplayerSeconds(Replayer):
- params = (('timeframe', TimeFrame.Seconds),)
-
-
-class ReplayerMinutes(Replayer):
- params = (('timeframe', TimeFrame.Minutes),)
-
-
-class ReplayerDaily(Replayer):
- params = (('timeframe', TimeFrame.Days),)
-
-
-class ReplayerWeekly(Replayer):
- params = (('timeframe', TimeFrame.Weeks),)
-
-
-class ReplayerMonthly(Replayer):
- params = (('timeframe', TimeFrame.Months),)
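These Resampler/Replayer subclasses are normally attached through Cerebro rather than instantiated directly. A minimal sketch, assuming a 1-minute CSV feed named bars.csv; the bar2edge/adjbartime/rightedge keywords are the params documented above:

import backtrader as bt

cerebro = bt.Cerebro()
data = bt.feeds.GenericCSVData(dataname='bars.csv',
                               timeframe=bt.TimeFrame.Minutes, compression=1)

# resample 1-minute bars into 5-minute bars aligned to the right edge of each boundary
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=5,
                     bar2edge=True, adjbartime=True, rightedge=True)

# or replay them instead, so each 5-minute bar is rebuilt piece by piece as it forms:
# cerebro.replaydata(data, timeframe=bt.TimeFrame.Minutes, compression=5)

cerebro.run()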
diff --git a/spaces/Linaqruf/Animagine-XL/lora_diffusers.py b/spaces/Linaqruf/Animagine-XL/lora_diffusers.py
deleted file mode 100644
index a265b91f63c6541f6afdc69d5bc56f21a9eca432..0000000000000000000000000000000000000000
--- a/spaces/Linaqruf/Animagine-XL/lora_diffusers.py
+++ /dev/null
@@ -1,539 +0,0 @@
-"""
-LoRA module for Diffusers
-==========================
-
-This file works independently and is designed to operate with Diffusers.
-
-Credits
--------
-- Modified from: https://github.com/vladmandic/automatic/blob/master/modules/lora_diffusers.py
-- Originally from: https://github.com/kohya-ss/sd-scripts/blob/sdxl/networks/lora_diffusers.py
-"""
-
-import bisect
-import math
-import random
-from typing import Any, Dict, List, Mapping, Optional, Union
-from diffusers import UNet2DConditionModel
-import numpy as np
-from tqdm import tqdm
-import diffusers.models.lora as diffusers_lora
-from transformers import CLIPTextModel
-import torch
-
-
-def make_unet_conversion_map() -> Dict[str, str]:
- unet_conversion_map_layer = []
-
- for i in range(3): # num_blocks is 3 in sdxl
- # loop over downblocks/upblocks
- for j in range(2):
- # loop over resnets/attentions for downblocks
- hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
- sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
- unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
-
- if i < 3:
- # no attention layers in down_blocks.3
- hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
- sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
- unet_conversion_map_layer.append(
- (sd_down_atn_prefix, hf_down_atn_prefix)
- )
-
- for j in range(3):
- # loop over resnets/attentions for upblocks
- hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
- sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
- unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
-
- # if i > 0: commentout for sdxl
- # no attention layers in up_blocks.0
- hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
- sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
- unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
-
- if i < 3:
- # no downsample in down_blocks.3
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
- sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
- unet_conversion_map_layer.append(
- (sd_downsample_prefix, hf_downsample_prefix)
- )
-
- # no upsample in up_blocks.3
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
- sd_upsample_prefix = f"output_blocks.{3*i + 2}.{2}." # change for sdxl
- unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
-
- hf_mid_atn_prefix = "mid_block.attentions.0."
- sd_mid_atn_prefix = "middle_block.1."
- unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))
-
- for j in range(2):
- hf_mid_res_prefix = f"mid_block.resnets.{j}."
- sd_mid_res_prefix = f"middle_block.{2*j}."
- unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
-
- unet_conversion_map_resnet = [
- # (stable-diffusion, HF Diffusers)
- ("in_layers.0.", "norm1."),
- ("in_layers.2.", "conv1."),
- ("out_layers.0.", "norm2."),
- ("out_layers.3.", "conv2."),
- ("emb_layers.1.", "time_emb_proj."),
- ("skip_connection.", "conv_shortcut."),
- ]
-
- unet_conversion_map = []
- for sd, hf in unet_conversion_map_layer:
- if "resnets" in hf:
- for sd_res, hf_res in unet_conversion_map_resnet:
- unet_conversion_map.append((sd + sd_res, hf + hf_res))
- else:
- unet_conversion_map.append((sd, hf))
-
- for j in range(2):
- hf_time_embed_prefix = f"time_embedding.linear_{j+1}."
- sd_time_embed_prefix = f"time_embed.{j*2}."
- unet_conversion_map.append((sd_time_embed_prefix, hf_time_embed_prefix))
-
- for j in range(2):
- hf_label_embed_prefix = f"add_embedding.linear_{j+1}."
- sd_label_embed_prefix = f"label_emb.0.{j*2}."
- unet_conversion_map.append((sd_label_embed_prefix, hf_label_embed_prefix))
-
- unet_conversion_map.append(("input_blocks.0.0.", "conv_in."))
- unet_conversion_map.append(("out.0.", "conv_norm_out."))
- unet_conversion_map.append(("out.2.", "conv_out."))
-
- sd_hf_conversion_map = {
- sd.replace(".", "_")[:-1]: hf.replace(".", "_")[:-1]
- for sd, hf in unet_conversion_map
- }
- return sd_hf_conversion_map
-
-
-UNET_CONVERSION_MAP = make_unet_conversion_map()
-
-
-class LoRAModule(torch.nn.Module):
- """
- replaces forward method of the original Linear, instead of replacing the original Linear module.
- """
-
- def __init__(
- self,
- lora_name,
- org_module: torch.nn.Module,
- multiplier=1.0,
- lora_dim=4,
- alpha=1,
- ):
- """if alpha == 0 or None, alpha is rank (no scaling)."""
- super().__init__()
- self.lora_name = lora_name
-
- if isinstance(
- org_module, diffusers_lora.LoRACompatibleConv
- ): # Modified to support Diffusers>=0.19.2
- in_dim = org_module.in_channels
- out_dim = org_module.out_channels
- else:
- in_dim = org_module.in_features
- out_dim = org_module.out_features
-
- self.lora_dim = lora_dim
-
- if isinstance(
- org_module, diffusers_lora.LoRACompatibleConv
- ): # Modified to support Diffusers>=0.19.2
- kernel_size = org_module.kernel_size
- stride = org_module.stride
- padding = org_module.padding
- self.lora_down = torch.nn.Conv2d(
- in_dim, self.lora_dim, kernel_size, stride, padding, bias=False
- )
- self.lora_up = torch.nn.Conv2d(
- self.lora_dim, out_dim, (1, 1), (1, 1), bias=False
- )
- else:
- self.lora_down = torch.nn.Linear(in_dim, self.lora_dim, bias=False)
- self.lora_up = torch.nn.Linear(self.lora_dim, out_dim, bias=False)
-
- if isinstance(alpha, torch.Tensor):
- alpha = alpha.detach().float().numpy() # without casting, bf16 causes error
- alpha = self.lora_dim if alpha is None or alpha == 0 else alpha
- self.scale = alpha / self.lora_dim
- self.register_buffer(
- "alpha", torch.tensor(alpha)
- ) # not included in gradient calculation
-
- # same as microsoft's
- torch.nn.init.kaiming_uniform_(self.lora_down.weight, a=math.sqrt(5))
- torch.nn.init.zeros_(self.lora_up.weight)
-
- self.multiplier = multiplier
- self.org_module = [org_module]
- self.enabled = True
- self.network: LoRANetwork = None
- self.org_forward = None
-
- # override org_module's forward method
- def apply_to(self, multiplier=None):
- if multiplier is not None:
- self.multiplier = multiplier
- if self.org_forward is None:
- self.org_forward = self.org_module[0].forward
- self.org_module[0].forward = self.forward
-
- # restore org_module's forward method
- def unapply_to(self):
- if self.org_forward is not None:
- self.org_module[0].forward = self.org_forward
-
- # forward with lora
- def forward(self, x):
- if not self.enabled:
- return self.org_forward(x)
- return (
- self.org_forward(x)
- + self.lora_up(self.lora_down(x)) * self.multiplier * self.scale
- )
-
- def set_network(self, network):
- self.network = network
-
- # merge lora weight to org weight
- def merge_to(self, multiplier=1.0):
- # get lora weight
- lora_weight = self.get_weight(multiplier)
-
- # get org weight
- org_sd = self.org_module[0].state_dict()
- org_weight = org_sd["weight"]
- weight = org_weight + lora_weight.to(org_weight.device, dtype=org_weight.dtype)
-
- # set weight to org_module
- org_sd["weight"] = weight
- self.org_module[0].load_state_dict(org_sd)
-
- # restore org weight from lora weight
- def restore_from(self, multiplier=1.0):
- # get lora weight
- lora_weight = self.get_weight(multiplier)
-
- # get org weight
- org_sd = self.org_module[0].state_dict()
- org_weight = org_sd["weight"]
- weight = org_weight - lora_weight.to(org_weight.device, dtype=org_weight.dtype)
-
- # set weight to org_module
- org_sd["weight"] = weight
- self.org_module[0].load_state_dict(org_sd)
-
- # return lora weight
- def get_weight(self, multiplier=None):
- if multiplier is None:
- multiplier = self.multiplier
-
- # get up/down weight from module
- up_weight = self.lora_up.weight.to(torch.float)
- down_weight = self.lora_down.weight.to(torch.float)
-
- # pre-calculated weight
- if len(down_weight.size()) == 2:
- # linear
- weight = self.multiplier * (up_weight @ down_weight) * self.scale
- elif down_weight.size()[2:4] == (1, 1):
- # conv2d 1x1
- weight = (
- self.multiplier
- * (up_weight.squeeze(3).squeeze(2) @ down_weight.squeeze(3).squeeze(2))
- .unsqueeze(2)
- .unsqueeze(3)
- * self.scale
- )
- else:
- # conv2d 3x3
- conved = torch.nn.functional.conv2d(
- down_weight.permute(1, 0, 2, 3), up_weight
- ).permute(1, 0, 2, 3)
- weight = self.multiplier * conved * self.scale
-
- return weight
-
-
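# Toy self-check of the LoRA update used above (a sketch, not part of the module):
# for a Linear layer, hooking the forward (apply_to) and baking the delta into the
# weight (merge_to) should agree, since both apply W + multiplier * (up @ down) * alpha / lora_dim.
import torch

lin = torch.nn.Linear(8, 8, bias=False)
lora = LoRAModule("toy", lin, multiplier=1.0, lora_dim=4, alpha=2)
torch.nn.init.normal_(lora.lora_up.weight)   # normally zero-initialized; non-zero so the delta shows

x = torch.randn(1, 8)
lora.apply_to()
y_hooked = lin(x)          # forward is now routed through the LoRA module
lora.unapply_to()
lora.merge_to()
y_merged = lin(x)          # delta merged directly into lin.weight
assert torch.allclose(y_hooked, y_merged, atol=1e-5)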
-# Create network from weights for inference, weights are not loaded here
-def create_network_from_weights(
- text_encoder: Union[CLIPTextModel, List[CLIPTextModel]],
- unet: UNet2DConditionModel,
- weights_sd: Dict,
- multiplier: float = 1.0,
-):
- # get dim/alpha mapping
- modules_dim = {}
- modules_alpha = {}
- for key, value in weights_sd.items():
- if "." not in key:
- continue
-
- lora_name = key.split(".")[0]
- if "alpha" in key:
- modules_alpha[lora_name] = value
- elif "lora_down" in key:
- dim = value.size()[0]
- modules_dim[lora_name] = dim
- # print(lora_name, value.size(), dim)
-
- # support old LoRA without alpha
- for key in modules_dim.keys():
- if key not in modules_alpha:
- modules_alpha[key] = modules_dim[key]
-
- return LoRANetwork(
- text_encoder,
- unet,
- multiplier=multiplier,
- modules_dim=modules_dim,
- modules_alpha=modules_alpha,
- )
-
-
-def merge_lora_weights(pipe, weights_sd: Dict, multiplier: float = 1.0):
- text_encoders = (
- [pipe.text_encoder, pipe.text_encoder_2]
- if hasattr(pipe, "text_encoder_2")
- else [pipe.text_encoder]
- )
- unet = pipe.unet
-
- lora_network = create_network_from_weights(
- text_encoders, unet, weights_sd, multiplier=multiplier
- )
- lora_network.load_state_dict(weights_sd)
- lora_network.merge_to(multiplier=multiplier)
-
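# Short usage sketch of merge_lora_weights above with a Diffusers SDXL pipeline;
# the model id and the LoRA filename are placeholders.
import torch
from safetensors.torch import load_file
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

weights_sd = load_file("my_style_lora.safetensors")   # LoRA checkpoint in kohya-ss key format
merge_lora_weights(pipe, weights_sd, multiplier=0.8)

image = pipe("1girl, masterpiece", num_inference_steps=28).images[0]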
-
-# simple version without block weight and training support
-class LoRANetwork(torch.nn.Module):
- UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel"]
- UNET_TARGET_REPLACE_MODULE_CONV2D_3X3 = [
- "ResnetBlock2D",
- "Downsample2D",
- "Upsample2D",
- ]
- TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPMLP"]
- LORA_PREFIX_UNET = "lora_unet"
- LORA_PREFIX_TEXT_ENCODER = "lora_te"
-
- # SDXL: must starts with LORA_PREFIX_TEXT_ENCODER
- LORA_PREFIX_TEXT_ENCODER1 = "lora_te1"
- LORA_PREFIX_TEXT_ENCODER2 = "lora_te2"
-
- def __init__(
- self,
- text_encoder: Union[List[CLIPTextModel], CLIPTextModel],
- unet: UNet2DConditionModel,
- multiplier: float = 1.0,
- modules_dim: Optional[Dict[str, int]] = None,
- modules_alpha: Optional[Dict[str, int]] = None,
- varbose: Optional[bool] = False,
- ) -> None:
- super().__init__()
- self.multiplier = multiplier
-
- print(f"create LoRA network from weights")
-
- # convert SDXL Stability AI's U-Net modules to Diffusers
- converted = self.convert_unet_modules(modules_dim, modules_alpha)
- if converted:
- print(
- f"converted {converted} Stability AI's U-Net LoRA modules to Diffusers (SDXL)"
- )
-
- # create module instances
- def create_modules(
- is_unet: bool,
- text_encoder_idx: Optional[int], # None, 1, 2
- root_module: torch.nn.Module,
- target_replace_modules: List[torch.nn.Module],
- ) -> List[LoRAModule]:
- prefix = (
- self.LORA_PREFIX_UNET
- if is_unet
- else (
- self.LORA_PREFIX_TEXT_ENCODER
- if text_encoder_idx is None
- else (
- self.LORA_PREFIX_TEXT_ENCODER1
- if text_encoder_idx == 1
- else self.LORA_PREFIX_TEXT_ENCODER2
- )
- )
- )
- loras = []
- skipped = []
- for name, module in root_module.named_modules():
- if module.__class__.__name__ in target_replace_modules:
- for child_name, child_module in module.named_modules():
- is_linear = isinstance(
- child_module,
- (torch.nn.Linear, diffusers_lora.LoRACompatibleLinear),
- ) # Modified to support Diffusers>=0.19.2
- is_conv2d = isinstance(
- child_module,
- (torch.nn.Conv2d, diffusers_lora.LoRACompatibleConv),
- ) # Modified to support Diffusers>=0.19.2
-
- if is_linear or is_conv2d:
- lora_name = prefix + "." + name + "." + child_name
- lora_name = lora_name.replace(".", "_")
-
- if lora_name not in modules_dim:
- # print(f"skipped {lora_name} (not found in modules_dim)")
- skipped.append(lora_name)
- continue
-
- dim = modules_dim[lora_name]
- alpha = modules_alpha[lora_name]
- lora = LoRAModule(
- lora_name,
- child_module,
- self.multiplier,
- dim,
- alpha,
- )
- loras.append(lora)
- return loras, skipped
-
- text_encoders = text_encoder if type(text_encoder) == list else [text_encoder]
-
- # create LoRA for text encoder
-        # 毎回すべてのモジュールを作るのは無駄なので要検討 / creating all modules every time is wasteful and should be reconsidered
- self.text_encoder_loras: List[LoRAModule] = []
- skipped_te = []
- for i, text_encoder in enumerate(text_encoders):
- if len(text_encoders) > 1:
- index = i + 1
- else:
- index = None
-
- text_encoder_loras, skipped = create_modules(
- False,
- index,
- text_encoder,
- LoRANetwork.TEXT_ENCODER_TARGET_REPLACE_MODULE,
- )
- self.text_encoder_loras.extend(text_encoder_loras)
- skipped_te += skipped
- print(f"create LoRA for Text Encoder: {len(self.text_encoder_loras)} modules.")
- if len(skipped_te) > 0:
- print(f"skipped {len(skipped_te)} modules because of missing weight.")
-
- # extend U-Net target modules to include Conv2d 3x3
- target_modules = (
- LoRANetwork.UNET_TARGET_REPLACE_MODULE
- + LoRANetwork.UNET_TARGET_REPLACE_MODULE_CONV2D_3X3
- )
-
- self.unet_loras: List[LoRAModule]
- self.unet_loras, skipped_un = create_modules(True, None, unet, target_modules)
- print(f"create LoRA for U-Net: {len(self.unet_loras)} modules.")
- if len(skipped_un) > 0:
- print(f"skipped {len(skipped_un)} modules because of missing weight.")
-
- # assertion
- names = set()
- for lora in self.text_encoder_loras + self.unet_loras:
- names.add(lora.lora_name)
- for lora_name in modules_dim.keys():
- assert (
- lora_name in names
- ), f"{lora_name} is not found in created LoRA modules."
-
-        # register each LoRA module so that load_state_dict works
- for lora in self.text_encoder_loras + self.unet_loras:
- self.add_module(lora.lora_name, lora)
-
- # SDXL: convert SDXL Stability AI's U-Net modules to Diffusers
- def convert_unet_modules(self, modules_dim, modules_alpha):
- converted_count = 0
- not_converted_count = 0
-
- map_keys = list(UNET_CONVERSION_MAP.keys())
- map_keys.sort()
-
- for key in list(modules_dim.keys()):
- if key.startswith(LoRANetwork.LORA_PREFIX_UNET + "_"):
- search_key = key.replace(LoRANetwork.LORA_PREFIX_UNET + "_", "")
- position = bisect.bisect_right(map_keys, search_key)
- map_key = map_keys[position - 1]
- if search_key.startswith(map_key):
- new_key = key.replace(map_key, UNET_CONVERSION_MAP[map_key])
- modules_dim[new_key] = modules_dim[key]
- modules_alpha[new_key] = modules_alpha[key]
- del modules_dim[key]
- del modules_alpha[key]
- converted_count += 1
- else:
- not_converted_count += 1
- assert (
- converted_count == 0 or not_converted_count == 0
- ), f"some modules are not converted: {converted_count} converted, {not_converted_count} not converted"
- return converted_count
-
- def set_multiplier(self, multiplier):
- self.multiplier = multiplier
- for lora in self.text_encoder_loras + self.unet_loras:
- lora.multiplier = self.multiplier
-
- def apply_to(self, multiplier=1.0, apply_text_encoder=True, apply_unet=True):
- if apply_text_encoder:
- print("enable LoRA for text encoder")
- for lora in self.text_encoder_loras:
- lora.apply_to(multiplier)
- if apply_unet:
- print("enable LoRA for U-Net")
- for lora in self.unet_loras:
- lora.apply_to(multiplier)
-
- def unapply_to(self):
- for lora in self.text_encoder_loras + self.unet_loras:
- lora.unapply_to()
-
- def merge_to(self, multiplier=1.0):
- print("merge LoRA weights to original weights")
- for lora in tqdm(self.text_encoder_loras + self.unet_loras):
- lora.merge_to(multiplier)
- print(f"weights are merged")
-
- def restore_from(self, multiplier=1.0):
- print("restore LoRA weights from original weights")
- for lora in tqdm(self.text_encoder_loras + self.unet_loras):
- lora.restore_from(multiplier)
- print(f"weights are restored")
-
- def load_state_dict(self, state_dict: Mapping[str, Any], strict: bool = True):
- # convert SDXL Stability AI's state dict to Diffusers' based state dict
- map_keys = list(UNET_CONVERSION_MAP.keys()) # prefix of U-Net modules
- map_keys.sort()
- for key in list(state_dict.keys()):
- if key.startswith(LoRANetwork.LORA_PREFIX_UNET + "_"):
- search_key = key.replace(LoRANetwork.LORA_PREFIX_UNET + "_", "")
- position = bisect.bisect_right(map_keys, search_key)
- map_key = map_keys[position - 1]
- if search_key.startswith(map_key):
- new_key = key.replace(map_key, UNET_CONVERSION_MAP[map_key])
- state_dict[new_key] = state_dict[key]
- del state_dict[key]
-
- # in case of V2, some weights have different shape, so we need to convert them
- # because V2 LoRA is based on U-Net created by use_linear_projection=False
- my_state_dict = self.state_dict()
- for key in state_dict.keys():
- if state_dict[key].size() != my_state_dict[key].size():
- # print(f"convert {key} from {state_dict[key].size()} to {my_state_dict[key].size()}")
- state_dict[key] = state_dict[key].view(my_state_dict[key].size())
-
- return super().load_state_dict(state_dict, strict)
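For orientation, a plausible way to drive merge_lora_weights defined above; the checkpoint file name, model id and multiplier below are placeholders, not values from this repository:

# Hypothetical usage sketch; file name, model id and multiplier are placeholders.
from safetensors.torch import load_file
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
weights_sd = load_file("my_lora.safetensors")          # LoRA state dict in kohya-ss layout
merge_lora_weights(pipe, weights_sd, multiplier=0.8)   # bake the LoRA into the pipeline weights
image = pipe("a photo of a cat").images[0]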
diff --git a/spaces/Lippppxy/AiAnimeVoice/models.py b/spaces/Lippppxy/AiAnimeVoice/models.py
deleted file mode 100644
index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000
--- a/spaces/Lippppxy/AiAnimeVoice/models.py
+++ /dev/null
@@ -1,533 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
-        filter_channels = in_channels  # this override needs to be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
-        assert self.n_speakers > 0, "n_speakers has to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
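For context, a minimal sketch of driving SynthesizerTrn at inference time; the hyperparameters and the phoneme-id input below are illustrative placeholders rather than values taken from this file:

# Hypothetical inference sketch; hyperparameters and input ids are placeholders.
import torch

net_g = SynthesizerTrn(
    n_vocab=178, spec_channels=513, segment_size=32,
    inter_channels=192, hidden_channels=192, filter_channels=768,
    n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1,
    resblock="1",
    resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[8, 8, 2, 2],
    upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
).eval()

x = torch.LongTensor([[1, 5, 9, 2]])        # phoneme ids from some text frontend
x_lengths = torch.LongTensor([x.size(1)])
with torch.no_grad():
    audio, *_ = net_g.infer(x, x_lengths, noise_scale=0.667, length_scale=1.0)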
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/ui/voice/index.tsx b/spaces/Makiing/coolb-in-gtest/src/components/ui/voice/index.tsx
deleted file mode 100644
index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/ui/voice/index.tsx
+++ /dev/null
@@ -1,28 +0,0 @@
-import './index.scss'
-
-export interface VoiceProps extends CSSPropertyRule {
- num?: number;
- duration?: number;
-}
-export default function Voice({ duration = 400, num = 7, ...others }) {
- return (
-
"
-)
-
-examples = [
- ["samples/李云龙1.wav", "samples/李云龙2.wav"],
- ["samples/马保国1.wav", "samples/马保国2.wav"],
- ["samples/周杰伦1.wav", "samples/周杰伦2.wav"],
- ["samples/海绵宝宝1.wav", "samples/派大星.wav"],
- ["samples/海绵宝宝1.wav", "samples/海绵宝宝2.wav"],
- ["samples/周星驰.wav", "samples/吴孟达.wav"]]
-
-interface = gr.Interface(
- fn=voiceRecognition,
- inputs=inputs,
- outputs=output,
- title=title,
- description=description,
- examples=examples,
- examples_per_page=3,
- article=article,
- enable_queue=True)
-interface.launch(debug=True,share=True)
\ No newline at end of file
diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/utils/autocast.py b/spaces/Yudha515/Rvc-Models/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
-    Allows you to enable and disable autocast. This is especially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
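A short usage sketch of TorchAutocast as a context manager; the device and dtype here are illustrative and assume a CUDA machine:

# Hypothetical usage; device and dtype are illustrative.
import torch

model = torch.nn.Linear(16, 16).cuda()
x = torch.randn(4, 16, device="cuda")

with TorchAutocast(enabled=True, device_type="cuda", dtype=torch.float16):
    y_half = model(x)      # runs under torch.autocast

with TorchAutocast(enabled=False):
    y_full = model(x)      # wrapper is a no-op, full precision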
diff --git a/spaces/YueMafighting/mmpose-estimation/README.md b/spaces/YueMafighting/mmpose-estimation/README.md
deleted file mode 100644
index fffdd5ec34222ffdb2cf1269304140f0405099bb..0000000000000000000000000000000000000000
--- a/spaces/YueMafighting/mmpose-estimation/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: MMPose estimation
-emoji: 🏃
-colorFrom: pink
-colorTo: indigo
-python_version: 3.9.16
-sdk: gradio
-sdk_version: 3.28.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: test1444/test_mmpose
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Yunshansongbai/SVC-Nahida/resample.py b/spaces/Yunshansongbai/SVC-Nahida/resample.py
deleted file mode 100644
index 79f60bd7446885aa2e736b8e968bd8a827259db6..0000000000000000000000000000000000000000
--- a/spaces/Yunshansongbai/SVC-Nahida/resample.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-from scipy.io import wavfile
-from tqdm import tqdm
-
-def process(item):
- spkdir, wav_name, args = item
-    # speakers 's5', 'p280', 'p315' are excluded
- speaker = spkdir.replace("\\", "/").split("/")[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
-
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True)
-
- wav, sr = librosa.load(wav_path, sr=None)
- wav, _ = librosa.effects.trim(wav, top_db=20)
- peak = np.abs(wav).max()
- if peak > 1.0:
- wav = 0.98 * wav / peak
- wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2)
- wav2 /= max(wav2.max(), -wav2.min())
- save_name = wav_name
- save_path2 = os.path.join(args.out_dir2, speaker, save_name)
- wavfile.write(
- save_path2,
- args.sr2,
- (wav2 * np.iinfo(np.int16).max).astype(np.int16)
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr2", type=int, default=44100, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir")
- parser.add_argument("--out_dir2", type=str, default="./dataset/44k", help="path to target dir")
- args = parser.parse_args()
-    processes = 1
-    pool = Pool(processes=processes)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
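In practice the script is run as a whole, e.g. python resample.py --in_dir ./dataset_raw --out_dir2 ./dataset/44k --sr2 44100. A single-process sketch that calls process() directly (the speaker folder name below is a placeholder):

# Hypothetical single-process run of process(); "speaker0" is a placeholder folder name.
import argparse
import os

args = argparse.Namespace(sr2=44100, in_dir="./dataset_raw", out_dir2="./dataset/44k")
spk_dir = os.path.join(args.in_dir, "speaker0")
for wav_name in os.listdir(spk_dir):
    if wav_name.endswith(".wav"):
        process((spk_dir, wav_name, args))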
diff --git a/spaces/abdvl/datahub_qa_bot/docs/managed-datahub/release-notes/v_0_2_0.md b/spaces/abdvl/datahub_qa_bot/docs/managed-datahub/release-notes/v_0_2_0.md
deleted file mode 100644
index eace6d84cba6dacae1d70e5dbfaa4bc0a753faaf..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/managed-datahub/release-notes/v_0_2_0.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# v0.2.0
----
-
-Release Availability Date
----
-09 Feb 2023
-
-## Update Downtime
-
-During release installation the Elasticsearch indices will be reindexed to improve search capabilities. While the upgrade is in progress,
-DataHub will be set to a read-only mode. Once this operation is completed, the upgrade will proceed normally. Depending on index sizes and
-infrastructure, this process can take anywhere from 5 minutes to several hours; as a rough estimate, expect about 1 hour for every 2.3 million entities.
-
-
-## Release Changelog
----
-- Since `v0.1.73` these changes from OSS DataHub https://github.com/datahub-project/datahub/compare/36afdec3946df2fb4166ac27a89b933ced87d00e...v0.10.0 have been pulled in
- - Improved documentation editor
- - Filter lineage graphs based on time windows
- - Improvements in Search
- - Metadata Ingestion
- - Redshift: You can now extract lineage information from unload queries
- - PowerBI: Ingestion now maps Workspaces to DataHub Containers
- - BigQuery: You can now extract lineage metadata from the Catalog
- - Glue: Ingestion now uses table name as the human-readable name
-- SSO Preferred Algorithm Setting
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
deleted file mode 100644
index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-from .decode_head import BaseDecodeHead
-
-
-class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta):
- """Base class for cascade decode head used in
- :class:`CascadeEncoderDecoder."""
-
- def __init__(self, *args, **kwargs):
- super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs)
-
- @abstractmethod
- def forward(self, inputs, prev_output):
- """Placeholder of forward function."""
- pass
-
- def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg,
- train_cfg):
- """Forward function for training.
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- gt_semantic_seg (Tensor): Semantic segmentation masks
- used if the architecture supports semantic segmentation task.
- train_cfg (dict): The training config.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- seg_logits = self.forward(inputs, prev_output)
- losses = self.losses(seg_logits, gt_semantic_seg)
-
- return losses
-
- def forward_test(self, inputs, prev_output, img_metas, test_cfg):
- """Forward function for testing.
-
- Args:
- inputs (list[Tensor]): List of multi-level img features.
- prev_output (Tensor): The output of previous decode head.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- test_cfg (dict): The testing config.
-
- Returns:
- Tensor: Output segmentation map.
- """
- return self.forward(inputs, prev_output)
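As a rough illustration, a concrete subclass only has to provide forward; the attribute and helper names below (in_channels, channels, num_classes, _transform_inputs, cls_seg) come from BaseDecodeHead, while the fusion layer itself is just a sketch, not part of mmseg:

# Hypothetical minimal cascade head; the fusion conv is illustrative.
import torch
import torch.nn as nn


class TinyCascadeHead(BaseCascadeDecodeHead):

    def __init__(self, **kwargs):
        super(TinyCascadeHead, self).__init__(**kwargs)
        # fuse backbone features with the previous head's logits
        self.fuse = nn.Conv2d(self.in_channels + self.num_classes, self.channels, 1)

    def forward(self, inputs, prev_output):
        x = self._transform_inputs(inputs)                 # from BaseDecodeHead
        x = self.fuse(torch.cat([x, prev_output], dim=1))
        return self.cls_seg(x)                             # dropout + conv_seg from BaseDecodeHead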
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/non_local.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/non_local.py
deleted file mode 100644
index 92d00155ef275c1201ea66bba30470a1785cc5d7..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/non_local.py
+++ /dev/null
@@ -1,306 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from abc import ABCMeta
-
-import torch
-import torch.nn as nn
-
-from ..utils import constant_init, normal_init
-from .conv_module import ConvModule
-from .registry import PLUGIN_LAYERS
-
-
-class _NonLocalNd(nn.Module, metaclass=ABCMeta):
- """Basic Non-local module.
-
- This module is proposed in
- "Non-local Neural Networks"
- Paper reference: https://arxiv.org/abs/1711.07971
- Code reference: https://github.com/AlexHex7/Non-local_pytorch
-
- Args:
- in_channels (int): Channels of the input feature map.
- reduction (int): Channel reduction ratio. Default: 2.
- use_scale (bool): Whether to scale pairwise_weight by
- `1/sqrt(inter_channels)` when the mode is `embedded_gaussian`.
- Default: True.
- conv_cfg (None | dict): The config dict for convolution layers.
- If not specified, it will use `nn.Conv2d` for convolution layers.
- Default: None.
- norm_cfg (None | dict): The config dict for normalization layers.
- Default: None. (This parameter is only applicable to conv_out.)
- mode (str): Options are `gaussian`, `concatenation`,
- `embedded_gaussian` and `dot_product`. Default: embedded_gaussian.
- """
-
- def __init__(self,
- in_channels,
- reduction=2,
- use_scale=True,
- conv_cfg=None,
- norm_cfg=None,
- mode='embedded_gaussian',
- **kwargs):
- super(_NonLocalNd, self).__init__()
- self.in_channels = in_channels
- self.reduction = reduction
- self.use_scale = use_scale
- self.inter_channels = max(in_channels // reduction, 1)
- self.mode = mode
-
- if mode not in [
- 'gaussian', 'embedded_gaussian', 'dot_product', 'concatenation'
- ]:
- raise ValueError("Mode should be in 'gaussian', 'concatenation', "
- f"'embedded_gaussian' or 'dot_product', but got "
- f'{mode} instead.')
-
- # g, theta, phi are defaulted as `nn.ConvNd`.
- # Here we use ConvModule for potential usage.
- self.g = ConvModule(
- self.in_channels,
- self.inter_channels,
- kernel_size=1,
- conv_cfg=conv_cfg,
- act_cfg=None)
- self.conv_out = ConvModule(
- self.inter_channels,
- self.in_channels,
- kernel_size=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- if self.mode != 'gaussian':
- self.theta = ConvModule(
- self.in_channels,
- self.inter_channels,
- kernel_size=1,
- conv_cfg=conv_cfg,
- act_cfg=None)
- self.phi = ConvModule(
- self.in_channels,
- self.inter_channels,
- kernel_size=1,
- conv_cfg=conv_cfg,
- act_cfg=None)
-
- if self.mode == 'concatenation':
- self.concat_project = ConvModule(
- self.inter_channels * 2,
- 1,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=False,
- act_cfg=dict(type='ReLU'))
-
- self.init_weights(**kwargs)
-
- def init_weights(self, std=0.01, zeros_init=True):
- if self.mode != 'gaussian':
- for m in [self.g, self.theta, self.phi]:
- normal_init(m.conv, std=std)
- else:
- normal_init(self.g.conv, std=std)
- if zeros_init:
- if self.conv_out.norm_cfg is None:
- constant_init(self.conv_out.conv, 0)
- else:
- constant_init(self.conv_out.norm, 0)
- else:
- if self.conv_out.norm_cfg is None:
- normal_init(self.conv_out.conv, std=std)
- else:
- normal_init(self.conv_out.norm, std=std)
-
- def gaussian(self, theta_x, phi_x):
- # NonLocal1d pairwise_weight: [N, H, H]
- # NonLocal2d pairwise_weight: [N, HxW, HxW]
- # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
- pairwise_weight = torch.matmul(theta_x, phi_x)
- pairwise_weight = pairwise_weight.softmax(dim=-1)
- return pairwise_weight
-
- def embedded_gaussian(self, theta_x, phi_x):
- # NonLocal1d pairwise_weight: [N, H, H]
- # NonLocal2d pairwise_weight: [N, HxW, HxW]
- # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
- pairwise_weight = torch.matmul(theta_x, phi_x)
- if self.use_scale:
- # theta_x.shape[-1] is `self.inter_channels`
- pairwise_weight /= theta_x.shape[-1]**0.5
- pairwise_weight = pairwise_weight.softmax(dim=-1)
- return pairwise_weight
-
- def dot_product(self, theta_x, phi_x):
- # NonLocal1d pairwise_weight: [N, H, H]
- # NonLocal2d pairwise_weight: [N, HxW, HxW]
- # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
- pairwise_weight = torch.matmul(theta_x, phi_x)
- pairwise_weight /= pairwise_weight.shape[-1]
- return pairwise_weight
-
- def concatenation(self, theta_x, phi_x):
- # NonLocal1d pairwise_weight: [N, H, H]
- # NonLocal2d pairwise_weight: [N, HxW, HxW]
- # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
- h = theta_x.size(2)
- w = phi_x.size(3)
- theta_x = theta_x.repeat(1, 1, 1, w)
- phi_x = phi_x.repeat(1, 1, h, 1)
-
- concat_feature = torch.cat([theta_x, phi_x], dim=1)
- pairwise_weight = self.concat_project(concat_feature)
- n, _, h, w = pairwise_weight.size()
- pairwise_weight = pairwise_weight.view(n, h, w)
- pairwise_weight /= pairwise_weight.shape[-1]
-
- return pairwise_weight
-
- def forward(self, x):
- # Assume `reduction = 1`, then `inter_channels = C`
- # or `inter_channels = C` when `mode="gaussian"`
-
- # NonLocal1d x: [N, C, H]
- # NonLocal2d x: [N, C, H, W]
- # NonLocal3d x: [N, C, T, H, W]
- n = x.size(0)
-
- # NonLocal1d g_x: [N, H, C]
- # NonLocal2d g_x: [N, HxW, C]
- # NonLocal3d g_x: [N, TxHxW, C]
- g_x = self.g(x).view(n, self.inter_channels, -1)
- g_x = g_x.permute(0, 2, 1)
-
- # NonLocal1d theta_x: [N, H, C], phi_x: [N, C, H]
- # NonLocal2d theta_x: [N, HxW, C], phi_x: [N, C, HxW]
- # NonLocal3d theta_x: [N, TxHxW, C], phi_x: [N, C, TxHxW]
- if self.mode == 'gaussian':
- theta_x = x.view(n, self.in_channels, -1)
- theta_x = theta_x.permute(0, 2, 1)
- if self.sub_sample:
- phi_x = self.phi(x).view(n, self.in_channels, -1)
- else:
- phi_x = x.view(n, self.in_channels, -1)
- elif self.mode == 'concatenation':
- theta_x = self.theta(x).view(n, self.inter_channels, -1, 1)
- phi_x = self.phi(x).view(n, self.inter_channels, 1, -1)
- else:
- theta_x = self.theta(x).view(n, self.inter_channels, -1)
- theta_x = theta_x.permute(0, 2, 1)
- phi_x = self.phi(x).view(n, self.inter_channels, -1)
-
- pairwise_func = getattr(self, self.mode)
- # NonLocal1d pairwise_weight: [N, H, H]
- # NonLocal2d pairwise_weight: [N, HxW, HxW]
- # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
- pairwise_weight = pairwise_func(theta_x, phi_x)
-
- # NonLocal1d y: [N, H, C]
- # NonLocal2d y: [N, HxW, C]
- # NonLocal3d y: [N, TxHxW, C]
- y = torch.matmul(pairwise_weight, g_x)
- # NonLocal1d y: [N, C, H]
- # NonLocal2d y: [N, C, H, W]
- # NonLocal3d y: [N, C, T, H, W]
- y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels,
- *x.size()[2:])
-
- output = x + self.conv_out(y)
-
- return output
-
-
-class NonLocal1d(_NonLocalNd):
- """1D Non-local module.
-
- Args:
- in_channels (int): Same as `NonLocalND`.
- sub_sample (bool): Whether to apply max pooling after pairwise
- function (Note that the `sub_sample` is applied on spatial only).
- Default: False.
- conv_cfg (None | dict): Same as `NonLocalND`.
- Default: dict(type='Conv1d').
- """
-
- def __init__(self,
- in_channels,
- sub_sample=False,
- conv_cfg=dict(type='Conv1d'),
- **kwargs):
- super(NonLocal1d, self).__init__(
- in_channels, conv_cfg=conv_cfg, **kwargs)
-
- self.sub_sample = sub_sample
-
- if sub_sample:
- max_pool_layer = nn.MaxPool1d(kernel_size=2)
- self.g = nn.Sequential(self.g, max_pool_layer)
- if self.mode != 'gaussian':
- self.phi = nn.Sequential(self.phi, max_pool_layer)
- else:
- self.phi = max_pool_layer
-
-
-@PLUGIN_LAYERS.register_module()
-class NonLocal2d(_NonLocalNd):
- """2D Non-local module.
-
- Args:
- in_channels (int): Same as `NonLocalND`.
- sub_sample (bool): Whether to apply max pooling after pairwise
- function (Note that the `sub_sample` is applied on spatial only).
- Default: False.
- conv_cfg (None | dict): Same as `NonLocalND`.
- Default: dict(type='Conv2d').
- """
-
- _abbr_ = 'nonlocal_block'
-
- def __init__(self,
- in_channels,
- sub_sample=False,
- conv_cfg=dict(type='Conv2d'),
- **kwargs):
- super(NonLocal2d, self).__init__(
- in_channels, conv_cfg=conv_cfg, **kwargs)
-
- self.sub_sample = sub_sample
-
- if sub_sample:
- max_pool_layer = nn.MaxPool2d(kernel_size=(2, 2))
- self.g = nn.Sequential(self.g, max_pool_layer)
- if self.mode != 'gaussian':
- self.phi = nn.Sequential(self.phi, max_pool_layer)
- else:
- self.phi = max_pool_layer
-
-
-class NonLocal3d(_NonLocalNd):
- """3D Non-local module.
-
- Args:
- in_channels (int): Same as `NonLocalND`.
- sub_sample (bool): Whether to apply max pooling after pairwise
- function (Note that the `sub_sample` is applied on spatial only).
- Default: False.
- conv_cfg (None | dict): Same as `NonLocalND`.
- Default: dict(type='Conv3d').
- """
-
- def __init__(self,
- in_channels,
- sub_sample=False,
- conv_cfg=dict(type='Conv3d'),
- **kwargs):
- super(NonLocal3d, self).__init__(
- in_channels, conv_cfg=conv_cfg, **kwargs)
- self.sub_sample = sub_sample
-
- if sub_sample:
- max_pool_layer = nn.MaxPool3d(kernel_size=(1, 2, 2))
- self.g = nn.Sequential(self.g, max_pool_layer)
- if self.mode != 'gaussian':
- self.phi = nn.Sequential(self.phi, max_pool_layer)
- else:
- self.phi = max_pool_layer
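A quick usage sketch of the 2D variant above; the tensor sizes are illustrative:

# Hypothetical usage of NonLocal2d; sizes are illustrative.
import torch

block = NonLocal2d(in_channels=64, reduction=2, mode='embedded_gaussian', sub_sample=True)
x = torch.randn(2, 64, 32, 32)   # [N, C, H, W]
out = block(x)                   # residual output, same shape as the input
assert out.shape == x.shape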
diff --git a/spaces/abionchito/rvc-models/app-full.py b/spaces/abionchito/rvc-models/app-full.py
deleted file mode 100644
index 1ff3f7e415255b56edad6fa3ce8d4558b2a85b53..0000000000000000000000000000000000000000
--- a/spaces/abionchito/rvc-models/app-full.py
+++ /dev/null
@@ -1,250 +0,0 @@
-import os
-import json
-import argparse
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-import yt_dlp
-import ffmpeg
-import subprocess
-import sys
-import io
-import wave
-from datetime import datetime
-from fairseq import checkpoint_utils
-from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
-from vc_infer_pipeline import VC
-from config import (
- is_half,
- device
-)
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
-
-def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
- def vc_fn(
- input_audio,
- f0_up_key,
- f0_method,
- index_rate,
- tts_mode,
- tts_text,
- tts_voice
- ):
- try:
- if tts_mode:
- if len(tts_text) > 100 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- else:
- if args.files:
- audio, sr = librosa.load(input_audio, sr=16000, mono=True)
- else:
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if duration > 20 and limitation:
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- )
- print(
- f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- )
- return "Success", (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
-def cut_vocal_and_inst(yt_url):
- if yt_url != "":
- if not os.path.exists("/content/youtube_audio"):
- os.mkdir("/content/youtube_audio")
- ydl_opts = {
- 'format': 'bestaudio/best',
- 'postprocessors': [{
- 'key': 'FFmpegExtractAudio',
- 'preferredcodec': 'wav',
- }],
- "outtmpl": '/content/youtube_audio/audio',
- }
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- ydl.download([yt_url])
- yt_audio_path = "/content/youtube_audio/audio.wav"
- command = f"demucs --two-stems=vocals {yt_audio_path}"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return ("/content/rvc-models/separated/htdemucs/audio/vocals.wav", "/content/rvc-models/separated/htdemucs/audio/no_vocals.wav", yt_audio_path, "/content/rvc-models/separated/htdemucs/audio/vocals.wav")
-
-def combine_vocal_and_inst(audio_data, audio_volume):
- print(audio_data)
- if not os.path.exists("/content/result"):
- os.mkdir("/content/result")
- vocal_path = "/content/result/output.wav"
- inst_path = "/content/rvc-models/separated/htdemucs/audio/no_vocals.wav"
- output_path = "/content/result/combine.mp3"
- with wave.open(vocal_path, "w") as wave_file:
- wave_file.setnchannels(1)
- wave_file.setsampwidth(2)
- wave_file.setframerate(audio_data[0])
- wave_file.writeframes(audio_data[1].tobytes())
- command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}'
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- return output_path
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_to_tts_mode(tts_mode):
- if tts_mode:
- return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
- else:
- return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- parser.add_argument("--files", action="store_true", default=False, help="load audio from path")
- args, unknown = parser.parse_known_args()
- load_hubert()
- models = []
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with open("weights/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for name, info in models_info.items():
- if not info['enable']:
- continue
- title = info['title']
- author = info.get("author", None)
- cover = f"weights/{name}/{info['cover']}"
- index = f"weights/{name}/{info['feature_retrieval_library']}"
- npy = f"weights/{name}/{info['feature_file']}"
- cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- del net_g.enc_q
-        print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line it doesn't get cleaned up properly, really bizarre
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, device, is_half)
- models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
- with gr.Blocks() as app:
- gr.Markdown(
-            "# <center> RVC Models\n"
-            "## <center> The input audio should be clean and pure voice without background music.\n"
-            "### <center> More features will be added soon... \n"
- "\n\n"
- "[](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n\n"
- "[](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)"
- )
- with gr.Tabs():
- for (name, title, author, cover, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
-                        '<div align="center">'
-                        f'<div>{title}</div>\n'+
-                        (f'<div>Model author: {author}</div>' if author else "")+
-                        (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
-                        '</div>'
- )
- with gr.Row():
- if args.files:
- with gr.Column():
- vc_youtube = gr.Textbox(label="Youtube URL")
- vc_convert = gr.Button("Convert", variant="primary")
- vc_vocal_preview = gr.Audio(label="Vocal Preview")
- vc_inst_preview = gr.Audio(label="Instrumental Preview")
- vc_audio_preview = gr.Audio(label="Audio Preview")
- with gr.Column():
- if args.files:
- vc_input = gr.Textbox(label="Input audio path")
- else:
-                        vc_input = gr.Audio(label="Input audio" + (' (less than 20 seconds)' if limitation else ''))
- vc_transpose = gr.Number(label="Transpose", value=0)
- vc_f0method = gr.Radio(
- label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies",
- choices=["pm", "harvest"],
- value="pm",
- interactive=True,
- )
- vc_index_ratio = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- value=0.6,
- interactive=True,
- )
- tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False)
- tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- vc_submit = gr.Button("Generate", variant="primary")
- vc_output1 = gr.Textbox(label="Output Message")
- vc_output2 = gr.Audio(label="Output Audio")
- if args.files:
- with gr.Column():
- vc_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Vocal volume",
- value=5,
- interactive=True,
- step=1
- )
- vc_outputCombine = gr.Audio(label="Output Combined Audio")
- vc_combine = gr.Button("Combine",variant="primary")
- vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2])
- tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice])
- if args.files:
- vc_convert.click(cut_vocal_and_inst, vc_youtube, [vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input])
- vc_combine.click(combine_vocal_and_inst, [vc_output2, vc_volume], vc_outputCombine)
- app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share)
\ No newline at end of file
diff --git a/spaces/adirik/stylemc-demo/torch_utils/ops/fma.py b/spaces/adirik/stylemc-demo/torch_utils/ops/fma.py
deleted file mode 100644
index 2eeac58a626c49231e04122b93e321ada954c5d3..0000000000000000000000000000000000000000
--- a/spaces/adirik/stylemc-demo/torch_utils/ops/fma.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`."""
-
-import torch
-
-#----------------------------------------------------------------------------
-
-def fma(a, b, c): # => a * b + c
- return _FusedMultiplyAdd.apply(a, b, c)
-
-#----------------------------------------------------------------------------
-
-class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c
- @staticmethod
- def forward(ctx, a, b, c): # pylint: disable=arguments-differ
- out = torch.addcmul(c, a, b)
- ctx.save_for_backward(a, b)
- ctx.c_shape = c.shape
- return out
-
- @staticmethod
- def backward(ctx, dout): # pylint: disable=arguments-differ
- a, b = ctx.saved_tensors
- c_shape = ctx.c_shape
- da = None
- db = None
- dc = None
-
- if ctx.needs_input_grad[0]:
- da = _unbroadcast(dout * b, a.shape)
-
- if ctx.needs_input_grad[1]:
- db = _unbroadcast(dout * a, b.shape)
-
- if ctx.needs_input_grad[2]:
- dc = _unbroadcast(dout, c_shape)
-
- return da, db, dc
-
-#----------------------------------------------------------------------------
-
-def _unbroadcast(x, shape):
- extra_dims = x.ndim - len(shape)
- assert extra_dims >= 0
- dim = [i for i in range(x.ndim) if x.shape[i] > 1 and (i < extra_dims or shape[i - extra_dims] == 1)]
- if len(dim):
- x = x.sum(dim=dim, keepdim=True)
- if extra_dims:
- x = x.reshape(-1, *x.shape[extra_dims+1:])
- assert x.shape == shape
- return x
-
-#----------------------------------------------------------------------------
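A small check of the helper against the reference expression; shapes are illustrative:

# Hypothetical check that fma(a, b, c) matches a * b + c; shapes are illustrative.
import torch

a = torch.randn(4, 8, requires_grad=True)
b = torch.randn(4, 8, requires_grad=True)
c = torch.randn(1, 8, requires_grad=True)   # broadcast along the batch dimension

out = fma(a, b, c)
assert torch.allclose(out, a * b + c)
out.sum().backward()                        # gradients flow through the custom Function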
diff --git a/spaces/agutfraind/llmscanner/app_constants.py b/spaces/agutfraind/llmscanner/app_constants.py
deleted file mode 100644
index 873358de09365faeabf6221b5adeac165da09b7e..0000000000000000000000000000000000000000
--- a/spaces/agutfraind/llmscanner/app_constants.py
+++ /dev/null
@@ -1,11 +0,0 @@
-'''
-file for
-- canned prompts
-- constants (other than secrets)
-
-'''
-
-canned_questions = [
- "When was Paul Graham born?",
- "What was his first startup?"
-]
\ No newline at end of file
diff --git a/spaces/akhaliq/Detic/tools/get_coco_zeroshot_oriorder.py b/spaces/akhaliq/Detic/tools/get_coco_zeroshot_oriorder.py
deleted file mode 100644
index ed6748be1f2ed92741ea78f5a187f9838185a80e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Detic/tools/get_coco_zeroshot_oriorder.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import json
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--data_path', default='datasets/coco/annotations/instances_val2017_unseen_2.json')
- parser.add_argument('--cat_path', default='datasets/coco/annotations/instances_val2017.json')
- args = parser.parse_args()
- print('Loading', args.cat_path)
- cat = json.load(open(args.cat_path, 'r'))['categories']
-
- print('Loading', args.data_path)
- data = json.load(open(args.data_path, 'r'))
- data['categories'] = cat
- out_path = args.data_path[:-5] + '_oriorder.json'
- print('Saving to', out_path)
- json.dump(data, open(out_path, 'w'))
diff --git a/spaces/akhaliq/Real-ESRGAN/realesrgan/data/realesrgan_dataset.py b/spaces/akhaliq/Real-ESRGAN/realesrgan/data/realesrgan_dataset.py
deleted file mode 100644
index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Real-ESRGAN/realesrgan/data/realesrgan_dataset.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import os.path as osp
-import random
-import time
-import torch
-from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
-from basicsr.data.transforms import augment
-from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-from torch.utils import data as data
-
-
-@DATASET_REGISTRY.register()
-class RealESRGANDataset(data.Dataset):
- """Dataset used for Real-ESRGAN model:
- Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It loads gt (Ground-Truth) images, and augments them.
- It also generates blur kernels and sinc kernels for generating low-quality images.
-    Note that the low-quality images are processed in tensors on GPUs for faster processing.
-
- Args:
- opt (dict): Config for train datasets. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- meta_info (str): Path for meta information file.
- io_backend (dict): IO backend type and other kwarg.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
-            Please see more options in the code.
- """
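    # A plausible `opt` for this dataset, modeled on the public Real-ESRGAN training configs;
    # the exact values are illustrative and should be taken from the actual YAML config:
    #   opt = dict(
    #       dataroot_gt='datasets/DF2K', meta_info='datasets/DF2K/meta_info.txt',
    #       io_backend=dict(type='disk'), use_hflip=True, use_rot=False,
    #       blur_kernel_size=21,
    #       kernel_list=['iso', 'aniso', 'generalized_iso', 'generalized_aniso',
    #                    'plateau_iso', 'plateau_aniso'],
    #       kernel_prob=[0.45, 0.25, 0.12, 0.03, 0.12, 0.03],
    #       sinc_prob=0.1, blur_sigma=[0.2, 3], betag_range=[0.5, 4], betap_range=[1, 2],
    #       blur_kernel_size2=21,
    #       kernel_list2=['iso', 'aniso', 'generalized_iso', 'generalized_aniso',
    #                     'plateau_iso', 'plateau_aniso'],
    #       kernel_prob2=[0.45, 0.25, 0.12, 0.03, 0.12, 0.03],
    #       sinc_prob2=0.1, blur_sigma2=[0.2, 1.5], betag_range2=[0.5, 4], betap_range2=[1, 2],
    #       final_sinc_prob=0.8,
    #   )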
-
- def __init__(self, opt):
- super(RealESRGANDataset, self).__init__()
- self.opt = opt
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- self.gt_folder = opt['dataroot_gt']
-
- # file client (lmdb io backend)
- if self.io_backend_opt['type'] == 'lmdb':
- self.io_backend_opt['db_paths'] = [self.gt_folder]
- self.io_backend_opt['client_keys'] = ['gt']
- if not self.gt_folder.endswith('.lmdb'):
- raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
- with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
- self.paths = [line.split('.')[0] for line in fin]
- else:
- # disk backend with meta_info
- # Each line in the meta_info describes the relative path to an image
- with open(self.opt['meta_info']) as fin:
- paths = [line.strip().split(' ')[0] for line in fin]
- self.paths = [os.path.join(self.gt_folder, v) for v in paths]
-
- # blur settings for the first degradation
- self.blur_kernel_size = opt['blur_kernel_size']
- self.kernel_list = opt['kernel_list']
- self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability
- self.blur_sigma = opt['blur_sigma']
- self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels
- self.betap_range = opt['betap_range'] # betap used in plateau blur kernels
- self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters
-
- # blur settings for the second degradation
- self.blur_kernel_size2 = opt['blur_kernel_size2']
- self.kernel_list2 = opt['kernel_list2']
- self.kernel_prob2 = opt['kernel_prob2']
- self.blur_sigma2 = opt['blur_sigma2']
- self.betag_range2 = opt['betag_range2']
- self.betap_range2 = opt['betap_range2']
- self.sinc_prob2 = opt['sinc_prob2']
-
- # a final sinc filter
- self.final_sinc_prob = opt['final_sinc_prob']
-
- self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21
-        # TODO: kernel range is now hard-coded, should be in the config file
- self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect
- self.pulse_tensor[10, 10] = 1
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- # -------------------------------- Load gt images -------------------------------- #
- # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
- gt_path = self.paths[index]
- # avoid errors caused by high latency in reading files
- retry = 3
- while retry > 0:
- try:
- img_bytes = self.file_client.get(gt_path, 'gt')
- except (IOError, OSError) as e:
- logger = get_root_logger()
-                logger.warning(f'File client error: {e}, remaining retry times: {retry - 1}')
-                # pick a different file to read instead
-                index = random.randint(0, self.__len__() - 1)
- gt_path = self.paths[index]
- time.sleep(1) # sleep 1s for occasional server congestion
- else:
- break
- finally:
- retry -= 1
- img_gt = imfrombytes(img_bytes, float32=True)
-
- # -------------------- Do augmentation for training: flip, rotation -------------------- #
- img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
-
- # crop or pad to 400
- # TODO: 400 is hard-coded. You may change it accordingly
- h, w = img_gt.shape[0:2]
- crop_pad_size = 400
- # pad
- if h < crop_pad_size or w < crop_pad_size:
- pad_h = max(0, crop_pad_size - h)
- pad_w = max(0, crop_pad_size - w)
- img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
- # crop
- if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size:
- h, w = img_gt.shape[0:2]
- # randomly choose top and left coordinates
- top = random.randint(0, h - crop_pad_size)
- left = random.randint(0, w - crop_pad_size)
- img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...]
-
- # ------------------------ Generate kernels (used in the first degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob']:
- # this sinc filter setting is for kernels ranging from [7, 21]
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel = random_mixed_kernels(
- self.kernel_list,
- self.kernel_prob,
- kernel_size,
- self.blur_sigma,
- self.blur_sigma, [-math.pi, math.pi],
- self.betag_range,
- self.betap_range,
- noise_range=None)
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------ Generate kernels (used in the second degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob2']:
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel2 = random_mixed_kernels(
- self.kernel_list2,
- self.kernel_prob2,
- kernel_size,
- self.blur_sigma2,
- self.blur_sigma2, [-math.pi, math.pi],
- self.betag_range2,
- self.betap_range2,
- noise_range=None)
-
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------------------- the final sinc kernel ------------------------------------- #
- if np.random.uniform() < self.opt['final_sinc_prob']:
- kernel_size = random.choice(self.kernel_range)
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
- sinc_kernel = torch.FloatTensor(sinc_kernel)
- else:
- sinc_kernel = self.pulse_tensor
-
- # BGR to RGB, HWC to CHW, numpy to tensor
- img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0]
- kernel = torch.FloatTensor(kernel)
- kernel2 = torch.FloatTensor(kernel2)
-
- return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path}
- return return_d
-
- def __len__(self):
- return len(self.paths)
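
For reference, a minimal sketch of instantiating the dataset above with a disk backend. The option keys follow what `__init__` reads; the concrete paths and probability values are illustrative (loosely modelled on typical Real-ESRGAN training configs), not this project's actual settings, and assume the package is importable as laid out above:

```python
from torch.utils.data import DataLoader
from realesrgan.data.realesrgan_dataset import RealESRGANDataset

opt = {
    'dataroot_gt': 'datasets/DF2K',               # hypothetical GT folder
    'meta_info': 'datasets/DF2K/meta_info.txt',   # one relative image path per line
    'io_backend': {'type': 'disk'},
    'use_hflip': True,
    'use_rot': False,
    # first-degradation blur settings
    'blur_kernel_size': 21,
    'kernel_list': ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso'],
    'kernel_prob': [0.45, 0.25, 0.12, 0.03, 0.12, 0.03],
    'blur_sigma': [0.2, 3],
    'betag_range': [0.5, 4],
    'betap_range': [1, 2],
    'sinc_prob': 0.1,
    # second-degradation blur settings
    'blur_kernel_size2': 21,
    'kernel_list2': ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso'],
    'kernel_prob2': [0.45, 0.25, 0.12, 0.03, 0.12, 0.03],
    'blur_sigma2': [0.2, 1.5],
    'betag_range2': [0.5, 4],
    'betap_range2': [1, 2],
    'sinc_prob2': 0.1,
    'final_sinc_prob': 0.8,
}

dataset = RealESRGANDataset(opt)
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
batch = next(iter(loader))
# batch['gt'] is a (B, 3, 400, 400) RGB tensor; all kernels are padded to 21x21.
print(batch['gt'].shape, batch['kernel1'].shape, batch['sinc_kernel'].shape)
```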
diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py b/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py
deleted file mode 100644
index 1d973d9b8b9ab547571abc5a3f5ea86226a25924..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from utils.display import *
-from utils.dsp import *
-
-
-class WaveRNN(nn.Module) :
- def __init__(self, hidden_size=896, quantisation=256) :
- super(WaveRNN, self).__init__()
-
- self.hidden_size = hidden_size
- self.split_size = hidden_size // 2
-
- # The main matmul
- self.R = nn.Linear(self.hidden_size, 3 * self.hidden_size, bias=False)
-
- # Output fc layers
- self.O1 = nn.Linear(self.split_size, self.split_size)
- self.O2 = nn.Linear(self.split_size, quantisation)
- self.O3 = nn.Linear(self.split_size, self.split_size)
- self.O4 = nn.Linear(self.split_size, quantisation)
-
- # Input fc layers
- self.I_coarse = nn.Linear(2, 3 * self.split_size, bias=False)
- self.I_fine = nn.Linear(3, 3 * self.split_size, bias=False)
-
- # biases for the gates
- self.bias_u = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_r = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_e = nn.Parameter(torch.zeros(self.hidden_size))
-
- # display num params
- self.num_params()
-
-
- def forward(self, prev_y, prev_hidden, current_coarse) :
-
- # Main matmul - the projection is split 3 ways
- R_hidden = self.R(prev_hidden)
-        R_u, R_r, R_e = torch.split(R_hidden, self.hidden_size, dim=1)
-
- # Project the prev input
- coarse_input_proj = self.I_coarse(prev_y)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project the prev input and current coarse sample
- fine_input = torch.cat([prev_y, current_coarse], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # concatenate for the gates
- I_u = torch.cat([I_coarse_u, I_fine_u], dim=1)
- I_r = torch.cat([I_coarse_r, I_fine_r], dim=1)
- I_e = torch.cat([I_coarse_e, I_fine_e], dim=1)
-
- # Compute all gates for coarse and fine
-        u = torch.sigmoid(R_u + I_u + self.bias_u)
-        r = torch.sigmoid(R_r + I_r + self.bias_r)
-        e = torch.tanh(r * R_e + I_e + self.bias_e)
- hidden = u * prev_hidden + (1. - u) * e
-
- # Split the hidden state
- hidden_coarse, hidden_fine = torch.split(hidden, self.split_size, dim=1)
-
- # Compute outputs
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
-
- return out_coarse, out_fine, hidden
-
-
- def generate(self, seq_len):
- with torch.no_grad():
- # First split up the biases for the gates
- b_coarse_u, b_fine_u = torch.split(self.bias_u, self.split_size)
- b_coarse_r, b_fine_r = torch.split(self.bias_r, self.split_size)
- b_coarse_e, b_fine_e = torch.split(self.bias_e, self.split_size)
-
- # Lists for the two output seqs
- c_outputs, f_outputs = [], []
-
- # Some initial inputs
- out_coarse = torch.LongTensor([0]).cuda()
- out_fine = torch.LongTensor([0]).cuda()
-
-            # We'll need a hidden state
- hidden = self.init_hidden()
-
- # Need a clock for display
- start = time.time()
-
- # Loop for generation
- for i in range(seq_len) :
-
- # Split into two hidden states
- hidden_coarse, hidden_fine = \
- torch.split(hidden, self.split_size, dim=1)
-
- # Scale and concat previous predictions
- out_coarse = out_coarse.unsqueeze(0).float() / 127.5 - 1.
- out_fine = out_fine.unsqueeze(0).float() / 127.5 - 1.
- prev_outputs = torch.cat([out_coarse, out_fine], dim=1)
-
- # Project input
- coarse_input_proj = self.I_coarse(prev_outputs)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project hidden state and split 6 ways
- R_hidden = self.R(hidden)
-                R_coarse_u, R_fine_u, \
- R_coarse_r, R_fine_r, \
- R_coarse_e, R_fine_e = torch.split(R_hidden, self.split_size, dim=1)
-
- # Compute the coarse gates
-                u = torch.sigmoid(R_coarse_u + I_coarse_u + b_coarse_u)
-                r = torch.sigmoid(R_coarse_r + I_coarse_r + b_coarse_r)
-                e = torch.tanh(r * R_coarse_e + I_coarse_e + b_coarse_e)
- hidden_coarse = u * hidden_coarse + (1. - u) * e
-
- # Compute the coarse output
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- posterior = F.softmax(out_coarse, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_coarse = distrib.sample()
- c_outputs.append(out_coarse)
-
- # Project the [prev outputs and predicted coarse sample]
- coarse_pred = out_coarse.float() / 127.5 - 1.
- fine_input = torch.cat([prev_outputs, coarse_pred.unsqueeze(0)], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # Compute the fine gates
-                u = torch.sigmoid(R_fine_u + I_fine_u + b_fine_u)
-                r = torch.sigmoid(R_fine_r + I_fine_r + b_fine_r)
-                e = torch.tanh(r * R_fine_e + I_fine_e + b_fine_e)
- hidden_fine = u * hidden_fine + (1. - u) * e
-
- # Compute the fine output
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
- posterior = F.softmax(out_fine, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_fine = distrib.sample()
- f_outputs.append(out_fine)
-
- # Put the hidden state back together
- hidden = torch.cat([hidden_coarse, hidden_fine], dim=1)
-
- # Display progress
- speed = (i + 1) / (time.time() - start)
- stream('Gen: %i/%i -- Speed: %i', (i + 1, seq_len, speed))
-
- coarse = torch.stack(c_outputs).squeeze(1).cpu().data.numpy()
- fine = torch.stack(f_outputs).squeeze(1).cpu().data.numpy()
- output = combine_signal(coarse, fine)
-
- return output, coarse, fine
-
- def init_hidden(self, batch_size=1) :
- return torch.zeros(batch_size, self.hidden_size).cuda()
-
- def num_params(self) :
- parameters = filter(lambda p: p.requires_grad, self.parameters())
- parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
- print('Trainable Parameters: %.3f million' % parameters)
\ No newline at end of file
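
A minimal sketch of a single step through the WaveRNN model above, on CPU. The tensor shapes are inferred from the layer definitions (`I_coarse` takes the two previous samples, `I_fine` additionally takes the current coarse sample); the dummy values are purely illustrative, and the import assumes the repository's `utils` helpers are on the path:

```python
import torch
from vocoder.models.deepmind_version import WaveRNN  # module path as laid out above

model = WaveRNN(hidden_size=896, quantisation=256)  # prints the parameter count on construction

batch = 4
prev_y = torch.rand(batch, 2) * 2 - 1            # previous (coarse, fine) samples, scaled to [-1, 1]
prev_hidden = torch.zeros(batch, 896)            # recurrent state
current_coarse = torch.rand(batch, 1) * 2 - 1    # current coarse sample conditioning the fine half

out_coarse, out_fine, hidden = model(prev_y, prev_hidden, current_coarse)
print(out_coarse.shape, out_fine.shape, hidden.shape)  # (4, 256), (4, 256), (4, 896)
```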
diff --git a/spaces/akhaliq/TensorFlowTTS/app.py b/spaces/akhaliq/TensorFlowTTS/app.py
deleted file mode 100644
index 6c4f9b5a88038f1f3914e6d5096f7c32faec4466..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/TensorFlowTTS/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import numpy as np
-import soundfile as sf
-import yaml
-
-import tensorflow as tf
-
-from tensorflow_tts.inference import TFAutoModel
-from tensorflow_tts.inference import AutoProcessor
-import gradio as gr
-
-# initialize fastspeech2 model.
-fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en")
-
-
-# initialize mb_melgan model
-mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")
-
-
-# inference
-processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en")
-
-def inference(text):
- input_ids = processor.text_to_sequence(text)
- # fastspeech inference
-
- mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
- input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
- speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
- speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
-        f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
-        energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
- )
-
- # melgan inference
- audio_before = mb_melgan.inference(mel_before)[0, :, 0]
- audio_after = mb_melgan.inference(mel_after)[0, :, 0]
-
- # save to file
- sf.write('./audio_before.wav', audio_before, 22050, "PCM_16")
- sf.write('./audio_after.wav', audio_after, 22050, "PCM_16")
- return './audio_after.wav'
-
-inputs = gr.inputs.Textbox(lines=5, label="Input Text")
-outputs = gr.outputs.Audio(type="file", label="Output Audio")
-
-
-title = "Tensorflow TTS"
-description = "Gradio demo for TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for Tensorflow 2. To use it, simply add your text, or click one of the examples to load them. Read more at the links below."
-article = "