-
- Clash of Zombies 2 Mod APK: A Superhero Strategy Game
-Introduction
-If you are a fan of strategy games and superheroes, you may want to check out Clash of Zombies 2 Mod APK. This popular game combines base building, zombie fighting, and superhero summoning in one exciting package. You play as the leader of a team of superheroes and their sidekicks who must defend their base from hordes of horrible zombies. You can also join alliances with other players and fight other teams in online battles.
-clash of zombies 2 mod apk
DOWNLOAD ✔ https://bltlly.com/2v6M0H
-Clash of Zombies 2 Mod APK is a modified version of the original game that gives you access to unlimited resources, gems, heroes, and more. You can enjoy all of the game's features without spending money or time. You can also unlock and upgrade every superhero and sidekick, such as Iron Man, Spider-Man, Hulk, Captain America, Thor, Black Widow, and more, and use their special skills and abilities to defeat zombies and other enemies.
-How to Download and Install Clash of Zombies 2 Mod APK
-Downloading and installing Clash of Zombies 2 Mod APK is quick and simple. Just follow these steps (a short command-line sketch follows the list):
-
-- Find a reliable source that offers the modded APK file. You can search on Google or use the download link provided. Make sure to download the latest version of the modded APK file.
-- Before installing the modded APK file, you need to enable unknown sources on your device. This lets you install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and turn it on.
-- After enabling unknown sources, locate the modded APK file on your device and tap it. Follow the on-screen instructions to install the app.
-- Once the installation is complete, you can launch the app and enjoy playing Clash of Zombies 2 Mod APK.
-
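-A rough, optional command-line alternative for the last two steps, for readers who prefer installing from a computer: the sketch below assumes the Android platform tools (adb) are installed and USB debugging is enabled, and it uses a hypothetical file name; compare the printed checksum against whatever value your download source publishes.
-
-# Minimal sketch (Python): verify a downloaded APK and sideload it with adb.
-# "clash-of-zombies-2-mod.apk" is a placeholder file name, not an official one.
-import hashlib
-import subprocess
-
-apk_path = "clash-of-zombies-2-mod.apk"
-
-# Print the SHA-256 of the file so it can be compared with the source's published value.
-with open(apk_path, "rb") as f:
-    print("SHA-256:", hashlib.sha256(f.read()).hexdigest())
-
-# Install on the connected device; "-r" replaces any existing installation.
-subprocess.run(["adb", "install", "-r", apk_path], check=True)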
-
-Playing Clash of Zombies 2 Mod APK is fun and addictive. Here are some tips on how to play the game:
-
-How to Build Your Base and Defend It from Zombies
-Your base is your main headquarters, where you can build and upgrade various buildings such as barracks, labs, factories, mines, warehouses, and more. You can also place defensive structures such as walls, turrets, and traps. You need a strong base that can withstand attacks from zombies and other players.
-To build your base, you need resources such as gold, elixir, dark elixir, and gems. You can obtain these resources by mining them at your base or by raiding other players' bases. You can also use gems to speed up construction or buy more resources.
-To defend your base from zombies, place your superheroes and their sidekicks in strategic spots. You can also use their skills and abilities to repel the zombies, and upgrade your heroes and buildings to make them stronger and more effective.
-How to Summon Superheroes and Their Sidekicks
-Superheroes and their sidekicks are your main units for fighting zombies and other players. You summon them with hero cards, which you can get from chests or buy with gems, and upgrade them with hero shards, which you can earn from battles or buy with gems.
-You can summon up to six heroes and six sidekicks at a time, choosing from different types such as melee, ranged, tank, support, healer, and so on. Each hero and sidekick has its own skills and abilities you can use in battle, and you can customize their appearance by changing their costumes.
-How to Upgrade Your Heroes and Buildings
-
-You can upgrade your buildings using resources such as gold, elixir, dark elixir, and gems. You can also use gems to speed up the upgrade process or buy more resources.
-How to Join Alliances and Fight Other Players
-Joining alliances and fighting other players is one of the most exciting features of Clash of Zombies 2 Mod APK. You can join an alliance or create your own with players from around the world, chat with them, share resources, help with base defense or attacks, and more.
-You can also fight other players in online battles, attacking their bases or defending your own from their attacks. You can take part in alliance wars, teaming up with your alliance members to fight other alliances for glory and rewards.
- Tips and Tricks for Clash of Zombies 2 Mod APK
-To get the most out of Clash of Zombies 2 Mod APK, here are some tips and tricks you should know:
-How to Get More Resources and Gems
-Resources and gems are very important in Clash of Zombies 2 Mod APK, since they let you build, upgrade, summon, and do more in the game. There are several ways to get more of them, such as:
-
-- Mining them at your base or raiding other players' bases. You can get gold, elixir, dark elixir, and gems from these sources.
-- Completing quests and achievements. You can earn resources and gems as rewards for finishing various tasks and challenges in the game.
-- Opening chests and crates. You can get resources, gems, hero cards, hero shards, skill books, and more from these sources. Chests and crates come from winning battles, taking part in events, or buying them with gems.
-
-
-How to Use Your Heroes' Skills Effectively
-Your heroes' skills are very powerful and useful in battle. They can deal massive damage, heal your units, stun your enemies, and more. However, you need to use them wisely and strategically, since they have cooldowns and costs. Here are some tips on how to use them effectively:
-
-- Know your heroes' skills and their effects. You can check the details of each skill by tapping its icon or going to the hero menu, and you can see the effects of the skills on the battlefield by watching the icons above your heroes' heads.
-- Use your heroes' skills according to the situation. Consider the type of enemies, the terrain, the distance, and the timing. For example, you can use Iron Man's Unibeam to destroy a group of enemies from afar, Spider-Man's Web Shot to pin down a single enemy up close, or Hulk's Smash to clear a path for your units.
-- Combine your heroes' skills for maximum effect. You can create powerful combos by using skills together or in sequence. For example, use Captain America's Shield Throw to stun an enemy and then Thor's Hammer Strike for extra damage, or use Black Widow's Bite to lower an enemy's defense and then Hulk's Smash to finish them off.
-
-How to Win Battles and Raids
-Battles and raids are the main modes of Clash of Zombies 2 Mod APK, where you fight zombies and other players. You win battles and raids by destroying the enemy base or by having more stars than your opponent when the time limit runs out. Here are some tips on how to win:
-
-
-- Plan your attack strategy carefully. Scout the enemy base before attacking and look for its weak points, defenses, traps, resources, and so on. Also decide which direction to attack from, which units to deploy first, and which skills to use when.
-- Adapt to the changing situation quickly. Be flexible and ready to change your attack strategy as the battle develops, and watch out for counterattacks, reinforcements, enemy skills, and so on.
-
-How to Avoid Common Mistakes
-To avoid common mistakes that could cost you the game, here are some things you should not do:
-
-- Don't rush your attacks or defenses. Take your time and think carefully before making any move, and wait for the right moment to use your skills or deploy your units.
-- Don't waste your resources or gems. Spend them wisely and only on things you really need or want, and save some for emergencies or future upgrades.
-- Don't neglect your base or heroes. Maintain and upgrade them regularly, keep them in top condition, and protect them from zombies and other players.
-
-Conclusion
-Clash of Zombies 2 Mod APK is a fun and addictive game that combines strategy, action, and superheroes in one package. You can build your own base, summon your favorite superheroes and their sidekicks, fight zombies and other players, and enjoy unlimited resources and gems. You can also join alliances, take part in events, and customize your heroes and base. If you are looking for a game that will keep you entertained and challenged for hours, give Clash of Zombies 2 Mod APK a try. You won't regret it!
-Frequently Asked Questions
-
-Q1: Is Clash of Zombies 2 Mod APK safe to download and install?
-A1: Yes, it is safe as long as you download it from a reliable source. You can use the download link in this article to get the latest version of the modded APK file. Also make sure to enable unknown sources on your device before installing the app.
-Q2: Do I need to root my device to use Clash of Zombies 2 Mod APK?
-A2: No, you don't need to root your device to use this modded version. You can install and play it on any Android device without any problems.
-Q3: Can I play Clash of Zombies 2 Mod APK online with other players?
-A3: Yes, you can play online with other players as long as you have a stable internet connection. You can join alliances, fight other teams, and chat with other players in the game.
-Q4: Can I update Clash of Zombies 2 Mod APK to the latest version?
-A4: Yes, you can update it to the latest version by downloading and installing it again from the same source. You don't need to uninstall the previous version or lose your game progress.
-Q5: What if I run into problems while playing Clash of Zombies 2 Mod APK?
-A5: You can contact the developer or the modder for support or to report any bugs or issues. You can also check the game's official website or social media pages for more information and updates.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Diapositivas De Fotos Con Msica Apk.md b/spaces/Benson/text-generation/Examples/Descargar Diapositivas De Fotos Con Msica Apk.md
deleted file mode 100644
index 8ec6b0ee3d455395b5af00787c588d741572b0f8..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Diapositivas De Fotos Con Msica Apk.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-Download Photo Slideshow with Music APK: How to Create Amazing Slideshows with Your Photos and Music
-Do you want to turn your photos into stunning videos with music? Do you want to create beautiful slideshows with your photos and music? Do you want to share your memories with your friends and family in a creative way? If you answered yes to any of these questions, then you should download the Photo Slideshow with Music APK.
-download photo slideshow with music apk
DOWNLOAD ✶ https://bltlly.com/2v6KBo
-Photo Slideshow with Music APK is an app that lets you create amazing slideshows with your photos and music. You can use it to combine several photos into a single video and add music, effects, filters, stickers, text, and more. You can also use it to create grids, movies, and music videos with your photos, and save your creations to the gallery or share them on social media platforms such as Facebook, Instagram, and WhatsApp.
-In this article, we will show you the features and benefits of the Photo Slideshow with Music APK and the steps to download and use it. By the end of this article, you will be able to create amazing slideshows with your photos and music in minutes.
- Features of the Photo Slideshow with Music APK
-Photo Slideshow with Music APK is a powerful app that offers a variety of features for creating slideshows. Here are some of the features you can enjoy:
-Grid Maker for Instagram
-This feature lets you create stunning grids with your photos. You can choose from a variety of live grid themes that help you split images and make an attractive Instagram collage or photo slideshow. You can also adjust the size, shape, color, border, and background of the grids.
-Movie Maker App
-
-Slideshow Maker App
-This feature lets you create beautiful slideshows with your photos and music. You can choose from a variety of slideshow templates with different themes, frames, and music, and add text, stickers, filters, and effects. You can create slideshows for different moods such as love, celebration, fun, and so on.
-Music Video Maker
-This feature lets you create stunning videos with your photos and music. You can choose from a variety of music video templates with different genres, effects, transitions, and music. You can also trim, crop, rotate, and flip your photos, and create music videos for different styles such as pop, rock, hip hop, and so on.
-
-Photo Editor
-This feature lets you edit your photos with filters, stickers, and effects. You can choose from a variety of photo filters that enhance the color, brightness, contrast, and saturation of your photos, and add stickers, text, frames, and backgrounds. You can edit your photos for different purposes such as selfies, beauty, art, and so on.
- How to Download the Photo Slideshow with Music APK
-Downloading the Photo Slideshow with Music APK is quick and easy. Here are the steps to follow:
-Step 1: Go to the official website or the Google Play Store
-You can download the Photo Slideshow with Music APK from the official website or the Google Play Store. The official website is https://photoslideshowwithmusic.com/ and the Google Play Store link is https://play.google.com/store/apps/apps/details?id=com.photoslideshowwithmusic. You can also scan the QR code below to download the app.
-
-Step 2: Choose the version you want to download
-
-Step 3: Install the app on your device
-Once you have downloaded the app, you need to install it on your device. You may need to allow installing apps from unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and turn it on. Then open the downloaded file and follow the instructions to install the app.
-Step 4: Launch the app and start creating slideshows
-After installing the app, you can launch it and start creating slideshows with your photos and music. You will see a simple, user-friendly interface that guides you through the process. You can select photos from your gallery or camera, choose a theme, template, or frame for your slideshow, add music, text, and effects, then preview your slideshow and save it to the gallery or share it on social media.
- How to Use the Photo Slideshow with Music APK
-Using the Photo Slideshow with Music APK is fun and easy. Here are the steps to follow:
-Step 1: Select photos from your gallery or camera
-You can select up to 100 photos from your gallery or camera for your slideshow, and sort them by date or name. To select photos from your gallery, tap the Gallery icon on the app's home screen; to select photos from your camera, tap the Camera icon.
-Step 2: Choose a theme, template, or frame for your slideshow
-
-Step 3: Add music, text, and effects to your slideshow
-You can add music, text, and effects to make your slideshow more attractive and expressive. You can choose from a variety of music genres, songs, and sound effects, and trim, crop, and adjust the volume of the music; to add music, tap the Music icon at the bottom of the screen. You can also add text to convey your message or title, choosing from a variety of fonts, colors, and sizes and adjusting its position, alignment, and duration; to add text, tap the Text icon at the bottom of the screen. Finally, you can add effects to enhance the mood and style of your slideshow, choosing from a variety of filters, stickers, and animations; to add effects, tap the Effect icon at the bottom of the screen.
-Step 4: Preview and save your slideshow to the gallery or share it on social media
-You can preview your slideshow before saving or sharing it, and edit or delete any photo, music, text, or effect in it. To preview your slideshow, tap the Play icon in the top right corner of the screen; to edit or delete any item, tap it and use the options at the bottom of the screen. To save your slideshow to the gallery, tap the Save icon in the top right corner and choose from different formats and resolutions. To share your slideshow on social media, tap the Share icon in the top right corner and choose a platform such as Facebook, Instagram, or WhatsApp. A rough script showing the same photos-plus-music idea with open-source tools follows below.
- Benefits of the Photo Slideshow with Music APK
-Photo Slideshow with Music APK is a great app that offers many benefits for creating slideshows. Here are some of the benefits you can enjoy:
-It is easy and fun to use
-Photo Slideshow with Music APK is designed to be easy and fun for anyone to use. You don't need any technical skills or experience to create amazing slideshows with your photos and music. Just follow a few simple steps with a few taps and swipes, and let your creativity run free to customize your slideshows however you like.
-It has a variety of themes, templates, and frames to choose from
-Photo Slideshow with Music APK has a large collection of themes, templates, and frames for creating slideshows. You can choose from different styles, moods, occasions, and genres, and mix and match different elements to create unique, personalized slideshows.
- It has a powerful photo editor and music video maker
-Photo Slideshow with Music APK has a powerful photo editor and music video maker that let you edit your photos and create music videos with ease. You can use various filters, stickers, effects, transitions, and animations to enhance your photos and videos, as well as trim, crop, rotate, flip, adjust, and add text to them.
-It supports multiple formats and resolutions
-Photo Slideshow with Music APK supports multiple formats and resolutions for saving and sharing your slideshows. You can choose from MP4, AVI, MOV, WMV, FLV, and GIF formats, and from HD, Full HD, and 4K resolutions, so you can save and share your slideshows in compatible, high-quality formats.
-It is free and safe to download
-Photo Slideshow with Music APK is free and safe to download from the official website or the Google Play Store. You don't need to pay any fees or subscriptions to use the app, and you don't need to worry about viruses or malware harming your device; the app is tested and verified by trusted sources and users.
- Conclusion
-
-If you want to turn your photos into stunning videos with music, download the Photo Slideshow with Music APK today and try it out. You will be amazed at what you can create with this app.
- Frequently Asked Questions
-Q: What is the Photo Slideshow with Music APK?
-A: Photo Slideshow with Music APK is an app that lets you create amazing slideshows with your photos and music. You can use it to combine several photos into a single video and add music, effects, filters, stickers, text, and more.
-Q: How do I download the Photo Slideshow with Music APK?
-A: You can download the Photo Slideshow with Music APK from the official website or the Google Play Store. You can also scan the QR code on the website to download the app.
-Q: How do I use the Photo Slideshow with Music APK?
-A: You can use the Photo Slideshow with Music APK by following these steps:
-
-- Select photos from your gallery or camera
-- Choose a theme, template, or frame for your slideshow
-- Add music, text, and effects to your slideshow
-- Preview and save your slideshow to the gallery or share it on social media
-
-Q: What are the benefits of the Photo Slideshow with Music APK?
-A: The Photo Slideshow with Music APK offers many benefits, such as:
-
-- It is easy and fun to use
-- It has a variety of themes, templates, and frames to choose from
-- It has a powerful photo editor and music video maker
-- It supports multiple formats and resolutions
-- It is free and safe to download
-
-Q: What are the limitations of the Photo Slideshow with Music APK?
-A: The Photo Slideshow with Music APK has some limitations, such as:
-
-- The free version has some restrictions on the number of photos, themes, templates, and frames you can use
-- The app may not work on some devices or operating systems
-
-- The app may have some bugs or errors that need to be fixed
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/diffusionmodules/model.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/diffusionmodules/model.py
deleted file mode 100644
index d3a5db6aa2ef915e270f1ae135e4a9918fdd884c..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/diffusionmodules/model.py
+++ /dev/null
@@ -1,776 +0,0 @@
-# pytorch_diffusion + derived encoder decoder
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-
-
-def get_timestep_embedding(timesteps, embedding_dim):
- """
- This matches the implementation in Denoising Diffusion Probabilistic Models:
- From Fairseq.
- Build sinusoidal embeddings.
- This matches the implementation in tensor2tensor, but differs slightly
- from the description in Section 3.5 of "Attention Is All You Need".
- """
- assert len(timesteps.shape) == 1
-
- half_dim = embedding_dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb)
- emb = emb.to(device=timesteps.device)
- emb = timesteps.float()[:, None] * emb[None, :]
- emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
- if embedding_dim % 2 == 1: # zero pad
- emb = torch.nn.functional.pad(emb, (0,1,0,0))
- return emb
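-# Illustrative usage note (not part of the original file): for 4 integer timesteps
-# and embedding_dim=128, get_timestep_embedding(torch.arange(4), 128) returns a
-# (4, 128) tensor whose first 64 columns are sine terms and last 64 are cosine terms.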
-
-
-def nonlinearity(x):
- # swish
- return x*torch.sigmoid(x)
-
-
-def Normalize(in_channels):
- return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class Upsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
- if self.with_conv:
- x = self.conv(x)
- return x
-
-
-class Downsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- # no asymmetric padding in torch conv, must do it ourselves
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=2,
- padding=0)
-
- def forward(self, x):
- if self.with_conv:
- pad = (0,1,0,1)
- x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
- x = self.conv(x)
- else:
- x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)
- return x
-
-
-class ResnetBlock(nn.Module):
- def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,
- dropout, temb_channels=512):
- super().__init__()
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.use_conv_shortcut = conv_shortcut
-
- self.norm1 = Normalize(in_channels)
- self.conv1 = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- if temb_channels > 0:
- self.temb_proj = torch.nn.Linear(temb_channels,
- out_channels)
- self.norm2 = Normalize(out_channels)
- self.dropout = torch.nn.Dropout(dropout)
- self.conv2 = torch.nn.Conv2d(out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- self.conv_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- else:
- self.nin_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x, temb):
- h = x
- h = self.norm1(h)
- h = nonlinearity(h)
- h = self.conv1(h)
-
- if temb is not None:
- h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None]
-
- h = self.norm2(h)
- h = nonlinearity(h)
- h = self.dropout(h)
- h = self.conv2(h)
-
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- x = self.conv_shortcut(x)
- else:
- x = self.nin_shortcut(x)
-
- return x+h
-
-
-class AttnBlock(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = Normalize(in_channels)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b,c,h,w = q.shape
- q = q.reshape(b,c,h*w)
- q = q.permute(0,2,1) # b,hw,c
- k = k.reshape(b,c,h*w) # b,c,hw
- w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j]
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = v.reshape(b,c,h*w)
- w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q)
- h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]
- h_ = h_.reshape(b,c,h,w)
-
- h_ = self.proj_out(h_)
-
- return x+h_
-
-
-class Model(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, use_timestep=True):
- super().__init__()
- self.ch = ch
- self.temb_ch = self.ch*4
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- self.use_timestep = use_timestep
- if self.use_timestep:
- # timestep embedding
- self.temb = nn.Module()
- self.temb.dense = nn.ModuleList([
- torch.nn.Linear(self.ch,
- self.temb_ch),
- torch.nn.Linear(self.temb_ch,
- self.temb_ch),
- ])
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(in_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = AttnBlock(block_in)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- skip_in = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- if i_block == self.num_res_blocks:
- skip_in = ch*in_ch_mult[i_level]
- block.append(ResnetBlock(in_channels=block_in+skip_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
-
- def forward(self, x, t=None):
- #assert x.shape[2] == x.shape[3] == self.resolution
-
- if self.use_timestep:
- # timestep embedding
- assert t is not None
- temb = get_timestep_embedding(t, self.ch)
- temb = self.temb.dense[0](temb)
- temb = nonlinearity(temb)
- temb = self.temb.dense[1](temb)
- else:
- temb = None
-
- # downsampling
- hs = [self.conv_in(x)]
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](hs[-1], temb)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- hs.append(h)
- if i_level != self.num_resolutions-1:
- hs.append(self.down[i_level].downsample(hs[-1]))
-
- # middle
- h = hs[-1]
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](
- torch.cat([h, hs.pop()], dim=1), temb)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class Encoder(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, z_channels, double_z=True, **ignore_kwargs):
- super().__init__()
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(in_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = AttnBlock(block_in)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- 2*z_channels if double_z else z_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
-
- def forward(self, x):
- #assert x.shape[2] == x.shape[3] == self.resolution, "{}, {}, {}".format(x.shape[2], x.shape[3], self.resolution)
-
- # timestep embedding
- temb = None
-
- # downsampling
- hs = [self.conv_in(x)]
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](hs[-1], temb)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- hs.append(h)
- if i_level != self.num_resolutions-1:
- hs.append(self.down[i_level].downsample(hs[-1]))
-
- # middle
- h = hs[-1]
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class Decoder(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, z_channels, give_pre_end=False, **ignorekwargs):
- super().__init__()
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
- self.give_pre_end = give_pre_end
-
- # compute in_ch_mult, block_in and curr_res at lowest res
- in_ch_mult = (1,)+tuple(ch_mult)
- block_in = ch*ch_mult[self.num_resolutions-1]
- curr_res = resolution // 2**(self.num_resolutions-1)
- self.z_shape = (1,z_channels,curr_res,curr_res)
- print("Working with z of shape {} = {} dimensions.".format(
- self.z_shape, np.prod(self.z_shape)))
-
- # z to block_in
- self.conv_in = torch.nn.Conv2d(z_channels,
- block_in,
- kernel_size=3,
- stride=1,
- padding=1)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = AttnBlock(block_in)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, z):
- #assert z.shape[1:] == self.z_shape[1:]
- self.last_z_shape = z.shape
-
- # timestep embedding
- temb = None
-
- # z to block_in
- h = self.conv_in(z)
-
- # middle
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](h, temb)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- # end
- if self.give_pre_end:
- return h
-
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class VUNet(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True,
- in_channels, c_channels,
- resolution, z_channels, use_timestep=False, **ignore_kwargs):
- super().__init__()
- self.ch = ch
- self.temb_ch = self.ch*4
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
-
- self.use_timestep = use_timestep
- if self.use_timestep:
- # timestep embedding
- self.temb = nn.Module()
- self.temb.dense = nn.ModuleList([
- torch.nn.Linear(self.ch,
- self.temb_ch),
- torch.nn.Linear(self.temb_ch,
- self.temb_ch),
- ])
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(c_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- self.z_in = torch.nn.Conv2d(z_channels,
- block_in,
- kernel_size=1,
- stride=1,
- padding=0)
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=2*block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = AttnBlock(block_in)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- skip_in = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- if i_block == self.num_res_blocks:
- skip_in = ch*in_ch_mult[i_level]
- block.append(ResnetBlock(in_channels=block_in+skip_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
-
- def forward(self, x, z):
- #assert x.shape[2] == x.shape[3] == self.resolution
-
- if self.use_timestep:
- # timestep embedding
- assert t is not None
- temb = get_timestep_embedding(t, self.ch)
- temb = self.temb.dense[0](temb)
- temb = nonlinearity(temb)
- temb = self.temb.dense[1](temb)
- else:
- temb = None
-
- # downsampling
- hs = [self.conv_in(x)]
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](hs[-1], temb)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- hs.append(h)
- if i_level != self.num_resolutions-1:
- hs.append(self.down[i_level].downsample(hs[-1]))
-
- # middle
- h = hs[-1]
- z = self.z_in(z)
- h = torch.cat((h,z),dim=1)
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](
- torch.cat([h, hs.pop()], dim=1), temb)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class SimpleDecoder(nn.Module):
- def __init__(self, in_channels, out_channels, *args, **kwargs):
- super().__init__()
- self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1),
- ResnetBlock(in_channels=in_channels,
- out_channels=2 * in_channels,
- temb_channels=0, dropout=0.0),
- ResnetBlock(in_channels=2 * in_channels,
- out_channels=4 * in_channels,
- temb_channels=0, dropout=0.0),
- ResnetBlock(in_channels=4 * in_channels,
- out_channels=2 * in_channels,
- temb_channels=0, dropout=0.0),
- nn.Conv2d(2*in_channels, in_channels, 1),
- Upsample(in_channels, with_conv=True)])
- # end
- self.norm_out = Normalize(in_channels)
- self.conv_out = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- for i, layer in enumerate(self.model):
- if i in [1,2,3]:
- x = layer(x, None)
- else:
- x = layer(x)
-
- h = self.norm_out(x)
- h = nonlinearity(h)
- x = self.conv_out(h)
- return x
-
-
-class UpsampleDecoder(nn.Module):
- def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution,
- ch_mult=(2,2), dropout=0.0):
- super().__init__()
- # upsampling
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- block_in = in_channels
- curr_res = resolution // 2 ** (self.num_resolutions - 1)
- self.res_blocks = nn.ModuleList()
- self.upsample_blocks = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- res_block = []
- block_out = ch * ch_mult[i_level]
- for i_block in range(self.num_res_blocks + 1):
- res_block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- self.res_blocks.append(nn.ModuleList(res_block))
- if i_level != self.num_resolutions - 1:
- self.upsample_blocks.append(Upsample(block_in, True))
- curr_res = curr_res * 2
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- # upsampling
- h = x
- for k, i_level in enumerate(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks + 1):
- h = self.res_blocks[i_level][i_block](h, None)
- if i_level != self.num_resolutions - 1:
- h = self.upsample_blocks[k](h)
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/status_codes.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/status_codes.py
deleted file mode 100644
index 5e29502cddfa9a9887a93399ab4193fb75dfe605..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/status_codes.py
+++ /dev/null
@@ -1,6 +0,0 @@
-SUCCESS = 0
-ERROR = 1
-UNKNOWN_ERROR = 2
-VIRTUALENV_NOT_FOUND = 3
-PREVIOUS_BUILD_DIR_ERROR = 4
-NO_MATCHES_FOUND = 23
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/_internal_utils.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/_internal_utils.py
deleted file mode 100644
index 7dc9bc53360e95abfa99fe1ebd205a3d3ac620e6..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/_internal_utils.py
+++ /dev/null
@@ -1,48 +0,0 @@
-"""
-requests._internal_utils
-~~~~~~~~~~~~~~
-
-Provides utility functions that are consumed internally by Requests
-which depend on extremely few external helpers (such as compat)
-"""
-import re
-
-from .compat import builtin_str
-
-_VALID_HEADER_NAME_RE_BYTE = re.compile(rb"^[^:\s][^:\r\n]*$")
-_VALID_HEADER_NAME_RE_STR = re.compile(r"^[^:\s][^:\r\n]*$")
-_VALID_HEADER_VALUE_RE_BYTE = re.compile(rb"^\S[^\r\n]*$|^$")
-_VALID_HEADER_VALUE_RE_STR = re.compile(r"^\S[^\r\n]*$|^$")
-
-HEADER_VALIDATORS = {
- bytes: (_VALID_HEADER_NAME_RE_BYTE, _VALID_HEADER_VALUE_RE_BYTE),
- str: (_VALID_HEADER_NAME_RE_STR, _VALID_HEADER_VALUE_RE_STR),
-}
-
-
-def to_native_string(string, encoding="ascii"):
- """Given a string object, regardless of type, returns a representation of
- that string in the native string type, encoding and decoding where
- necessary. This assumes ASCII unless told otherwise.
- """
- if isinstance(string, builtin_str):
- out = string
- else:
- out = string.decode(encoding)
-
- return out
-
-
-def unicode_is_ascii(u_string):
- """Determine if unicode string only contains ASCII characters.
-
- :param str u_string: unicode string to check. Must be unicode
- and not Python 2 `str`.
- :rtype: bool
- """
- assert isinstance(u_string, str)
- try:
- u_string.encode("ascii")
- return True
- except UnicodeEncodeError:
- return False
diff --git a/spaces/Bready11/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/README.md b/spaces/Bready11/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/README.md
deleted file mode 100644
index 9e6fdadab8d9cb17332136cfdf6d0970fc9dbe5a..0000000000000000000000000000000000000000
--- a/spaces/Bready11/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Onodofthenorth-SD PixelArt SpriteSheet Generator
-emoji: 🚀
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.45.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/merge.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/merge.h
deleted file mode 100644
index d80906e3d31faa5f01519ab5c7963fe8762f77bb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/merge.h
+++ /dev/null
@@ -1,91 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-// XXX calling this function is an error; there is no implementation
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakOrdering>
-__host__ __device__
-  OutputIterator merge(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result,
- StrictWeakOrdering comp);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator merge(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator2 last2,
- OutputIterator result);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename Compare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    merge_by_key(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 keys_first1, InputIterator1 keys_last1,
- InputIterator2 keys_first2, InputIterator2 keys_last2,
- InputIterator3 values_first1, InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result,
- Compare comp);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    merge_by_key(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 keys_first1, InputIterator1 keys_last1,
- InputIterator2 keys_first2, InputIterator2 keys_last2,
- InputIterator3 values_first1, InputIterator4 values_first2,
- OutputIterator1 keys_result,
- OutputIterator2 values_result);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/merge.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/tuple.h b/spaces/CVPR/LIVE/thrust/thrust/tuple.h
deleted file mode 100644
index 930f9032611d9f86caf9a50adb576f047eafd14d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/tuple.h
+++ /dev/null
@@ -1,585 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file tuple.h
- * \brief A type encapsulating a heterogeneous collection of elements
- */
-
-/*
- * Copyright (C) 1999, 2000 Jaakko Järvi (jaakko.jarvi@cs.utu.fi)
- *
- * Distributed under the Boost Software License, Version 1.0.
- * (See accompanying NOTICE file for the complete license)
- *
- * For more information, see http://www.boost.org
- */
-
-#pragma once
-
-#include
-#include
-#include
-
-namespace thrust
-{
-
-/*! \addtogroup utility
- * \{
- */
-
-/*! \addtogroup tuple
- * \{
- */
-
-/*! \cond
- */
-
-struct null_type;
-
-/*! \endcond
- */
-
-/*! This metafunction returns the type of a
- * \p tuple's Nth element.
- *
- * \tparam N This parameter selects the element of interest.
- * \tparam T A \c tuple type of interest.
- *
- * \see pair
- * \see tuple
- */
-template
- struct tuple_element
-{
- private:
- typedef typename T::tail_type Next;
-
- public:
- /*! The result of this metafunction is returned in \c type.
- */
- typedef typename tuple_element::type type;
-}; // end tuple_element
-
-/*! This metafunction returns the number of elements
- * of a \p tuple type of interest.
- *
- * \tparam T A \c tuple type of interest.
- *
- * \see pair
- * \see tuple
- */
-template
- struct tuple_size
-{
- /*! The result of this metafunction is returned in \c value.
- */
- static const int value = 1 + tuple_size::value;
-}; // end tuple_size
-
-// get function for non-const cons-lists, returns a reference to the element
-
-/*! The \p get function returns a reference to a \p tuple element of
- * interest.
- *
- * \param t A reference to a \p tuple of interest.
- * \return A reference to \p t's Nth element.
- *
- * \tparam N The index of the element of interest.
- *
- * The following code snippet demonstrates how to use \p get to print
- * the value of a \p tuple element.
- *
- * \code
- * #include
- * #include
- * ...
- * thrust::tuple t(13, "thrust");
- *
- * std::cout << "The 1st value of t is " << thrust::get<0>(t) << std::endl;
- * \endcode
- *
- * \see pair
- * \see tuple
- */
-template
-__host__ __device__
-inline typename access_traits<
- typename tuple_element >::type
- >::non_const_type
-get(detail::cons& t);
-
-
-/*! The \p get function returns a \c const reference to a \p tuple element of
- * interest.
- *
- * \param t A reference to a \p tuple of interest.
- * \return A \c const reference to \p t's Nth element.
- *
- * \tparam N The index of the element of interest.
- *
- * The following code snippet demonstrates how to use \p get to print
- * the value of a \p tuple element.
- *
- * \code
- * #include
- * #include
- * ...
- * thrust::tuple t(13, "thrust");
- *
- * std::cout << "The 1st value of t is " << thrust::get<0>(t) << std::endl;
- * \endcode
- *
- * \see pair
- * \see tuple
- */
-template
-__host__ __device__
-inline typename access_traits<
- typename tuple_element >::type
- >::const_type
-get(const detail::cons& t);
-
-
-
-/*! \p tuple is a class template that can be instantiated with up to ten arguments.
- * Each template argument specifies the type of element in the \p tuple.
- * Consequently, tuples are heterogeneous, fixed-size collections of values. An
- * instantiation of \p tuple with two arguments is similar to an instantiation
- * of \p pair with the same two arguments. Individual elements of a \p tuple may
- * be accessed with the \p get function.
- *
- * \tparam TN The type of the N \c tuple element. Thrust's \p tuple
- * type currently supports up to ten elements.
- *
- * The following code snippet demonstrates how to create a new \p tuple object
- * and inspect and modify the value of its elements.
- *
- * \code
- * #include
- * #include
- * ...
- * // create a tuple containing an int, a float, and a string
- * thrust::tuple t(13, 0.1f, "thrust");
- *
- * // individual members are accessed with the free function get
- * std::cout << "The first element's value is " << thrust::get<0>(t) << std::endl;
- *
- * // or the member function get
- * std::cout << "The second element's value is " << t.get<1>() << std::endl;
- *
- * // we can also modify elements with the same function
- * thrust::get<0>(t) += 10;
- * \endcode
- *
- * \see pair
- * \see get
- * \see make_tuple
- * \see tuple_element
- * \see tuple_size
- * \see tie
- */
-template
- class tuple :
- public detail::map_tuple_to_cons::type
-{
- /*! \cond
- */
-
- private:
- typedef typename detail::map_tuple_to_cons::type inherited;
-
- /*! \endcond
- */
-
- public:
- /*! \p tuple's no-argument constructor initializes each element.
- */
- inline __host__ __device__
- tuple(void) {}
-
- /*! \p tuple's one-argument constructor copy constructs the first element from the given parameter
- * and intializes all other elements.
- * \param t0 The value to assign to this \p tuple's first element.
- */
- inline __host__ __device__
- tuple(typename access_traits::parameter_type t0)
- : inherited(t0,
- static_cast(null_type()),
- static_cast(null_type()),
- static_cast(null_type()),
- static_cast(null_type()),
- static_cast(null_type()),
- static_cast(null_type()),
- static_cast(null_type()),
- static_cast(null_type()),
- static_cast(null_type())) {}
-
-  /*! \p tuple's two-argument constructor copy constructs the first two elements from the given parameters
-   *  and initializes all other elements.
-   *  \param t0 The value to assign to this \p tuple's first element.
-   *  \param t1 The value to assign to this \p tuple's second element.
-   *  \note \p tuple's constructor has ten variants of this form, the rest of which are omitted here for brevity.
-   */
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1)
-    : inherited(t0, t1,
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type())) {}
-
- /*! \cond
- */
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2)
-    : inherited(t0, t1, t2,
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type())) {}
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2,
-        typename access_traits<T3>::parameter_type t3)
-    : inherited(t0, t1, t2, t3,
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type())) {}
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2,
-        typename access_traits<T3>::parameter_type t3,
-        typename access_traits<T4>::parameter_type t4)
-    : inherited(t0, t1, t2, t3, t4,
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type())) {}
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2,
-        typename access_traits<T3>::parameter_type t3,
-        typename access_traits<T4>::parameter_type t4,
-        typename access_traits<T5>::parameter_type t5)
-    : inherited(t0, t1, t2, t3, t4, t5,
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type())) {}
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2,
-        typename access_traits<T3>::parameter_type t3,
-        typename access_traits<T4>::parameter_type t4,
-        typename access_traits<T5>::parameter_type t5,
-        typename access_traits<T6>::parameter_type t6)
-    : inherited(t0, t1, t2, t3, t4, t5, t6,
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type())) {}
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2,
-        typename access_traits<T3>::parameter_type t3,
-        typename access_traits<T4>::parameter_type t4,
-        typename access_traits<T5>::parameter_type t5,
-        typename access_traits<T6>::parameter_type t6,
-        typename access_traits<T7>::parameter_type t7)
-    : inherited(t0, t1, t2, t3, t4, t5, t6, t7,
-                static_cast<const null_type&>(null_type()),
-                static_cast<const null_type&>(null_type())) {}
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2,
-        typename access_traits<T3>::parameter_type t3,
-        typename access_traits<T4>::parameter_type t4,
-        typename access_traits<T5>::parameter_type t5,
-        typename access_traits<T6>::parameter_type t6,
-        typename access_traits<T7>::parameter_type t7,
-        typename access_traits<T8>::parameter_type t8)
-    : inherited(t0, t1, t2, t3, t4, t5, t6, t7, t8,
-                static_cast<const null_type&>(null_type())) {}
-
-  inline __host__ __device__
-  tuple(typename access_traits<T0>::parameter_type t0,
-        typename access_traits<T1>::parameter_type t1,
-        typename access_traits<T2>::parameter_type t2,
-        typename access_traits<T3>::parameter_type t3,
-        typename access_traits<T4>::parameter_type t4,
-        typename access_traits<T5>::parameter_type t5,
-        typename access_traits<T6>::parameter_type t6,
-        typename access_traits<T7>::parameter_type t7,
-        typename access_traits<T8>::parameter_type t8,
-        typename access_traits<T9>::parameter_type t9)
-    : inherited(t0, t1, t2, t3, t4, t5, t6, t7, t8, t9) {}
-
-
-  template <class U1, class U2>
-  inline __host__ __device__
-  tuple(const detail::cons<U1, U2>& p) : inherited(p) {}
-
-  __thrust_exec_check_disable__
-  template <class U1, class U2>
-  inline __host__ __device__
-  tuple& operator=(const detail::cons<U1, U2>& k)
-  {
-    inherited::operator=(k);
-    return *this;
-  }
-
- /*! \endcond
- */
-
- /*! This assignment operator allows assigning the first two elements of this \p tuple from a \p pair.
- * \param k A \p pair to assign from.
- */
- __thrust_exec_check_disable__
-  template <class U1, class U2>
-  __host__ __device__ inline
-  tuple& operator=(const thrust::pair<U1, U2>& k) {
-    //BOOST_STATIC_ASSERT(length<tuple>::value == 2);// check_length = 2
- this->head = k.first;
- this->tail.head = k.second;
- return *this;
- }
-
- /*! \p swap swaps the elements of two tuples.
- *
- * \param t The other tuple with which to swap.
- */
- inline __host__ __device__
- void swap(tuple &t)
- {
- inherited::swap(t);
- }
-};
-
-/*! \cond
- */
-
-template <>
-class tuple<null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type, null_type> :
- public null_type
-{
-public:
- typedef null_type inherited;
-};
-
-/*! \endcond
- */
-
-
-/*! This version of \p make_tuple creates a new \c tuple object from a
- * single object.
- *
- * \param t0 The object to copy from.
- * \return A \p tuple object with a single member which is a copy of \p t0.
- */
-template<class T0>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0>::type
-    make_tuple(const T0& t0);
-
-/*! This version of \p make_tuple creates a new \c tuple object from two
- * objects.
- *
- * \param t0 The first object to copy from.
- * \param t1 The second object to copy from.
- * \return A \p tuple object with two members which are copies of \p t0
- * and \p t1.
- *
- * \note \p make_tuple has ten variants, the rest of which are omitted here
- * for brevity.
- */
-template<class T0, class T1>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1>::type
-    make_tuple(const T0& t0, const T1& t1);
-
-/*! This version of \p tie creates a new \c tuple whose single element is
- * a reference which refers to this function's argument.
- *
- * \param t0 The object to reference.
- * \return A \p tuple object with one member which is a reference to \p t0.
- */
-template<typename T0>
-__host__ __device__ inline
-tuple<T0&> tie(T0& t0);
-
-/*! This version of \p tie creates a new \c tuple of references object which
- * refers to this function's arguments.
- *
- * \param t0 The first object to reference.
- * \param t1 The second object to reference.
- * \return A \p tuple object with two members which are references to \p t0
- * and \p t1.
- *
- * \note \p tie has ten variants, the rest of which are omitted here for
- * brevity.
- */
-template<typename T0, typename T1>
-__host__ __device__ inline
-tuple<T0&, T1&> tie(T0& t0, T1& t1);
-
-/*! \p swap swaps the contents of two tuples.
- *
- * \param x The first \p tuple to swap.
- * \param y The second \p tuple to swap.
- */
-template<
- typename T0, typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8, typename T9,
- typename U0, typename U1, typename U2, typename U3, typename U4, typename U5, typename U6, typename U7, typename U8, typename U9
->
-inline __host__ __device__
-void swap(tuple<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> &x,
-          tuple<U0,U1,U2,U3,U4,U5,U6,U7,U8,U9> &y);
-
-
-
-/*! \cond
- */
-
-template<class T0, class T1, class T2>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2);
-
-template<class T0, class T1, class T2, class T3>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2, T3>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3);
-
-template<class T0, class T1, class T2, class T3, class T4>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2, T3, T4>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4);
-
-template<class T0, class T1, class T2, class T3, class T4, class T5>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5);
-
-template<class T0, class T1, class T2, class T3, class T4, class T5, class T6>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5, T6>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6);
-
-template<class T0, class T1, class T2, class T3, class T4, class T5, class T6, class T7>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5, T6, T7>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6, const T7& t7);
-
-template<class T0, class T1, class T2, class T3, class T4, class T5, class T6, class T7, class T8>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5, T6, T7, T8>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6, const T7& t7, const T8& t8);
-
-template<class T0, class T1, class T2, class T3, class T4, class T5, class T6, class T7, class T8, class T9>
-__host__ __device__ inline
-  typename detail::make_tuple_mapper<T0, T1, T2, T3, T4, T5, T6, T7, T8, T9>::type
-    make_tuple(const T0& t0, const T1& t1, const T2& t2, const T3& t3, const T4& t4, const T5& t5, const T6& t6, const T7& t7, const T8& t8, const T9& t9);
-
-template<typename T0, typename T1, typename T2>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&> tie(T0 &t0, T1 &t1, T2 &t2);
-
-template<typename T0, typename T1, typename T2, typename T3>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&, T3&> tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3);
-
-template<typename T0, typename T1, typename T2, typename T3, typename T4>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&, T3&, T4&> tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4);
-
-template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&, T3&, T4&, T5&> tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5);
-
-template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5, typename T6>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&, T3&, T4&, T5&, T6&> tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6);
-
-template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&, T3&, T4&, T5&, T6&, T7&> tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6, T7 &t7);
-
-template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&, T3&, T4&, T5&, T6&, T7&, T8&> tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6, T7 &t7, T8 &t8);
-
-template<typename T0, typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8, typename T9>
-__host__ __device__ inline
-tuple<T0&, T1&, T2&, T3&, T4&, T5&, T6&, T7&, T8&, T9&> tie(T0 &t0, T1 &t1, T2 &t2, T3 &t3, T4 &t4, T5 &t5, T6 &t6, T7 &t7, T8 &t8, T9 &t9);
-
-
-__host__ __device__ inline
-bool operator==(const null_type&, const null_type&);
-
-__host__ __device__ inline
-bool operator>=(const null_type&, const null_type&);
-
-__host__ __device__ inline
-bool operator<=(const null_type&, const null_type&);
-
-__host__ __device__ inline
-bool operator!=(const null_type&, const null_type&);
-
-__host__ __device__ inline
-bool operator<(const null_type&, const null_type&);
-
-__host__ __device__ inline
-bool operator>(const null_type&, const null_type&);
-
-/*! \endcond
- */
-
-/*! \} // tuple
- */
-
-/*! \} // utility
- */
-
-} // end thrust
-
diff --git a/spaces/CVPR/MonoScene/monoscene/unet3d_kitti.py b/spaces/CVPR/MonoScene/monoscene/unet3d_kitti.py
deleted file mode 100644
index 91d5339fbdf34e28d017d7e4e29ce4923169bef5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/MonoScene/monoscene/unet3d_kitti.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# encoding: utf-8
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from monoscene.modules import SegmentationHead
-from monoscene.CRP3D import CPMegaVoxels
-from monoscene.modules import Process, Upsample, Downsample
-
-
-class UNet3D(nn.Module):
- def __init__(
- self,
- class_num,
- norm_layer,
- full_scene_size,
- feature,
- project_scale,
- context_prior=None,
- bn_momentum=0.1,
- ):
- super(UNet3D, self).__init__()
- self.business_layer = []
- self.project_scale = project_scale
- self.full_scene_size = full_scene_size
- self.feature = feature
-
- size_l1 = (
- int(self.full_scene_size[0] / project_scale),
- int(self.full_scene_size[1] / project_scale),
- int(self.full_scene_size[2] / project_scale),
- )
- size_l2 = (size_l1[0] // 2, size_l1[1] // 2, size_l1[2] // 2)
- size_l3 = (size_l2[0] // 2, size_l2[1] // 2, size_l2[2] // 2)
-
- dilations = [1, 2, 3]
- self.process_l1 = nn.Sequential(
- Process(self.feature, norm_layer, bn_momentum, dilations=[1, 2, 3]),
- Downsample(self.feature, norm_layer, bn_momentum),
- )
- self.process_l2 = nn.Sequential(
- Process(self.feature * 2, norm_layer, bn_momentum, dilations=[1, 2, 3]),
- Downsample(self.feature * 2, norm_layer, bn_momentum),
- )
-
- self.up_13_l2 = Upsample(
- self.feature * 4, self.feature * 2, norm_layer, bn_momentum
- )
- self.up_12_l1 = Upsample(
- self.feature * 2, self.feature, norm_layer, bn_momentum
- )
- self.up_l1_lfull = Upsample(
- self.feature, self.feature // 2, norm_layer, bn_momentum
- )
-
- self.ssc_head = SegmentationHead(
- self.feature // 2, self.feature // 2, class_num, dilations
- )
-
- self.context_prior = context_prior
- if context_prior:
- self.CP_mega_voxels = CPMegaVoxels(
- self.feature * 4, size_l3, bn_momentum=bn_momentum
- )
-
- def forward(self, input_dict):
- res = {}
-
- x3d_l1 = input_dict["x3d"]
-
- x3d_l2 = self.process_l1(x3d_l1)
-
- x3d_l3 = self.process_l2(x3d_l2)
-
- if self.context_prior:
- ret = self.CP_mega_voxels(x3d_l3)
- x3d_l3 = ret["x"]
- for k in ret.keys():
- res[k] = ret[k]
-
- x3d_up_l2 = self.up_13_l2(x3d_l3) + x3d_l2
- x3d_up_l1 = self.up_12_l1(x3d_up_l2) + x3d_l1
- x3d_up_lfull = self.up_l1_lfull(x3d_up_l1)
-
- ssc_logit_full = self.ssc_head(x3d_up_lfull)
-
- res["ssc_logit"] = ssc_logit_full
-
- return res
diff --git a/spaces/CobaltZvc/Docs_Buddy/app.py b/spaces/CobaltZvc/Docs_Buddy/app.py
deleted file mode 100644
index a3f89c0078bd81d1a236260f244fb3d4ba3e7fb0..0000000000000000000000000000000000000000
--- a/spaces/CobaltZvc/Docs_Buddy/app.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import cohere
-import streamlit as st
-from serpapi import GoogleSearch
-import requests
-from geopy.geocoders import Nominatim
-from PIL import Image
-from io import BytesIO
-
-st.title("Hi there!👨⚕️🩺")
-st.title("Welcome to *Virtual Diagnosis*")
-st.write("> **This app is meant to assist medical professionals ONLY**")
-
-co = cohere.Client(st.secrets["COHERE_API"])
-prompt = st.text_input('What are the symptoms of your patient? (*Try to keep the symptoms meaningful*)')
-prompt_med = st.text_input('Search a medicine here: (*Enter the correct spelling of the medicine*)')
-geolocator = Nominatim(user_agent="geoapiExercises")
-
-def get_coordinates(location):
- try:
- location = geolocator.geocode(location)
- return (location.latitude, location.longitude)
- except:
- return None
-
-with open('symptoms_1.txt', 'r') as file:
- symptoms = [line.strip().lower() for line in file]
-if prompt:
- if any(symptom in prompt.lower() for symptom in symptoms):
- response = co.generate(
- model = 'command-xlarge-nightly', #xlarge #medium #small
- prompt = f"user: Suggest prescription medications for these symptoms: {prompt}\nTLDR:", #
- max_tokens=300,
- temperature=0.9,
- k=0,
- p=0.75,
- frequency_penalty=0,
- presence_penalty=0,
- stop_sequences=[],
- return_likelihoods='NONE'
- )
-
- text = format(response.generations[0].text)
- st.write('Prescription medications: %s' %text)
- st.download_button('Download example prescriptions', text)
- print("done!")
-
-
- params = {
- "engine": "google_shopping",
- "google_domain": "google.com",
- "q": text,
- "api_key": st.secrets["GOOGLE_API"]
- }
-
- search = GoogleSearch(params)
- items = search.get_dict()
-
-
- for key, result in items.items():
- if "google_shopping_url" in result:
-            st.caption(f'<a href="{result["google_shopping_url"]}">Click here for Google search page</a>', unsafe_allow_html=True)
- else:
- pass
-
- for i in range(10):
- item = items['shopping_results'][i]
- response = requests.get(item['thumbnail'])
- st.image(Image.open(BytesIO(response.content)),
- caption=item['title'], width=400)
- st.text(item['source'])
- st.text(item['price'])
- st.caption(f'Click to buy', unsafe_allow_html=True)
-
-
- address = st.text_input("Enter your location to search pharmacies near you: ( For best results, enter location in this *format: Area, City, Country*.)")
-
- if address:
- coordinates = get_coordinates(address)
- params = {
- "engine": "google_maps",
- "q": "Pharmacies",
- "ll": "@" + str(coordinates[0]) + "," + str(coordinates[1]) + ",15.1z",
- "type": "search",
- "api_key": st.secrets["GOOGLE_API"]
- }
-
- search = GoogleSearch(params)
- results = search.get_dict()
- local_results = results["local_results"]
- for x in range(5):
- st.write("Name of pharmacy: ", local_results[x]["title"])
- st.write("address of pharmacy: ", local_results[x]["address"])
-
- else:
- st.write("Kindly pertain your inputs to possible medical symptoms only. Or try rephrasing.")
-
-if prompt_med:
- params = {
- "engine": "google_shopping",
- "google_domain": "google.com",
- "q": f"{prompt_med} medicine",
- "hl": "en",
- # "gl": "in",
- "api_key": st.secrets["GOOGLE_API"]
- }
-
- search = GoogleSearch(params)
- items = search.get_dict()
-
-
- for key, result in items.items():
- if "google_shopping_url" in result:
-            st.caption(f'<a href="{result["google_shopping_url"]}">Click here for Google search page</a>', unsafe_allow_html=True)
- else:
- pass
-
- for i in range(10):
- item = items['shopping_results'][i]
- response = requests.get(item['thumbnail'])
- st.image(Image.open(BytesIO(response.content)),
- caption=item['title'], width=400)
- st.text(item['source'])
- st.text(item['price'])
- st.caption(f'Click to buy', unsafe_allow_html=True)
\ No newline at end of file
diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py" "b/spaces/Cong723/gpt-academic-public/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py"
deleted file mode 100644
index 26f42cad0c13bf601fc997c4d7cc5b237d2f97df..0000000000000000000000000000000000000000
--- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py"
+++ /dev/null
@@ -1,186 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-fast_debug = False
-
-class PaperFileGroup():
- def __init__(self):
- self.file_paths = []
- self.file_contents = []
- self.sp_file_contents = []
- self.sp_file_index = []
- self.sp_file_tag = []
-
- # count_token
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
- self.get_token_num = get_token_num
-
- def run_file_split(self, max_token_limit=1900):
- """
-        Split overly long text into smaller pieces
- """
- for index, file_content in enumerate(self.file_contents):
- if self.get_token_num(file_content) < max_token_limit:
- self.sp_file_contents.append(file_content)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index])
- else:
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
- for j, segment in enumerate(segments):
- self.sp_file_contents.append(segment)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.md")
-
- print('Segmentation: done')
-
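The splitting logic above keeps every chunk below a token budget before it is handed to the model. Below is a minimal, self-contained sketch of the same idea that uses tiktoken directly; the project's model_info tokenizer and its breakdown_txt_to_satisfy_token_limit_for_pdf helper are replaced by a naive paragraph-based splitter named naive_split (an illustration only, not the plugin's own code).

import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def naive_split(text, max_tokens=1500):
    # Greedily pack consecutive paragraphs until the token budget is reached.
    segments, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(enc.encode(candidate)) <= max_tokens:
            current = candidate
        else:
            if current:
                segments.append(current)
            current = para  # a single huge paragraph would still exceed the cap
    if current:
        segments.append(current)
    return segments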
-def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
- import time, os, re
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-
-    # <-------- Read the Markdown files and strip all comments ---------->
- pfg = PaperFileGroup()
-
- for index, fp in enumerate(file_manifest):
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
-        # record the text with comments removed
- pfg.file_paths.append(fp)
- pfg.file_contents.append(file_content)
-
-    # <-------- Split overly long Markdown files ---------->
- pfg.run_file_split(max_token_limit=1500)
- n_split = len(pfg.sp_file_contents)
-
-    # <-------- Multi-threaded polishing begins ---------->
- if language == 'en->zh':
- inputs_array = ["This is a Markdown file, translate it into Chinese, do not modify any existing Markdown commands:" +
- f"\n\n{frag}" for frag in pfg.sp_file_contents]
- inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
- sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
- elif language == 'zh->en':
- inputs_array = [f"This is a Markdown file, translate it into English, do not modify any existing Markdown commands:" +
- f"\n\n{frag}" for frag in pfg.sp_file_contents]
- inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
- sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
-
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array=inputs_array,
- inputs_show_user_array=inputs_show_user_array,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history_array=[[""] for _ in range(n_split)],
- sys_prompt_array=sys_prompt_array,
-        # max_workers=5,  # maximum parallel load allowed by OpenAI
- scroller_max_len = 80
- )
-
-    # <-------- Collect the results and exit ---------->
- create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
- res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
- history = gpt_response_collection
- chatbot.append((f"{fp}完成了吗?", res))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
-def get_files_from_everything(txt):
- import glob, os
-
- success = True
- if txt.startswith('http'):
-        # a remote file on the network
- txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
- txt = txt.replace("/blob/", "/")
- import requests
- from toolbox import get_conf
- proxies, = get_conf('proxies')
- r = requests.get(txt, proxies=proxies)
- with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content)
- project_folder = './gpt_log/'
- file_manifest = ['./gpt_log/temp.md']
- elif txt.endswith('.md'):
-        # a file given directly
- file_manifest = [txt]
- project_folder = os.path.dirname(txt)
- elif os.path.exists(txt):
-        # a local path; search recursively
- project_folder = txt
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
- else:
- success = False
-
- return success, file_manifest, project_folder
-
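get_files_from_everything accepts a GitHub URL, a single .md path, or a local folder and returns (success, file_manifest, project_folder). A quick hypothetical call exercising the local-folder branch; "./my_notes" is a placeholder path, not part of the project:

ok, md_files, folder = get_files_from_everything("./my_notes")
if ok and md_files:
    print(f"{len(md_files)} Markdown file(s) found under {folder}")
else:
    print("nothing to translate")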
-
-@CatchException
-def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    # Basic info: what the plugin does and its contributors
- chatbot.append([
- "函数插件功能?",
- "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- import glob, os
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-    history = []    # clear the history to avoid input overflow
-
- success, file_manifest, project_folder = get_files_from_everything(txt)
-
- if not success:
-        # nothing at all
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh')
-
-
-
-
-
-@CatchException
-def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    # Basic info: what the plugin does and its contributors
- chatbot.append([
- "函数插件功能?",
- "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- import glob, os
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-    history = []    # clear the history to avoid input overflow
- success, file_manifest, project_folder = get_files_from_everything(txt)
- if not success:
-        # nothing at all
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en')
\ No newline at end of file
diff --git a/spaces/Cran-May/SEA-orca/app.py b/spaces/Cran-May/SEA-orca/app.py
deleted file mode 100644
index 7c3ab0e1c4056997c3120d16d5dc7a910f9a418d..0000000000000000000000000000000000000000
--- a/spaces/Cran-May/SEA-orca/app.py
+++ /dev/null
@@ -1,132 +0,0 @@
-from __future__ import annotations
-import gradio as gr
-import time
-from ctransformers import AutoModelForCausalLM
-from typing import Iterable
-import gradio as gr
-from gradio.themes.base import Base
-from gradio.themes.utils import colors, fonts, sizes
-import subprocess
-
-from huggingface_hub import hf_hub_download
-
-# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-model = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-OpenOrca-GGUF", model_file="mistral-7b-openorca.Q3_K_L.gguf", model_type="mistral", gpu_layers=0)
-ins = '''[INST] <<SYS>>
-You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
-If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
-<</SYS>>
-{} [/INST]
-'''
-
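The ins template above follows the Llama-2 chat convention: an [INST] block wrapping a <<SYS>> system message, with the user's question substituted for the single {} placeholder. A quick check of what the model actually receives, assuming the ins string defined above is in scope:

question = "How do I make a campfire?"
prompt = ins.format(question)   # fills the single {} placeholder
print(prompt)                   # [INST] <<SYS>> ... <</SYS>> How do I make a campfire? [/INST]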
-
-theme = gr.themes.Monochrome(
- primary_hue="indigo",
- secondary_hue="blue",
- neutral_hue="slate",
- radius_size=gr.themes.sizes.radius_sm,
- font=[gr.themes.GoogleFont("Open Sans"), "ui-sans-serif", "system-ui", "sans-serif"],
-)
-def response(question):
- res = model(ins.format(question))
- yield res
-
-
-examples = [
- "Instead of making a peanut butter and jelly sandwich, what else could I combine peanut butter with in a sandwich? Give five ideas",
- "How do I make a campfire?",
- "Explain to me the difference between nuclear fission and fusion.",
- "I'm selling my Nikon D-750, write a short blurb for my ad."
-]
-
-def process_example(args):
- for x in response(args):
- pass
- return x
-
-css = ".generating {visibility: hidden}"
-
-# Based on the gradio theming guide and borrowed from https://huggingface.co/spaces/shivi/dolly-v2-demo
-class SeafoamCustom(Base):
- def __init__(
- self,
- *,
- primary_hue: colors.Color | str = colors.emerald,
- secondary_hue: colors.Color | str = colors.blue,
- neutral_hue: colors.Color | str = colors.blue,
- spacing_size: sizes.Size | str = sizes.spacing_md,
- radius_size: sizes.Size | str = sizes.radius_md,
- font: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("Quicksand"),
- "ui-sans-serif",
- "sans-serif",
- ),
- font_mono: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("IBM Plex Mono"),
- "ui-monospace",
- "monospace",
- ),
- ):
- super().__init__(
- primary_hue=primary_hue,
- secondary_hue=secondary_hue,
- neutral_hue=neutral_hue,
- spacing_size=spacing_size,
- radius_size=radius_size,
- font=font,
- font_mono=font_mono,
- )
- super().set(
- button_primary_background_fill="linear-gradient(90deg, *primary_300, *secondary_400)",
- button_primary_background_fill_hover="linear-gradient(90deg, *primary_200, *secondary_300)",
- button_primary_text_color="white",
- button_primary_background_fill_dark="linear-gradient(90deg, *primary_600, *secondary_800)",
- block_shadow="*shadow_drop_lg",
- button_shadow="*shadow_drop_lg",
- input_background_fill="zinc",
- input_border_color="*secondary_300",
- input_shadow="*shadow_drop",
- input_shadow_focus="*shadow_drop_lg",
- )
-
-
-seafoam = SeafoamCustom()
-
-
-with gr.Blocks(theme=seafoam, analytics_enabled=False, css=css) as demo:
- with gr.Column():
- gr.Markdown(
- """ ## Shi-Ci Extensional Analyzer
-
- Type in the box below and click the button to generate answers to your most pressing questions!
-
- """
- )
-
- with gr.Row():
-
- with gr.Column(scale=3):
- instruction = gr.Textbox(placeholder="Enter your question here", label="Question", elem_id="q-input")
-
- with gr.Box():
- gr.Markdown("**Answer**")
- output = gr.Markdown(elem_id="q-output")
- submit = gr.Button("Generate", variant="primary")
- gr.Examples(
- examples=examples,
- inputs=[instruction],
- cache_examples=False,
- fn=process_example,
- outputs=[output],
- )
-
-
-
- submit.click(response, inputs=[instruction], outputs=[output])
- instruction.submit(response, inputs=[instruction], outputs=[output])
-
-demo.queue(concurrency_count=1).launch(debug=False,share=True)
\ No newline at end of file
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/distributions/__init__.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/distributions/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Cyril666/ContourNet-ABI/modules/transformer.py b/spaces/Cyril666/ContourNet-ABI/modules/transformer.py
deleted file mode 100644
index 6dde312185c7c68f54562885f23ea3b0670e6c40..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/modules/transformer.py
+++ /dev/null
@@ -1,901 +0,0 @@
-# pytorch 1.5.0
-import copy
-import math
-import warnings
-from typing import Optional
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from torch.nn import Dropout, LayerNorm, Linear, Module, ModuleList, Parameter
-from torch.nn import functional as F
-from torch.nn.functional import pad
-from torch.nn.init import constant_, xavier_normal_, xavier_uniform_
-
-
-def multi_head_attention_forward(query, # type: Tensor
- key, # type: Tensor
- value, # type: Tensor
- embed_dim_to_check, # type: int
- num_heads, # type: int
- in_proj_weight, # type: Tensor
- in_proj_bias, # type: Tensor
- bias_k, # type: Optional[Tensor]
- bias_v, # type: Optional[Tensor]
- add_zero_attn, # type: bool
- dropout_p, # type: float
- out_proj_weight, # type: Tensor
- out_proj_bias, # type: Tensor
- training=True, # type: bool
- key_padding_mask=None, # type: Optional[Tensor]
- need_weights=True, # type: bool
- attn_mask=None, # type: Optional[Tensor]
- use_separate_proj_weight=False, # type: bool
- q_proj_weight=None, # type: Optional[Tensor]
- k_proj_weight=None, # type: Optional[Tensor]
- v_proj_weight=None, # type: Optional[Tensor]
- static_k=None, # type: Optional[Tensor]
- static_v=None # type: Optional[Tensor]
- ):
- # type: (...) -> Tuple[Tensor, Optional[Tensor]]
- r"""
- Args:
- query, key, value: map a query and a set of key-value pairs to an output.
- See "Attention Is All You Need" for more details.
- embed_dim_to_check: total dimension of the model.
- num_heads: parallel attention heads.
- in_proj_weight, in_proj_bias: input projection weight and bias.
- bias_k, bias_v: bias of the key and value sequences to be added at dim=0.
- add_zero_attn: add a new batch of zeros to the key and
- value sequences at dim=1.
- dropout_p: probability of an element to be zeroed.
- out_proj_weight, out_proj_bias: the output projection weight and bias.
- training: apply dropout if is ``True``.
- key_padding_mask: if provided, specified padding elements in the key will
-                                 be ignored by the attention. This is a binary mask. When the value is True,
- the corresponding value on the attention layer will be filled with -inf.
- need_weights: output attn_output_weights.
- attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
- the batches while a 3D mask allows to specify a different mask for the entries of each batch.
- use_separate_proj_weight: the function accept the proj. weights for query, key,
- and value in different forms. If false, in_proj_weight will be used, which is
- a combination of q_proj_weight, k_proj_weight, v_proj_weight.
- q_proj_weight, k_proj_weight, v_proj_weight, in_proj_bias: input projection weight and bias.
- static_k, static_v: static key and value used for attention operators.
- Shape:
- Inputs:
- - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
- the embedding dimension.
- - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
- the embedding dimension.
- - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
- the embedding dimension.
- - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
- If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions
- will be unchanged. If a BoolTensor is provided, the positions with the
- value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
- - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
- 3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
- S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked
- positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
- while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
- are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
- is provided, it will be added to the attention weight.
- - static_k: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
- N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
- - static_v: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
- N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
- Outputs:
- - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
- E is the embedding dimension.
- - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
- L is the target sequence length, S is the source sequence length.
- """
- # if not torch.jit.is_scripting():
- # tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v,
- # out_proj_weight, out_proj_bias)
- # if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
- # return handle_torch_function(
- # multi_head_attention_forward, tens_ops, query, key, value,
- # embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias,
- # bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight,
- # out_proj_bias, training=training, key_padding_mask=key_padding_mask,
- # need_weights=need_weights, attn_mask=attn_mask,
- # use_separate_proj_weight=use_separate_proj_weight,
- # q_proj_weight=q_proj_weight, k_proj_weight=k_proj_weight,
- # v_proj_weight=v_proj_weight, static_k=static_k, static_v=static_v)
- tgt_len, bsz, embed_dim = query.size()
- assert embed_dim == embed_dim_to_check
- assert key.size() == value.size()
-
- head_dim = embed_dim // num_heads
- assert head_dim * num_heads == embed_dim, "embed_dim must be divisible by num_heads"
- scaling = float(head_dim) ** -0.5
-
- if not use_separate_proj_weight:
- if torch.equal(query, key) and torch.equal(key, value):
- # self-attention
- q, k, v = F.linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)
-
- elif torch.equal(key, value):
- # encoder-decoder attention
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = 0
- _end = embed_dim
- _w = in_proj_weight[_start:_end, :]
- if _b is not None:
- _b = _b[_start:_end]
- q = F.linear(query, _w, _b)
-
- if key is None:
- assert value is None
- k = None
- v = None
- else:
-
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = embed_dim
- _end = None
- _w = in_proj_weight[_start:, :]
- if _b is not None:
- _b = _b[_start:]
- k, v = F.linear(key, _w, _b).chunk(2, dim=-1)
-
- else:
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = 0
- _end = embed_dim
- _w = in_proj_weight[_start:_end, :]
- if _b is not None:
- _b = _b[_start:_end]
- q = F.linear(query, _w, _b)
-
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = embed_dim
- _end = embed_dim * 2
- _w = in_proj_weight[_start:_end, :]
- if _b is not None:
- _b = _b[_start:_end]
- k = F.linear(key, _w, _b)
-
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = embed_dim * 2
- _end = None
- _w = in_proj_weight[_start:, :]
- if _b is not None:
- _b = _b[_start:]
- v = F.linear(value, _w, _b)
- else:
- q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight)
- len1, len2 = q_proj_weight_non_opt.size()
- assert len1 == embed_dim and len2 == query.size(-1)
-
- k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight)
- len1, len2 = k_proj_weight_non_opt.size()
- assert len1 == embed_dim and len2 == key.size(-1)
-
- v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight)
- len1, len2 = v_proj_weight_non_opt.size()
- assert len1 == embed_dim and len2 == value.size(-1)
-
- if in_proj_bias is not None:
- q = F.linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim])
- k = F.linear(key, k_proj_weight_non_opt, in_proj_bias[embed_dim:(embed_dim * 2)])
- v = F.linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2):])
- else:
- q = F.linear(query, q_proj_weight_non_opt, in_proj_bias)
- k = F.linear(key, k_proj_weight_non_opt, in_proj_bias)
- v = F.linear(value, v_proj_weight_non_opt, in_proj_bias)
- q = q * scaling
-
- if attn_mask is not None:
- assert attn_mask.dtype == torch.float32 or attn_mask.dtype == torch.float64 or \
- attn_mask.dtype == torch.float16 or attn_mask.dtype == torch.uint8 or attn_mask.dtype == torch.bool, \
- 'Only float, byte, and bool types are supported for attn_mask, not {}'.format(attn_mask.dtype)
- if attn_mask.dtype == torch.uint8:
- warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
- attn_mask = attn_mask.to(torch.bool)
-
- if attn_mask.dim() == 2:
- attn_mask = attn_mask.unsqueeze(0)
- if list(attn_mask.size()) != [1, query.size(0), key.size(0)]:
- raise RuntimeError('The size of the 2D attn_mask is not correct.')
- elif attn_mask.dim() == 3:
- if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]:
- raise RuntimeError('The size of the 3D attn_mask is not correct.')
- else:
- raise RuntimeError("attn_mask's dimension {} is not supported".format(attn_mask.dim()))
- # attn_mask's dim is 3 now.
-
- # # convert ByteTensor key_padding_mask to bool
- # if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:
- # warnings.warn("Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
- # key_padding_mask = key_padding_mask.to(torch.bool)
-
- if bias_k is not None and bias_v is not None:
- if static_k is None and static_v is None:
- k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
- v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
- if attn_mask is not None:
- attn_mask = pad(attn_mask, (0, 1))
- if key_padding_mask is not None:
- key_padding_mask = pad(key_padding_mask, (0, 1))
- else:
- assert static_k is None, "bias cannot be added to static key."
- assert static_v is None, "bias cannot be added to static value."
- else:
- assert bias_k is None
- assert bias_v is None
-
- q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
- if k is not None:
- k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
- if v is not None:
- v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
-
- if static_k is not None:
- assert static_k.size(0) == bsz * num_heads
- assert static_k.size(2) == head_dim
- k = static_k
-
- if static_v is not None:
- assert static_v.size(0) == bsz * num_heads
- assert static_v.size(2) == head_dim
- v = static_v
-
- src_len = k.size(1)
-
- if key_padding_mask is not None:
- assert key_padding_mask.size(0) == bsz
- assert key_padding_mask.size(1) == src_len
-
- if add_zero_attn:
- src_len += 1
- k = torch.cat([k, torch.zeros((k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device)], dim=1)
- v = torch.cat([v, torch.zeros((v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device)], dim=1)
- if attn_mask is not None:
- attn_mask = pad(attn_mask, (0, 1))
- if key_padding_mask is not None:
- key_padding_mask = pad(key_padding_mask, (0, 1))
-
- attn_output_weights = torch.bmm(q, k.transpose(1, 2))
- assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len]
-
- if attn_mask is not None:
- if attn_mask.dtype == torch.bool:
- attn_output_weights.masked_fill_(attn_mask, float('-inf'))
- else:
- attn_output_weights += attn_mask
-
-
- if key_padding_mask is not None:
- attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
- attn_output_weights = attn_output_weights.masked_fill(
- key_padding_mask.unsqueeze(1).unsqueeze(2),
- float('-inf'),
- )
- attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len)
-
- attn_output_weights = F.softmax(
- attn_output_weights, dim=-1)
- attn_output_weights = F.dropout(attn_output_weights, p=dropout_p, training=training)
-
- attn_output = torch.bmm(attn_output_weights, v)
- assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]
- attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
- attn_output = F.linear(attn_output, out_proj_weight, out_proj_bias)
-
- if need_weights:
- # average attention weights over heads
- attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
- return attn_output, attn_output_weights.sum(dim=1) / num_heads
- else:
- return attn_output, None
-
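The shape and mask conventions documented above match the stock torch.nn.MultiheadAttention, so the two mask kinds can be exercised without this file's copy. A minimal sketch of the documented semantics (an illustration using the standard PyTorch module, not a test of this implementation):

import torch
import torch.nn as nn

L, S, N, E, H = 4, 6, 2, 16, 4               # tgt len, src len, batch, embed dim, heads
mha = nn.MultiheadAttention(E, H)

query = torch.rand(L, N, E)                  # (L, N, E)
key = value = torch.rand(S, N, E)            # (S, N, E)

# Bool attn_mask: True marks positions that may NOT be attended.
attn_mask = torch.zeros(L, S, dtype=torch.bool)
attn_mask[:, -1] = True                      # never look at the last source position

# key_padding_mask: True marks padded keys that are ignored.
key_padding_mask = torch.zeros(N, S, dtype=torch.bool)
key_padding_mask[0, 4:] = True               # batch item 0 only has 4 real tokens

out, weights = mha(query, key, value,
                   attn_mask=attn_mask,
                   key_padding_mask=key_padding_mask)
print(out.shape, weights.shape)              # torch.Size([4, 2, 16]) torch.Size([2, 4, 6])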
-class MultiheadAttention(Module):
- r"""Allows the model to jointly attend to information
- from different representation subspaces.
- See reference: Attention Is All You Need
- .. math::
- \text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O
- \text{where} head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)
- Args:
- embed_dim: total dimension of the model.
- num_heads: parallel attention heads.
- dropout: a Dropout layer on attn_output_weights. Default: 0.0.
- bias: add bias as module parameter. Default: True.
- add_bias_kv: add bias to the key and value sequences at dim=0.
- add_zero_attn: add a new batch of zeros to the key and
- value sequences at dim=1.
- kdim: total number of features in key. Default: None.
- vdim: total number of features in value. Default: None.
- Note: if kdim and vdim are None, they will be set to embed_dim such that
- query, key, and value have the same number of features.
- Examples::
- >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
- >>> attn_output, attn_output_weights = multihead_attn(query, key, value)
- """
- # __annotations__ = {
- # 'bias_k': torch._jit_internal.Optional[torch.Tensor],
- # 'bias_v': torch._jit_internal.Optional[torch.Tensor],
- # }
- __constants__ = ['q_proj_weight', 'k_proj_weight', 'v_proj_weight', 'in_proj_weight']
-
- def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None):
- super(MultiheadAttention, self).__init__()
- self.embed_dim = embed_dim
- self.kdim = kdim if kdim is not None else embed_dim
- self.vdim = vdim if vdim is not None else embed_dim
- self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
-
- self.num_heads = num_heads
- self.dropout = dropout
- self.head_dim = embed_dim // num_heads
- assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
-
- if self._qkv_same_embed_dim is False:
- self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
- self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))
- self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))
- self.register_parameter('in_proj_weight', None)
- else:
- self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim))
- self.register_parameter('q_proj_weight', None)
- self.register_parameter('k_proj_weight', None)
- self.register_parameter('v_proj_weight', None)
-
- if bias:
- self.in_proj_bias = Parameter(torch.empty(3 * embed_dim))
- else:
- self.register_parameter('in_proj_bias', None)
- self.out_proj = Linear(embed_dim, embed_dim, bias=bias)
-
- if add_bias_kv:
- self.bias_k = Parameter(torch.empty(1, 1, embed_dim))
- self.bias_v = Parameter(torch.empty(1, 1, embed_dim))
- else:
- self.bias_k = self.bias_v = None
-
- self.add_zero_attn = add_zero_attn
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- if self._qkv_same_embed_dim:
- xavier_uniform_(self.in_proj_weight)
- else:
- xavier_uniform_(self.q_proj_weight)
- xavier_uniform_(self.k_proj_weight)
- xavier_uniform_(self.v_proj_weight)
-
- if self.in_proj_bias is not None:
- constant_(self.in_proj_bias, 0.)
- constant_(self.out_proj.bias, 0.)
- if self.bias_k is not None:
- xavier_normal_(self.bias_k)
- if self.bias_v is not None:
- xavier_normal_(self.bias_v)
-
- def __setstate__(self, state):
- # Support loading old MultiheadAttention checkpoints generated by v1.1.0
- if '_qkv_same_embed_dim' not in state:
- state['_qkv_same_embed_dim'] = True
-
- super(MultiheadAttention, self).__setstate__(state)
-
- def forward(self, query, key, value, key_padding_mask=None,
- need_weights=True, attn_mask=None):
- # type: (Tensor, Tensor, Tensor, Optional[Tensor], bool, Optional[Tensor]) -> Tuple[Tensor, Optional[Tensor]]
- r"""
- Args:
- query, key, value: map a query and a set of key-value pairs to an output.
- See "Attention Is All You Need" for more details.
- key_padding_mask: if provided, specified padding elements in the key will
-            be ignored by the attention. This is a binary mask. When the value is True,
- the corresponding value on the attention layer will be filled with -inf.
- need_weights: output attn_output_weights.
- attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
- the batches while a 3D mask allows to specify a different mask for the entries of each batch.
- Shape:
- - Inputs:
- - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
- the embedding dimension.
- - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
- the embedding dimension.
- - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
- the embedding dimension.
- - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
-              If a ByteTensor is provided, the non-zero positions will be ignored while the zero
-              positions will be unchanged. If a BoolTensor is provided, the positions with the
- value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
- - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
- 3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
-              S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked
- positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
- while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
-              are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
- is provided, it will be added to the attention weight.
- - Outputs:
- - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
- E is the embedding dimension.
- - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
- L is the target sequence length, S is the source sequence length.
- """
- if not self._qkv_same_embed_dim:
- return multi_head_attention_forward(
- query, key, value, self.embed_dim, self.num_heads,
- self.in_proj_weight, self.in_proj_bias,
- self.bias_k, self.bias_v, self.add_zero_attn,
- self.dropout, self.out_proj.weight, self.out_proj.bias,
- training=self.training,
- key_padding_mask=key_padding_mask, need_weights=need_weights,
- attn_mask=attn_mask, use_separate_proj_weight=True,
- q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
- v_proj_weight=self.v_proj_weight)
- else:
- return multi_head_attention_forward(
- query, key, value, self.embed_dim, self.num_heads,
- self.in_proj_weight, self.in_proj_bias,
- self.bias_k, self.bias_v, self.add_zero_attn,
- self.dropout, self.out_proj.weight, self.out_proj.bias,
- training=self.training,
- key_padding_mask=key_padding_mask, need_weights=need_weights,
- attn_mask=attn_mask)
-
-
-class Transformer(Module):
- r"""A transformer model. User is able to modify the attributes as needed. The architecture
- is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer,
- Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and
- Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information
- Processing Systems, pages 6000-6010. Users can build the BERT(https://arxiv.org/abs/1810.04805)
- model with corresponding parameters.
-
- Args:
- d_model: the number of expected features in the encoder/decoder inputs (default=512).
- nhead: the number of heads in the multiheadattention models (default=8).
- num_encoder_layers: the number of sub-encoder-layers in the encoder (default=6).
- num_decoder_layers: the number of sub-decoder-layers in the decoder (default=6).
- dim_feedforward: the dimension of the feedforward network model (default=2048).
- dropout: the dropout value (default=0.1).
- activation: the activation function of encoder/decoder intermediate layer, relu or gelu (default=relu).
- custom_encoder: custom encoder (default=None).
- custom_decoder: custom decoder (default=None).
-
- Examples::
- >>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
- >>> src = torch.rand((10, 32, 512))
- >>> tgt = torch.rand((20, 32, 512))
- >>> out = transformer_model(src, tgt)
-
- Note: A full example to apply nn.Transformer module for the word language model is available in
- https://github.com/pytorch/examples/tree/master/word_language_model
- """
-
- def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,
- num_decoder_layers=6, dim_feedforward=2048, dropout=0.1,
- activation="relu", custom_encoder=None, custom_decoder=None):
- super(Transformer, self).__init__()
-
- if custom_encoder is not None:
- self.encoder = custom_encoder
- else:
- encoder_layer = TransformerEncoderLayer(d_model, nhead, dim_feedforward, dropout, activation)
- encoder_norm = LayerNorm(d_model)
- self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)
-
- if custom_decoder is not None:
- self.decoder = custom_decoder
- else:
- decoder_layer = TransformerDecoderLayer(d_model, nhead, dim_feedforward, dropout, activation)
- decoder_norm = LayerNorm(d_model)
- self.decoder = TransformerDecoder(decoder_layer, num_decoder_layers, decoder_norm)
-
- self._reset_parameters()
-
- self.d_model = d_model
- self.nhead = nhead
-
- def forward(self, src, tgt, src_mask=None, tgt_mask=None,
- memory_mask=None, src_key_padding_mask=None,
- tgt_key_padding_mask=None, memory_key_padding_mask=None):
- # type: (Tensor, Tensor, Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor]) -> Tensor # noqa
- r"""Take in and process masked source/target sequences.
-
- Args:
- src: the sequence to the encoder (required).
- tgt: the sequence to the decoder (required).
- src_mask: the additive mask for the src sequence (optional).
- tgt_mask: the additive mask for the tgt sequence (optional).
- memory_mask: the additive mask for the encoder output (optional).
- src_key_padding_mask: the ByteTensor mask for src keys per batch (optional).
- tgt_key_padding_mask: the ByteTensor mask for tgt keys per batch (optional).
- memory_key_padding_mask: the ByteTensor mask for memory keys per batch (optional).
-
- Shape:
- - src: :math:`(S, N, E)`.
- - tgt: :math:`(T, N, E)`.
- - src_mask: :math:`(S, S)`.
- - tgt_mask: :math:`(T, T)`.
- - memory_mask: :math:`(T, S)`.
- - src_key_padding_mask: :math:`(N, S)`.
- - tgt_key_padding_mask: :math:`(N, T)`.
- - memory_key_padding_mask: :math:`(N, S)`.
-
- Note: [src/tgt/memory]_mask ensures that position i is allowed to attend the unmasked
- positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
- while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
- are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
- is provided, it will be added to the attention weight.
- [src/tgt/memory]_key_padding_mask provides specified elements in the key to be ignored by
- the attention. If a ByteTensor is provided, the non-zero positions will be ignored while the zero
- positions will be unchanged. If a BoolTensor is provided, the positions with the
- value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
-
- - output: :math:`(T, N, E)`.
-
- Note: Due to the multi-head attention architecture in the transformer model,
-        the output sequence length of a transformer is the same as the input sequence
-        (i.e. target) length of the decoder.
-
- where S is the source sequence length, T is the target sequence length, N is the
- batch size, E is the feature number
-
- Examples:
- >>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)
- """
-
- if src.size(1) != tgt.size(1):
- raise RuntimeError("the batch number of src and tgt must be equal")
-
- if src.size(2) != self.d_model or tgt.size(2) != self.d_model:
- raise RuntimeError("the feature number of src and tgt must be equal to d_model")
-
- memory = self.encoder(src, mask=src_mask, src_key_padding_mask=src_key_padding_mask)
- output = self.decoder(tgt, memory, tgt_mask=tgt_mask, memory_mask=memory_mask,
- tgt_key_padding_mask=tgt_key_padding_mask,
- memory_key_padding_mask=memory_key_padding_mask)
- return output
-
- def generate_square_subsequent_mask(self, sz):
- r"""Generate a square mask for the sequence. The masked positions are filled with float('-inf').
- Unmasked positions are filled with float(0.0).
- """
- mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
- mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
- return mask
-
- def _reset_parameters(self):
- r"""Initiate parameters in the transformer model."""
-
- for p in self.parameters():
- if p.dim() > 1:
- xavier_uniform_(p)
-
-
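generate_square_subsequent_mask returns an additive float mask in which position i may only attend to positions up to and including i. A quick check, assuming the Transformer class above is importable; the layer sizes below are arbitrary since the mask does not depend on the weights:

model = Transformer(d_model=8, nhead=2, num_encoder_layers=1, num_decoder_layers=1)
print(model.generate_square_subsequent_mask(3))
# tensor([[0., -inf, -inf],
#         [0., 0., -inf],
#         [0., 0., 0.]])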
-class TransformerEncoder(Module):
- r"""TransformerEncoder is a stack of N encoder layers
-
- Args:
- encoder_layer: an instance of the TransformerEncoderLayer() class (required).
- num_layers: the number of sub-encoder-layers in the encoder (required).
- norm: the layer normalization component (optional).
-
- Examples::
- >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
- >>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
- >>> src = torch.rand(10, 32, 512)
- >>> out = transformer_encoder(src)
- """
- __constants__ = ['norm']
-
- def __init__(self, encoder_layer, num_layers, norm=None):
- super(TransformerEncoder, self).__init__()
- self.layers = _get_clones(encoder_layer, num_layers)
- self.num_layers = num_layers
- self.norm = norm
-
- def forward(self, src, mask=None, src_key_padding_mask=None):
- # type: (Tensor, Optional[Tensor], Optional[Tensor]) -> Tensor
- r"""Pass the input through the encoder layers in turn.
-
- Args:
- src: the sequence to the encoder (required).
- mask: the mask for the src sequence (optional).
- src_key_padding_mask: the mask for the src keys per batch (optional).
-
- Shape:
- see the docs in Transformer class.
- """
- output = src
-
- for i, mod in enumerate(self.layers):
- output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)
-
- if self.norm is not None:
- output = self.norm(output)
-
- return output
-
-
-class TransformerDecoder(Module):
- r"""TransformerDecoder is a stack of N decoder layers
-
- Args:
- decoder_layer: an instance of the TransformerDecoderLayer() class (required).
- num_layers: the number of sub-decoder-layers in the decoder (required).
- norm: the layer normalization component (optional).
-
- Examples::
- >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
- >>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
- >>> memory = torch.rand(10, 32, 512)
- >>> tgt = torch.rand(20, 32, 512)
- >>> out = transformer_decoder(tgt, memory)
- """
- __constants__ = ['norm']
-
- def __init__(self, decoder_layer, num_layers, norm=None):
- super(TransformerDecoder, self).__init__()
- self.layers = _get_clones(decoder_layer, num_layers)
- self.num_layers = num_layers
- self.norm = norm
-
- def forward(self, tgt, memory, memory2=None, tgt_mask=None,
- memory_mask=None, memory_mask2=None, tgt_key_padding_mask=None,
- memory_key_padding_mask=None, memory_key_padding_mask2=None):
- # type: (Tensor, Tensor, Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor]) -> Tensor
- r"""Pass the inputs (and mask) through the decoder layer in turn.
-
- Args:
- tgt: the sequence to the decoder (required).
- memory: the sequence from the last layer of the encoder (required).
- tgt_mask: the mask for the tgt sequence (optional).
- memory_mask: the mask for the memory sequence (optional).
- tgt_key_padding_mask: the mask for the tgt keys per batch (optional).
- memory_key_padding_mask: the mask for the memory keys per batch (optional).
- memory2: an optional second encoder memory, attended to by siamese decoder layers (optional).
- memory_mask2: the mask for the memory2 sequence (optional).
- memory_key_padding_mask2: the mask for the memory2 keys per batch (optional).
-
- Shape:
- see the docs in Transformer class.
- """
- output = tgt
-
- for mod in self.layers:
- output = mod(output, memory, memory2=memory2, tgt_mask=tgt_mask,
- memory_mask=memory_mask, memory_mask2=memory_mask2,
- tgt_key_padding_mask=tgt_key_padding_mask,
- memory_key_padding_mask=memory_key_padding_mask,
- memory_key_padding_mask2=memory_key_padding_mask2)
-
- if self.norm is not None:
- output = self.norm(output)
-
- return output
-
-class TransformerEncoderLayer(Module):
- r"""TransformerEncoderLayer is made up of self-attn and feedforward network.
- This standard encoder layer is based on the paper "Attention Is All You Need".
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
- Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in
- Neural Information Processing Systems, pages 6000-6010. Users may modify or implement
- in a different way during application.
-
- Args:
- d_model: the number of expected features in the input (required).
- nhead: the number of heads in the multiheadattention models (required).
- dim_feedforward: the dimension of the feedforward network model (default=2048).
- dropout: the dropout value (default=0.1).
- activation: the activation function of intermediate layer, relu or gelu (default=relu).
-
- Examples::
- >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
- >>> src = torch.rand(10, 32, 512)
- >>> out = encoder_layer(src)
- """
-
- def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
- activation="relu", debug=False):
- super(TransformerEncoderLayer, self).__init__()
- self.debug = debug
- self.self_attn = MultiheadAttention(d_model, nhead, dropout=dropout)
- # Implementation of Feedforward model
- self.linear1 = Linear(d_model, dim_feedforward)
- self.dropout = Dropout(dropout)
- self.linear2 = Linear(dim_feedforward, d_model)
-
- self.norm1 = LayerNorm(d_model)
- self.norm2 = LayerNorm(d_model)
- self.dropout1 = Dropout(dropout)
- self.dropout2 = Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
-
- def __setstate__(self, state):
- if 'activation' not in state:
- state['activation'] = F.relu
- super(TransformerEncoderLayer, self).__setstate__(state)
-
- def forward(self, src, src_mask=None, src_key_padding_mask=None):
- # type: (Tensor, Optional[Tensor], Optional[Tensor]) -> Tensor
- r"""Pass the input through the encoder layer.
-
- Args:
- src: the sequence to the encoder layer (required).
- src_mask: the mask for the src sequence (optional).
- src_key_padding_mask: the mask for the src keys per batch (optional).
-
- Shape:
- see the docs in Transformer class.
- """
- src2, attn = self.self_attn(src, src, src, attn_mask=src_mask,
- key_padding_mask=src_key_padding_mask)
- if self.debug: self.attn = attn
- src = src + self.dropout1(src2)
- src = self.norm1(src)
- src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
- src = src + self.dropout2(src2)
- src = self.norm2(src)
-
- return src
-
-
-class TransformerDecoderLayer(Module):
- r"""TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network.
- This standard decoder layer is based on the paper "Attention Is All You Need".
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
- Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in
- Neural Information Processing Systems, pages 6000-6010. Users may modify or implement
- in a different way during application.
-
- Args:
- d_model: the number of expected features in the input (required).
- nhead: the number of heads in the multiheadattention models (required).
- dim_feedforward: the dimension of the feedforward network model (default=2048).
- dropout: the dropout value (default=0.1).
- activation: the activation function of intermediate layer, relu or gelu (default=relu).
-
- Examples::
- >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
- >>> memory = torch.rand(10, 32, 512)
- >>> tgt = torch.rand(20, 32, 512)
- >>> out = decoder_layer(tgt, memory)
- """
-
- def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
- activation="relu", self_attn=True, siamese=False, debug=False):
- super(TransformerDecoderLayer, self).__init__()
- self.has_self_attn, self.siamese = self_attn, siamese
- self.debug = debug
- if self.has_self_attn:
- self.self_attn = MultiheadAttention(d_model, nhead, dropout=dropout)
- self.norm1 = LayerNorm(d_model)
- self.dropout1 = Dropout(dropout)
- self.multihead_attn = MultiheadAttention(d_model, nhead, dropout=dropout)
- # Implementation of Feedforward model
- self.linear1 = Linear(d_model, dim_feedforward)
- self.dropout = Dropout(dropout)
- self.linear2 = Linear(dim_feedforward, d_model)
-
- self.norm2 = LayerNorm(d_model)
- self.norm3 = LayerNorm(d_model)
- self.dropout2 = Dropout(dropout)
- self.dropout3 = Dropout(dropout)
- if self.siamese:
- self.multihead_attn2 = MultiheadAttention(d_model, nhead, dropout=dropout)
-
- self.activation = _get_activation_fn(activation)
-
- def __setstate__(self, state):
- if 'activation' not in state:
- state['activation'] = F.relu
- super(TransformerDecoderLayer, self).__setstate__(state)
-
- def forward(self, tgt, memory, tgt_mask=None, memory_mask=None,
- tgt_key_padding_mask=None, memory_key_padding_mask=None,
- memory2=None, memory_mask2=None, memory_key_padding_mask2=None):
- # type: (Tensor, Tensor, Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor]) -> Tensor
- r"""Pass the inputs (and mask) through the decoder layer.
-
- Args:
- tgt: the sequence to the decoder layer (required).
- memory: the sequence from the last layer of the encoder (required).
- tgt_mask: the mask for the tgt sequence (optional).
- memory_mask: the mask for the memory sequence (optional).
- tgt_key_padding_mask: the mask for the tgt keys per batch (optional).
- memory_key_padding_mask: the mask for the memory keys per batch (optional).
-
- Shape:
- see the docs in Transformer class.
- """
- if self.has_self_attn:
- tgt2, attn = self.self_attn(tgt, tgt, tgt, attn_mask=tgt_mask,
- key_padding_mask=tgt_key_padding_mask)
- tgt = tgt + self.dropout1(tgt2)
- tgt = self.norm1(tgt)
- if self.debug: self.attn = attn
- tgt2, attn2 = self.multihead_attn(tgt, memory, memory, attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)
- if self.debug: self.attn2 = attn2
-
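- # Siamese branch (this file's extension over the stock PyTorch decoder layer):
- # cross-attend over a second encoder memory and add its output alongside the
- # first cross-attention result before the norm2/feed-forward block.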
- if self.siamese:
- tgt3, attn3 = self.multihead_attn2(tgt, memory2, memory2, attn_mask=memory_mask2,
- key_padding_mask=memory_key_padding_mask2)
- tgt = tgt + self.dropout2(tgt3)
- if self.debug: self.attn3 = attn3
-
- tgt = tgt + self.dropout2(tgt2)
- tgt = self.norm2(tgt)
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
- tgt = tgt + self.dropout3(tgt2)
- tgt = self.norm3(tgt)
-
- return tgt
-
-
-def _get_clones(module, N):
- return ModuleList([copy.deepcopy(module) for i in range(N)])
-
-
-def _get_activation_fn(activation):
- if activation == "relu":
- return F.relu
- elif activation == "gelu":
- return F.gelu
-
- raise RuntimeError("activation should be relu/gelu, not {}".format(activation))
-
-
-class PositionalEncoding(nn.Module):
- r"""Inject some information about the relative or absolute position of the tokens
- in the sequence. The positional encodings have the same dimension as
- the embeddings, so that the two can be summed. Here, we use sine and cosine
- functions of different frequencies.
- .. math::
- \text{PosEncoder}(pos, 2i) = \sin(pos/10000^{2i/d_{model}})
- \text{PosEncoder}(pos, 2i+1) = \cos(pos/10000^{2i/d_{model}})
- \text{where pos is the word position and i is the embed idx}
- Args:
- d_model: the embed dim (required).
- dropout: the dropout value (default=0.1).
- max_len: the max. length of the incoming sequence (default=5000).
- Examples:
- >>> pos_encoder = PositionalEncoding(d_model)
- """
-
- def __init__(self, d_model, dropout=0.1, max_len=5000):
- super(PositionalEncoding, self).__init__()
- self.dropout = nn.Dropout(p=dropout)
-
- pe = torch.zeros(max_len, d_model)
- position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
- div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
- pe[:, 0::2] = torch.sin(position * div_term)
- pe[:, 1::2] = torch.cos(position * div_term)
- pe = pe.unsqueeze(0).transpose(0, 1)
- self.register_buffer('pe', pe)
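- # pe now has shape [max_len, 1, d_model], so it broadcasts across the batch
- # dimension when added to x of shape [seq_len, batch, d_model] in forward().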
-
- def forward(self, x):
- r"""Inputs of forward function
- Args:
- x: the sequence fed to the positional encoder model (required).
- Shape:
- x: [sequence length, batch size, embed dim]
- output: [sequence length, batch size, embed dim]
- Examples:
- >>> output = pos_encoder(x)
- """
-
- x = x + self.pe[:x.size(0), :]
- return self.dropout(x)
-
-
-if __name__ == '__main__':
- transformer_model = Transformer(nhead=16, num_encoder_layers=12)
- src = torch.rand((10, 32, 512))
- tgt = torch.rand((20, 32, 512))
- out = transformer_model(src, tgt)
- print(out)
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/llava_instruct_dataset.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/llava_instruct_dataset.py
deleted file mode 100644
index 105e0981581b7934c5df2bc53ecf03142cc4c969..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/llava_instruct_dataset.py
+++ /dev/null
@@ -1,228 +0,0 @@
-import os
-from video_llama.datasets.datasets.base_dataset import BaseDataset
-from video_llama.datasets.datasets.caption_datasets import CaptionDataset
-import pandas as pd
-import decord
-from decord import VideoReader
-import random
-import torch
-from torch.utils.data.dataloader import default_collate
-from PIL import Image
-from typing import Dict, Optional, Sequence
-import transformers
-import pathlib
-import json
-from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaTokenizer
-from video_llama.conversation.conversation_video import Conversation,SeparatorStyle
-from video_llama.processors import AlproVideoTrainProcessor  # used by self.transform below; assumed import path
-DEFAULT_IMAGE_PATCH_TOKEN = '<ImageHere>'  # assumed literal; the angle-bracket token was stripped from this dump
-DEFAULT_IMAGE_TOKEN = '<image>'  # assumed literal; the angle-bracket token was stripped from this dump
-import copy
-IGNORE_INDEX = -100
-image_conversation = Conversation(
- system="",
- roles=("Human", "Assistant"),
- messages=[],
- offset=0,
- sep_style=SeparatorStyle.SINGLE,
- sep="###",
-)
-IGNORE_INDEX = -100
-
-class Instruct_Dataset(BaseDataset):
- def __init__(self, vis_processor, text_processor, vis_root, ann_root,num_video_query_token=32,tokenizer_name = '/mnt/workspace/ckpt/vicuna-13b/',data_type = 'image'):
- """
- vis_root (string): Root directory of Llava images (e.g. webvid_eval/video/)
- ann_root (string): Root directory of video (e.g. webvid_eval/annotations/)
- split (string): val or test
- """
- super().__init__(vis_processor=vis_processor, text_processor=text_processor)
-
- data_path = pathlib.Path(ann_root)
- with data_path.open(encoding='utf-8') as f:
- self.annotation = json.load(f)
-
- self.vis_root = vis_root
- self.resize_size = 224
- self.num_frm = 8
- self.tokenizer = LlamaTokenizer.from_pretrained(tokenizer_name, use_fast=False)
- self.tokenizer.pad_token = self.tokenizer.eos_token
- self.tokenizer.add_tokens([DEFAULT_IMAGE_PATCH_TOKEN], special_tokens=True)
- self.num_video_query_token = num_video_query_token
- self.IMAGE_PATCH_TOKEN_ID = self.tokenizer.get_vocab()[DEFAULT_IMAGE_PATCH_TOKEN]
-
- self.transform = AlproVideoTrainProcessor(
- image_size=self.resize_size, n_frms = self.num_frm
- ).transform
- self.data_type = data_type
-
- def _get_image_path(self, sample):
- rel_video_fp ='COCO_train2014_' + sample['image']
- full_video_fp = os.path.join(self.vis_root, rel_video_fp)
- return full_video_fp
-
- def __getitem__(self, index):
- num_retries = 10 # skip error videos
- for _ in range(num_retries):
- try:
- sample = self.annotation[index]
-
- image_path = self._get_image_path(sample)
- conversation_list = sample['conversations']
- image = Image.open(image_path).convert("RGB")
-
- image = self.vis_processor(image)
- # text = self.text_processor(text)
- sources = preprocess_multimodal(copy.deepcopy(conversation_list), None, cur_token_len=self.num_video_query_token)
- data_dict = preprocess(
- sources,
- self.tokenizer)
- data_dict = dict(input_ids=data_dict["input_ids"][0],
- labels=data_dict["labels"][0])
-
- # image exist in the data
- data_dict['image'] = image
- except:
- print(f"Failed to load examples with image: {image_path}. "
- f"Will randomly sample an example as a replacement.")
- index = random.randint(0, len(self) - 1)
- continue
- break
- else:
- raise RuntimeError(f"Failed to fetch image after {num_retries} retries.")
- # "image_id" is kept to stay compatible with the COCO evaluation format
- return {
- "image": image,
- "text_input": data_dict["input_ids"],
- "labels": data_dict["labels"],
- "type":'image',
- }
-
- def __len__(self):
- return len(self.annotation)
-
- def collater(self, instances):
- input_ids, labels = tuple([instance[key] for instance in instances]
- for key in ("text_input", "labels"))
- input_ids = torch.nn.utils.rnn.pad_sequence(
- input_ids,
- batch_first=True,
- padding_value=self.tokenizer.pad_token_id)
- labels = torch.nn.utils.rnn.pad_sequence(labels,
- batch_first=True,
- padding_value=IGNORE_INDEX)
- batch = dict(
- input_ids=input_ids,
- labels=labels,
- attention_mask=input_ids.ne(self.tokenizer.pad_token_id),
- )
-
- if 'image' in instances[0]:
- images = [instance['image'] for instance in instances]
- if all(x is not None and x.shape == images[0].shape for x in images):
- batch['images'] = torch.stack(images)
- else:
- batch['images'] = images
- batch['conv_type'] = 'multi'
- return batch
-
-
-def preprocess_multimodal(
- conversation_list: Sequence[str],
- multimodal_cfg: dict,
- cur_token_len: int,
-) -> Dict:
- # replace the image placeholder in each turn of the conversation list
- is_multimodal = True
- # image_token_len = multimodal_cfg['image_token_len']
- image_token_len = cur_token_len
-
- for sentence in conversation_list:
- replace_token = '<Image>' + DEFAULT_IMAGE_PATCH_TOKEN * image_token_len + '</Image>'  # assumed wrappers; the angle-bracket literals were stripped from this dump
- sentence["value"] = sentence["value"].replace(DEFAULT_IMAGE_TOKEN, replace_token)
-
- return [conversation_list]
-
-def _add_speaker_and_signal(header, source, get_conversation=True):
- """Add speaker and start/end signal on each round."""
- BEGIN_SIGNAL = "###"
- END_SIGNAL = "\n"
- conversation = header
- for sentence in source:
- from_str = sentence["from"]
- if from_str.lower() == "human":
- from_str = image_conversation.roles[0]
- elif from_str.lower() == "gpt":
- from_str = image_conversation.roles[1]
- else:
- from_str = 'unknown'
- sentence["value"] = (BEGIN_SIGNAL + from_str + ": " +
- sentence["value"] + END_SIGNAL)
- if get_conversation:
- conversation += sentence["value"]
- conversation += BEGIN_SIGNAL
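- # e.g. for a single QA pair the result is (illustrative, placeholders stand for
- # the actual strings): "<header>###Human: <question>\n###Assistant: <answer>\n###"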
- return conversation
-
-def _tokenize_fn(strings: Sequence[str],
- tokenizer: transformers.PreTrainedTokenizer) -> Dict:
- """Tokenize a list of strings."""
- tokenized_list = [
- tokenizer(
- text,
- return_tensors="pt",
- padding="longest",
- max_length=512,
- truncation=True,
- ) for text in strings
- ]
- input_ids = labels = [
- tokenized.input_ids[0] for tokenized in tokenized_list
- ]
- input_ids_lens = labels_lens = [
- tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item()
- for tokenized in tokenized_list
- ]
- return dict(
- input_ids=input_ids,
- labels=labels,
- input_ids_lens=input_ids_lens,
- labels_lens=labels_lens,
- )
-
-def preprocess(
- sources: Sequence[str],
- tokenizer: transformers.PreTrainedTokenizer,
-) -> Dict:
- """
- Given a list of sources, each is a conversation list. This transform:
- 1. Add signal '### ' at the beginning of each sentence, with end signal '\n';
- 2. Concatenate conversations together;
- 3. Tokenize the concatenated conversation;
- 4. Make a deepcopy as the target. Mask human words with IGNORE_INDEX.
- """
- # add end signal and concatenate together
- conversations = []
- for source in sources:
- header = f"{image_conversation.system}\n\n"
- conversation = _add_speaker_and_signal(header, source)
- conversations.append(conversation)
- # tokenize conversations
- conversations_tokenized = _tokenize_fn(conversations, tokenizer)
- input_ids = conversations_tokenized["input_ids"]
- targets = copy.deepcopy(input_ids)
- for target, source in zip(targets, sources):
- tokenized_lens = _tokenize_fn([header] + [s["value"] for s in source],
- tokenizer)["input_ids_lens"]
- speakers = [sentence["from"] for sentence in source]
- _mask_targets(target, tokenized_lens, speakers)
-
- return dict(input_ids=input_ids, labels=targets)
-
-def _mask_targets(target, tokenized_lens, speakers):
- # cur_idx = 0
- cur_idx = tokenized_lens[0]
- tokenized_lens = tokenized_lens[1:]
- target[:cur_idx] = IGNORE_INDEX
- for tokenized_len, speaker in zip(tokenized_lens, speakers):
- if speaker == "human":
- target[cur_idx+2:cur_idx + tokenized_len] = IGNORE_INDEX
- cur_idx += tokenized_len
diff --git a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/spectro.py b/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/spectro.py
deleted file mode 100644
index 63e0ede4714b13903bdbddb6edafe32aac7bcc1c..0000000000000000000000000000000000000000
--- a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/spectro.py
+++ /dev/null
@@ -1,185 +0,0 @@
-"""
-Audio processing tools to convert between spectrogram images and waveforms.
-"""
-import io
-import typing as T
-
-import numpy as np
-from PIL import Image
-import pydub
-from scipy.io import wavfile
-import torch
-import torchaudio
-
-
-def wav_bytes_from_spectrogram_image(image: Image.Image) -> T.Tuple[io.BytesIO, float]:
- """
- Reconstruct a WAV audio clip from a spectrogram image. Also returns the duration in seconds.
- """
-
- max_volume = 50
- power_for_image = 0.25
- Sxx = spectrogram_from_image(image, max_volume=max_volume, power_for_image=power_for_image)
-
- sample_rate = 44100 # [Hz]
- clip_duration_ms = 5000 # [ms]
-
- bins_per_image = 512
- n_mels = 512
-
- # FFT parameters
- window_duration_ms = 100 # [ms]
- padded_duration_ms = 400 # [ms]
- step_size_ms = 10 # [ms]
-
- # Derived parameters
- num_samples = int(image.width / float(bins_per_image) * clip_duration_ms) * sample_rate
- n_fft = int(padded_duration_ms / 1000.0 * sample_rate)
- hop_length = int(step_size_ms / 1000.0 * sample_rate)
- win_length = int(window_duration_ms / 1000.0 * sample_rate)
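- # With the defaults above: n_fft = int(0.400 * 44100) = 17640,
- # hop_length = int(0.010 * 44100) = 441, win_length = int(0.100 * 44100) = 4410.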
-
- samples = waveform_from_spectrogram(
- Sxx=Sxx,
- n_fft=n_fft,
- hop_length=hop_length,
- win_length=win_length,
- num_samples=num_samples,
- sample_rate=sample_rate,
- mel_scale=True,
- n_mels=n_mels,
- max_mel_iters=200,
- num_griffin_lim_iters=32,
- )
-
- wav_bytes = io.BytesIO()
- wavfile.write(wav_bytes, sample_rate, samples.astype(np.int16))
- wav_bytes.seek(0)
-
- duration_s = float(len(samples)) / sample_rate
-
- return wav_bytes, duration_s
-
-
-def spectrogram_from_image(
- image: Image.Image, max_volume: float = 50, power_for_image: float = 0.25
-) -> np.ndarray:
- """
- Compute a spectrogram magnitude array from a spectrogram image.
-
- TODO(hayk): Add image_from_spectrogram and call this out as the reverse.
- """
- # Convert to a numpy array of floats
- data = np.array(image).astype(np.float32)
-
- # Flip Y take a single channel
- data = data[::-1, :, 0]
-
- # Invert
- data = 255 - data
-
- # Rescale to max volume
- data = data * max_volume / 255
-
- # Reverse the power curve
- data = np.power(data, 1 / power_for_image)
-
- return data
-
-
-def spectrogram_from_waveform(
- waveform: np.ndarray,
- sample_rate: int,
- n_fft: int,
- hop_length: int,
- win_length: int,
- mel_scale: bool = True,
- n_mels: int = 512,
-) -> np.ndarray:
- """
- Compute a spectrogram from a waveform.
- """
-
- spectrogram_func = torchaudio.transforms.Spectrogram(
- n_fft=n_fft,
- power=None,
- hop_length=hop_length,
- win_length=win_length,
- )
-
- waveform_tensor = torch.from_numpy(waveform.astype(np.float32)).reshape(1, -1)
- Sxx_complex = spectrogram_func(waveform_tensor).numpy()[0]
-
- Sxx_mag = np.abs(Sxx_complex)
-
- if mel_scale:
- mel_scaler = torchaudio.transforms.MelScale(
- n_mels=n_mels,
- sample_rate=sample_rate,
- f_min=0,
- f_max=10000,
- n_stft=n_fft // 2 + 1,
- norm=None,
- mel_scale="htk",
- )
-
- Sxx_mag = mel_scaler(torch.from_numpy(Sxx_mag)).numpy()
-
- return Sxx_mag
-
-
-def waveform_from_spectrogram(
- Sxx: np.ndarray,
- n_fft: int,
- hop_length: int,
- win_length: int,
- num_samples: int,
- sample_rate: int,
- mel_scale: bool = True,
- n_mels: int = 512,
- max_mel_iters: int = 200,
- num_griffin_lim_iters: int = 32,
- device: str = "cuda:0",
-) -> np.ndarray:
- """
- Reconstruct a waveform from a spectrogram.
-
- This is an approximate inverse of spectrogram_from_waveform, using the Griffin-Lim algorithm
- to approximate the phase.
- """
- Sxx_torch = torch.from_numpy(Sxx).to(device)
-
- # TODO(hayk): Make this a class that caches the two things
-
- if mel_scale:
- mel_inv_scaler = torchaudio.transforms.InverseMelScale(
- n_mels=n_mels,
- sample_rate=sample_rate,
- f_min=0,
- f_max=10000,
- n_stft=n_fft // 2 + 1,
- norm=None,
- mel_scale="htk",
- max_iter=max_mel_iters,
- ).to(device)
-
- Sxx_torch = mel_inv_scaler(Sxx_torch)
-
- griffin_lim = torchaudio.transforms.GriffinLim(
- n_fft=n_fft,
- win_length=win_length,
- hop_length=hop_length,
- power=1.0,
- n_iter=num_griffin_lim_iters,
- ).to(device)
-
- waveform = griffin_lim(Sxx_torch).cpu().numpy()
-
- return waveform
-
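-# Illustrative round trip (not part of the original module), using the same FFT
-# parameters as wav_bytes_from_spectrogram_image above:
-#
-#   import numpy as np
-#   sr = 44100
-#   wave = np.random.randn(sr).astype(np.float32)  # 1 s of noise
-#   Sxx = spectrogram_from_waveform(wave, sr, n_fft=17640, hop_length=441, win_length=4410)
-#   recon = waveform_from_spectrogram(Sxx, n_fft=17640, hop_length=441, win_length=4410,
-#                                     num_samples=sr, sample_rate=sr, device="cpu")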
-
-def mp3_bytes_from_wav_bytes(wav_bytes: io.BytesIO) -> io.BytesIO:
- mp3_bytes = io.BytesIO()
- sound = pydub.AudioSegment.from_wav(wav_bytes)
- sound.export(mp3_bytes, format="mp3")
- mp3_bytes.seek(0)
- return mp3_bytes
\ No newline at end of file
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/worker.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/worker.py
deleted file mode 100644
index f1302899f2f0e078613e69d9a8103ecc00bae95d..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/worker.py
+++ /dev/null
@@ -1,269 +0,0 @@
-"""Async gunicorn worker for aiohttp.web"""
-
-import asyncio
-import os
-import re
-import signal
-import sys
-from types import FrameType
-from typing import Any, Awaitable, Callable, Optional, Union # noqa
-
-from gunicorn.config import AccessLogFormat as GunicornAccessLogFormat
-from gunicorn.workers import base
-
-from aiohttp import web
-
-from .helpers import set_result
-from .web_app import Application
-from .web_log import AccessLogger
-
-try:
- import ssl
-
- SSLContext = ssl.SSLContext
-except ImportError: # pragma: no cover
- ssl = None # type: ignore[assignment]
- SSLContext = object # type: ignore[misc,assignment]
-
-
-__all__ = ("GunicornWebWorker", "GunicornUVLoopWebWorker", "GunicornTokioWebWorker")
-
-
-class GunicornWebWorker(base.Worker): # type: ignore[misc,no-any-unimported]
-
- DEFAULT_AIOHTTP_LOG_FORMAT = AccessLogger.LOG_FORMAT
- DEFAULT_GUNICORN_LOG_FORMAT = GunicornAccessLogFormat.default
-
- def __init__(self, *args: Any, **kw: Any) -> None: # pragma: no cover
- super().__init__(*args, **kw)
-
- self._task: Optional[asyncio.Task[None]] = None
- self.exit_code = 0
- self._notify_waiter: Optional[asyncio.Future[bool]] = None
-
- def init_process(self) -> None:
- # create new event_loop after fork
- asyncio.get_event_loop().close()
-
- self.loop = asyncio.new_event_loop()
- asyncio.set_event_loop(self.loop)
-
- super().init_process()
-
- def run(self) -> None:
- self._task = self.loop.create_task(self._run())
-
- try: # ignore all finalization problems
- self.loop.run_until_complete(self._task)
- except Exception:
- self.log.exception("Exception in gunicorn worker")
- self.loop.run_until_complete(self.loop.shutdown_asyncgens())
- self.loop.close()
-
- sys.exit(self.exit_code)
-
- async def _run(self) -> None:
- runner = None
- if isinstance(self.wsgi, Application):
- app = self.wsgi
- elif asyncio.iscoroutinefunction(self.wsgi):
- wsgi = await self.wsgi()
- if isinstance(wsgi, web.AppRunner):
- runner = wsgi
- app = runner.app
- else:
- app = wsgi
- else:
- raise RuntimeError(
- "wsgi app should be either Application or "
- "async function returning Application, got {}".format(self.wsgi)
- )
-
- if runner is None:
- access_log = self.log.access_log if self.cfg.accesslog else None
- runner = web.AppRunner(
- app,
- logger=self.log,
- keepalive_timeout=self.cfg.keepalive,
- access_log=access_log,
- access_log_format=self._get_valid_log_format(
- self.cfg.access_log_format
- ),
- )
- await runner.setup()
-
- ctx = self._create_ssl_context(self.cfg) if self.cfg.is_ssl else None
-
- runner = runner
- assert runner is not None
- server = runner.server
- assert server is not None
- for sock in self.sockets:
- site = web.SockSite(
- runner,
- sock,
- ssl_context=ctx,
- shutdown_timeout=self.cfg.graceful_timeout / 100 * 95,
- )
- await site.start()
-
- # If our parent changed then we shut down.
- pid = os.getpid()
- try:
- while self.alive: # type: ignore[has-type]
- self.notify()
-
- cnt = server.requests_count
- if self.cfg.max_requests and cnt > self.cfg.max_requests:
- self.alive = False
- self.log.info("Max requests, shutting down: %s", self)
-
- elif pid == os.getpid() and self.ppid != os.getppid():
- self.alive = False
- self.log.info("Parent changed, shutting down: %s", self)
- else:
- await self._wait_next_notify()
- except BaseException:
- pass
-
- await runner.cleanup()
-
- def _wait_next_notify(self) -> "asyncio.Future[bool]":
- self._notify_waiter_done()
-
- loop = self.loop
- assert loop is not None
- self._notify_waiter = waiter = loop.create_future()
- self.loop.call_later(1.0, self._notify_waiter_done, waiter)
-
- return waiter
-
- def _notify_waiter_done(
- self, waiter: Optional["asyncio.Future[bool]"] = None
- ) -> None:
- if waiter is None:
- waiter = self._notify_waiter
- if waiter is not None:
- set_result(waiter, True)
-
- if waiter is self._notify_waiter:
- self._notify_waiter = None
-
- def init_signals(self) -> None:
- # Set up signals through the event loop API.
-
- self.loop.add_signal_handler(
- signal.SIGQUIT, self.handle_quit, signal.SIGQUIT, None
- )
-
- self.loop.add_signal_handler(
- signal.SIGTERM, self.handle_exit, signal.SIGTERM, None
- )
-
- self.loop.add_signal_handler(
- signal.SIGINT, self.handle_quit, signal.SIGINT, None
- )
-
- self.loop.add_signal_handler(
- signal.SIGWINCH, self.handle_winch, signal.SIGWINCH, None
- )
-
- self.loop.add_signal_handler(
- signal.SIGUSR1, self.handle_usr1, signal.SIGUSR1, None
- )
-
- self.loop.add_signal_handler(
- signal.SIGABRT, self.handle_abort, signal.SIGABRT, None
- )
-
- # Don't let SIGTERM and SIGUSR1 disturb active requests
- # by interrupting system calls
- signal.siginterrupt(signal.SIGTERM, False)
- signal.siginterrupt(signal.SIGUSR1, False)
- # Reset signals so Gunicorn doesn't swallow subprocess return codes
- # See: https://github.com/aio-libs/aiohttp/issues/6130
- if sys.version_info < (3, 8):
- # Starting from Python 3.8,
- # the default child watcher is ThreadedChildWatcher.
- # The watcher doesn't depend on SIGCHLD signal,
- # there is no need to reset it.
- signal.signal(signal.SIGCHLD, signal.SIG_DFL)
-
- def handle_quit(self, sig: int, frame: FrameType) -> None:
- self.alive = False
-
- # worker_int callback
- self.cfg.worker_int(self)
-
- # wakeup closing process
- self._notify_waiter_done()
-
- def handle_abort(self, sig: int, frame: FrameType) -> None:
- self.alive = False
- self.exit_code = 1
- self.cfg.worker_abort(self)
- sys.exit(1)
-
- @staticmethod
- def _create_ssl_context(cfg: Any) -> "SSLContext":
- """Creates SSLContext instance for usage in asyncio.create_server.
-
- See ssl.SSLSocket.__init__ for more details.
- """
- if ssl is None: # pragma: no cover
- raise RuntimeError("SSL is not supported.")
-
- ctx = ssl.SSLContext(cfg.ssl_version)
- ctx.load_cert_chain(cfg.certfile, cfg.keyfile)
- ctx.verify_mode = cfg.cert_reqs
- if cfg.ca_certs:
- ctx.load_verify_locations(cfg.ca_certs)
- if cfg.ciphers:
- ctx.set_ciphers(cfg.ciphers)
- return ctx
-
- def _get_valid_log_format(self, source_format: str) -> str:
- if source_format == self.DEFAULT_GUNICORN_LOG_FORMAT:
- return self.DEFAULT_AIOHTTP_LOG_FORMAT
- elif re.search(r"%\([^\)]+\)", source_format):
- raise ValueError(
- "Gunicorn's style options in form of `%(name)s` are not "
- "supported for the log formatting. Please use aiohttp's "
- "format specification to configure access log formatting: "
- "http://docs.aiohttp.org/en/stable/logging.html"
- "#format-specification"
- )
- else:
- return source_format
-
-
-class GunicornUVLoopWebWorker(GunicornWebWorker):
- def init_process(self) -> None:
- import uvloop
-
- # Close any existing event loop before setting a
- # new policy.
- asyncio.get_event_loop().close()
-
- # Setup uvloop policy, so that every
- # asyncio.get_event_loop() will create an instance
- # of uvloop event loop.
- asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
-
- super().init_process()
-
-
-class GunicornTokioWebWorker(GunicornWebWorker):
- def init_process(self) -> None: # pragma: no cover
- import tokio
-
- # Close any existing event loop before setting a
- # new policy.
- asyncio.get_event_loop().close()
-
- # Setup tokio policy, so that every
- # asyncio.get_event_loop() will create an instance
- # of tokio event loop.
- asyncio.set_event_loop_policy(tokio.EventLoopPolicy())
-
- super().init_process()
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/md.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/md.py
deleted file mode 100644
index 13aa062e71e4c07832c3dea08a70925b61848dcd..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/md.py
+++ /dev/null
@@ -1,582 +0,0 @@
-from functools import lru_cache
-from logging import getLogger
-from typing import List, Optional
-
-from .constant import (
- COMMON_SAFE_ASCII_CHARACTERS,
- TRACE,
- UNICODE_SECONDARY_RANGE_KEYWORD,
-)
-from .utils import (
- is_accentuated,
- is_ascii,
- is_case_variable,
- is_cjk,
- is_emoticon,
- is_hangul,
- is_hiragana,
- is_katakana,
- is_latin,
- is_punctuation,
- is_separator,
- is_symbol,
- is_thai,
- is_unprintable,
- remove_accent,
- unicode_range,
-)
-
-
-class MessDetectorPlugin:
- """
- Base abstract class used for mess detection plugins.
- All detectors MUST extend and implement given methods.
- """
-
- def eligible(self, character: str) -> bool:
- """
- Determine if given character should be fed in.
- """
- raise NotImplementedError # pragma: nocover
-
- def feed(self, character: str) -> None:
- """
- The main routine, executed for each eligible character.
- Insert the logic by which the text would be considered chaotic.
- """
- raise NotImplementedError # pragma: nocover
-
- def reset(self) -> None: # pragma: no cover
- """
- Permit to reset the plugin to the initial state.
- """
- raise NotImplementedError
-
- @property
- def ratio(self) -> float:
- """
- Compute the chaos ratio based on what your feed() has seen.
- Must NOT be lower than 0.0; there is no upper bound.
- """
- raise NotImplementedError # pragma: nocover
-
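-# A minimal illustrative plugin (not part of the original module). It is left
-# commented out because mess_ratio() auto-registers every concrete subclass of
-# MessDetectorPlugin via __subclasses__(); defining it for real would change the
-# detector set. It only demonstrates the eligible/feed/reset/ratio contract.
-#
-# class WhitespaceRunPlugin(MessDetectorPlugin):
-#     def __init__(self) -> None:
-#         self._run: int = 0
-#         self._longest_run: int = 0
-#         self._character_count: int = 0
-#
-#     def eligible(self, character: str) -> bool:
-#         return True
-#
-#     def feed(self, character: str) -> None:
-#         self._character_count += 1
-#         self._run = self._run + 1 if character.isspace() else 0
-#         self._longest_run = max(self._longest_run, self._run)
-#
-#     def reset(self) -> None:
-#         self._run = 0
-#         self._longest_run = 0
-#         self._character_count = 0
-#
-#     @property
-#     def ratio(self) -> float:
-#         if self._character_count == 0:
-#             return 0.0
-#         return self._longest_run / self._character_count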
-
-class TooManySymbolOrPunctuationPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._punctuation_count: int = 0
- self._symbol_count: int = 0
- self._character_count: int = 0
-
- self._last_printable_char: Optional[str] = None
- self._frenzy_symbol_in_word: bool = False
-
- def eligible(self, character: str) -> bool:
- return character.isprintable()
-
- def feed(self, character: str) -> None:
- self._character_count += 1
-
- if (
- character != self._last_printable_char
- and character not in COMMON_SAFE_ASCII_CHARACTERS
- ):
- if is_punctuation(character):
- self._punctuation_count += 1
- elif (
- character.isdigit() is False
- and is_symbol(character)
- and is_emoticon(character) is False
- ):
- self._symbol_count += 2
-
- self._last_printable_char = character
-
- def reset(self) -> None: # pragma: no cover
- self._punctuation_count = 0
- self._character_count = 0
- self._symbol_count = 0
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- ratio_of_punctuation: float = (
- self._punctuation_count + self._symbol_count
- ) / self._character_count
-
- return ratio_of_punctuation if ratio_of_punctuation >= 0.3 else 0.0
-
-
-class TooManyAccentuatedPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._character_count: int = 0
- self._accentuated_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return character.isalpha()
-
- def feed(self, character: str) -> None:
- self._character_count += 1
-
- if is_accentuated(character):
- self._accentuated_count += 1
-
- def reset(self) -> None: # pragma: no cover
- self._character_count = 0
- self._accentuated_count = 0
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0 or self._character_count < 8:
- return 0.0
- ratio_of_accentuation: float = self._accentuated_count / self._character_count
- return ratio_of_accentuation if ratio_of_accentuation >= 0.35 else 0.0
-
-
-class UnprintablePlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._unprintable_count: int = 0
- self._character_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- if is_unprintable(character):
- self._unprintable_count += 1
- self._character_count += 1
-
- def reset(self) -> None: # pragma: no cover
- self._unprintable_count = 0
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- return (self._unprintable_count * 8) / self._character_count
-
-
-class SuspiciousDuplicateAccentPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._successive_count: int = 0
- self._character_count: int = 0
-
- self._last_latin_character: Optional[str] = None
-
- def eligible(self, character: str) -> bool:
- return character.isalpha() and is_latin(character)
-
- def feed(self, character: str) -> None:
- self._character_count += 1
- if (
- self._last_latin_character is not None
- and is_accentuated(character)
- and is_accentuated(self._last_latin_character)
- ):
- if character.isupper() and self._last_latin_character.isupper():
- self._successive_count += 1
- # Worse if it's the same char duplicated with a different accent.
- if remove_accent(character) == remove_accent(self._last_latin_character):
- self._successive_count += 1
- self._last_latin_character = character
-
- def reset(self) -> None: # pragma: no cover
- self._successive_count = 0
- self._character_count = 0
- self._last_latin_character = None
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- return (self._successive_count * 2) / self._character_count
-
-
-class SuspiciousRange(MessDetectorPlugin):
- def __init__(self) -> None:
- self._suspicious_successive_range_count: int = 0
- self._character_count: int = 0
- self._last_printable_seen: Optional[str] = None
-
- def eligible(self, character: str) -> bool:
- return character.isprintable()
-
- def feed(self, character: str) -> None:
- self._character_count += 1
-
- if (
- character.isspace()
- or is_punctuation(character)
- or character in COMMON_SAFE_ASCII_CHARACTERS
- ):
- self._last_printable_seen = None
- return
-
- if self._last_printable_seen is None:
- self._last_printable_seen = character
- return
-
- unicode_range_a: Optional[str] = unicode_range(self._last_printable_seen)
- unicode_range_b: Optional[str] = unicode_range(character)
-
- if is_suspiciously_successive_range(unicode_range_a, unicode_range_b):
- self._suspicious_successive_range_count += 1
-
- self._last_printable_seen = character
-
- def reset(self) -> None: # pragma: no cover
- self._character_count = 0
- self._suspicious_successive_range_count = 0
- self._last_printable_seen = None
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- ratio_of_suspicious_range_usage: float = (
- self._suspicious_successive_range_count * 2
- ) / self._character_count
-
- if ratio_of_suspicious_range_usage < 0.1:
- return 0.0
-
- return ratio_of_suspicious_range_usage
-
-
-class SuperWeirdWordPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._word_count: int = 0
- self._bad_word_count: int = 0
- self._foreign_long_count: int = 0
-
- self._is_current_word_bad: bool = False
- self._foreign_long_watch: bool = False
-
- self._character_count: int = 0
- self._bad_character_count: int = 0
-
- self._buffer: str = ""
- self._buffer_accent_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- if character.isalpha():
- self._buffer += character
- if is_accentuated(character):
- self._buffer_accent_count += 1
- if (
- self._foreign_long_watch is False
- and (is_latin(character) is False or is_accentuated(character))
- and is_cjk(character) is False
- and is_hangul(character) is False
- and is_katakana(character) is False
- and is_hiragana(character) is False
- and is_thai(character) is False
- ):
- self._foreign_long_watch = True
- return
- if not self._buffer:
- return
- if (
- character.isspace() or is_punctuation(character) or is_separator(character)
- ) and self._buffer:
- self._word_count += 1
- buffer_length: int = len(self._buffer)
-
- self._character_count += buffer_length
-
- if buffer_length >= 4:
- if self._buffer_accent_count / buffer_length > 0.34:
- self._is_current_word_bad = True
- # Word/Buffer ending with an upper case accentuated letter are so rare,
- # that we will consider them all as suspicious. Same weight as foreign_long suspicious.
- if is_accentuated(self._buffer[-1]) and self._buffer[-1].isupper():
- self._foreign_long_count += 1
- self._is_current_word_bad = True
- if buffer_length >= 24 and self._foreign_long_watch:
- camel_case_dst = [
- i
- for c, i in zip(self._buffer, range(0, buffer_length))
- if c.isupper()
- ]
- probable_camel_cased: bool = False
-
- if camel_case_dst and (len(camel_case_dst) / buffer_length <= 0.3):
- probable_camel_cased = True
-
- if not probable_camel_cased:
- self._foreign_long_count += 1
- self._is_current_word_bad = True
-
- if self._is_current_word_bad:
- self._bad_word_count += 1
- self._bad_character_count += len(self._buffer)
- self._is_current_word_bad = False
-
- self._foreign_long_watch = False
- self._buffer = ""
- self._buffer_accent_count = 0
- elif (
- character not in {"<", ">", "-", "=", "~", "|", "_"}
- and character.isdigit() is False
- and is_symbol(character)
- ):
- self._is_current_word_bad = True
- self._buffer += character
-
- def reset(self) -> None: # pragma: no cover
- self._buffer = ""
- self._is_current_word_bad = False
- self._foreign_long_watch = False
- self._bad_word_count = 0
- self._word_count = 0
- self._character_count = 0
- self._bad_character_count = 0
- self._foreign_long_count = 0
-
- @property
- def ratio(self) -> float:
- if self._word_count <= 10 and self._foreign_long_count == 0:
- return 0.0
-
- return self._bad_character_count / self._character_count
-
-
-class CjkInvalidStopPlugin(MessDetectorPlugin):
- """
- GB (Chinese) based encodings often render the full stop incorrectly when the content does not fit,
- which can be easily detected by searching for an overuse of '丅' and '丄'.
- """
-
- def __init__(self) -> None:
- self._wrong_stop_count: int = 0
- self._cjk_character_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- if character in {"丅", "丄"}:
- self._wrong_stop_count += 1
- return
- if is_cjk(character):
- self._cjk_character_count += 1
-
- def reset(self) -> None: # pragma: no cover
- self._wrong_stop_count = 0
- self._cjk_character_count = 0
-
- @property
- def ratio(self) -> float:
- if self._cjk_character_count < 16:
- return 0.0
- return self._wrong_stop_count / self._cjk_character_count
-
-
-class ArchaicUpperLowerPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._buf: bool = False
-
- self._character_count_since_last_sep: int = 0
-
- self._successive_upper_lower_count: int = 0
- self._successive_upper_lower_count_final: int = 0
-
- self._character_count: int = 0
-
- self._last_alpha_seen: Optional[str] = None
- self._current_ascii_only: bool = True
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- is_concerned = character.isalpha() and is_case_variable(character)
- chunk_sep = is_concerned is False
-
- if chunk_sep and self._character_count_since_last_sep > 0:
- if (
- self._character_count_since_last_sep <= 64
- and character.isdigit() is False
- and self._current_ascii_only is False
- ):
- self._successive_upper_lower_count_final += (
- self._successive_upper_lower_count
- )
-
- self._successive_upper_lower_count = 0
- self._character_count_since_last_sep = 0
- self._last_alpha_seen = None
- self._buf = False
- self._character_count += 1
- self._current_ascii_only = True
-
- return
-
- if self._current_ascii_only is True and is_ascii(character) is False:
- self._current_ascii_only = False
-
- if self._last_alpha_seen is not None:
- if (character.isupper() and self._last_alpha_seen.islower()) or (
- character.islower() and self._last_alpha_seen.isupper()
- ):
- if self._buf is True:
- self._successive_upper_lower_count += 2
- self._buf = False
- else:
- self._buf = True
- else:
- self._buf = False
-
- self._character_count += 1
- self._character_count_since_last_sep += 1
- self._last_alpha_seen = character
-
- def reset(self) -> None: # pragma: no cover
- self._character_count = 0
- self._character_count_since_last_sep = 0
- self._successive_upper_lower_count = 0
- self._successive_upper_lower_count_final = 0
- self._last_alpha_seen = None
- self._buf = False
- self._current_ascii_only = True
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- return self._successive_upper_lower_count_final / self._character_count
-
-
-@lru_cache(maxsize=1024)
-def is_suspiciously_successive_range(
- unicode_range_a: Optional[str], unicode_range_b: Optional[str]
-) -> bool:
- """
- Determine if two Unicode ranges seen next to each other can be considered suspicious.
- """
- if unicode_range_a is None or unicode_range_b is None:
- return True
-
- if unicode_range_a == unicode_range_b:
- return False
-
- if "Latin" in unicode_range_a and "Latin" in unicode_range_b:
- return False
-
- if "Emoticons" in unicode_range_a or "Emoticons" in unicode_range_b:
- return False
-
- # Latin characters can be accompanied with a combining diacritical mark
- # eg. Vietnamese.
- if ("Latin" in unicode_range_a or "Latin" in unicode_range_b) and (
- "Combining" in unicode_range_a or "Combining" in unicode_range_b
- ):
- return False
-
- keywords_range_a, keywords_range_b = unicode_range_a.split(
- " "
- ), unicode_range_b.split(" ")
-
- for el in keywords_range_a:
- if el in UNICODE_SECONDARY_RANGE_KEYWORD:
- continue
- if el in keywords_range_b:
- return False
-
- # Japanese Exception
- range_a_jp_chars, range_b_jp_chars = (
- unicode_range_a
- in (
- "Hiragana",
- "Katakana",
- ),
- unicode_range_b in ("Hiragana", "Katakana"),
- )
- if (range_a_jp_chars or range_b_jp_chars) and (
- "CJK" in unicode_range_a or "CJK" in unicode_range_b
- ):
- return False
- if range_a_jp_chars and range_b_jp_chars:
- return False
-
- if "Hangul" in unicode_range_a or "Hangul" in unicode_range_b:
- if "CJK" in unicode_range_a or "CJK" in unicode_range_b:
- return False
- if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":
- return False
-
- # Chinese/Japanese use dedicated range for punctuation and/or separators.
- if ("CJK" in unicode_range_a or "CJK" in unicode_range_b) or (
- unicode_range_a in ["Katakana", "Hiragana"]
- and unicode_range_b in ["Katakana", "Hiragana"]
- ):
- if "Punctuation" in unicode_range_a or "Punctuation" in unicode_range_b:
- return False
- if "Forms" in unicode_range_a or "Forms" in unicode_range_b:
- return False
-
- return True
-
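-# Illustrative behaviour (not part of the original module):
-#   is_suspiciously_successive_range("Basic Latin", "Cyrillic")  -> True
-#   is_suspiciously_successive_range("Hiragana", "Katakana")     -> False (Japanese exception)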
-
-@lru_cache(maxsize=2048)
-def mess_ratio(
- decoded_sequence: str, maximum_threshold: float = 0.2, debug: bool = False
-) -> float:
- """
- Compute a mess ratio for a decoded byte sequence. The maximum threshold stops the computation early once it is exceeded.
- """
-
- detectors: List[MessDetectorPlugin] = [
- md_class() for md_class in MessDetectorPlugin.__subclasses__()
- ]
-
- length: int = len(decoded_sequence) + 1
-
- mean_mess_ratio: float = 0.0
-
- if length < 512:
- intermediary_mean_mess_ratio_calc: int = 32
- elif length <= 1024:
- intermediary_mean_mess_ratio_calc = 64
- else:
- intermediary_mean_mess_ratio_calc = 128
-
- for character, index in zip(decoded_sequence + "\n", range(length)):
- for detector in detectors:
- if detector.eligible(character):
- detector.feed(character)
-
- if (
- index > 0 and index % intermediary_mean_mess_ratio_calc == 0
- ) or index == length - 1:
- mean_mess_ratio = sum(dt.ratio for dt in detectors)
-
- if mean_mess_ratio >= maximum_threshold:
- break
-
- if debug:
- logger = getLogger("charset_normalizer")
-
- logger.log(
- TRACE,
- "Mess-detector extended-analysis start. "
- f"intermediary_mean_mess_ratio_calc={intermediary_mean_mess_ratio_calc} mean_mess_ratio={mean_mess_ratio} "
- f"maximum_threshold={maximum_threshold}",
- )
-
- if len(decoded_sequence) > 16:
- logger.log(TRACE, f"Starting with: {decoded_sequence[:16]}")
- logger.log(TRACE, f"Ending with: {decoded_sequence[-16::]}")
-
- for dt in detectors: # pragma: nocover
- logger.log(TRACE, f"{dt.__class__}: {dt.ratio}")
-
- return round(mean_mess_ratio, 3)
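-
-
-# Illustrative use (not part of the original module): clean ASCII prose scores
-# close to 0.0, while heavily garbled or mixed-script text climbs toward, and
-# past, the default maximum_threshold of 0.2.
-#
-#   mess_ratio("Simple english sentence.")  # ~0.0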
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-6b9ac83e.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-6b9ac83e.js
deleted file mode 100644
index 953f8499d74e0719ce48cd2f6693adbb74ec4e2c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-6b9ac83e.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as P,e as Q,s as R,G as J,k as B,O as G,N as q,K as k,o as O,p as w,z as S,v as N,A as C,x as T,V as Y,B as Z,am as y,P as V,R as H,U as j,M as v,Q as U,a1 as p,E as x,ae as $,h as z,j as D,q as ee,r as le,t as F,F as E}from"./index-1d65707a.js";/* empty css */import{B as te}from"./Button-f155035a.js";import{B as ne}from"./BlockTitle-dee077e8.js";import"./Info-7c6961ef.js";function K(l,e,n){const t=l.slice();return t[13]=e[n],t}function ie(l){let e;return{c(){e=V(l[3])},m(n,t){w(n,e,t)},p(n,t){t&8&&H(e,n[3])},d(n){n&&C(e)}}}function M(l){let e,n,t,f,c,u=l[13]+"",i,h,b,d;function m(){return l[10](l[13])}function s(..._){return l[11](l[13],..._)}return{c(){e=q("label"),n=q("input"),f=G(),c=q("span"),i=V(u),h=G(),n.disabled=l[2],n.checked=t=l[0].includes(l[13]),k(n,"type","checkbox"),k(n,"name","test"),k(n,"class","svelte-1qxcj04"),k(c,"class","ml-2 svelte-1qxcj04"),k(e,"class","svelte-1qxcj04"),j(e,"disabled",l[2]),j(e,"selected",l[0].includes(l[13]))},m(_,r){w(_,e,r),v(e,n),v(e,f),v(e,c),v(c,i),v(e,h),b||(d=[U(n,"change",m),U(n,"input",s)],b=!0)},p(_,r){l=_,r&4&&(n.disabled=l[2]),r&3&&t!==(t=l[0].includes(l[13]))&&(n.checked=t),r&2&&u!==(u=l[13]+"")&&H(i,u),r&4&&j(e,"disabled",l[2]),r&3&&j(e,"selected",l[0].includes(l[13]))},d(_){_&&C(e),b=!1,p(d)}}}function se(l){let e,n,t,f;e=new ne({props:{show_label:l[5],info:l[4],$$slots:{default:[ie]},$$scope:{ctx:l}}});let c=J(l[1]),u=[];for(let i=0;i{t.includes(o)?t.splice(t.indexOf(o),1):t.push(o),n(0,t)};function _(){m("change",t),c||m("input")}y(()=>{n(8,c=!1)});const r=o=>s(o),g=(o,A)=>m("select",{index:u.indexOf(o),value:o,selected:A.currentTarget.checked});return l.$$set=o=>{"value"in o&&n(0,t=o.value),"value_is_output"in o&&n(8,c=o.value_is_output),"choices"in o&&n(1,u=o.choices),"disabled"in o&&n(2,i=o.disabled),"label"in o&&n(3,h=o.label),"info"in o&&n(4,b=o.info),"show_label"in o&&n(5,d=o.show_label)},l.$$.update=()=>{l.$$.dirty&513&&JSON.stringify(t)!==JSON.stringify(f)&&(n(9,f=t.slice()),_())},[t,u,i,h,b,d,m,s,c,f,r,g]}class ue extends P{constructor(e){super(),Q(this,e,ae,se,R,{value:0,value_is_output:8,choices:1,disabled:2,label:3,info:4,show_label:5})}}function ce(l){let e,n,t,f,c,u;const i=[l[13]];let h={};for(let s=0;sD(t,"value",b)),z.push(()=>D(t,"value_is_output",d)),t.$on("select",l[16]),t.$on("change",l[17]),t.$on("input",l[18]),{c(){B(e.$$.fragment),n=G(),B(t.$$.fragment)},m(s,_){O(e,s,_),w(s,n,_),O(t,s,_),u=!0},p(s,_){const r=_&8192?ee(i,[le(s[13])]):{};e.$set(r);const g={};_&32&&(g.choices=s[5]),_&1024&&(g.label=s[10]),_&2048&&(g.info=s[11]),_&4096&&(g.show_label=s[12]),_&512&&(g.disabled=s[9]==="static"),!f&&_&1&&(f=!0,g.value=s[0],F(()=>f=!1)),!c&&_&2&&(c=!0,g.value_is_output=s[1],F(()=>c=!1)),t.$set(g)},i(s){u||(S(e.$$.fragment,s),S(t.$$.fragment,s),u=!0)},o(s){N(e.$$.fragment,s),N(t.$$.fragment,s),u=!1},d(s){s&&C(n),T(e,s),T(t,s)}}}function fe(l){let e,n;return e=new te({props:{visible:l[4],elem_id:l[2],elem_classes:l[3],type:"fieldset",container:l[6],scale:l[7],min_width:l[8],$$slots:{default:[ce]},$$scope:{ctx:l}}}),{c(){B(e.$$.fragment)},m(t,f){O(e,t,f),n=!0},p(t,[f]){const c={};f&16&&(c.visible=t[4]),f&4&&(c.elem_id=t[2]),f&8&&(c.elem_classes=t[3]),f&64&&(c.container=t[6]),f&128&&(c.scale=t[7]),f&256&&(c.min_width=t[8]),f&540195&&(c.$$scope={dirty:f,ctx:t}),e.$set(c)},i(t){n||(S(e.$$.fragment,t),n=!0)},o(t){N(e.$$.fragment,t),n=!1},d(t){T(e,t)}}}function 
oe(l,e,n){let{elem_id:t=""}=e,{elem_classes:f=[]}=e,{visible:c=!0}=e,{value:u=[]}=e,{value_is_output:i=!1}=e,{choices:h}=e,{container:b=!0}=e,{scale:d=null}=e,{min_width:m=void 0}=e,{mode:s}=e,{label:_="Checkbox Group"}=e,{info:r=void 0}=e,{show_label:g}=e,{loading_status:o}=e;function A(a){u=a,n(0,u)}function I(a){i=a,n(1,i)}function L(a){E.call(this,l,a)}function W(a){E.call(this,l,a)}function X(a){E.call(this,l,a)}return l.$$set=a=>{"elem_id"in a&&n(2,t=a.elem_id),"elem_classes"in a&&n(3,f=a.elem_classes),"visible"in a&&n(4,c=a.visible),"value"in a&&n(0,u=a.value),"value_is_output"in a&&n(1,i=a.value_is_output),"choices"in a&&n(5,h=a.choices),"container"in a&&n(6,b=a.container),"scale"in a&&n(7,d=a.scale),"min_width"in a&&n(8,m=a.min_width),"mode"in a&&n(9,s=a.mode),"label"in a&&n(10,_=a.label),"info"in a&&n(11,r=a.info),"show_label"in a&&n(12,g=a.show_label),"loading_status"in a&&n(13,o=a.loading_status)},[u,i,t,f,c,h,b,d,m,s,_,r,g,o,A,I,L,W,X]}class _e extends P{constructor(e){super(),Q(this,e,oe,fe,R,{elem_id:2,elem_classes:3,visible:4,value:0,value_is_output:1,choices:5,container:6,scale:7,min_width:8,mode:9,label:10,info:11,show_label:12,loading_status:13})}}const ge=_e,ke=["static","dynamic"],ve=l=>({type:{payload:"Array"},description:{payload:"list of selected choices"},example_data:l.choices.length?[l.choices[0]]:[]});export{ge as Component,ve as document,ke as modes};
-//# sourceMappingURL=index-6b9ac83e.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/shell-86dd1d99.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/shell-86dd1d99.js
deleted file mode 100644
index 413d6906ba550f466a9babaadea0e07f796466f1..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/shell-86dd1d99.js
+++ /dev/null
@@ -1,2 +0,0 @@
-var c={};function s(n,e){for(var r=0;r1&&n.eat("$");var r=n.next();return/['"({]/.test(r)?(e.tokens[0]=l(r,r=="("?"quote":r=="{"?"def":"string"),u(n,e)):(/\d/.test(r)||n.eatWhile(/\w/),e.tokens.shift(),"def")};function w(n){return function(e,r){return e.sol()&&e.string==n&&r.tokens.shift(),e.skipToEnd(),"string.special"}}function u(n,e){return(e.tokens[0]||d)(n,e)}const v={name:"shell",startState:function(){return{tokens:[]}},token:function(n,e){return u(n,e)},languageData:{autocomplete:k.concat(h,p),closeBrackets:{brackets:["(","[","{","'",'"',"`"]},commentTokens:{line:"#"}}};export{v as shell};
-//# sourceMappingURL=shell-86dd1d99.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_events.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_events.py
deleted file mode 100644
index 075bf8a469d44d2388b08ec3d009fe55d44cb6eb..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_events.py
+++ /dev/null
@@ -1,369 +0,0 @@
-# High level events that make up HTTP/1.1 conversations. Loosely inspired by
-# the corresponding events in hyper-h2:
-#
-# http://python-hyper.org/h2/en/stable/api.html#events
-#
-# Don't subclass these. Stuff will break.
-
-import re
-from abc import ABC
-from dataclasses import dataclass, field
-from typing import Any, cast, Dict, List, Tuple, Union
-
-from ._abnf import method, request_target
-from ._headers import Headers, normalize_and_validate
-from ._util import bytesify, LocalProtocolError, validate
-
-# Everything in __all__ gets re-exported as part of the h11 public API.
-__all__ = [
- "Event",
- "Request",
- "InformationalResponse",
- "Response",
- "Data",
- "EndOfMessage",
- "ConnectionClosed",
-]
-
-method_re = re.compile(method.encode("ascii"))
-request_target_re = re.compile(request_target.encode("ascii"))
-
-
-class Event(ABC):
- """
- Base class for h11 events.
- """
-
- __slots__ = ()
-
-
-@dataclass(init=False, frozen=True)
-class Request(Event):
- """The beginning of an HTTP request.
-
- Fields:
-
- .. attribute:: method
-
- An HTTP method, e.g. ``b"GET"`` or ``b"POST"``. Always a byte
- string. :term:`Bytes-like objects ` and native
- strings containing only ascii characters will be automatically
- converted to byte strings.
-
- .. attribute:: target
-
- The target of an HTTP request, e.g. ``b"/index.html"``, or one of the
- more exotic formats described in `RFC 7230, section 5.3
- <https://tools.ietf.org/html/rfc7230#section-5.3>`_. Always a byte
- string. :term:`Bytes-like objects ` and native
- strings containing only ascii characters will be automatically
- converted to byte strings.
-
- .. attribute:: headers
-
- Request headers, represented as a list of (name, value) pairs. See
- :ref:`the header normalization rules ` for details.
-
- .. attribute:: http_version
-
- The HTTP protocol version, represented as a byte string like
- ``b"1.1"``. See :ref:`the HTTP version normalization rules
- ` for details.
-
- """
-
- __slots__ = ("method", "headers", "target", "http_version")
-
- method: bytes
- headers: Headers
- target: bytes
- http_version: bytes
-
- def __init__(
- self,
- *,
- method: Union[bytes, str],
- headers: Union[Headers, List[Tuple[bytes, bytes]], List[Tuple[str, str]]],
- target: Union[bytes, str],
- http_version: Union[bytes, str] = b"1.1",
- _parsed: bool = False,
- ) -> None:
- super().__init__()
- if isinstance(headers, Headers):
- object.__setattr__(self, "headers", headers)
- else:
- object.__setattr__(
- self, "headers", normalize_and_validate(headers, _parsed=_parsed)
- )
- if not _parsed:
- object.__setattr__(self, "method", bytesify(method))
- object.__setattr__(self, "target", bytesify(target))
- object.__setattr__(self, "http_version", bytesify(http_version))
- else:
- object.__setattr__(self, "method", method)
- object.__setattr__(self, "target", target)
- object.__setattr__(self, "http_version", http_version)
-
- # "A server MUST respond with a 400 (Bad Request) status code to any
- # HTTP/1.1 request message that lacks a Host header field and to any
- # request message that contains more than one Host header field or a
- # Host header field with an invalid field-value."
- # -- https://tools.ietf.org/html/rfc7230#section-5.4
- host_count = 0
- for name, value in self.headers:
- if name == b"host":
- host_count += 1
- if self.http_version == b"1.1" and host_count == 0:
- raise LocalProtocolError("Missing mandatory Host: header")
- if host_count > 1:
- raise LocalProtocolError("Found multiple Host: headers")
-
- validate(method_re, self.method, "Illegal method characters")
- validate(request_target_re, self.target, "Illegal target characters")
-
- # This is an unhashable type.
- __hash__ = None # type: ignore
-
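-# Illustrative construction (not part of the original module): str values are
-# normalized to bytes, and an HTTP/1.1 request must carry exactly one Host header.
-#
-#   req = Request(method="GET", target="/", headers=[("Host", "example.com")])
-#   assert req.method == b"GET" and req.target == b"/"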
-
-@dataclass(init=False, frozen=True)
-class _ResponseBase(Event):
- __slots__ = ("headers", "http_version", "reason", "status_code")
-
- headers: Headers
- http_version: bytes
- reason: bytes
- status_code: int
-
- def __init__(
- self,
- *,
- headers: Union[Headers, List[Tuple[bytes, bytes]], List[Tuple[str, str]]],
- status_code: int,
- http_version: Union[bytes, str] = b"1.1",
- reason: Union[bytes, str] = b"",
- _parsed: bool = False,
- ) -> None:
- super().__init__()
- if isinstance(headers, Headers):
- object.__setattr__(self, "headers", headers)
- else:
- object.__setattr__(
- self, "headers", normalize_and_validate(headers, _parsed=_parsed)
- )
- if not _parsed:
- object.__setattr__(self, "reason", bytesify(reason))
- object.__setattr__(self, "http_version", bytesify(http_version))
- if not isinstance(status_code, int):
- raise LocalProtocolError("status code must be integer")
- # Because IntEnum objects are instances of int, but aren't
- # duck-compatible (sigh), see gh-72.
- object.__setattr__(self, "status_code", int(status_code))
- else:
- object.__setattr__(self, "reason", reason)
- object.__setattr__(self, "http_version", http_version)
- object.__setattr__(self, "status_code", status_code)
-
- self.__post_init__()
-
- def __post_init__(self) -> None:
- pass
-
- # This is an unhashable type.
- __hash__ = None # type: ignore
-
-
-@dataclass(init=False, frozen=True)
-class InformationalResponse(_ResponseBase):
- """An HTTP informational response.
-
- Fields:
-
- .. attribute:: status_code
-
- The status code of this response, as an integer. For an
- :class:`InformationalResponse`, this is always in the range [100,
- 200).
-
- .. attribute:: headers
-
- Request headers, represented as a list of (name, value) pairs. See
- :ref:`the header normalization rules ` for
- details.
-
- .. attribute:: http_version
-
- The HTTP protocol version, represented as a byte string like
- ``b"1.1"``. See :ref:`the HTTP version normalization rules
- ` for details.
-
- .. attribute:: reason
-
- The reason phrase of this response, as a byte string. For example:
- ``b"OK"``, or ``b"Not Found"``.
-
- """
-
- def __post_init__(self) -> None:
- if not (100 <= self.status_code < 200):
- raise LocalProtocolError(
- "InformationalResponse status_code should be in range "
- "[100, 200), not {}".format(self.status_code)
- )
-
- # This is an unhashable type.
- __hash__ = None # type: ignore
-
-
-@dataclass(init=False, frozen=True)
-class Response(_ResponseBase):
- """The beginning of an HTTP response.
-
- Fields:
-
- .. attribute:: status_code
-
- The status code of this response, as an integer. For an
- :class:`Response`, this is always in the range [200,
- 1000).
-
- .. attribute:: headers
-
- Request headers, represented as a list of (name, value) pairs. See
- :ref:`the header normalization rules ` for details.
-
- .. attribute:: http_version
-
- The HTTP protocol version, represented as a byte string like
- ``b"1.1"``. See :ref:`the HTTP version normalization rules
- ` for details.
-
- .. attribute:: reason
-
- The reason phrase of this response, as a byte string. For example:
- ``b"OK"``, or ``b"Not Found"``.
-
- """
-
- def __post_init__(self) -> None:
- if not (200 <= self.status_code < 1000):
- raise LocalProtocolError(
- "Response status_code should be in range [200, 1000), not {}".format(
- self.status_code
- )
- )
-
- # This is an unhashable type.
- __hash__ = None # type: ignore
-
-
-@dataclass(init=False, frozen=True)
-class Data(Event):
- """Part of an HTTP message body.
-
- Fields:
-
- .. attribute:: data
-
- A :term:`bytes-like object` containing part of a message body. Or, if
- using the ``combine=False`` argument to :meth:`Connection.send`, then
- any object that your socket writing code knows what to do with, and for
- which calling :func:`len` returns the number of bytes that will be
- written -- see :ref:`sendfile` for details.
-
- .. attribute:: chunk_start
-
- A marker that indicates whether this data object is from the start of a
- chunked transfer encoding chunk. This field is ignored when a Data
- event is provided to :meth:`Connection.send`: it is only valid on
- events emitted from :meth:`Connection.next_event`. You probably
- shouldn't use this attribute at all; see
- :ref:`chunk-delimiters-are-bad` for details.
-
- .. attribute:: chunk_end
-
- A marker that indicates whether this data object is the last for a
- given chunked transfer encoding chunk. This field is ignored when
- a Data event is provided to :meth:`Connection.send`: it is only valid
- on events emitted from :meth:`Connection.next_event`. You probably
- shouldn't use this attribute at all; see
- :ref:`chunk-delimiters-are-bad` for details.
-
- """
-
- __slots__ = ("data", "chunk_start", "chunk_end")
-
- data: bytes
- chunk_start: bool
- chunk_end: bool
-
- def __init__(
- self, data: bytes, chunk_start: bool = False, chunk_end: bool = False
- ) -> None:
- object.__setattr__(self, "data", data)
- object.__setattr__(self, "chunk_start", chunk_start)
- object.__setattr__(self, "chunk_end", chunk_end)
-
- # This is an unhashable type.
- __hash__ = None # type: ignore
-
-
-# XX FIXME: "A recipient MUST ignore (or consider as an error) any fields that
-# are forbidden to be sent in a trailer, since processing them as if they were
-# present in the header section might bypass external security filters."
-# https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#chunked.trailer.part
-# Unfortunately, the list of forbidden fields is long and vague :-/
-@dataclass(init=False, frozen=True)
-class EndOfMessage(Event):
- """The end of an HTTP message.
-
- Fields:
-
- .. attribute:: headers
-
- Default value: ``[]``
-
- Any trailing headers attached to this message, represented as a list of
- (name, value) pairs. See :ref:`the header normalization rules
- ` for details.
-
- Must be empty unless ``Transfer-Encoding: chunked`` is in use.
-
- """
-
- __slots__ = ("headers",)
-
- headers: Headers
-
- def __init__(
- self,
- *,
- headers: Union[
- Headers, List[Tuple[bytes, bytes]], List[Tuple[str, str]], None
- ] = None,
- _parsed: bool = False,
- ) -> None:
- super().__init__()
- if headers is None:
- headers = Headers([])
- elif not isinstance(headers, Headers):
- headers = normalize_and_validate(headers, _parsed=_parsed)
-
- object.__setattr__(self, "headers", headers)
-
- # This is an unhashable type.
- __hash__ = None # type: ignore
-
-
-@dataclass(frozen=True)
-class ConnectionClosed(Event):
- """This event indicates that the sender has closed their outgoing
- connection.
-
- Note that this does not necessarily mean that they can't *receive* further
- data, because TCP connections are composed of two one-way channels which
- can be closed independently. See :ref:`closing` for details.
-
- No fields.
- """
-
- pass
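
For orientation, the deleted module above is h11's public event API. A minimal usage sketch (against an installed `h11` package, not this deleted copy) showing how the constructor checks and `__post_init__` range checks surface to callers:

```python
import h11

# Native strings are bytesified automatically; HTTP/1.1 requests must carry
# exactly one Host header or the constructor raises LocalProtocolError.
req = h11.Request(
    method="GET",
    target="/index.html",
    headers=[("Host", "example.com")],
)
assert req.method == b"GET" and req.http_version == b"1.1"

# Status codes are range-checked: Response accepts [200, 1000),
# InformationalResponse accepts [100, 200).
ok = h11.Response(status_code=200, headers=[("Content-Length", "0")])
switching = h11.InformationalResponse(status_code=101, headers=[])

# A missing Host header on an HTTP/1.1 request is rejected.
try:
    h11.Request(method="GET", target="/", headers=[])
except h11.LocalProtocolError as exc:
    print("rejected:", exc)
```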
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_webhooks_server.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_webhooks_server.py
deleted file mode 100644
index 7cc5dd4ce7769fee10e0198cffe79f64a33b211d..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_webhooks_server.py
+++ /dev/null
@@ -1,369 +0,0 @@
-# coding=utf-8
-# Copyright 2023-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Contains `WebhooksServer` and `webhook_endpoint` to create a webhook server easily."""
-import atexit
-import inspect
-import os
-from functools import wraps
-from typing import TYPE_CHECKING, Callable, Dict, Optional
-
-from .utils import experimental, is_gradio_available
-
-
-if TYPE_CHECKING:
- import gradio as gr
-
-
-from fastapi import FastAPI, Request
-from fastapi.responses import JSONResponse
-
-
-_global_app: Optional["WebhooksServer"] = None
-_is_local = os.getenv("SYSTEM") != "spaces"
-
-
-@experimental
-class WebhooksServer:
- """
- The [`WebhooksServer`] class lets you create an instance of a Gradio app that can receive Huggingface webhooks.
- These webhooks can be registered using the [`~WebhooksServer.add_webhook`] decorator. Webhook endpoints are added to
- the app as a POST endpoint to the FastAPI router. Once all the webhooks are registered, the `run` method has to be
- called to start the app.
-
- It is recommended to accept [`WebhookPayload`] as the first argument of the webhook function. It is a Pydantic
- model that contains all the information about the webhook event. The data will be parsed automatically for you.
-
- Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to setup your
- WebhooksServer and deploy it on a Space.
-
-
-
- `WebhooksServer` is experimental. Its API is subject to change in the future.
-
-
-
-
-
- You must have `gradio` installed to use `WebhooksServer` (`pip install --upgrade gradio`).
-
-
-
- Args:
- ui (`gradio.Blocks`, optional):
- A Gradio UI instance to be used as the Space landing page. If `None`, a UI displaying instructions
- about the configured webhooks is created.
- webhook_secret (`str`, optional):
- A secret key to verify incoming webhook requests. You can set this value to any secret you want as long as
- you also configure it in your [webhooks settings panel](https://huggingface.co/settings/webhooks). You
- can also set this value as the `WEBHOOK_SECRET` environment variable. If no secret is provided, the
- webhook endpoints are opened without any security.
-
- Example:
-
- ```python
- import gradio as gr
- from huggingface_hub import WebhooksServer, WebhookPayload
-
- with gr.Blocks() as ui:
- ...
-
- app = WebhooksServer(ui=ui, webhook_secret="my_secret_key")
-
- @app.add_webhook("/say_hello")
- async def hello(payload: WebhookPayload):
- return {"message": "hello"}
-
- app.run()
- ```
- """
-
- def __new__(cls, *args, **kwargs) -> "WebhooksServer":
- if not is_gradio_available():
- raise ImportError(
- "You must have `gradio` installed to use `WebhooksServer`. Please run `pip install --upgrade gradio`"
- " first."
- )
- return super().__new__(cls)
-
- def __init__(
- self,
- ui: Optional["gr.Blocks"] = None,
- webhook_secret: Optional[str] = None,
- ) -> None:
- self._ui = ui
-
- self.webhook_secret = webhook_secret or os.getenv("WEBHOOK_SECRET")
- self.registered_webhooks: Dict[str, Callable] = {}
- _warn_on_empty_secret(self.webhook_secret)
-
- def add_webhook(self, path: Optional[str] = None) -> Callable:
- """
- Decorator to add a webhook to the [`WebhooksServer`] server.
-
- Args:
- path (`str`, optional):
- The URL path to register the webhook function. If not provided, the function name will be used as the
- path. In any case, all webhooks are registered under `/webhooks`.
-
- Raises:
- ValueError: If the provided path is already registered as a webhook.
-
- Example:
- ```python
- from huggingface_hub import WebhooksServer, WebhookPayload
-
- app = WebhooksServer()
-
- @app.add_webhook
- async def trigger_training(payload: WebhookPayload):
- if payload.repo.type == "dataset" and payload.event.action == "update":
- # Trigger a training job if a dataset is updated
- ...
-
- app.run()
- ```
- """
- # Usage: directly as decorator. Example: `@app.add_webhook`
- if callable(path):
- # If path is a function, it means it was used as a decorator without arguments
- return self.add_webhook()(path)
-
- # Usage: provide a path. Example: `@app.add_webhook(...)`
- @wraps(FastAPI.post)
- def _inner_post(*args, **kwargs):
- func = args[0]
- abs_path = f"/webhooks/{(path or func.__name__).strip('/')}"
- if abs_path in self.registered_webhooks:
- raise ValueError(f"Webhook {abs_path} already exists.")
- self.registered_webhooks[abs_path] = func
-
- return _inner_post
-
- def run(self) -> None:
- """Starts the Gradio app with the FastAPI server and registers the webhooks."""
- ui = self._ui or self._get_default_ui()
-
- # Start Gradio App
- # - as non-blocking so that webhooks can be added afterwards
- # - as shared if launch locally (to debug webhooks)
- self.fastapi_app, _, _ = ui.launch(prevent_thread_lock=True, share=_is_local)
-
- # Register webhooks to FastAPI app
- for path, func in self.registered_webhooks.items():
- # Add secret check if required
- if self.webhook_secret is not None:
- func = _wrap_webhook_to_check_secret(func, webhook_secret=self.webhook_secret)
-
- # Add route to FastAPI app
- self.fastapi_app.post(path)(func)
-
- # Print instructions and block main thread
- url = (ui.share_url or ui.local_url).strip("/")
- message = "\nWebhooks are correctly setup and ready to use:"
- message += "\n" + "\n".join(f" - POST {url}{webhook}" for webhook in self.registered_webhooks)
- message += "\nGo to https://huggingface.co/settings/webhooks to setup your webhooks."
- print(message)
-
- ui.block_thread()
-
- def _get_default_ui(self) -> "gr.Blocks":
- """Default UI if not provided (lists webhooks and provides basic instructions)."""
- import gradio as gr
-
- with gr.Blocks() as ui:
- gr.Markdown("# This is an app to process 🤗 Webhooks")
- gr.Markdown(
- "Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on"
- " specific repos or to all repos belonging to particular set of users/organizations (not just your"
- " repos, but any repo). Check out this [guide](https://huggingface.co/docs/hub/webhooks) to get to"
- " know more about webhooks on the Huggingface Hub."
- )
- gr.Markdown(
- f"{len(self.registered_webhooks)} webhook(s) are registered:"
- + "\n\n"
- + "\n ".join(
- f"- [{webhook_path}]({_get_webhook_doc_url(webhook.__name__, webhook_path)})"
- for webhook_path, webhook in self.registered_webhooks.items()
- )
- )
- gr.Markdown(
- "Go to https://huggingface.co/settings/webhooks to setup your webhooks."
- + "\nYou app is running locally. Please look at the logs to check the full URL you need to set."
- if _is_local
- else (
- "\nThis app is running on a Space. You can find the corresponding URL in the options menu"
- " (top-right) > 'Embed the Space'. The URL looks like 'https://{username}-{repo_name}.hf.space'."
- )
- )
- return ui
-
-
-@experimental
-def webhook_endpoint(path: Optional[str] = None) -> Callable:
- """Decorator to start a [`WebhooksServer`] and register the decorated function as a webhook endpoint.
-
- This is a helper to get started quickly. If you need more flexibility (custom landing page or webhook secret),
- you can use [`WebhooksServer`] directly. You can register multiple webhook endpoints (to the same server) by using
- this decorator multiple times.
-
- Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to setup your
- server and deploy it on a Space.
-
-
-
- `webhook_endpoint` is experimental. Its API is subject to change in the future.
-
-
-
-
-
- You must have `gradio` installed to use `webhook_endpoint` (`pip install --upgrade gradio`).
-
-
-
- Args:
- path (`str`, optional):
- The URL path to register the webhook function. If not provided, the function name will be used as the path.
- In any case, all webhooks are registered under `/webhooks`.
-
- Examples:
- The default usage is to register a function as a webhook endpoint. The function name will be used as the path.
- The server will be started automatically at exit (i.e. at the end of the script).
-
- ```python
- from huggingface_hub import webhook_endpoint, WebhookPayload
-
- @webhook_endpoint
- async def trigger_training(payload: WebhookPayload):
- if payload.repo.type == "dataset" and payload.event.action == "update":
- # Trigger a training job if a dataset is updated
- ...
-
- # Server is automatically started at the end of the script.
- ```
-
- Advanced usage: register a function as a webhook endpoint and start the server manually. This is useful if you
- are running it in a notebook.
-
- ```python
- from huggingface_hub import webhook_endpoint, WebhookPayload
-
- @webhook_endpoint
- async def trigger_training(payload: WebhookPayload):
- if payload.repo.type == "dataset" and payload.event.action == "update":
- # Trigger a training job if a dataset is updated
- ...
-
- # Start the server manually
- trigger_training.run()
- ```
- """
- if callable(path):
- # If path is a function, it means it was used as a decorator without arguments
- return webhook_endpoint()(path)
-
- @wraps(WebhooksServer.add_webhook)
- def _inner(func: Callable) -> Callable:
- app = _get_global_app()
- app.add_webhook(path)(func)
- if len(app.registered_webhooks) == 1:
- # Register `app.run` to run at exit (only once)
- atexit.register(app.run)
-
- @wraps(app.run)
- def _run_now():
- # Run the app directly (without waiting atexit)
- atexit.unregister(app.run)
- app.run()
-
- func.run = _run_now # type: ignore
- return func
-
- return _inner
-
-
-def _get_global_app() -> WebhooksServer:
- global _global_app
- if _global_app is None:
- _global_app = WebhooksServer()
- return _global_app
-
-
-def _warn_on_empty_secret(webhook_secret: Optional[str]) -> None:
- if webhook_secret is None:
- print("Webhook secret is not defined. This means your webhook endpoints will be open to everyone.")
- print(
- "To add a secret, set `WEBHOOK_SECRET` as environment variable or pass it at initialization: "
- "\n\t`app = WebhooksServer(webhook_secret='my_secret', ...)`"
- )
- print(
- "For more details about webhook secrets, please refer to"
- " https://huggingface.co/docs/hub/webhooks#webhook-secret."
- )
- else:
- print("Webhook secret is correctly defined.")
-
-
-def _get_webhook_doc_url(webhook_name: str, webhook_path: str) -> str:
- """Returns the anchor to a given webhook in the docs (experimental)"""
- return "/docs#/default/" + webhook_name + webhook_path.replace("/", "_") + "_post"
-
-
-def _wrap_webhook_to_check_secret(func: Callable, webhook_secret: str) -> Callable:
- """Wraps a webhook function to check the webhook secret before calling the function.
-
- This is a hacky way to add the `request` parameter to the function signature. Since FastAPI relies on route
- parameters to inject values into the function, we need to hack the function signature to retrieve the `Request`
- object (and hence the headers). A far cleaner solution would be to use a middleware. However, since
- `fastapi==0.90.1`, a middleware cannot be added once the app has started. And since the FastAPI app is started by
- Gradio internals (and not by us), we cannot add a middleware.
-
- This method is called only when a secret has been defined by the user. If a request is sent without the
- "x-webhook-secret", the function will return a 401 error (unauthorized). If the header is sent but is incorrect,
- the function will return a 403 error (forbidden).
-
- Inspired by https://stackoverflow.com/a/33112180.
- """
- initial_sig = inspect.signature(func)
-
- @wraps(func)
- async def _protected_func(request: Request, **kwargs):
- request_secret = request.headers.get("x-webhook-secret")
- if request_secret is None:
- return JSONResponse({"error": "x-webhook-secret header not set."}, status_code=401)
- if request_secret != webhook_secret:
- return JSONResponse({"error": "Invalid webhook secret."}, status_code=403)
-
- # Inject `request` in kwargs if required
- if "request" in initial_sig.parameters:
- kwargs["request"] = request
-
- # Handle both sync and async routes
- if inspect.iscoroutinefunction(func):
- return await func(**kwargs)
- else:
- return func(**kwargs)
-
- # Update signature to include request
- if "request" not in initial_sig.parameters:
- _protected_func.__signature__ = initial_sig.replace( # type: ignore
- parameters=(
- inspect.Parameter(name="request", kind=inspect.Parameter.POSITIONAL_OR_KEYWORD, annotation=Request),
- )
- + tuple(initial_sig.parameters.values())
- )
-
- # Return protected route
- return _protected_func
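
Because `_wrap_webhook_to_check_secret` returns 401 when the `x-webhook-secret` header is missing and 403 when it does not match, a client calling a secured endpoint has to send that header. A hedged client-side sketch (the Space URL and secret below are placeholders, not values from this file):

```python
import requests

# Hypothetical deployment; webhook routes always live under /webhooks/<path>.
ENDPOINT = "https://username-my-space.hf.space/webhooks/trigger_training"
SECRET = "my_secret_key"  # must equal the server's WEBHOOK_SECRET

payload = {"repo": {"type": "dataset"}, "event": {"action": "update"}}

# No header -> 401, wrong header -> 403, matching header -> the handler runs.
resp = requests.post(ENDPOINT, json=payload, headers={"x-webhook-secret": SECRET})
print(resp.status_code, resp.text)
```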
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/loss/grad_loss.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/loss/grad_loss.py
deleted file mode 100644
index f77bef42e0575584a3aea34da0926a8363863c11..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/loss/grad_loss.py
+++ /dev/null
@@ -1,57 +0,0 @@
-"""
-@Date: 2021/08/12
-@description:
-"""
-
-import torch
-import torch.nn as nn
-import numpy as np
-
-from visualization.grad import get_all
-
-
-class GradLoss(nn.Module):
- def __init__(self):
- super().__init__()
- self.loss = nn.L1Loss()
- self.cos = nn.CosineSimilarity(dim=-1, eps=0)
-
- self.grad_conv = nn.Conv1d(1, 1, kernel_size=3, stride=1, padding=0, bias=False, padding_mode='circular')
- self.grad_conv.weight = nn.Parameter(torch.tensor([[[1, 0, -1]]]).float())
- self.grad_conv.weight.requires_grad = False
-
- def forward(self, gt, dt):
- gt_direction, _, gt_angle_grad = get_all(gt['depth'], self.grad_conv)
- dt_direction, _, dt_angle_grad = get_all(dt['depth'], self.grad_conv)
-
- normal_loss = (1 - self.cos(gt_direction, dt_direction)).mean()
- grad_loss = self.loss(gt_angle_grad, dt_angle_grad)
- return [normal_loss, grad_loss]
-
-
-if __name__ == '__main__':
- from dataset.mp3d_dataset import MP3DDataset
- from utils.boundary import depth2boundaries
- from utils.conversion import uv2xyz
- from visualization.boundary import draw_boundaries
- from visualization.floorplan import draw_floorplan
-
- def show_boundary(image, depth, ratio):
- boundary_list = depth2boundaries(ratio, depth, step=None)
- draw_boundaries(image.transpose(1, 2, 0), boundary_list=boundary_list, show=True)
- draw_floorplan(uv2xyz(boundary_list[0])[..., ::2], show=True, center_color=0.8)
-
- mp3d_dataset = MP3DDataset(root_dir='../src/dataset/mp3d', mode='train', patch_num=256)
- gt = mp3d_dataset.__getitem__(1)
- gt['depth'] = torch.from_numpy(gt['depth'][np.newaxis]) # batch size is 1
- dummy_dt = {
- 'depth': gt['depth'].clone(),
- }
- # dummy_dt['depth'][..., 20] *= 3 # some different
-
- # show_boundary(gt['image'], gt['depth'][0].numpy(), gt['ratio'])
- # show_boundary(gt['image'], dummy_dt['depth'][0].numpy(), gt['ratio'])
-
- grad_loss = GradLoss()
- loss = grad_loss(gt, dummy_dt)
- print(loss)
diff --git a/spaces/Datasculptor/MusicGen/tests/common_utils/wav_utils.py b/spaces/Datasculptor/MusicGen/tests/common_utils/wav_utils.py
deleted file mode 100644
index d3a563ee1749a58217ece55c9a08b8d93c0fc386..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/tests/common_utils/wav_utils.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from pathlib import Path
-import typing as tp
-
-import torch
-import torchaudio
-
-
-def get_white_noise(chs: int = 1, num_frames: int = 1):
- wav = torch.randn(chs, num_frames)
- return wav
-
-
-def get_batch_white_noise(bs: int = 1, chs: int = 1, num_frames: int = 1):
- wav = torch.randn(bs, chs, num_frames)
- return wav
-
-
-def save_wav(path: str, wav: torch.Tensor, sample_rate: int):
- fp = Path(path)
- kwargs: tp.Dict[str, tp.Any] = {}
- if fp.suffix == '.wav':
- kwargs['encoding'] = 'PCM_S'
- kwargs['bits_per_sample'] = 16
- elif fp.suffix == '.mp3':
- kwargs['compression'] = 320
- torchaudio.save(str(fp), wav, sample_rate, **kwargs)
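
A short sketch of how these two helpers compose (the import path is assumed from the repo layout above; the output path is illustrative):

```python
from tests.common_utils.wav_utils import get_white_noise, save_wav  # assumed import path

wav = get_white_noise(chs=1, num_frames=32000)      # ~2 s of mono white noise at 16 kHz
save_wav("/tmp/noise.wav", wav, sample_rate=16000)  # .wav suffix selects 16-bit PCM_S encoding
```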
diff --git a/spaces/DeividasM/whisper-medium-lt/README.md b/spaces/DeividasM/whisper-medium-lt/README.md
deleted file mode 100644
index 69d9453c4a195b22a65ac4a0c72b2e377bad3a7e..0000000000000000000000000000000000000000
--- a/spaces/DeividasM/whisper-medium-lt/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Whisper medium Lithuanian
-emoji: 🦹🏻♂️
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-tags:
-- whisper-event
-duplicated_from: whisper-event/whisper-demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DonnyChuang/test_generator/app.py b/spaces/DonnyChuang/test_generator/app.py
deleted file mode 100644
index c7b3028c95a6167e058f9de8f200f8d87c27edcb..0000000000000000000000000000000000000000
--- a/spaces/DonnyChuang/test_generator/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
- generator = pipeline('text-generation', model='bigscience/bloom-560m')
-
-def generate(text):
- result = generator(text, max_length=100, num_return_sequences=1)
- return result[0]["generated_text"]
-
-examples = [
- ["Zoe Kwan is a 20-year old singer and songwriter who has taken Hong Kong’s music scene by storm."],
- ["Zoe only recently began writing songs."],
-]
-
-demo = gr.Interface(fn=generate, inputs=gr.inputs.Textbox(lines=5, label="Input Text"), outputs=gr.outputs.Textbox(label="Generated Text"),
- title="Text Generator bloom-560m", examples=examples)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/ECCV2022/bytetrack/yolox/evaluators/coco_evaluator.py b/spaces/ECCV2022/bytetrack/yolox/evaluators/coco_evaluator.py
deleted file mode 100644
index 24dce235307cfe52062da31b0e06506b77b32b36..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/evaluators/coco_evaluator.py
+++ /dev/null
@@ -1,224 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-from loguru import logger
-from tqdm import tqdm
-
-import torch
-
-from yolox.utils import (
- gather,
- is_main_process,
- postprocess,
- synchronize,
- time_synchronized,
- xyxy2xywh
-)
-
-import contextlib
-import io
-import itertools
-import json
-import tempfile
-import time
-
-
-class COCOEvaluator:
- """
- COCO AP Evaluation class. All the data in the val2017 dataset are processed
- and evaluated by COCO API.
- """
-
- def __init__(
- self, dataloader, img_size, confthre, nmsthre, num_classes, testdev=False
- ):
- """
- Args:
- dataloader (Dataloader): evaluate dataloader.
- img_size (int): image size after preprocess. images are resized
- to squares whose shape is (img_size, img_size).
- confthre (float): confidence threshold ranging from 0 to 1, which
- is defined in the config file.
- nmsthre (float): IoU threshold of non-max suppression ranging from 0 to 1.
- """
- self.dataloader = dataloader
- self.img_size = img_size
- self.confthre = confthre
- self.nmsthre = nmsthre
- self.num_classes = num_classes
- self.testdev = testdev
-
- def evaluate(
- self,
- model,
- distributed=False,
- half=False,
- trt_file=None,
- decoder=None,
- test_size=None,
- ):
- """
- COCO average precision (AP) evaluation. Iterates inference over the test dataset
- and evaluates the results with the COCO API.
-
- NOTE: This function will change training mode to False, please save states if needed.
-
- Args:
- model : model to evaluate.
-
- Returns:
- ap50_95 (float) : COCO AP of IoU=50:95
- ap50 (float) : COCO AP of IoU=50
- summary (str): summary info of evaluation.
- """
- # TODO half to amp_test
- tensor_type = torch.cuda.HalfTensor if half else torch.cuda.FloatTensor
- model = model.eval()
- if half:
- model = model.half()
- ids = []
- data_list = []
- progress_bar = tqdm if is_main_process() else iter
-
- inference_time = 0
- nms_time = 0
- n_samples = len(self.dataloader) - 1
-
- if trt_file is not None:
- from torch2trt import TRTModule
-
- model_trt = TRTModule()
- model_trt.load_state_dict(torch.load(trt_file))
-
- x = torch.ones(1, 3, test_size[0], test_size[1]).cuda()
- model(x)
- model = model_trt
-
- for cur_iter, (imgs, _, info_imgs, ids) in enumerate(
- progress_bar(self.dataloader)
- ):
- with torch.no_grad():
- imgs = imgs.type(tensor_type)
-
- # skip the last iters since the batch size might not be enough for batch inference
- is_time_record = cur_iter < len(self.dataloader) - 1
- if is_time_record:
- start = time.time()
-
- outputs = model(imgs)
- if decoder is not None:
- outputs = decoder(outputs, dtype=outputs.type())
-
- if is_time_record:
- infer_end = time_synchronized()
- inference_time += infer_end - start
-
- outputs = postprocess(
- outputs, self.num_classes, self.confthre, self.nmsthre
- )
- if is_time_record:
- nms_end = time_synchronized()
- nms_time += nms_end - infer_end
-
- data_list.extend(self.convert_to_coco_format(outputs, info_imgs, ids))
-
- statistics = torch.cuda.FloatTensor([inference_time, nms_time, n_samples])
- if distributed:
- data_list = gather(data_list, dst=0)
- data_list = list(itertools.chain(*data_list))
- torch.distributed.reduce(statistics, dst=0)
-
- eval_results = self.evaluate_prediction(data_list, statistics)
- synchronize()
- return eval_results
-
- def convert_to_coco_format(self, outputs, info_imgs, ids):
- data_list = []
- for (output, img_h, img_w, img_id) in zip(
- outputs, info_imgs[0], info_imgs[1], ids
- ):
- if output is None:
- continue
- output = output.cpu()
-
- bboxes = output[:, 0:4]
-
- # preprocessing: resize
- scale = min(
- self.img_size[0] / float(img_h), self.img_size[1] / float(img_w)
- )
- bboxes /= scale
- bboxes = xyxy2xywh(bboxes)
-
- cls = output[:, 6]
- scores = output[:, 4] * output[:, 5]
- for ind in range(bboxes.shape[0]):
- label = self.dataloader.dataset.class_ids[int(cls[ind])]
- pred_data = {
- "image_id": int(img_id),
- "category_id": label,
- "bbox": bboxes[ind].numpy().tolist(),
- "score": scores[ind].numpy().item(),
- "segmentation": [],
- } # COCO json format
- data_list.append(pred_data)
- return data_list
-
- def evaluate_prediction(self, data_dict, statistics):
- if not is_main_process():
- return 0, 0, None
-
- logger.info("Evaluate in main process...")
-
- annType = ["segm", "bbox", "keypoints"]
-
- inference_time = statistics[0].item()
- nms_time = statistics[1].item()
- n_samples = statistics[2].item()
-
- a_infer_time = 1000 * inference_time / (n_samples * self.dataloader.batch_size)
- a_nms_time = 1000 * nms_time / (n_samples * self.dataloader.batch_size)
-
- time_info = ", ".join(
- [
- "Average {} time: {:.2f} ms".format(k, v)
- for k, v in zip(
- ["forward", "NMS", "inference"],
- [a_infer_time, a_nms_time, (a_infer_time + a_nms_time)],
- )
- ]
- )
-
- info = time_info + "\n"
-
- # Evaluate the Dt (detection) json comparing with the ground truth
- if len(data_dict) > 0:
- cocoGt = self.dataloader.dataset.coco
- # TODO: since pycocotools can't process dict in py36, write data to json file.
- if self.testdev:
- json.dump(data_dict, open("./yolox_testdev_2017.json", "w"))
- cocoDt = cocoGt.loadRes("./yolox_testdev_2017.json")
- else:
- _, tmp = tempfile.mkstemp()
- json.dump(data_dict, open(tmp, "w"))
- cocoDt = cocoGt.loadRes(tmp)
- '''
- try:
- from yolox.layers import COCOeval_opt as COCOeval
- except ImportError:
- from pycocotools import cocoeval as COCOeval
- logger.warning("Use standard COCOeval.")
- '''
- #from pycocotools.cocoeval import COCOeval
- from yolox.layers import COCOeval_opt as COCOeval
- cocoEval = COCOeval(cocoGt, cocoDt, annType[1])
- cocoEval.evaluate()
- cocoEval.accumulate()
- redirect_string = io.StringIO()
- with contextlib.redirect_stdout(redirect_string):
- cocoEval.summarize()
- info += redirect_string.getvalue()
- return cocoEval.stats[0], cocoEval.stats[1], info
- else:
- return 0, 0, info
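
To make the bookkeeping in `convert_to_coco_format` concrete, here is a worked sketch with made-up numbers: a detection produced on the resized network canvas is scaled back to the original image, converted from xyxy to COCO xywh, and scored by the product of objectness and class confidence:

```python
import torch

# One fake detection row in YOLOX output layout:
# [x1, y1, x2, y2, obj_conf, cls_conf, cls_idx]
output = torch.tensor([[100.0, 80.0, 300.0, 240.0, 0.9, 0.8, 0.0]])
img_h, img_w, img_size = 480, 640, (320, 320)  # original image size vs. network input size

scale = min(img_size[0] / float(img_h), img_size[1] / float(img_w))  # = 0.5 here
x1, y1, x2, y2 = (output[0, 0:4] / scale).tolist()  # back to original-image coordinates

pred = {
    "image_id": 42,                               # made-up id
    "category_id": 1,                             # dataset.class_ids[int(cls_idx)] in the real code
    "bbox": [x1, y1, x2 - x1, y2 - y1],           # xyxy -> xywh, as xyxy2xywh does
    "score": float(output[0, 4] * output[0, 5]),  # 0.9 * 0.8 = 0.72
    "segmentation": [],
}
print(pred)  # bbox == [200.0, 160.0, 400.0, 320.0]
```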
diff --git a/spaces/EasyEasy/EasyProxy/README.md b/spaces/EasyEasy/EasyProxy/README.md
deleted file mode 100644
index 736ebeee903af4da943e8525f2b1cbf3637c3914..0000000000000000000000000000000000000000
--- a/spaces/EasyEasy/EasyProxy/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-title: EasyProxy
-sdk: docker
-colorFrom: red
-colorTo: gray
----
\ No newline at end of file
diff --git a/spaces/Egrt/LicenseGAN/utils/degradations.py b/spaces/Egrt/LicenseGAN/utils/degradations.py
deleted file mode 100644
index 578967483e20c969931dc6082c9b007ea9f1c714..0000000000000000000000000000000000000000
--- a/spaces/Egrt/LicenseGAN/utils/degradations.py
+++ /dev/null
@@ -1,765 +0,0 @@
-import cv2
-import math
-import numpy as np
-import random
-import torch
-from scipy import special
-from scipy.stats import multivariate_normal
-from torchvision.transforms.functional_tensor import rgb_to_grayscale
-
-# -------------------------------------------------------------------- #
-# --------------------------- blur kernels --------------------------- #
-# -------------------------------------------------------------------- #
-
-
-# --------------------------- util functions --------------------------- #
-def sigma_matrix2(sig_x, sig_y, theta):
- """Calculate the rotated sigma matrix (two dimensional matrix).
-
- Args:
- sig_x (float):
- sig_y (float):
- theta (float): Radian measurement.
-
- Returns:
- ndarray: Rotated sigma matrix.
- """
- d_matrix = np.array([[sig_x**2, 0], [0, sig_y**2]])
- u_matrix = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
- return np.dot(u_matrix, np.dot(d_matrix, u_matrix.T))
-
-
-def mesh_grid(kernel_size):
- """Generate the mesh grid, centering at zero.
-
- Args:
- kernel_size (int):
-
- Returns:
- xy (ndarray): with the shape (kernel_size, kernel_size, 2)
- xx (ndarray): with the shape (kernel_size, kernel_size)
- yy (ndarray): with the shape (kernel_size, kernel_size)
- """
- ax = np.arange(-kernel_size // 2 + 1., kernel_size // 2 + 1.)
- xx, yy = np.meshgrid(ax, ax)
- xy = np.hstack((xx.reshape((kernel_size * kernel_size, 1)), yy.reshape(kernel_size * kernel_size,
- 1))).reshape(kernel_size, kernel_size, 2)
- return xy, xx, yy
-
-
-def pdf2(sigma_matrix, grid):
- """Calculate PDF of the bivariate Gaussian distribution.
-
- Args:
- sigma_matrix (ndarray): with the shape (2, 2)
- grid (ndarray): generated by :func:`mesh_grid`,
- with the shape (K, K, 2), K is the kernel size.
-
- Returns:
- kernel (ndarray): un-normalized kernel.
- """
- inverse_sigma = np.linalg.inv(sigma_matrix)
- kernel = np.exp(-0.5 * np.sum(np.dot(grid, inverse_sigma) * grid, 2))
- return kernel
-
-
-def cdf2(d_matrix, grid):
- """Calculate the CDF of the standard bivariate Gaussian distribution.
- Used in skewed Gaussian distribution.
-
- Args:
- d_matrix (ndarray): skew matrix.
- grid (ndarray): generated by :func:`mesh_grid`,
- with the shape (K, K, 2), K is the kernel size.
-
- Returns:
- cdf (ndarray): skewed cdf.
- """
- rv = multivariate_normal([0, 0], [[1, 0], [0, 1]])
- grid = np.dot(grid, d_matrix)
- cdf = rv.cdf(grid)
- return cdf
-
-
-def bivariate_Gaussian(kernel_size, sig_x, sig_y, theta, grid=None, isotropic=True):
- """Generate a bivariate isotropic or anisotropic Gaussian kernel.
-
- In the isotropic mode, only `sig_x` is used. `sig_y` and `theta` are ignored.
-
- Args:
- kernel_size (int):
- sig_x (float):
- sig_y (float):
- theta (float): Radian measurement.
- grid (ndarray, optional): generated by :func:`mesh_grid`,
- with the shape (K, K, 2), K is the kernel size. Default: None
- isotropic (bool):
-
- Returns:
- kernel (ndarray): normalized kernel.
- """
- if grid is None:
- grid, _, _ = mesh_grid(kernel_size)
- if isotropic:
- sigma_matrix = np.array([[sig_x**2, 0], [0, sig_x**2]])
- else:
- sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)
- kernel = pdf2(sigma_matrix, grid)
- kernel = kernel / np.sum(kernel)
- return kernel
-
-
-def bivariate_generalized_Gaussian(kernel_size, sig_x, sig_y, theta, beta, grid=None, isotropic=True):
- """Generate a bivariate generalized Gaussian kernel.
- Described in `Parameter Estimation For Multivariate Generalized
- Gaussian Distributions`_
- by Pascal et. al (2013).
-
- In the isotropic mode, only `sig_x` is used. `sig_y` and `theta` are ignored.
-
- Args:
- kernel_size (int):
- sig_x (float):
- sig_y (float):
- theta (float): Radian measurement.
- beta (float): shape parameter, beta = 1 is the normal distribution.
- grid (ndarray, optional): generated by :func:`mesh_grid`,
- with the shape (K, K, 2), K is the kernel size. Default: None
-
- Returns:
- kernel (ndarray): normalized kernel.
-
- .. _Parameter Estimation For Multivariate Generalized Gaussian
- Distributions: https://arxiv.org/abs/1302.6498
- """
- if grid is None:
- grid, _, _ = mesh_grid(kernel_size)
- if isotropic:
- sigma_matrix = np.array([[sig_x**2, 0], [0, sig_x**2]])
- else:
- sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)
- inverse_sigma = np.linalg.inv(sigma_matrix)
- kernel = np.exp(-0.5 * np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta))
- kernel = kernel / np.sum(kernel)
- return kernel
-
-
-def bivariate_plateau(kernel_size, sig_x, sig_y, theta, beta, grid=None, isotropic=True):
- """Generate a plateau-like anisotropic kernel.
- 1 / (1+x^(beta))
-
- Ref: https://stats.stackexchange.com/questions/203629/is-there-a-plateau-shaped-distribution
-
- In the isotropic mode, only `sig_x` is used. `sig_y` and `theta` are ignored.
-
- Args:
- kernel_size (int):
- sig_x (float):
- sig_y (float):
- theta (float): Radian measurement.
- beta (float): shape parameter, beta = 1 is the normal distribution.
- grid (ndarray, optional): generated by :func:`mesh_grid`,
- with the shape (K, K, 2), K is the kernel size. Default: None
-
- Returns:
- kernel (ndarray): normalized kernel.
- """
- if grid is None:
- grid, _, _ = mesh_grid(kernel_size)
- if isotropic:
- sigma_matrix = np.array([[sig_x**2, 0], [0, sig_x**2]])
- else:
- sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)
- inverse_sigma = np.linalg.inv(sigma_matrix)
- kernel = np.reciprocal(np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta) + 1)
- kernel = kernel / np.sum(kernel)
- return kernel
-
-
-def random_bivariate_Gaussian(kernel_size,
- sigma_x_range,
- sigma_y_range,
- rotation_range,
- noise_range=None,
- isotropic=True):
- """Randomly generate bivariate isotropic or anisotropic Gaussian kernels.
-
- In the isotropic mode, only `sigma_x_range` is used. `sigma_y_range` and `rotation_range` are ignored.
-
- Args:
- kernel_size (int):
- sigma_x_range (tuple): [0.6, 5]
- sigma_y_range (tuple): [0.6, 5]
- rotation range (tuple): [-math.pi, math.pi]
- noise_range(tuple, optional): multiplicative kernel noise,
- [0.75, 1.25]. Default: None
-
- Returns:
- kernel (ndarray):
- """
- assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'
- assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'
- sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])
- if isotropic is False:
- assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'
- assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'
- sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])
- rotation = np.random.uniform(rotation_range[0], rotation_range[1])
- else:
- sigma_y = sigma_x
- rotation = 0
-
- kernel = bivariate_Gaussian(kernel_size, sigma_x, sigma_y, rotation, isotropic=isotropic)
-
- # add multiplicative noise
- if noise_range is not None:
- assert noise_range[0] < noise_range[1], 'Wrong noise range.'
- noise = np.random.uniform(noise_range[0], noise_range[1], size=kernel.shape)
- kernel = kernel * noise
- kernel = kernel / np.sum(kernel)
- return kernel
-
-
-def random_bivariate_generalized_Gaussian(kernel_size,
- sigma_x_range,
- sigma_y_range,
- rotation_range,
- beta_range,
- noise_range=None,
- isotropic=True):
- """Randomly generate bivariate generalized Gaussian kernels.
-
- In the isotropic mode, only `sigma_x_range` is used. `sigma_y_range` and `rotation_range` are ignored.
-
- Args:
- kernel_size (int):
- sigma_x_range (tuple): [0.6, 5]
- sigma_y_range (tuple): [0.6, 5]
- rotation range (tuple): [-math.pi, math.pi]
- beta_range (tuple): [0.5, 8]
- noise_range(tuple, optional): multiplicative kernel noise,
- [0.75, 1.25]. Default: None
-
- Returns:
- kernel (ndarray):
- """
- assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'
- assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'
- sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])
- if isotropic is False:
- assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'
- assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'
- sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])
- rotation = np.random.uniform(rotation_range[0], rotation_range[1])
- else:
- sigma_y = sigma_x
- rotation = 0
-
- # assume beta_range[0] < 1 < beta_range[1]
- if np.random.uniform() < 0.5:
- beta = np.random.uniform(beta_range[0], 1)
- else:
- beta = np.random.uniform(1, beta_range[1])
-
- kernel = bivariate_generalized_Gaussian(kernel_size, sigma_x, sigma_y, rotation, beta, isotropic=isotropic)
-
- # add multiplicative noise
- if noise_range is not None:
- assert noise_range[0] < noise_range[1], 'Wrong noise range.'
- noise = np.random.uniform(noise_range[0], noise_range[1], size=kernel.shape)
- kernel = kernel * noise
- kernel = kernel / np.sum(kernel)
- return kernel
-
-
-def random_bivariate_plateau(kernel_size,
- sigma_x_range,
- sigma_y_range,
- rotation_range,
- beta_range,
- noise_range=None,
- isotropic=True):
- """Randomly generate bivariate plateau kernels.
-
- In the isotropic mode, only `sigma_x_range` is used. `sigma_y_range` and `rotation_range` are ignored.
-
- Args:
- kernel_size (int):
- sigma_x_range (tuple): [0.6, 5]
- sigma_y_range (tuple): [0.6, 5]
- rotation range (tuple): [-math.pi/2, math.pi/2]
- beta_range (tuple): [1, 4]
- noise_range(tuple, optional): multiplicative kernel noise,
- [0.75, 1.25]. Default: None
-
- Returns:
- kernel (ndarray):
- """
- assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'
- assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'
- sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])
- if isotropic is False:
- assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'
- assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'
- sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])
- rotation = np.random.uniform(rotation_range[0], rotation_range[1])
- else:
- sigma_y = sigma_x
- rotation = 0
-
- # TODO: this may be not proper
- if np.random.uniform() < 0.5:
- beta = np.random.uniform(beta_range[0], 1)
- else:
- beta = np.random.uniform(1, beta_range[1])
-
- kernel = bivariate_plateau(kernel_size, sigma_x, sigma_y, rotation, beta, isotropic=isotropic)
- # add multiplicative noise
- if noise_range is not None:
- assert noise_range[0] < noise_range[1], 'Wrong noise range.'
- noise = np.random.uniform(noise_range[0], noise_range[1], size=kernel.shape)
- kernel = kernel * noise
- kernel = kernel / np.sum(kernel)
-
- return kernel
-
-
-def random_mixed_kernels(kernel_list,
- kernel_prob,
- kernel_size=21,
- sigma_x_range=(0.6, 5),
- sigma_y_range=(0.6, 5),
- rotation_range=(-math.pi, math.pi),
- betag_range=(0.5, 8),
- betap_range=(0.5, 8),
- noise_range=None):
- """Randomly generate mixed kernels.
-
- Args:
- kernel_list (tuple): a list name of kernel types,
- support ['iso', 'aniso', 'skew', 'generalized', 'plateau_iso',
- 'plateau_aniso']
- kernel_prob (tuple): corresponding kernel probability for each
- kernel type
- kernel_size (int):
- sigma_x_range (tuple): [0.6, 5]
- sigma_y_range (tuple): [0.6, 5]
- rotation range (tuple): [-math.pi, math.pi]
- beta_range (tuple): [0.5, 8]
- noise_range(tuple, optional): multiplicative kernel noise,
- [0.75, 1.25]. Default: None
-
- Returns:
- kernel (ndarray):
- """
- kernel_type = random.choices(kernel_list, kernel_prob)[0]
- if kernel_type == 'iso':
- kernel = random_bivariate_Gaussian(
- kernel_size, sigma_x_range, sigma_y_range, rotation_range, noise_range=noise_range, isotropic=True)
- elif kernel_type == 'aniso':
- kernel = random_bivariate_Gaussian(
- kernel_size, sigma_x_range, sigma_y_range, rotation_range, noise_range=noise_range, isotropic=False)
- elif kernel_type == 'generalized_iso':
- kernel = random_bivariate_generalized_Gaussian(
- kernel_size,
- sigma_x_range,
- sigma_y_range,
- rotation_range,
- betag_range,
- noise_range=noise_range,
- isotropic=True)
- elif kernel_type == 'generalized_aniso':
- kernel = random_bivariate_generalized_Gaussian(
- kernel_size,
- sigma_x_range,
- sigma_y_range,
- rotation_range,
- betag_range,
- noise_range=noise_range,
- isotropic=False)
- elif kernel_type == 'plateau_iso':
- kernel = random_bivariate_plateau(
- kernel_size, sigma_x_range, sigma_y_range, rotation_range, betap_range, noise_range=None, isotropic=True)
- elif kernel_type == 'plateau_aniso':
- kernel = random_bivariate_plateau(
- kernel_size, sigma_x_range, sigma_y_range, rotation_range, betap_range, noise_range=None, isotropic=False)
- return kernel
-
-
-np.seterr(divide='ignore', invalid='ignore')
-
-
-def circular_lowpass_kernel(cutoff, kernel_size, pad_to=0):
- """2D sinc filter, ref: https://dsp.stackexchange.com/questions/58301/2-d-circularly-symmetric-low-pass-filter
-
- Args:
- cutoff (float): cutoff frequency in radians (pi is max)
- kernel_size (int): horizontal and vertical size, must be odd.
- pad_to (int): pad kernel size to desired size, must be odd or zero.
- """
- assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'
- kernel = np.fromfunction(
- lambda x, y: cutoff * special.j1(cutoff * np.sqrt(
- (x - (kernel_size - 1) / 2)**2 + (y - (kernel_size - 1) / 2)**2)) / (2 * np.pi * np.sqrt(
- (x - (kernel_size - 1) / 2)**2 + (y - (kernel_size - 1) / 2)**2)), [kernel_size, kernel_size])
- kernel[(kernel_size - 1) // 2, (kernel_size - 1) // 2] = cutoff**2 / (4 * np.pi)
- kernel = kernel / np.sum(kernel)
- if pad_to > kernel_size:
- pad_size = (pad_to - kernel_size) // 2
- kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
- return kernel
-
-
-# ------------------------------------------------------------- #
-# --------------------------- noise --------------------------- #
-# ------------------------------------------------------------- #
-
-# ----------------------- Gaussian Noise ----------------------- #
-
-
-def generate_gaussian_noise(img, sigma=10, gray_noise=False):
- """Generate Gaussian noise.
-
- Args:
- img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.
- sigma (float): Noise scale (measured in range 255). Default: 10.
-
- Returns:
- (Numpy array): Returned noisy image, shape (h, w, c), range[0, 1],
- float32.
- """
- if gray_noise:
- noise = np.float32(np.random.randn(*(img.shape[0:2]))) * sigma / 255.
- noise = np.expand_dims(noise, axis=2).repeat(3, axis=2)
- else:
- noise = np.float32(np.random.randn(*(img.shape))) * sigma / 255.
- return noise
-
-
-def add_gaussian_noise(img, sigma=10, clip=True, rounds=False, gray_noise=False):
- """Add Gaussian noise.
-
- Args:
- img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.
- sigma (float): Noise scale (measured in range 255). Default: 10.
-
- Returns:
- (Numpy array): Returned noisy image, shape (h, w, c), range[0, 1],
- float32.
- """
- noise = generate_gaussian_noise(img, sigma, gray_noise)
- out = img + noise
- if clip and rounds:
- out = np.clip((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = np.clip(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-def generate_gaussian_noise_pt(img, sigma=10, gray_noise=0):
- """Add Gaussian noise (PyTorch version).
-
- Args:
- img (Tensor): Shape (b, c, h, w), range[0, 1], float32.
- scale (float | Tensor): Noise scale. Default: 1.0.
-
- Returns:
- (Tensor): Returned noisy image, shape (b, c, h, w), range[0, 1],
- float32.
- """
- b, _, h, w = img.size()
- if not isinstance(sigma, (float, int)):
- sigma = sigma.view(img.size(0), 1, 1, 1)
- if isinstance(gray_noise, (float, int)):
- cal_gray_noise = gray_noise > 0
- else:
- gray_noise = gray_noise.view(b, 1, 1, 1)
- cal_gray_noise = torch.sum(gray_noise) > 0
-
- if cal_gray_noise:
- noise_gray = torch.randn(*img.size()[2:4], dtype=img.dtype, device=img.device) * sigma / 255.
- noise_gray = noise_gray.view(b, 1, h, w)
-
- # always calculate color noise
- noise = torch.randn(*img.size(), dtype=img.dtype, device=img.device) * sigma / 255.
-
- if cal_gray_noise:
- noise = noise * (1 - gray_noise) + noise_gray * gray_noise
- return noise
-
-
-def add_gaussian_noise_pt(img, sigma=10, gray_noise=0, clip=True, rounds=False):
- """Add Gaussian noise (PyTorch version).
-
- Args:
- img (Tensor): Shape (b, c, h, w), range[0, 1], float32.
- scale (float | Tensor): Noise scale. Default: 1.0.
-
- Returns:
- (Tensor): Returned noisy image, shape (b, c, h, w), range[0, 1],
- float32.
- """
- noise = generate_gaussian_noise_pt(img, sigma, gray_noise)
- out = img + noise
- if clip and rounds:
- out = torch.clamp((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = torch.clamp(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-# ----------------------- Random Gaussian Noise ----------------------- #
-def random_generate_gaussian_noise(img, sigma_range=(0, 10), gray_prob=0):
- sigma = np.random.uniform(sigma_range[0], sigma_range[1])
- if np.random.uniform() < gray_prob:
- gray_noise = True
- else:
- gray_noise = False
- return generate_gaussian_noise(img, sigma, gray_noise)
-
-
-def random_add_gaussian_noise(img, sigma_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):
- noise = random_generate_gaussian_noise(img, sigma_range, gray_prob)
- out = img + noise
- if clip and rounds:
- out = np.clip((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = np.clip(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-def random_generate_gaussian_noise_pt(img, sigma_range=(0, 10), gray_prob=0):
- sigma = torch.rand(
- img.size(0), dtype=img.dtype, device=img.device) * (sigma_range[1] - sigma_range[0]) + sigma_range[0]
- gray_noise = torch.rand(img.size(0), dtype=img.dtype, device=img.device)
- gray_noise = (gray_noise < gray_prob).float()
- return generate_gaussian_noise_pt(img, sigma, gray_noise)
-
-
-def random_add_gaussian_noise_pt(img, sigma_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):
- noise = random_generate_gaussian_noise_pt(img, sigma_range, gray_prob)
- out = img + noise
- if clip and rounds:
- out = torch.clamp((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = torch.clamp(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-# ----------------------- Poisson (Shot) Noise ----------------------- #
-
-
-def generate_poisson_noise(img, scale=1.0, gray_noise=False):
- """Generate poisson noise.
-
- Ref: https://github.com/scikit-image/scikit-image/blob/main/skimage/util/noise.py#L37-L219
-
- Args:
- img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.
- scale (float): Noise scale. Default: 1.0.
- gray_noise (bool): Whether generate gray noise. Default: False.
-
- Returns:
- (Numpy array): Returned noisy image, shape (h, w, c), range[0, 1],
- float32.
- """
- if gray_noise:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
- # round and clip image for counting vals correctly
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
- vals = len(np.unique(img))
- vals = 2**np.ceil(np.log2(vals))
- out = np.float32(np.random.poisson(img * vals) / float(vals))
- noise = out - img
- if gray_noise:
- noise = np.repeat(noise[:, :, np.newaxis], 3, axis=2)
- return noise * scale
-
-
-def add_poisson_noise(img, scale=1.0, clip=True, rounds=False, gray_noise=False):
- """Add poisson noise.
-
- Args:
- img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.
- scale (float): Noise scale. Default: 1.0.
- gray_noise (bool): Whether generate gray noise. Default: False.
-
- Returns:
- (Numpy array): Returned noisy image, shape (h, w, c), range[0, 1],
- float32.
- """
- noise = generate_poisson_noise(img, scale, gray_noise)
- out = img + noise
- if clip and rounds:
- out = np.clip((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = np.clip(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-def generate_poisson_noise_pt(img, scale=1.0, gray_noise=0):
- """Generate a batch of poisson noise (PyTorch version)
-
- Args:
- img (Tensor): Input image, shape (b, c, h, w), range [0, 1], float32.
- scale (float | Tensor): Noise scale. Number or Tensor with shape (b).
- Default: 1.0.
- gray_noise (float | Tensor): 0-1 number or Tensor with shape (b).
- 0 for False, 1 for True. Default: 0.
-
- Returns:
- (Tensor): Returned noisy image, shape (b, c, h, w), range[0, 1],
- float32.
- """
- b, _, h, w = img.size()
- if isinstance(gray_noise, (float, int)):
- cal_gray_noise = gray_noise > 0
- else:
- gray_noise = gray_noise.view(b, 1, 1, 1)
- cal_gray_noise = torch.sum(gray_noise) > 0
- if cal_gray_noise:
- img_gray = rgb_to_grayscale(img, num_output_channels=1)
- # round and clip image for counting vals correctly
- img_gray = torch.clamp((img_gray * 255.0).round(), 0, 255) / 255.
- # use for-loop to get the unique values for each sample
- vals_list = [len(torch.unique(img_gray[i, :, :, :])) for i in range(b)]
- vals_list = [2**np.ceil(np.log2(vals)) for vals in vals_list]
- vals = img_gray.new_tensor(vals_list).view(b, 1, 1, 1)
- out = torch.poisson(img_gray * vals) / vals
- noise_gray = out - img_gray
- noise_gray = noise_gray.expand(b, 3, h, w)
-
- # always calculate color noise
- # round and clip image for counting vals correctly
- img = torch.clamp((img * 255.0).round(), 0, 255) / 255.
- # use for-loop to get the unique values for each sample
- vals_list = [len(torch.unique(img[i, :, :, :])) for i in range(b)]
- vals_list = [2**np.ceil(np.log2(vals)) for vals in vals_list]
- vals = img.new_tensor(vals_list).view(b, 1, 1, 1)
- out = torch.poisson(img * vals) / vals
- noise = out - img
- if cal_gray_noise:
- noise = noise * (1 - gray_noise) + noise_gray * gray_noise
- if not isinstance(scale, (float, int)):
- scale = scale.view(b, 1, 1, 1)
- return noise * scale
-
-
-def add_poisson_noise_pt(img, scale=1.0, clip=True, rounds=False, gray_noise=0):
- """Add poisson noise to a batch of images (PyTorch version).
-
- Args:
- img (Tensor): Input image, shape (b, c, h, w), range [0, 1], float32.
- scale (float | Tensor): Noise scale. Number or Tensor with shape (b).
- Default: 1.0.
- gray_noise (float | Tensor): 0-1 number or Tensor with shape (b).
- 0 for False, 1 for True. Default: 0.
-
- Returns:
- (Tensor): Returned noisy image, shape (b, c, h, w), range[0, 1],
- float32.
- """
- noise = generate_poisson_noise_pt(img, scale, gray_noise)
- out = img + noise
- if clip and rounds:
- out = torch.clamp((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = torch.clamp(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-# ----------------------- Random Poisson (Shot) Noise ----------------------- #
-
-
-def random_generate_poisson_noise(img, scale_range=(0, 1.0), gray_prob=0):
- scale = np.random.uniform(scale_range[0], scale_range[1])
- if np.random.uniform() < gray_prob:
- gray_noise = True
- else:
- gray_noise = False
- return generate_poisson_noise(img, scale, gray_noise)
-
-
-def random_add_poisson_noise(img, scale_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):
- noise = random_generate_poisson_noise(img, scale_range, gray_prob)
- out = img + noise
- if clip and rounds:
- out = np.clip((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = np.clip(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-def random_generate_poisson_noise_pt(img, scale_range=(0, 1.0), gray_prob=0):
- scale = torch.rand(
- img.size(0), dtype=img.dtype, device=img.device) * (scale_range[1] - scale_range[0]) + scale_range[0]
- gray_noise = torch.rand(img.size(0), dtype=img.dtype, device=img.device)
- gray_noise = (gray_noise < gray_prob).float()
- return generate_poisson_noise_pt(img, scale, gray_noise)
-
-
-def random_add_poisson_noise_pt(img, scale_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):
- noise = random_generate_poisson_noise_pt(img, scale_range, gray_prob)
- out = img + noise
- if clip and rounds:
- out = torch.clamp((out * 255.0).round(), 0, 255) / 255.
- elif clip:
- out = torch.clamp(out, 0, 1)
- elif rounds:
- out = (out * 255.0).round() / 255.
- return out
-
-
-# ------------------------------------------------------------------------ #
-# --------------------------- JPEG compression --------------------------- #
-# ------------------------------------------------------------------------ #
-
-
-def add_jpg_compression(img, quality=90):
- """Add JPG compression artifacts.
-
- Args:
- img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.
- quality (float): JPG compression quality. 0 for lowest quality, 100 for
- best quality. Default: 90.
-
- Returns:
- (Numpy array): Returned image after JPG, shape (h, w, c), range[0, 1],
- float32.
- """
- img = np.clip(img, 0, 1)
- encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
- _, encimg = cv2.imencode('.jpg', img * 255., encode_param)
- img = np.float32(cv2.imdecode(encimg, 1)) / 255.
- return img
-
-
-def random_add_jpg_compression(img, quality_range=(90, 100)):
- """Randomly add JPG compression artifacts.
-
- Args:
- img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.
- quality_range (tuple[float] | list[float]): JPG compression quality
- range. 0 for lowest quality, 100 for best quality.
- Default: (90, 100).
-
- Returns:
- (Numpy array): Returned image after JPG, shape (h, w, c), range[0, 1],
- float32.
- """
- quality = np.random.uniform(quality_range[0], quality_range[1])
- return add_jpg_compression(img, quality)
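For reference, a minimal usage sketch of the noise and JPEG helpers above. It assumes it runs inside the module where they are defined (so `generate_poisson_noise_pt`, defined earlier in the same file, is also in scope); the shapes and parameter ranges are illustrative only.

```python
import numpy as np
import torch

# Batched float32 tensor in [0, 1], shape (b, c, h, w).
imgs = torch.rand(4, 3, 64, 64)
noisy = random_add_poisson_noise_pt(
    imgs, scale_range=(0.05, 3.0), gray_prob=0.4, clip=True, rounds=False)

# Single float32 image in [0, 1], shape (h, w, c).
img = np.random.rand(64, 64, 3).astype(np.float32)
jpeg_img = random_add_jpg_compression(img, quality_range=(30, 95))
print(noisy.shape, jpeg_img.shape)
```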
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/base.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/base.py
deleted file mode 100644
index e21667df4ce4baa6bb6aad9f8679bd756e2ffdb7..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/base.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import bisect
-import numpy as np
-import albumentations
-from PIL import Image
-from torch.utils.data import Dataset, ConcatDataset
-
-
-class ConcatDatasetWithIndex(ConcatDataset):
- """Modified from original pytorch code to return dataset idx"""
- def __getitem__(self, idx):
- if idx < 0:
- if -idx > len(self):
- raise ValueError("absolute value of index should not exceed dataset length")
- idx = len(self) + idx
- dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
- if dataset_idx == 0:
- sample_idx = idx
- else:
- sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
- return self.datasets[dataset_idx][sample_idx], dataset_idx
-
-
-class ImagePaths(Dataset):
- def __init__(self, paths, size=None, random_crop=False, labels=None):
- self.size = size
- self.random_crop = random_crop
-
- self.labels = dict() if labels is None else labels
- self.labels["file_path_"] = paths
- self._length = len(paths)
-
- if self.size is not None and self.size > 0:
- self.rescaler = albumentations.SmallestMaxSize(max_size = self.size)
- if not self.random_crop:
- self.cropper = albumentations.CenterCrop(height=self.size,width=self.size)
- else:
- self.cropper = albumentations.RandomCrop(height=self.size,width=self.size)
- self.preprocessor = albumentations.Compose([self.rescaler, self.cropper])
- else:
- self.preprocessor = lambda **kwargs: kwargs
-
- def __len__(self):
- return self._length
-
- def preprocess_image(self, image_path):
- image = Image.open(image_path)
- if not image.mode == "RGB":
- image = image.convert("RGB")
- image = np.array(image).astype(np.uint8)
- image = self.preprocessor(image=image)["image"]
- image = (image/127.5 - 1.0).astype(np.float32)
- return image
-
- def __getitem__(self, i):
- example = dict()
- example["image"] = self.preprocess_image(self.labels["file_path_"][i])
- for k in self.labels:
- example[k] = self.labels[k][i]
- return example
-
-
-class NumpyPaths(ImagePaths):
- def preprocess_image(self, image_path):
- image = np.load(image_path).squeeze(0) # 3 x 1024 x 1024
- image = np.transpose(image, (1,2,0))
- image = Image.fromarray(image, mode="RGB")
- image = np.array(image).astype(np.uint8)
- image = self.preprocessor(image=image)["image"]
- image = (image/127.5 - 1.0).astype(np.float32)
- return image
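A minimal sketch of how `ImagePaths` and `ConcatDatasetWithIndex` above fit together; the file paths and label values are hypothetical placeholders, only the two classes defined in the hunk are assumed.

```python
from torch.utils.data import DataLoader

# Hypothetical image lists with per-dataset labels.
cats = ImagePaths(paths=["data/cats/0001.jpg", "data/cats/0002.jpg"],
                  size=256, random_crop=True, labels={"class": [0, 0]})
dogs = ImagePaths(paths=["data/dogs/0001.jpg"],
                  size=256, random_crop=True, labels={"class": [1]})

# Unlike plain ConcatDataset, this also returns which sub-dataset a sample came from.
dataset = ConcatDatasetWithIndex([cats, dogs])
example, dataset_idx = dataset[0]
print(example["image"].shape, example["class"], dataset_idx)

loader = DataLoader(dataset, batch_size=2, shuffle=True)
```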
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/onnx_inference.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/onnx_inference.py
deleted file mode 100644
index 6633659fc83b19d82611d3c9cc840e9c547734d0..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import librosa
-import numpy as np
-import onnxruntime
-import soundfile
-
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-class ContentVec:
- def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
- logger.info("Load model(s) from {}".format(vec_path))
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsupported device")
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def __call__(self, wav):
- return self.forward(wav)
-
- def forward(self, wav):
- feats = wav
- if feats.ndim == 2: # stereo input: average the two channels to mono
- feats = feats.mean(-1)
- assert feats.ndim == 1, feats.ndim
- feats = np.expand_dims(np.expand_dims(feats, 0), 0)
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)[0]
- return logits.transpose(0, 2, 1)
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs):
- if f0_predictor == "pm":
- from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
- f0_predictor_object = PMF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "harvest":
- from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import (
- HarvestF0Predictor,
- )
-
- f0_predictor_object = HarvestF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "dio":
- from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
- f0_predictor_object = DioF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-
-class OnnxRVC:
- def __init__(
- self,
- model_path,
- sr=40000,
- hop_size=512,
- vec_path="vec-768-layer-12",
- device="cpu",
- ):
- vec_path = f"pretrained/{vec_path}.onnx"
- self.vec_model = ContentVec(vec_path, device)
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsupported device")
- self.model = onnxruntime.InferenceSession(model_path, providers=providers)
- self.sampling_rate = sr
- self.hop_size = hop_size
-
- def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
- onnx_input = {
- self.model.get_inputs()[0].name: hubert,
- self.model.get_inputs()[1].name: hubert_length,
- self.model.get_inputs()[2].name: pitch,
- self.model.get_inputs()[3].name: pitchf,
- self.model.get_inputs()[4].name: ds,
- self.model.get_inputs()[5].name: rnd,
- }
- return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
- def inference(
- self,
- raw_path,
- sid,
- f0_method="dio",
- f0_up_key=0,
- pad_time=0.5,
- cr_threshold=0.02,
- ):
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_predictor = get_f0_predictor(
- f0_method,
- hop_length=self.hop_size,
- sampling_rate=self.sampling_rate,
- threshold=cr_threshold,
- )
- wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
- org_length = len(wav)
- if org_length / sr > 50.0:
- raise RuntimeError("Reached Max Length")
-
- wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
- wav16k = wav16k
-
- hubert = self.vec_model(wav16k)
- hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
- hubert_length = hubert.shape[1]
-
- pitchf = f0_predictor.compute_f0(wav, hubert_length)
- pitchf = pitchf * 2 ** (f0_up_key / 12)
- pitch = pitchf.copy()
- f0_mel = 1127 * np.log(1 + pitch / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- pitch = np.rint(f0_mel).astype(np.int64)
-
- pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
- pitch = pitch.reshape(1, len(pitch))
- ds = np.array([sid]).astype(np.int64)
-
- rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
- hubert_length = np.array([hubert_length]).astype(np.int64)
-
- out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
- out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
- return out_wav[0:org_length]
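A minimal sketch of driving `OnnxRVC` end to end. The model path, input/output files and speaker id are placeholders, and the pretrained ContentVec ONNX model is assumed to exist under `pretrained/` as the constructor expects.

```python
import soundfile

model = OnnxRVC(
    "rvc_model.onnx",            # placeholder path to an exported RVC ONNX model
    sr=40000, hop_size=512,
    vec_path="vec-768-layer-12", device="cpu")

out_wav = model.inference("input.wav", sid=0, f0_method="dio", f0_up_key=0)
soundfile.write("converted.wav", out_wav, 40000)
```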
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/detection/yolov5face/utils/extract_ckpt.py b/spaces/FelixLuoX/codeformer/CodeFormer/facelib/detection/yolov5face/utils/extract_ckpt.py
deleted file mode 100644
index 4b8b631348f2d0cdea4e5a3594bb59f3e8f34a0f..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/detection/yolov5face/utils/extract_ckpt.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import torch
-import sys
-sys.path.insert(0,'./facelib/detection/yolov5face')
-model = torch.load('facelib/detection/yolov5face/yolov5n-face.pt', map_location='cpu')['model']
-torch.save(model.state_dict(),'weights/facelib/yolov5n-face.pth')
\ No newline at end of file
diff --git a/spaces/FridaZuley/RVC_HFKawaii/utils/i18n.py b/spaces/FridaZuley/RVC_HFKawaii/utils/i18n.py
deleted file mode 100644
index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/utils/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = "es_ES"
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "es_ES"
- language = "es_ES"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- # print("Use Language:", self.language)
- print("")
diff --git a/spaces/Godrose0728/Aisound02/commons.py b/spaces/Godrose0728/Aisound02/commons.py
deleted file mode 100644
index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000
--- a/spaces/Godrose0728/Aisound02/commons.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
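A small sketch exercising two of the helpers above (`sequence_mask` and `rand_slice_segments`) with illustrative shapes.

```python
import torch

lengths = torch.tensor([5, 3, 8])
mask = sequence_mask(lengths, max_length=8)   # (3, 8) boolean mask
print(mask.int())

x = torch.randn(3, 192, 100)                  # (batch, channels, frames)
segments, ids_str = rand_slice_segments(
    x, x_lengths=torch.tensor([100, 80, 60]), segment_size=32)
print(segments.shape)                         # torch.Size([3, 192, 32])
```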
diff --git a/spaces/Gradio-Blocks/Alexa-NLU-Clone/README.md b/spaces/Gradio-Blocks/Alexa-NLU-Clone/README.md
deleted file mode 100644
index a6f4efd599e6279aebf91ebb00d72a3055d1cac8..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/Alexa-NLU-Clone/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Alexa-NLU-Clone
-emoji: 👩💼 🗪 🤖
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Gradio-Blocks/are-you-wearing-a-mask/data/README.md b/spaces/Gradio-Blocks/are-you-wearing-a-mask/data/README.md
deleted file mode 100644
index 6ec06413ff49b4be26a210d567d35d7d66c62622..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/are-you-wearing-a-mask/data/README.md
+++ /dev/null
@@ -1 +0,0 @@
-# Example data goes here
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/README.md
deleted file mode 100644
index 89f5adb5fb41809bfcddeca80b7fe730d70e4838..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/README.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Feature Pyramid Grids
-
-## Introduction
-
-```latex
-@article{chen2020feature,
- title={Feature pyramid grids},
- author={Chen, Kai and Cao, Yuhang and Loy, Chen Change and Lin, Dahua and Feichtenhofer, Christoph},
- journal={arXiv preprint arXiv:2004.03580},
- year={2020}
-}
-```
-
-## Results and Models
-
-We benchmark the new training schedule (crop training, large batch, unfrozen BN, 50 epochs) introduced in NAS-FPN.
-All backbones are ResNet-50 in PyTorch style.
-
-| Method | Neck | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
-|:------------:|:-----------:|:-------:|:--------:|:--------------:|:------:|:-------:|:-------:|:--------:|
-| Faster R-CNN | FPG | 50e | 20.0 | - | 42.2 | - | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fpg/faster_rcnn_r50_fpg_crop640_50e_coco.py) | - |
-| Faster R-CNN | FPG-chn128 | 50e | 11.9 | - | 41.2 | - | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fpg/faster_rcnn_r50_fpg-chn128_crop640_50e_coco.py) | - |
-| Mask R-CNN | FPG | 50e | 23.2 | - | 42.7 | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py) | - |
-| Mask R-CNN | FPG-chn128 | 50e | 15.3 | - | 41.7 | 36.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py) | - |
-| RetinaNet | FPG | 50e | 20.8 | - | 40.5 | - | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fpg/retinanet_r50_fpg_crop640_50e_coco.py) | - |
-| RetinaNet | FPG-chn128 | 50e | - | - | - | - | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fpg/retinanet_r50_fpg-chn128_crop640_50e_coco.py) | - |
-
-**Note**: Chn128 means to decrease the number of channels of features and convs from 256 (default) to 128 in
-Neck and BBox Head, which can greatly decrease memory consumption without sacrificing much precision.
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/transforms.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/transforms.py
deleted file mode 100644
index df55b0a496516bf7373fe96cf746c561dd713c3b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/transforms.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import numpy as np
-import torch
-
-
-def bbox_flip(bboxes, img_shape, direction='horizontal'):
- """Flip bboxes horizontally or vertically.
-
- Args:
- bboxes (Tensor): Shape (..., 4*k)
- img_shape (tuple): Image shape.
- direction (str): Flip direction, options are "horizontal", "vertical",
- "diagonal". Default: "horizontal"
-
- Returns:
- Tensor: Flipped bboxes.
- """
- assert bboxes.shape[-1] % 4 == 0
- assert direction in ['horizontal', 'vertical', 'diagonal']
- flipped = bboxes.clone()
- if direction == 'horizontal':
- flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4]
- flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4]
- elif direction == 'vertical':
- flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4]
- flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4]
- else:
- flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4]
- flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4]
- flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4]
- flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4]
- return flipped
-
-
-def bbox_mapping(bboxes,
- img_shape,
- scale_factor,
- flip,
- flip_direction='horizontal'):
- """Map bboxes from the original image scale to testing scale."""
- new_bboxes = bboxes * bboxes.new_tensor(scale_factor)
- if flip:
- new_bboxes = bbox_flip(new_bboxes, img_shape, flip_direction)
- return new_bboxes
-
-
-def bbox_mapping_back(bboxes,
- img_shape,
- scale_factor,
- flip,
- flip_direction='horizontal'):
- """Map bboxes from testing scale to original image scale."""
- new_bboxes = bbox_flip(bboxes, img_shape,
- flip_direction) if flip else bboxes
- new_bboxes = new_bboxes.view(-1, 4) / new_bboxes.new_tensor(scale_factor)
- return new_bboxes.view(bboxes.shape)
-
-
-def bbox2roi(bbox_list):
- """Convert a list of bboxes to roi format.
-
- Args:
- bbox_list (list[Tensor]): a list of bboxes corresponding to a batch
- of images.
-
- Returns:
- Tensor: shape (n, 5), [batch_ind, x1, y1, x2, y2]
- """
- rois_list = []
- for img_id, bboxes in enumerate(bbox_list):
- if bboxes.size(0) > 0:
- img_inds = bboxes.new_full((bboxes.size(0), 1), img_id)
- rois = torch.cat([img_inds, bboxes[:, :4]], dim=-1)
- else:
- rois = bboxes.new_zeros((0, 5))
- rois_list.append(rois)
- rois = torch.cat(rois_list, 0)
- return rois
-
-
-def roi2bbox(rois):
- """Convert rois to bounding box format.
-
- Args:
- rois (torch.Tensor): RoIs with the shape (n, 5) where the first
- column indicates batch id of each RoI.
-
- Returns:
- list[torch.Tensor]: Converted boxes of corresponding rois.
- """
- bbox_list = []
- img_ids = torch.unique(rois[:, 0].cpu(), sorted=True)
- for img_id in img_ids:
- inds = (rois[:, 0] == img_id.item())
- bbox = rois[inds, 1:]
- bbox_list.append(bbox)
- return bbox_list
-
-
-def bbox2result(bboxes, labels, num_classes):
- """Convert detection results to a list of numpy arrays.
-
- Args:
- bboxes (torch.Tensor | np.ndarray): shape (n, 5)
- labels (torch.Tensor | np.ndarray): shape (n, )
- num_classes (int): class number, including background class
-
- Returns:
- list(ndarray): bbox results of each class
- """
- if bboxes.shape[0] == 0:
- return [np.zeros((0, 5), dtype=np.float32) for i in range(num_classes)]
- else:
- if isinstance(bboxes, torch.Tensor):
- bboxes = bboxes.detach().cpu().numpy()
- labels = labels.detach().cpu().numpy()
- return [bboxes[labels == i, :] for i in range(num_classes)]
-
-
-def distance2bbox(points, distance, max_shape=None):
- """Decode distance prediction to bounding box.
-
- Args:
- points (Tensor): Shape (B, N, 2) or (N, 2).
- distance (Tensor): Distance from the given point to 4
- boundaries (left, top, right, bottom). Shape (B, N, 4) or (N, 4)
- max_shape (Sequence[int] or torch.Tensor or Sequence[
- Sequence[int]],optional): Maximum bounds for boxes, specifies
- (H, W, C) or (H, W). If priors shape is (B, N, 4), then
- the max_shape should be a Sequence[Sequence[int]]
- and the length of max_shape should also be B.
-
- Returns:
- Tensor: Boxes with shape (N, 4) or (B, N, 4)
- """
- x1 = points[..., 0] - distance[..., 0]
- y1 = points[..., 1] - distance[..., 1]
- x2 = points[..., 0] + distance[..., 2]
- y2 = points[..., 1] + distance[..., 3]
-
- bboxes = torch.stack([x1, y1, x2, y2], -1)
-
- if max_shape is not None:
- if not isinstance(max_shape, torch.Tensor):
- max_shape = x1.new_tensor(max_shape)
- max_shape = max_shape[..., :2].type_as(x1)
- if max_shape.ndim == 2:
- assert bboxes.ndim == 3
- assert max_shape.size(0) == bboxes.size(0)
-
- min_xy = x1.new_tensor(0)
- max_xy = torch.cat([max_shape, max_shape],
- dim=-1).flip(-1).unsqueeze(-2)
- bboxes = torch.where(bboxes < min_xy, min_xy, bboxes)
- bboxes = torch.where(bboxes > max_xy, max_xy, bboxes)
-
- return bboxes
-
-
-def bbox2distance(points, bbox, max_dis=None, eps=0.1):
- """Decode bounding box based on distances.
-
- Args:
- points (Tensor): Shape (n, 2), [x, y].
- bbox (Tensor): Shape (n, 4), "xyxy" format
- max_dis (float): Upper bound of the distance.
- eps (float): Small value ensuring the encoded target is strictly less than max_dis (i.e. < rather than <=).
-
- Returns:
- Tensor: Distances (left, top, right, bottom) with shape (n, 4).
- """
- left = points[:, 0] - bbox[:, 0]
- top = points[:, 1] - bbox[:, 1]
- right = bbox[:, 2] - points[:, 0]
- bottom = bbox[:, 3] - points[:, 1]
- if max_dis is not None:
- left = left.clamp(min=0, max=max_dis - eps)
- top = top.clamp(min=0, max=max_dis - eps)
- right = right.clamp(min=0, max=max_dis - eps)
- bottom = bottom.clamp(min=0, max=max_dis - eps)
- return torch.stack([left, top, right, bottom], -1)
-
-
-def bbox_rescale(bboxes, scale_factor=1.0):
- """Rescale bounding box w.r.t. scale_factor.
-
- Args:
- bboxes (Tensor): Shape (n, 4) for bboxes or (n, 5) for rois
- scale_factor (float): rescale factor
-
- Returns:
- Tensor: Rescaled bboxes.
- """
- if bboxes.size(1) == 5:
- bboxes_ = bboxes[:, 1:]
- inds_ = bboxes[:, 0]
- else:
- bboxes_ = bboxes
- cx = (bboxes_[:, 0] + bboxes_[:, 2]) * 0.5
- cy = (bboxes_[:, 1] + bboxes_[:, 3]) * 0.5
- w = bboxes_[:, 2] - bboxes_[:, 0]
- h = bboxes_[:, 3] - bboxes_[:, 1]
- w = w * scale_factor
- h = h * scale_factor
- x1 = cx - 0.5 * w
- x2 = cx + 0.5 * w
- y1 = cy - 0.5 * h
- y2 = cy + 0.5 * h
- if bboxes.size(1) == 5:
- rescaled_bboxes = torch.stack([inds_, x1, y1, x2, y2], dim=-1)
- else:
- rescaled_bboxes = torch.stack([x1, y1, x2, y2], dim=-1)
- return rescaled_bboxes
-
-
-def bbox_cxcywh_to_xyxy(bbox):
- """Convert bbox coordinates from (cx, cy, w, h) to (x1, y1, x2, y2).
-
- Args:
- bbox (Tensor): Shape (n, 4) for bboxes.
-
- Returns:
- Tensor: Converted bboxes.
- """
- cx, cy, w, h = bbox.split((1, 1, 1, 1), dim=-1)
- bbox_new = [(cx - 0.5 * w), (cy - 0.5 * h), (cx + 0.5 * w), (cy + 0.5 * h)]
- return torch.cat(bbox_new, dim=-1)
-
-
-def bbox_xyxy_to_cxcywh(bbox):
- """Convert bbox coordinates from (x1, y1, x2, y2) to (cx, cy, w, h).
-
- Args:
- bbox (Tensor): Shape (n, 4) for bboxes.
-
- Returns:
- Tensor: Converted bboxes.
- """
- x1, y1, x2, y2 = bbox.split((1, 1, 1, 1), dim=-1)
- bbox_new = [(x1 + x2) / 2, (y1 + y2) / 2, (x2 - x1), (y2 - y1)]
- return torch.cat(bbox_new, dim=-1)
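A short, self-contained sketch of the box utilities above on synthetic tensors.

```python
import torch

bboxes_img0 = torch.tensor([[10., 20., 50., 80.], [0., 0., 32., 32.]])
bboxes_img1 = torch.tensor([[5., 5., 15., 25.]])

rois = bbox2roi([bboxes_img0, bboxes_img1])   # (3, 5): [batch_ind, x1, y1, x2, y2]
recovered = roi2bbox(rois)                    # back to a per-image list of boxes

flipped = bbox_flip(bboxes_img0, img_shape=(100, 200), direction='horizontal')
cxcywh = bbox_xyxy_to_cxcywh(bboxes_img0)
assert torch.allclose(bbox_cxcywh_to_xyxy(cxcywh), bboxes_img0)
```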
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py
deleted file mode 100644
index 584b7135fd95464f3d2c965440a0b92161cde09a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './fcn_hr18_480x480_80k_pascal_context.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w18_small',
- backbone=dict(
- extra=dict(
- stage1=dict(num_blocks=(2, )),
- stage2=dict(num_blocks=(2, 2)),
- stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
- stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 5deb5872b00a30d5c18a980c4d6c1b0d915908b9..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/__init__.py
deleted file mode 100644
index 1e0c7e429ab96d67be667e23bf7a0ffa389c036b..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""RVQ."""
-# flake8: noqa
-from .vq import ResidualVectorQuantizer
-from .base import BaseQuantizer, DummyQuantizer, QuantizedResult
diff --git a/spaces/Gregory-L/EleutherAI-gpt-neo-1.3B/app.py b/spaces/Gregory-L/EleutherAI-gpt-neo-1.3B/app.py
deleted file mode 100644
index 9c54f2713e7d54f872c9b69df85d3f4fdd419852..0000000000000000000000000000000000000000
--- a/spaces/Gregory-L/EleutherAI-gpt-neo-1.3B/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/EleutherAI/gpt-neo-1.3B").launch()
\ No newline at end of file
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/spnet_model.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/spnet_model.py
deleted file mode 100644
index 489bc60e883e8024713ede22a72699566e44979b..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/spnet_model.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import os
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torchvision.transforms as transforms
-from torch import Tensor
-
-from app_utils import normalize
-from base_model import BaseRGBDModel
-from device import cpu_device, device
-from SPNet.model import SPNet
-
-
-class SPNetModel(BaseRGBDModel):
- def __init__(self):
- """Wrapper of SPNet"""
- super(SPNetModel, self).__init__()
- print('SPNetModel')
- self.model = SPNet(32,50)
-
- self.model.load_state_dict(
- torch.load(
- os.path.join('pretrained_models', 'SPNet', 'SPNet_model_best.pth'),
- map_location=cpu_device
- )
- )
- self.model.to(device)
- self.model.eval()
-
- self.testsize = 352
- self.images_transform = transforms.Compose([
- transforms.Resize((self.testsize, self.testsize)),
- transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
- self.depths_transform = transforms.Compose([
- transforms.Resize((self.testsize, self.testsize)),
- ])
-
- def inference(
- self, image: Tensor, depth: Tensor,
- ) -> np.ndarray:
- origin_shape = image.shape
-
- # 1. Preprocessing
- image: Tensor = self.images_transform(image)
- depth: Tensor = self.depths_transform(depth)
- images = image.unsqueeze(0)
- depths = depth.unsqueeze(0)
-
- # 2. Inference
- images, depths = images.to(device), depths.to(device)
- pred_no_sigmoid = self.model(images, depths)[2]
-
- # 3. Return saliency maps
- res: Tensor = F.interpolate(
- pred_no_sigmoid, size=(origin_shape[1], origin_shape[2]),
- mode='bilinear', align_corners=False
- )
- res = res.sigmoid().squeeze().data.cpu().numpy()
- res = normalize(res)
-
- return res
\ No newline at end of file
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh
deleted file mode 100644
index ad35d7adf28dc9b23d13a6a3fec0b12cb760e855..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env sh
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-#
-# Please follow the instructions here http://alt.qcri.org/tools/arabic-normalizer/
-# to install tools needed for Arabic
-
-echo "Please install Arabic tools: http://alt.qcri.org/tools/arabic-normalizer/"
-echo "Then update environment variables in tokenizer_ar.sh"
-exit 1
-
-SVMTOOL=...
-GOMOSESGO=...
-QCRI_ARABIC_NORMALIZER=...
-
-export PERL5LIB="$SVMTOOL/lib":"$GOMOSESGO/bin/MADA-3.2":$PERL5LIB
-
-
-tempfile=$(mktemp)
-cat - > $tempfile
-
-cd $QCRI_ARABIC_NORMALIZER
-
-bash qcri_normalizer_mada3.2_aramorph1.2.1.sh $tempfile
-cat $tempfile.mada_norm-aramorph.europarl_tok
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/incremental_decoding_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/incremental_decoding_utils.py
deleted file mode 100644
index b26e6cd01cd4cbdffa23d88b354eb4a55a94189b..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/incremental_decoding_utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import uuid
-from typing import Dict, Optional
-
-from torch import Tensor
-
-
-class FairseqIncrementalState(object):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.init_incremental_state()
-
- def init_incremental_state(self):
- self._incremental_state_id = str(uuid.uuid4())
-
- def _get_full_incremental_state_key(self, key: str) -> str:
- return "{}.{}".format(self._incremental_state_id, key)
-
- def get_incremental_state(
- self,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]],
- key: str,
- ) -> Optional[Dict[str, Optional[Tensor]]]:
- """Helper for getting incremental state for an nn.Module."""
- full_key = self._get_full_incremental_state_key(key)
- if incremental_state is None or full_key not in incremental_state:
- return None
- return incremental_state[full_key]
-
- def set_incremental_state(
- self,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]],
- key: str,
- value: Dict[str, Optional[Tensor]],
- ) -> Optional[Dict[str, Dict[str, Optional[Tensor]]]]:
- """Helper for setting incremental state for an nn.Module."""
- if incremental_state is not None:
- full_key = self._get_full_incremental_state_key(key)
- incremental_state[full_key] = value
- return incremental_state
-
-
-def with_incremental_state(cls):
- cls.__bases__ = (FairseqIncrementalState,) + tuple(
- b for b in cls.__bases__ if b != FairseqIncrementalState
- )
- return cls
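A minimal sketch of the `with_incremental_state` decorator above applied to a toy module; the `DummyAttention` class is hypothetical and only illustrates the namespaced get/set helpers.

```python
import torch
import torch.nn as nn

@with_incremental_state
class DummyAttention(nn.Module):
    def forward(self, x, incremental_state=None):
        # Read this module's cached value (None on the first call), then update it.
        prev = self.get_incremental_state(incremental_state, "prev_key")
        self.set_incremental_state(incremental_state, "prev_key", {"k": x})
        return x, prev

layer = DummyAttention()
state = {}
_, prev_first = layer(torch.randn(1, 4), incremental_state=state)   # prev_first is None
_, prev_second = layer(torch.randn(1, 4), incremental_state=state)  # cached {"k": ...} dict
```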
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_file_io.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_file_io.py
deleted file mode 100644
index 425812bf1672489093941e5fa09f9da3171559ee..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_file_io.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import shutil
-import sys
-import tempfile
-import unittest
-from typing import Optional
-from unittest.mock import MagicMock
-
-
-class TestFileIO(unittest.TestCase):
-
- _tmpdir: Optional[str] = None
- _tmpfile: Optional[str] = None
- _tmpfile_contents = "Hello, World"
-
- @classmethod
- def setUpClass(cls) -> None:
- cls._tmpdir = tempfile.mkdtemp()
- with open(os.path.join(cls._tmpdir, "test.txt"), "w") as f:
- cls._tmpfile = f.name
- f.write(cls._tmpfile_contents)
- f.flush()
-
- @classmethod
- def tearDownClass(cls) -> None:
- # Cleanup temp working dir.
- if cls._tmpdir is not None:
- shutil.rmtree(cls._tmpdir) # type: ignore
-
- def test_file_io(self):
- from fairseq.file_io import PathManager
-
- with PathManager.open(os.path.join(self._tmpdir, "test.txt"), "r") as f:
- s = f.read()
- self.assertEqual(s, self._tmpfile_contents)
-
- def test_file_io_oss(self):
- # Mock iopath to simulate oss environment.
- sys.modules["iopath"] = MagicMock()
- from fairseq.file_io import PathManager
-
- with PathManager.open(os.path.join(self._tmpdir, "test.txt"), "r") as f:
- s = f.read()
- self.assertEqual(s, self._tmpfile_contents)
-
- def test_file_io_async(self):
- # ioPath `PathManager` is initialized after the first `opena` call.
- try:
- from fairseq.file_io import IOPathManager, PathManager
- _asyncfile = os.path.join(self._tmpdir, "async.txt")
- f = PathManager.opena(_asyncfile, "wb")
- f.close()
-
- finally:
- self.assertTrue(PathManager.async_close())
diff --git a/spaces/Hazem/roop/roop/__init__.py b/spaces/Hazem/roop/roop/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Hexamind/QnA/src/control/control.py b/spaces/Hexamind/QnA/src/control/control.py
deleted file mode 100644
index 6eb996f1d599b380f742bd858ff19f67d76a8774..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/QnA/src/control/control.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import pandas as pd
-
-from src.tools.retriever import Retriever
-from src.tools.llm import LlmAgent
-from src.model.block import Block
-
-
-class Controller:
-
- def __init__(self, retriever: Retriever, llm: LlmAgent, plan_language: str, content_language: str, specials: {}):
- self.plan_language = plan_language
- self.content_language = content_language
- self.retriever = retriever
- self.specials = specials
- self.llm = llm
-
- def get_response(self, query_fr: str, histo_fr: [(str, str)]) -> (str, [Block]):
- histo_conversation, histo_queries = self._get_histo(histo_fr)
- queries = self.llm.translate(text=histo_queries) if self.plan_language == 'en' else histo_queries
- block_sources = self.retriever.similarity_search(query=queries)
- block_sources = self._select_best_sources(block_sources)
- for block in block_sources:
- self._expand_block_with_specials(block, histo_queries)
- sources_contents = [s.content for s in block_sources]
- context = '\n'.join(sources_contents)
- answer = self.llm.generate_paragraph(query=queries, histo=histo_conversation, context=context,
- language=self.content_language)
- sources_contents_fr = [s.content_fr for s in block_sources[:2]]
- context_fr = '\n'.join(sources_contents_fr)
- if self.content_language == 'en':
- answer = self.llm.generate_answer(answer_en=answer, query=query_fr,
- histo_fr=histo_conversation, context_fr=context_fr)
- answer = self._clean_answer(answer)
- return answer, block_sources
-
- @staticmethod
- def _get_histo(histo: [(str, str)]) -> (str, str):
- histo_conversation = ""
- histo_queries = ""
-
- for (query, answer) in histo[-5:]:
- histo_conversation += f'user: {query} \n bot: {answer}\n'
- histo_queries += query + '\n'
- return histo_conversation[:-1], histo_queries
-
- @staticmethod
- def _clean_answer(answer: str) -> str:
- answer = answer.strip('bot:')
- while answer and answer[-1] in {"'", '"', " ", "`"}:
- answer = answer[:-1]
- while answer and answer[0] in {"'", '"', " ", "`"}:
- answer = answer[1:]
- answer = answer.strip('bot:')
- if answer:
- if answer[-1] != ".":
- answer += "."
- return answer
-
- @staticmethod
- def _select_best_sources(sources: [Block], delta_1_2=0.15, delta_1_n=0.3, absolute=1.2, alpha=0.9) -> [Block]:
- """
- Select the best sources: not far from the very best, not far from the last selected, and not too bad per se
- """
- best_sources = []
- for idx, s in enumerate(sources):
- if idx == 0 \
- or (s.distance - sources[idx - 1].distance < delta_1_2
- and s.distance - sources[0].distance < delta_1_n) \
- or s.distance < absolute:
- best_sources.append(s)
- delta_1_2 *= alpha
- delta_1_n *= alpha
- absolute *= alpha
- else:
- break
- return best_sources
-
- def _expand_block_with_specials(self, block: Block, query: str) -> Block:
- """
- Performs special treatments for blocks expanding the text in the block
- For example, it may add specific content extracted from a table based on elements of the query
- """
-
- def any_in(l1: [], l2: []) -> bool:
- """
- checks whether any element of l1 belongs to l2
- """
- return 0 < len([el for el in l1 if el in l2])
-
- def get_countries_names(df: pd.DataFrame) -> [str]:
- """
- extends the accepted spellings of countries, e.g. Etats-Unis = USA = Etats Unis, etc.
- """
- countries_fr = list(df['pays'])
- countries_en = list(df['country'])
- countries_names = {c_fr: [c_fr, c_en] for c_fr, c_en in zip(countries_fr, countries_en)}
- countries_extensions = self.specials['countries_extensions']
- for c in set(countries_extensions.keys()).intersection(set(countries_names.keys())):
- countries_names[c] += countries_extensions[c]
- return countries_names
-
- def remote_rate_fn(ctrl: Controller, block: Block, query: str) -> Block:
- remote_rate_df = self.specials['remote_rate_df']
- remote_rate_known = self.specials['remote_rate_known']
- remote_rate_unknown = self.specials['remote_rate_unknown']
- countries_fr = list(remote_rate_df['pays'])
- countries_names = get_countries_names(remote_rate_df)
- countries_of_interest = [c for c in countries_fr if any_in(countries_names[c], query)]
- for c in countries_of_interest:
- rate = remote_rate_df[remote_rate_df['pays'] == c]['rate'].values[0]
- block.content += remote_rate_known + c + " is " + rate + '\n'
- if len(countries_of_interest) == 0:
- block.content += remote_rate_unknown
- return block
-
- def accommodation_meal_fn(ctrl: Controller, block: Block, query: str) -> Block:
- accommodation_meal_df = self.specials['accommodation_meal_df']
- accommodation_meal_known = self.specials['accommodation_meal_known']
- accommodation_meal_unknown = self.specials['accommodation_meal_unknown']
- countries_fr = list(accommodation_meal_df['pays'])
- countries_names = get_countries_names(df=accommodation_meal_df)
- countries_of_interest = [c for c in countries_fr if any_in(countries_names[c], query)]
- for c in countries_of_interest:
- rate = accommodation_meal_df[accommodation_meal_df['pays'] == c][['meal', 'accommodation']].values
- block.content += accommodation_meal_known + c + " is " + rate[0][0] + ' for meals and ' \
- + rate[0][1] + ' for accommodation\n'
- if len(countries_of_interest) == 0:
- block.content += accommodation_meal_unknown
- return block
-
- def expand_block(special: str, ctrl: Controller, block: Block, query: str) -> Block:
- routing_table = {'RemotenessRateTable': remote_rate_fn,
- 'AccommodationMealTable': accommodation_meal_fn, }
- if special in routing_table.keys():
- fn = routing_table[special]
- block = fn(ctrl, block, query)
- return block
-
- for special in block.specials:
- block = expand_block(special, self, block, query)
- return block
diff --git a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/rerank_generate.py b/spaces/ICML2022/OFA/fairseq/examples/noisychannel/rerank_generate.py
deleted file mode 100644
index daeeae059a677a9fcd7c370be087f1f5c189bc52..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/rerank_generate.py
+++ /dev/null
@@ -1,397 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Generate n-best translations using a trained model.
-"""
-
-import os
-import subprocess
-from contextlib import redirect_stdout
-
-from fairseq import options
-from fairseq_cli import generate, preprocess
-
-from examples.noisychannel import rerank_options, rerank_utils
-
-
-def gen_and_reprocess_nbest(args):
- if args.score_dict_dir is None:
- args.score_dict_dir = args.data
- if args.prefix_len is not None:
- assert (
- args.right_to_left1 is False
- ), "prefix length not compatible with right to left models"
- assert (
- args.right_to_left2 is False
- ), "prefix length not compatible with right to left models"
-
- if args.nbest_list is not None:
- assert args.score_model2 is None
-
- if args.backwards1:
- scorer1_src = args.target_lang
- scorer1_tgt = args.source_lang
- else:
- scorer1_src = args.source_lang
- scorer1_tgt = args.target_lang
-
- store_data = (
- os.path.join(os.path.dirname(__file__)) + "/rerank_data/" + args.data_dir_name
- )
- if not os.path.exists(store_data):
- os.makedirs(store_data)
-
- (
- pre_gen,
- left_to_right_preprocessed_dir,
- right_to_left_preprocessed_dir,
- backwards_preprocessed_dir,
- lm_preprocessed_dir,
- ) = rerank_utils.get_directories(
- args.data_dir_name,
- args.num_rescore,
- args.gen_subset,
- args.gen_model_name,
- args.shard_id,
- args.num_shards,
- args.sampling,
- args.prefix_len,
- args.target_prefix_frac,
- args.source_prefix_frac,
- )
- assert not (
- args.right_to_left1 and args.backwards1
- ), "backwards right to left not supported"
- assert not (
- args.right_to_left2 and args.backwards2
- ), "backwards right to left not supported"
- assert not (
- args.prefix_len is not None and args.target_prefix_frac is not None
- ), "target prefix frac and target prefix len incompatible"
-
- # make directory to store generation results
- if not os.path.exists(pre_gen):
- os.makedirs(pre_gen)
-
- rerank1_is_gen = (
- args.gen_model == args.score_model1 and args.source_prefix_frac is None
- )
- rerank2_is_gen = (
- args.gen_model == args.score_model2 and args.source_prefix_frac is None
- )
-
- if args.nbest_list is not None:
- rerank2_is_gen = True
-
- # make directories to store preprocessed nbest lists for reranking
- if not os.path.exists(left_to_right_preprocessed_dir):
- os.makedirs(left_to_right_preprocessed_dir)
- if not os.path.exists(right_to_left_preprocessed_dir):
- os.makedirs(right_to_left_preprocessed_dir)
- if not os.path.exists(lm_preprocessed_dir):
- os.makedirs(lm_preprocessed_dir)
- if not os.path.exists(backwards_preprocessed_dir):
- os.makedirs(backwards_preprocessed_dir)
-
- score1_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model1_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards1,
- )
- if args.score_model2 is not None:
- score2_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model2_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards2,
- )
-
- predictions_bpe_file = pre_gen + "/generate_output_bpe.txt"
-
- using_nbest = args.nbest_list is not None
-
- if using_nbest:
- print("Using predefined n-best list from interactive.py")
- predictions_bpe_file = args.nbest_list
-
- else:
- if not os.path.isfile(predictions_bpe_file):
- print("STEP 1: generate predictions using the p(T|S) model with bpe")
- print(args.data)
- param1 = [
- args.data,
- "--path",
- args.gen_model,
- "--shard-id",
- str(args.shard_id),
- "--num-shards",
- str(args.num_shards),
- "--nbest",
- str(args.num_rescore),
- "--batch-size",
- str(args.batch_size),
- "--beam",
- str(args.num_rescore),
- "--batch-size",
- str(args.num_rescore),
- "--gen-subset",
- args.gen_subset,
- "--source-lang",
- args.source_lang,
- "--target-lang",
- args.target_lang,
- ]
- if args.sampling:
- param1 += ["--sampling"]
-
- gen_parser = options.get_generation_parser()
- input_args = options.parse_args_and_arch(gen_parser, param1)
-
- print(input_args)
- with open(predictions_bpe_file, "w") as f:
- with redirect_stdout(f):
- generate.main(input_args)
-
- gen_output = rerank_utils.BitextOutputFromGen(
- predictions_bpe_file,
- bpe_symbol=args.post_process,
- nbest=using_nbest,
- prefix_len=args.prefix_len,
- target_prefix_frac=args.target_prefix_frac,
- )
-
- if args.diff_bpe:
- rerank_utils.write_reprocessed(
- gen_output.no_bpe_source,
- gen_output.no_bpe_hypo,
- gen_output.no_bpe_target,
- pre_gen + "/source_gen_bpe." + args.source_lang,
- pre_gen + "/target_gen_bpe." + args.target_lang,
- pre_gen + "/reference_gen_bpe." + args.target_lang,
- )
- bitext_bpe = args.rescore_bpe_code
- bpe_src_param = [
- "-c",
- bitext_bpe,
- "--input",
- pre_gen + "/source_gen_bpe." + args.source_lang,
- "--output",
- pre_gen + "/rescore_data." + args.source_lang,
- ]
- bpe_tgt_param = [
- "-c",
- bitext_bpe,
- "--input",
- pre_gen + "/target_gen_bpe." + args.target_lang,
- "--output",
- pre_gen + "/rescore_data." + args.target_lang,
- ]
-
- subprocess.call(
- [
- "python",
- os.path.join(
- os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py"
- ),
- ]
- + bpe_src_param,
- shell=False,
- )
-
- subprocess.call(
- [
- "python",
- os.path.join(
- os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py"
- ),
- ]
- + bpe_tgt_param,
- shell=False,
- )
-
- if (not os.path.isfile(score1_file) and not rerank1_is_gen) or (
- args.score_model2 is not None
- and not os.path.isfile(score2_file)
- and not rerank2_is_gen
- ):
- print(
- "STEP 2: process the output of generate.py so we have clean text files with the translations"
- )
-
- rescore_file = "/rescore_data"
- if args.prefix_len is not None:
- prefix_len_rescore_file = rescore_file + "prefix" + str(args.prefix_len)
- if args.target_prefix_frac is not None:
- target_prefix_frac_rescore_file = (
- rescore_file + "target_prefix_frac" + str(args.target_prefix_frac)
- )
- if args.source_prefix_frac is not None:
- source_prefix_frac_rescore_file = (
- rescore_file + "source_prefix_frac" + str(args.source_prefix_frac)
- )
-
- if not args.right_to_left1 or not args.right_to_left2:
- if not args.diff_bpe:
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + rescore_file + "." + args.source_lang,
- pre_gen + rescore_file + "." + args.target_lang,
- pre_gen + "/reference_file",
- bpe_symbol=args.post_process,
- )
- if args.prefix_len is not None:
- bw_rescore_file = prefix_len_rescore_file
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + prefix_len_rescore_file + "." + args.source_lang,
- pre_gen + prefix_len_rescore_file + "." + args.target_lang,
- pre_gen + "/reference_file",
- prefix_len=args.prefix_len,
- bpe_symbol=args.post_process,
- )
- elif args.target_prefix_frac is not None:
- bw_rescore_file = target_prefix_frac_rescore_file
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen
- + target_prefix_frac_rescore_file
- + "."
- + args.source_lang,
- pre_gen
- + target_prefix_frac_rescore_file
- + "."
- + args.target_lang,
- pre_gen + "/reference_file",
- bpe_symbol=args.post_process,
- target_prefix_frac=args.target_prefix_frac,
- )
- else:
- bw_rescore_file = rescore_file
-
- if args.source_prefix_frac is not None:
- fw_rescore_file = source_prefix_frac_rescore_file
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen
- + source_prefix_frac_rescore_file
- + "."
- + args.source_lang,
- pre_gen
- + source_prefix_frac_rescore_file
- + "."
- + args.target_lang,
- pre_gen + "/reference_file",
- bpe_symbol=args.post_process,
- source_prefix_frac=args.source_prefix_frac,
- )
- else:
- fw_rescore_file = rescore_file
-
- if args.right_to_left1 or args.right_to_left2:
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + "/right_to_left_rescore_data." + args.source_lang,
- pre_gen + "/right_to_left_rescore_data." + args.target_lang,
- pre_gen + "/right_to_left_reference_file",
- right_to_left=True,
- bpe_symbol=args.post_process,
- )
-
- print("STEP 3: binarize the translations")
- if (
- not args.right_to_left1
- or args.score_model2 is not None
- and not args.right_to_left2
- or not rerank1_is_gen
- ):
-
- if args.backwards1 or args.backwards2:
- if args.backwards_score_dict_dir is not None:
- bw_dict = args.backwards_score_dict_dir
- else:
- bw_dict = args.score_dict_dir
- bw_preprocess_param = [
- "--source-lang",
- scorer1_src,
- "--target-lang",
- scorer1_tgt,
- "--trainpref",
- pre_gen + bw_rescore_file,
- "--srcdict",
- bw_dict + "/dict." + scorer1_src + ".txt",
- "--tgtdict",
- bw_dict + "/dict." + scorer1_tgt + ".txt",
- "--destdir",
- backwards_preprocessed_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(bw_preprocess_param)
- preprocess.main(input_args)
-
- preprocess_param = [
- "--source-lang",
- scorer1_src,
- "--target-lang",
- scorer1_tgt,
- "--trainpref",
- pre_gen + fw_rescore_file,
- "--srcdict",
- args.score_dict_dir + "/dict." + scorer1_src + ".txt",
- "--tgtdict",
- args.score_dict_dir + "/dict." + scorer1_tgt + ".txt",
- "--destdir",
- left_to_right_preprocessed_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_param)
- preprocess.main(input_args)
-
- if args.right_to_left1 or args.right_to_left2:
- preprocess_param = [
- "--source-lang",
- scorer1_src,
- "--target-lang",
- scorer1_tgt,
- "--trainpref",
- pre_gen + "/right_to_left_rescore_data",
- "--srcdict",
- args.score_dict_dir + "/dict." + scorer1_src + ".txt",
- "--tgtdict",
- args.score_dict_dir + "/dict." + scorer1_tgt + ".txt",
- "--destdir",
- right_to_left_preprocessed_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_param)
- preprocess.main(input_args)
-
- return gen_output
-
-
-def cli_main():
- parser = rerank_options.get_reranking_parser()
- args = options.parse_args_and_arch(parser)
- gen_and_reprocess_nbest(args)
-
-
-if __name__ == "__main__":
- cli_main()
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/tasks/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/tasks/__init__.py
deleted file mode 100644
index d878278475fb24cf6b97d66d784e657567f5aa80..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/tasks/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-for file in os.listdir(os.path.dirname(__file__)):
- if file.endswith(".py") and not file.startswith("_"):
- task_name = file[: file.find(".py")]
- importlib.import_module("examples.speech_text_joint_to_text.tasks." + task_name)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/pass_through.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/pass_through.py
deleted file mode 100644
index 2f93db328c1de9b268e8ee1c0c1cad558fd089aa..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/pass_through.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class PassThroughScheduleConfig(FairseqDataclass):
- pass
-
-
-@register_lr_scheduler("pass_through", dataclass=PassThroughScheduleConfig)
-class PassThroughScheduleSchedule(FairseqLRScheduler):
- """Delegate lr scheduling to the optimizer."""
-
- def __init__(self, cfg: PassThroughScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
- assert (
- hasattr(optimizer, "lr_scheduler") and optimizer.lr_scheduler is not None
- ), "Pass-through schedule can only be used with optimizers with their own schedulers"
-
- def state_dict(self):
- return self.optimizer.lr_scheduler.state_dict()
-
- def load_state_dict(self, state_dict):
- self.optimizer.lr_scheduler.load_state_dict(state_dict)
-
- def step_begin_epoch(self, epoch):
- """Update the learning rate at the beginning of the given epoch."""
- return self.optimizer.lr_scheduler.step_begin_epoch(epoch)
-
- def step_update(self, num_updates):
- """Update the learning rate after each update."""
- return self.optimizer.lr_scheduler.step_update(num_updates)
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/__init__.py b/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/__init__.py
deleted file mode 100644
index 32e3592f896d61b4127e09d0476381b9d55e32ff..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .deform_conv import (DeformConv, DeformConvPack, ModulatedDeformConv, ModulatedDeformConvPack, deform_conv,
- modulated_deform_conv)
-
-__all__ = [
- 'DeformConv', 'DeformConvPack', 'ModulatedDeformConv', 'ModulatedDeformConvPack', 'deform_conv',
- 'modulated_deform_conv'
-]
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/gen_mask_dataset_hydra.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/gen_mask_dataset_hydra.py
deleted file mode 100644
index 4f4fdea52315f24f83fbd802e51a1815097d0fcb..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/gen_mask_dataset_hydra.py
+++ /dev/null
@@ -1,124 +0,0 @@
-#!/usr/bin/env python3
-
-import glob
-import os
-import shutil
-import traceback
-import hydra
-from omegaconf import OmegaConf
-
-import PIL.Image as Image
-import numpy as np
-from joblib import Parallel, delayed
-
-from saicinpainting.evaluation.masks.mask import SegmentationMask, propose_random_square_crop
-from saicinpainting.evaluation.utils import load_yaml, SmallMode
-from saicinpainting.training.data.masks import MixedMaskGenerator
-
-
-class MakeManyMasksWrapper:
- def __init__(self, impl, variants_n=2):
- self.impl = impl
- self.variants_n = variants_n
-
- def get_masks(self, img):
- img = np.transpose(np.array(img), (2, 0, 1))
- return [self.impl(img)[0] for _ in range(self.variants_n)]
-
-
-def process_images(src_images, indir, outdir, config):
- if config.generator_kind == 'segmentation':
- mask_generator = SegmentationMask(**config.mask_generator_kwargs)
- elif config.generator_kind == 'random':
- mask_generator_kwargs = OmegaConf.to_container(config.mask_generator_kwargs, resolve=True)
- variants_n = mask_generator_kwargs.pop('variants_n', 2)
- mask_generator = MakeManyMasksWrapper(MixedMaskGenerator(**mask_generator_kwargs),
- variants_n=variants_n)
- else:
- raise ValueError(f'Unexpected generator kind: {config.generator_kind}')
-
- max_tamper_area = config.get('max_tamper_area', 1)
-
- for infile in src_images:
- try:
- file_relpath = infile[len(indir):]
- img_outpath = os.path.join(outdir, file_relpath)
- os.makedirs(os.path.dirname(img_outpath), exist_ok=True)
-
- image = Image.open(infile).convert('RGB')
-
- # scale input image to output resolution and filter smaller images
- if min(image.size) < config.cropping.out_min_size:
- handle_small_mode = SmallMode(config.cropping.handle_small_mode)
- if handle_small_mode == SmallMode.DROP:
- continue
- elif handle_small_mode == SmallMode.UPSCALE:
- factor = config.cropping.out_min_size / min(image.size)
- out_size = (np.array(image.size) * factor).round().astype('uint32')
- image = image.resize(out_size, resample=Image.BICUBIC)
- else:
- factor = config.cropping.out_min_size / min(image.size)
- out_size = (np.array(image.size) * factor).round().astype('uint32')
- image = image.resize(out_size, resample=Image.BICUBIC)
-
- # generate and select masks
- src_masks = mask_generator.get_masks(image)
-
- filtered_image_mask_pairs = []
- for cur_mask in src_masks:
- if config.cropping.out_square_crop:
- (crop_left,
- crop_top,
- crop_right,
- crop_bottom) = propose_random_square_crop(cur_mask,
- min_overlap=config.cropping.crop_min_overlap)
- cur_mask = cur_mask[crop_top:crop_bottom, crop_left:crop_right]
- cur_image = image.copy().crop((crop_left, crop_top, crop_right, crop_bottom))
- else:
- cur_image = image
-
- if len(np.unique(cur_mask)) == 0 or cur_mask.mean() > max_tamper_area:
- continue
-
- filtered_image_mask_pairs.append((cur_image, cur_mask))
-
- mask_indices = np.random.choice(len(filtered_image_mask_pairs),
- size=min(len(filtered_image_mask_pairs), config.max_masks_per_image),
- replace=False)
-
- # crop masks; save masks together with input image
- mask_basename = os.path.join(outdir, os.path.splitext(file_relpath)[0])
- for i, idx in enumerate(mask_indices):
- cur_image, cur_mask = filtered_image_mask_pairs[idx]
- cur_basename = mask_basename + f'_crop{i:03d}'
- Image.fromarray(np.clip(cur_mask * 255, 0, 255).astype('uint8'),
- mode='L').save(cur_basename + f'_mask{i:03d}.png')
- cur_image.save(cur_basename + '.png')
- except KeyboardInterrupt:
- return
- except Exception as ex:
- print(f'Could not make masks for {infile} due to {ex}:\n{traceback.format_exc()}')
-
-
-@hydra.main(config_path='../configs/data_gen/whydra', config_name='random_medium_256.yaml')
-def main(config: OmegaConf):
- if not config.indir.endswith('/'):
- config.indir += '/'
-
- os.makedirs(config.outdir, exist_ok=True)
-
- in_files = list(glob.glob(os.path.join(config.indir, '**', f'*.{config.location.extension}'),
- recursive=True))
- if config.n_jobs == 0:
- process_images(in_files, config.indir, config.outdir, config)
- else:
- in_files_n = len(in_files)
- chunk_size = in_files_n // config.n_jobs + (1 if in_files_n % config.n_jobs > 0 else 0)
- Parallel(n_jobs=config.n_jobs)(
- delayed(process_images)(in_files[start:start+chunk_size], config.indir, config.outdir, config)
- for start in range(0, len(in_files), chunk_size)
- )
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/split_tar.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/split_tar.py
deleted file mode 100644
index ac1692addbb4191200c8c871fe356bb80d534c44..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/split_tar.py
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/usr/bin/env python3
-
-
-import tqdm
-import webdataset as wds
-
-
-def main(args):
- input_dataset = wds.Dataset(args.infile)
- output_dataset = wds.ShardWriter(args.outpattern)
- for rec in tqdm.tqdm(input_dataset):
- output_dataset.write(rec)
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('infile', type=str)
- aparser.add_argument('outpattern', type=str)
-
- main(aparser.parse_args())
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/models/attention_flax.py b/spaces/Jackflack09/diffuse-custom/diffusers/models/attention_flax.py
deleted file mode 100644
index 71106e05452cc7525cfbb81f2ac52926887313ec..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/models/attention_flax.py
+++ /dev/null
@@ -1,298 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import flax.linen as nn
-import jax.numpy as jnp
-
-
-class FlaxAttentionBlock(nn.Module):
- r"""
- A Flax multi-head attention module as described in: https://arxiv.org/abs/1706.03762
-
- Parameters:
- query_dim (:obj:`int`):
- Input hidden states dimension
- heads (:obj:`int`, *optional*, defaults to 8):
- Number of heads
- dim_head (:obj:`int`, *optional*, defaults to 64):
- Hidden states dimension inside each head
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
-
- """
- query_dim: int
- heads: int = 8
- dim_head: int = 64
- dropout: float = 0.0
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- inner_dim = self.dim_head * self.heads
- self.scale = self.dim_head**-0.5
-
- # Weights were exported with old names {to_q, to_k, to_v, to_out}
- self.query = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_q")
- self.key = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_k")
- self.value = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_v")
-
- self.proj_attn = nn.Dense(self.query_dim, dtype=self.dtype, name="to_out_0")
-
- def reshape_heads_to_batch_dim(self, tensor):
- batch_size, seq_len, dim = tensor.shape
- head_size = self.heads
- tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size)
- tensor = jnp.transpose(tensor, (0, 2, 1, 3))
- tensor = tensor.reshape(batch_size * head_size, seq_len, dim // head_size)
- return tensor
-
- def reshape_batch_dim_to_heads(self, tensor):
- batch_size, seq_len, dim = tensor.shape
- head_size = self.heads
- tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim)
- tensor = jnp.transpose(tensor, (0, 2, 1, 3))
- tensor = tensor.reshape(batch_size // head_size, seq_len, dim * head_size)
- return tensor
-
- def __call__(self, hidden_states, context=None, deterministic=True):
- context = hidden_states if context is None else context
-
- query_proj = self.query(hidden_states)
- key_proj = self.key(context)
- value_proj = self.value(context)
-
- query_states = self.reshape_heads_to_batch_dim(query_proj)
- key_states = self.reshape_heads_to_batch_dim(key_proj)
- value_states = self.reshape_heads_to_batch_dim(value_proj)
-
- # compute attentions
- attention_scores = jnp.einsum("b i d, b j d->b i j", query_states, key_states)
- attention_scores = attention_scores * self.scale
- attention_probs = nn.softmax(attention_scores, axis=2)
-
- # attend to values
- hidden_states = jnp.einsum("b i j, b j d -> b i d", attention_probs, value_states)
- hidden_states = self.reshape_batch_dim_to_heads(hidden_states)
- hidden_states = self.proj_attn(hidden_states)
- return hidden_states
-
-
-class FlaxBasicTransformerBlock(nn.Module):
- r"""
- A Flax transformer block layer with `GLU` (Gated Linear Unit) activation function as described in:
- https://arxiv.org/abs/1706.03762
-
-
- Parameters:
- dim (:obj:`int`):
- Inner hidden states dimension
- n_heads (:obj:`int`):
- Number of heads
- d_head (:obj:`int`):
- Hidden states dimension inside each head
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- only_cross_attention (`bool`, defaults to `False`):
- Whether to only apply cross attention.
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- dim: int
- n_heads: int
- d_head: int
- dropout: float = 0.0
- only_cross_attention: bool = False
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- # self attention (or cross_attention if only_cross_attention is True)
- self.attn1 = FlaxAttentionBlock(self.dim, self.n_heads, self.d_head, self.dropout, dtype=self.dtype)
- # cross attention
- self.attn2 = FlaxAttentionBlock(self.dim, self.n_heads, self.d_head, self.dropout, dtype=self.dtype)
- self.ff = FlaxGluFeedForward(dim=self.dim, dropout=self.dropout, dtype=self.dtype)
- self.norm1 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype)
- self.norm2 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype)
- self.norm3 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype)
-
- def __call__(self, hidden_states, context, deterministic=True):
- # self attention
- residual = hidden_states
- if self.only_cross_attention:
- hidden_states = self.attn1(self.norm1(hidden_states), context, deterministic=deterministic)
- else:
- hidden_states = self.attn1(self.norm1(hidden_states), deterministic=deterministic)
- hidden_states = hidden_states + residual
-
- # cross attention
- residual = hidden_states
- hidden_states = self.attn2(self.norm2(hidden_states), context, deterministic=deterministic)
- hidden_states = hidden_states + residual
-
- # feed forward
- residual = hidden_states
- hidden_states = self.ff(self.norm3(hidden_states), deterministic=deterministic)
- hidden_states = hidden_states + residual
-
- return hidden_states
-
-
-class FlaxTransformer2DModel(nn.Module):
- r"""
- A Spatial Transformer layer with Gated Linear Unit (GLU) activation function as described in:
- https://arxiv.org/pdf/1506.02025.pdf
-
-
- Parameters:
- in_channels (:obj:`int`):
- Input number of channels
- n_heads (:obj:`int`):
- Number of heads
- d_head (:obj:`int`):
- Hidden states dimension inside each head
- depth (:obj:`int`, *optional*, defaults to 1):
- Number of transformers block
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
-        use_linear_projection (`bool`, defaults to `False`):
-            Whether to use a Dense (linear) layer instead of a 1x1 convolution for the input and output projections
-        only_cross_attention (`bool`, defaults to `False`):
-            Whether the transformer blocks should only apply cross-attention (no self-attention)
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- n_heads: int
- d_head: int
- depth: int = 1
- dropout: float = 0.0
- use_linear_projection: bool = False
- only_cross_attention: bool = False
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.norm = nn.GroupNorm(num_groups=32, epsilon=1e-5)
-
- inner_dim = self.n_heads * self.d_head
- if self.use_linear_projection:
- self.proj_in = nn.Dense(inner_dim, dtype=self.dtype)
- else:
- self.proj_in = nn.Conv(
- inner_dim,
- kernel_size=(1, 1),
- strides=(1, 1),
- padding="VALID",
- dtype=self.dtype,
- )
-
- self.transformer_blocks = [
- FlaxBasicTransformerBlock(
- inner_dim,
- self.n_heads,
- self.d_head,
- dropout=self.dropout,
- only_cross_attention=self.only_cross_attention,
- dtype=self.dtype,
- )
- for _ in range(self.depth)
- ]
-
- if self.use_linear_projection:
- self.proj_out = nn.Dense(inner_dim, dtype=self.dtype)
- else:
- self.proj_out = nn.Conv(
- inner_dim,
- kernel_size=(1, 1),
- strides=(1, 1),
- padding="VALID",
- dtype=self.dtype,
- )
-
- def __call__(self, hidden_states, context, deterministic=True):
- batch, height, width, channels = hidden_states.shape
- residual = hidden_states
- hidden_states = self.norm(hidden_states)
- if self.use_linear_projection:
- hidden_states = hidden_states.reshape(batch, height * width, channels)
- hidden_states = self.proj_in(hidden_states)
- else:
- hidden_states = self.proj_in(hidden_states)
- hidden_states = hidden_states.reshape(batch, height * width, channels)
-
- for transformer_block in self.transformer_blocks:
- hidden_states = transformer_block(hidden_states, context, deterministic=deterministic)
-
- if self.use_linear_projection:
- hidden_states = self.proj_out(hidden_states)
- hidden_states = hidden_states.reshape(batch, height, width, channels)
- else:
- hidden_states = hidden_states.reshape(batch, height, width, channels)
- hidden_states = self.proj_out(hidden_states)
-
- hidden_states = hidden_states + residual
- return hidden_states
-
-
-class FlaxGluFeedForward(nn.Module):
- r"""
- Flax module that encapsulates two Linear layers separated by a gated linear unit activation from:
- https://arxiv.org/abs/2002.05202
-
- Parameters:
- dim (:obj:`int`):
- Inner hidden states dimension
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- dim: int
- dropout: float = 0.0
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- # The second linear layer needs to be called
- # net_2 for now to match the index of the Sequential layer
- self.net_0 = FlaxGEGLU(self.dim, self.dropout, self.dtype)
- self.net_2 = nn.Dense(self.dim, dtype=self.dtype)
-
- def __call__(self, hidden_states, deterministic=True):
- hidden_states = self.net_0(hidden_states)
- hidden_states = self.net_2(hidden_states)
- return hidden_states
-
-
-class FlaxGEGLU(nn.Module):
- r"""
- Flax implementation of a Linear layer followed by the variant of the gated linear unit activation function from
- https://arxiv.org/abs/2002.05202.
-
- Parameters:
- dim (:obj:`int`):
- Input hidden states dimension
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- dim: int
- dropout: float = 0.0
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- inner_dim = self.dim * 4
- self.proj = nn.Dense(inner_dim * 2, dtype=self.dtype)
-
- def __call__(self, hidden_states, deterministic=True):
- hidden_states = self.proj(hidden_states)
- hidden_linear, hidden_gelu = jnp.split(hidden_states, 2, axis=2)
- return hidden_linear * nn.gelu(hidden_gelu)
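
Note: FlaxGEGLU above projects to twice the inner width, splits the result in half, and gates one half with the GELU of the other. A minimal stand-alone sketch of the same computation in PyTorch (dimensions chosen arbitrarily; nothing from the deleted module is assumed):

import torch
import torch.nn.functional as F

dim = 8
proj = torch.nn.Linear(dim, dim * 4 * 2)   # inner_dim = dim * 4, doubled for the gate

hidden = torch.randn(2, 16, dim)           # (batch, seq, dim)
projected = proj(hidden)
linear, gate = projected.chunk(2, dim=-1)  # split into value and gate halves
out = linear * F.gelu(gate)                # GEGLU: value * GELU(gate)
print(out.shape)                           # torch.Size([2, 16, 32])
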
diff --git a/spaces/Jamkonams/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md b/spaces/Jamkonams/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
deleted file mode 100644
index a4f28a3d27d66d79cb95f2b8b847832172bb5f11..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-### Background
-
-
-### Changes
-
-
-### Documentation
-
-
-### Test Plan
-
-
-### PR Quality Checklist
-- [ ] My pull request is atomic and focuses on a single change.
-- [ ] I have thoroughly tested my changes with multiple different prompts.
-- [ ] I have considered potential risks and mitigations for my changes.
-- [ ] I have documented my changes clearly and comprehensively.
-- [ ] I have not snuck in any "extra" small tweaks or changes
-
-
-
-
diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/metrics.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/metrics.py
deleted file mode 100644
index ca5ea20bcbb9c0f571b18c6d6e4d44e57acc7d14..0000000000000000000000000000000000000000
--- a/spaces/JohnnyPittt/audio-styling/deepafx_st/metrics.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import torch
-import auraloss
-import resampy
-import torchaudio
-from pesq import pesq
-import pyloudnorm as pyln
-
-
-def crest_factor(x):
- """Compute the crest factor of waveform."""
-
- peak, _ = x.abs().max(dim=-1)
- rms = torch.sqrt((x ** 2).mean(dim=-1))
-
- return 20 * torch.log(peak / rms.clamp(1e-8))
-
-
-def rms_energy(x):
-
- rms = torch.sqrt((x ** 2).mean(dim=-1))
-
- return 20 * torch.log(rms.clamp(1e-8))
-
-
-def spectral_centroid(x):
-    """Compute the spectral centroid of waveform.
-
- See: https://gist.github.com/endolith/359724
-
- """
-
- spectrum = torch.fft.rfft(x).abs()
- normalized_spectrum = spectrum / spectrum.sum()
- normalized_frequencies = torch.linspace(0, 1, spectrum.shape[-1])
- spectral_centroid = torch.sum(normalized_frequencies * normalized_spectrum)
-
- return spectral_centroid
-
-
-def loudness(x, sample_rate):
- """Compute the loudness in dB LUFS of waveform."""
- meter = pyln.Meter(sample_rate)
-
- # add stereo dim if needed
- if x.shape[0] < 2:
- x = x.repeat(2, 1)
-
- return torch.tensor(meter.integrated_loudness(x.permute(1, 0).numpy()))
-
-
-class MelSpectralDistance(torch.nn.Module):
- def __init__(self, sample_rate, length=65536):
- super().__init__()
- self.error = auraloss.freq.MelSTFTLoss(
- sample_rate,
- fft_size=length,
- hop_size=length,
- win_length=length,
- w_sc=0,
- w_log_mag=1,
- w_lin_mag=1,
- n_mels=128,
- scale_invariance=False,
- )
-
- # I think scale invariance may not work well,
- # since aspects of the phase may be considered?
-
- def forward(self, input, target):
- return self.error(input, target)
-
-
-class PESQ(torch.nn.Module):
- def __init__(self, sample_rate):
- super().__init__()
- self.sample_rate = sample_rate
-
- def forward(self, input, target):
- if self.sample_rate != 16000:
- target = resampy.resample(
- target.view(-1).numpy(),
- self.sample_rate,
- 16000,
- )
- input = resampy.resample(
- input.view(-1).numpy(),
- self.sample_rate,
- 16000,
- )
-
- return pesq(
- 16000,
- target,
- input,
- "wb",
- )
-
-
-class CrestFactorError(torch.nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input, target):
- return torch.nn.functional.l1_loss(
- crest_factor(input),
- crest_factor(target),
- ).item()
-
-
-class RMSEnergyError(torch.nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input, target):
- return torch.nn.functional.l1_loss(
- rms_energy(input),
- rms_energy(target),
- ).item()
-
-
-class SpectralCentroidError(torch.nn.Module):
- def __init__(self, sample_rate, n_fft=2048, hop_length=512):
- super().__init__()
-
- self.spectral_centroid = torchaudio.transforms.SpectralCentroid(
- sample_rate,
- n_fft=n_fft,
- hop_length=hop_length,
- )
-
- def forward(self, input, target):
- return torch.nn.functional.l1_loss(
- self.spectral_centroid(input + 1e-16).mean(),
- self.spectral_centroid(target + 1e-16).mean(),
- ).item()
-
-
-class LoudnessError(torch.nn.Module):
- def __init__(self, sample_rate: int, peak_normalize: bool = False):
- super().__init__()
- self.sample_rate = sample_rate
- self.peak_normalize = peak_normalize
-
- def forward(self, input, target):
-
- if self.peak_normalize:
- # peak normalize
- x = input / input.abs().max()
- y = target / target.abs().max()
- else:
- x = input
- y = target
-
- return torch.nn.functional.l1_loss(
- loudness(x.view(1, -1), self.sample_rate),
- loudness(y.view(1, -1), self.sample_rate),
- ).item()
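
A side note on crest_factor() and rms_energy() above: both convert ratios with 20 * torch.log(...), which is the natural logarithm, whereas decibels are conventionally defined with 20 * log10(...). For a full-scale sine wave (peak 1.0, RMS about 0.707) the two conventions give noticeably different numbers; the small check below only illustrates that gap and does not change the deleted code:

import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
x = np.sin(2 * np.pi * 440 * t)

peak = np.abs(x).max()
rms = np.sqrt((x ** 2).mean())

print(20 * np.log10(peak / rms))  # ~3.01 dB, the conventional crest factor of a sine
print(20 * np.log(peak / rms))    # ~6.93, the value the module above would report
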
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/losses/contperceptual.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/losses/contperceptual.py
deleted file mode 100644
index 672c1e32a1389def02461c0781339681060c540e..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/losses/contperceptual.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import torch
-import torch.nn as nn
-
-from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no?
-
-
-class LPIPSWithDiscriminator(nn.Module):
- def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0,
- disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
- perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
- disc_loss="hinge"):
-
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- self.kl_weight = kl_weight
- self.pixel_weight = pixelloss_weight
- self.perceptual_loss = LPIPS().eval()
- self.perceptual_weight = perceptual_weight
- # output log variance
- self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init)
-
- self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
-
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(self, inputs, reconstructions, posteriors, optimizer_idx,
- global_step, last_layer=None, cond=None, split="train",
- weights=None):
- rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
- rec_loss = rec_loss + self.perceptual_weight * p_loss
-
- nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar
- weighted_nll_loss = nll_loss
- if weights is not None:
- weighted_nll_loss = weights*nll_loss
- weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0]
- nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- kl_loss = posteriors.kl()
- kl_loss = torch.sum(kl_loss) / kl_loss.shape[0]
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
- g_loss = -torch.mean(logits_fake)
-
- if self.disc_factor > 0.0:
- try:
- d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
- else:
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss
-
- log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(),
- "{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean()
- }
- return d_loss, log
-
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/tests/test_watermark.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/tests/test_watermark.py
deleted file mode 100644
index f93f8a6e70763c0e284157bc8225827520b2f5ef..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/tests/test_watermark.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import cv2
-import fire
-from imwatermark import WatermarkDecoder
-
-
-def testit(img_path):
- bgr = cv2.imread(img_path)
- decoder = WatermarkDecoder('bytes', 136)
- watermark = decoder.decode(bgr, 'dwtDct')
- try:
- dec = watermark.decode('utf-8')
- except:
- dec = "null"
- print(dec)
-
-
-if __name__ == "__main__":
- fire.Fire(testit)
\ No newline at end of file
diff --git a/spaces/KenjieDec/GPEN/retinaface/layers/modules/__init__.py b/spaces/KenjieDec/GPEN/retinaface/layers/modules/__init__.py
deleted file mode 100644
index cf24bddbf283f233d0b93fc074a2bac2f5c044a9..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/retinaface/layers/modules/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .multibox_loss import MultiBoxLoss
-
-__all__ = ['MultiBoxLoss']
diff --git a/spaces/Keshav4/resume-data-extraction/app.py b/spaces/Keshav4/resume-data-extraction/app.py
deleted file mode 100644
index 54edc1812c72de71135595459a6e76cf332edea9..0000000000000000000000000000000000000000
--- a/spaces/Keshav4/resume-data-extraction/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from pydoc import describe
-import gradio as gr
-from main import Main
-
-
-main = Main()
-
-def parse_cv(cv):
- return main.parse_cv(cv.name)
-
-
-description = """A demo for a CV parser."""
-article = "Resume Parser by Sybghat"
-file_input = gr.inputs.File(file_count="single", type="file", label="Upload a CV: .PDF Or .TXT", optional=False)
-iface = gr.Interface(fn=parse_cv, inputs=file_input, outputs="json", allow_flagging="never",
- allow_screenshot=False, title="CV Parser", theme="seafoam", description=description, article=article)
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/synthesizer_dataset.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/synthesizer_dataset.py
deleted file mode 100644
index 36fcaf4dd6e52444358277b9da98611862fa07c0..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/synthesizer_dataset.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import torch
-from torch.utils.data import Dataset
-import numpy as np
-from pathlib import Path
-from synthesizer.utils.text import text_to_sequence
-
-
-class SynthesizerDataset(Dataset):
- def __init__(self, metadata_fpath: Path, mel_dir: Path, embed_dir: Path, hparams):
- print("Using inputs from:\n\t%s\n\t%s\n\t%s" % (metadata_fpath, mel_dir, embed_dir))
-
- with metadata_fpath.open("r") as metadata_file:
- metadata = [line.split("|") for line in metadata_file]
-
- mel_fnames = [x[1] for x in metadata if int(x[4])]
- mel_fpaths = [mel_dir.joinpath(fname) for fname in mel_fnames]
- embed_fnames = [x[2] for x in metadata if int(x[4])]
- embed_fpaths = [embed_dir.joinpath(fname) for fname in embed_fnames]
- self.samples_fpaths = list(zip(mel_fpaths, embed_fpaths))
- self.samples_texts = [x[5].strip() for x in metadata if int(x[4])]
- self.metadata = metadata
- self.hparams = hparams
-
- print("Found %d samples" % len(self.samples_fpaths))
-
- def __getitem__(self, index):
- # Sometimes index may be a list of 2 (not sure why this happens)
- # If that is the case, return a single item corresponding to first element in index
-        if isinstance(index, list):
- index = index[0]
-
- mel_path, embed_path = self.samples_fpaths[index]
- mel = np.load(mel_path).T.astype(np.float32)
-
- # Load the embed
- embed = np.load(embed_path)
-
- # Get the text and clean it
- text = text_to_sequence(self.samples_texts[index], self.hparams.tts_cleaner_names)
-
- # Convert the list returned by text_to_sequence to a numpy array
- text = np.asarray(text).astype(np.int32)
-
- return text, mel.astype(np.float32), embed.astype(np.float32), index
-
- def __len__(self):
- return len(self.samples_fpaths)
-
-
-def collate_synthesizer(batch, r, hparams):
- # Text
- x_lens = [len(x[0]) for x in batch]
- max_x_len = max(x_lens)
-
- chars = [pad1d(x[0], max_x_len) for x in batch]
- chars = np.stack(chars)
-
- # Mel spectrogram
- spec_lens = [x[1].shape[-1] for x in batch]
- max_spec_len = max(spec_lens) + 1
- if max_spec_len % r != 0:
- max_spec_len += r - max_spec_len % r
-
- # WaveRNN mel spectrograms are normalized to [0, 1] so zero padding adds silence
- # By default, SV2TTS uses symmetric mels, where -1*max_abs_value is silence.
- if hparams.symmetric_mels:
- mel_pad_value = -1 * hparams.max_abs_value
- else:
- mel_pad_value = 0
-
- mel = [pad2d(x[1], max_spec_len, pad_value=mel_pad_value) for x in batch]
- mel = np.stack(mel)
-
- # Speaker embedding (SV2TTS)
- embeds = np.array([x[2] for x in batch])
-
- # Index (for vocoder preprocessing)
- indices = [x[3] for x in batch]
-
-
- # Convert all to tensor
- chars = torch.tensor(chars).long()
- mel = torch.tensor(mel)
- embeds = torch.tensor(embeds)
-
- return chars, mel, embeds, indices
-
-def pad1d(x, max_len, pad_value=0):
- return np.pad(x, (0, max_len - len(x)), mode="constant", constant_values=pad_value)
-
-def pad2d(x, max_len, pad_value=0):
- return np.pad(x, ((0, 0), (0, max_len - x.shape[-1])), mode="constant", constant_values=pad_value)
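
Note: collate_synthesizer() above pads every text sequence to the longest one in the batch, pads every mel spectrogram to a common length rounded up to a multiple of the reduction factor r, and the two pad helpers do the actual work. A small stand-alone example of the same padding, using only np.pad (no synthesizer code assumed):

import numpy as np

def pad1d(x, max_len, pad_value=0):
    # Right-pad a 1-D sequence to max_len with a constant value.
    return np.pad(x, (0, max_len - len(x)), mode="constant", constant_values=pad_value)

batch = [np.array([1, 2, 3]), np.array([4, 5])]
max_len = max(len(x) for x in batch)
padded = np.stack([pad1d(x, max_len) for x in batch])
print(padded)  # [[1 2 3], [4 5 0]]

# Rounding a spectrogram length up to a multiple of r, as in collate_synthesizer:
r, max_spec_len = 2, 7
if max_spec_len % r != 0:
    max_spec_len += r - max_spec_len % r
print(max_spec_len)  # 8
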
diff --git a/spaces/KyanChen/RSPrompter/mmdet/structures/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/structures/__init__.py
deleted file mode 100644
index b72a5b8f6586200b0b87c77d834ac9b7733f0f3f..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/structures/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .det_data_sample import DetDataSample, OptSampleList, SampleList
-
-__all__ = ['DetDataSample', 'SampleList', 'OptSampleList']
diff --git a/spaces/Lamai/LAMAIGPT/autogpt/memory/weaviate.py b/spaces/Lamai/LAMAIGPT/autogpt/memory/weaviate.py
deleted file mode 100644
index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/autogpt/memory/weaviate.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import uuid
-
-import weaviate
-from weaviate import Client
-from weaviate.embedded import EmbeddedOptions
-from weaviate.util import generate_uuid5
-
-from autogpt.config import Config
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
-
-
-def default_schema(weaviate_index):
- return {
- "class": weaviate_index,
- "properties": [
- {
- "name": "raw_text",
- "dataType": ["text"],
- "description": "original text for the embedding",
- }
- ],
- }
-
-
-class WeaviateMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- auth_credentials = self._build_auth_credentials(cfg)
-
- url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}"
-
- if cfg.use_weaviate_embedded:
- self.client = Client(
- embedded_options=EmbeddedOptions(
- hostname=cfg.weaviate_host,
- port=int(cfg.weaviate_port),
- persistence_data_path=cfg.weaviate_embedded_path,
- )
- )
-
- print(
- f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}"
- )
- else:
- self.client = Client(url, auth_client_secret=auth_credentials)
-
- self.index = WeaviateMemory.format_classname(cfg.memory_index)
- self._create_schema()
-
- @staticmethod
- def format_classname(index):
- # weaviate uses capitalised index names
- # The python client uses the following code to format
- # index names before the corresponding class is created
- if len(index) == 1:
- return index.capitalize()
- return index[0].capitalize() + index[1:]
-
- def _create_schema(self):
- schema = default_schema(self.index)
- if not self.client.schema.contains(schema):
- self.client.schema.create_class(schema)
-
- def _build_auth_credentials(self, cfg):
- if cfg.weaviate_username and cfg.weaviate_password:
- return weaviate.AuthClientPassword(
- cfg.weaviate_username, cfg.weaviate_password
- )
- if cfg.weaviate_api_key:
- return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key)
- else:
- return None
-
- def add(self, data):
- vector = get_ada_embedding(data)
-
- doc_uuid = generate_uuid5(data, self.index)
- data_object = {"raw_text": data}
-
- with self.client.batch as batch:
- batch.add_data_object(
- uuid=doc_uuid,
- data_object=data_object,
- class_name=self.index,
- vector=vector,
- )
-
- return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}"
-
- def get(self, data):
- return self.get_relevant(data, 1)
-
- def clear(self):
- self.client.schema.delete_all()
-
- # weaviate does not yet have a neat way to just remove the items in an index
- # without removing the entire schema, therefore we need to re-create it
- # after a call to delete_all
- self._create_schema()
-
- return "Obliterated"
-
- def get_relevant(self, data, num_relevant=5):
- query_embedding = get_ada_embedding(data)
- try:
- results = (
- self.client.query.get(self.index, ["raw_text"])
- .with_near_vector({"vector": query_embedding, "certainty": 0.7})
- .with_limit(num_relevant)
- .do()
- )
-
- if len(results["data"]["Get"][self.index]) > 0:
- return [
- str(item["raw_text"]) for item in results["data"]["Get"][self.index]
- ]
- else:
- return []
-
- except Exception as err:
- print(f"Unexpected error {err=}, {type(err)=}")
- return []
-
- def get_stats(self):
- result = self.client.query.aggregate(self.index).with_meta_count().do()
- class_data = result["data"]["Aggregate"][self.index]
-
- return class_data[0]["meta"] if class_data else {}
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/envelope.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/envelope.py
deleted file mode 100644
index fd0cbc083cdc8fdf5cb794dd3f8e7be69ca5b252..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/envelope.py
+++ /dev/null
@@ -1,126 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-import sys
-
-from . import Indicator, MovingAverage
-
-
-class EnvelopeMixIn(object):
- '''
- MixIn class to create a subclass with another indicator. The main line of
- that indicator will be surrounded by an upper and lower band separated a
- given "perc"entage from the input main line
-
- The usage is:
-
- - Class XXXEnvelope(XXX, EnvelopeMixIn)
-
- Formula:
- - 'line' (inherited from XXX))
- - top = 'line' * (1 + perc)
- - bot = 'line' * (1 - perc)
-
- See also:
- - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:moving_average_envelopes
- '''
- lines = ('top', 'bot',)
- params = (('perc', 2.5),)
- plotlines = dict(top=dict(_samecolor=True), bot=dict(_samecolor=True),)
-
- def __init__(self):
- # Mix-in & directly from object -> does not necessarily need super
- # super(EnvelopeMixIn, self).__init__()
- perc = self.p.perc / 100.0
-
- self.lines.top = self.lines[0] * (1.0 + perc)
- self.lines.bot = self.lines[0] * (1.0 - perc)
-
- super(EnvelopeMixIn, self).__init__()
-
-
-class _EnvelopeBase(Indicator):
- lines = ('src',)
-
- # plot the envelope lines along the passed source
- plotinfo = dict(subplot=False)
-
- # Do not replot the data line
- plotlines = dict(src=dict(_plotskip=True))
-
- def __init__(self):
- self.lines.src = self.data
- super(_EnvelopeBase, self).__init__()
-
-
-class Envelope(_EnvelopeBase, EnvelopeMixIn):
- '''
- It creates envelopes bands separated from the source data by a given
- percentage
-
- Formula:
- - src = datasource
- - top = src * (1 + perc)
- - bot = src * (1 - perc)
-
- See also:
- - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:moving_average_envelopes
- '''
-
-
-# Automatic creation of Moving Average Envelope classes
-
-for movav in MovingAverage._movavs[1:]:
- _newclsdoc = '''
- %s and envelope bands separated "perc" from it
-
- Formula:
- - %s (from %s)
- - top = %s * (1 + perc)
- - bot = %s * (1 - perc)
-
- See also:
- - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:moving_average_envelopes
- '''
- # Skip aliases - they will be created automatically
- if getattr(movav, 'aliased', ''):
- continue
-
- movname = movav.__name__
- linename = movav.lines._getlinealias(0)
- newclsname = movname + 'Envelope'
-
- newaliases = []
- for alias in getattr(movav, 'alias', []):
- for suffix in ['Envelope']:
- newaliases.append(alias + suffix)
-
- newclsdoc = _newclsdoc % (movname, linename, movname, linename, linename)
-
- newclsdct = {'__doc__': newclsdoc,
- '__module__': EnvelopeMixIn.__module__,
- '_notregister': True,
- 'alias': newaliases}
- newcls = type(str(newclsname), (movav, EnvelopeMixIn), newclsdct)
- module = sys.modules[EnvelopeMixIn.__module__]
- setattr(module, newclsname, newcls)
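
Note: EnvelopeMixIn above derives its bands from whatever line it is mixed into: with perc expressed in percent, top = line * (1 + perc / 100) and bot = line * (1 - perc / 100). Outside backtrader the same arithmetic is just the following NumPy sketch, where a plain rolling mean stands in for the moving-average line:

import numpy as np

closes = np.array([10.0, 10.5, 11.0, 10.8, 11.2, 11.5])
period, perc = 3, 2.5

sma = np.convolve(closes, np.ones(period) / period, mode="valid")  # simple moving average
top = sma * (1 + perc / 100.0)
bot = sma * (1 - perc / 100.0)
print(np.round(np.vstack([top, sma, bot]), 3))
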
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/psenet_r50_fpnf.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/psenet_r50_fpnf.py
deleted file mode 100644
index a3aff0d1325d3b9e25b5ed095cea28d313f611a0..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/psenet_r50_fpnf.py
+++ /dev/null
@@ -1,51 +0,0 @@
-model_poly = dict(
- type='PSENet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPNF',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- fusion_type='concat'),
- bbox_head=dict(
- type='PSEHead',
- in_channels=[256],
- out_channels=7,
- loss=dict(type='PSELoss'),
- postprocessor=dict(type='PSEPostprocessor', text_repr_type='poly')),
- train_cfg=None,
- test_cfg=None)
-
-model_quad = dict(
- type='PSENet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPNF',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- fusion_type='concat'),
- bbox_head=dict(
- type='PSEHead',
- in_channels=[256],
- out_channels=7,
- loss=dict(type='PSELoss'),
- postprocessor=dict(type='PSEPostprocessor', text_repr_type='quad')),
- train_cfg=None,
- test_cfg=None)
diff --git a/spaces/ML701G7/taim-gan/src/models/modules/residual.py b/spaces/ML701G7/taim-gan/src/models/modules/residual.py
deleted file mode 100644
index 8aa9340e07c26f981b7f71376a544054875680b3..0000000000000000000000000000000000000000
--- a/spaces/ML701G7/taim-gan/src/models/modules/residual.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""Residual Block Adopted from ManiGAN"""
-
-from typing import Any
-
-import torch
-from torch import nn
-
-
-class ResidualBlock(nn.Module):
- """Residual Block"""
-
- def __init__(self, channel_num: int) -> None:
- """
- :param channel_num: Number of channels in the input
- """
- super().__init__()
- self.block = nn.Sequential(
- nn.Conv2d(
- channel_num,
- channel_num * 2,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False,
- ),
- nn.InstanceNorm2d(channel_num * 2),
- nn.GLU(dim=1),
- nn.Conv2d(
- channel_num, channel_num, kernel_size=3, stride=1, padding=1, bias=False
- ),
- nn.InstanceNorm2d(channel_num),
- )
-
- def forward(self, input_tensor: torch.Tensor) -> Any:
- """
- :param input_tensor: Input tensor
- :return: Output tensor
- """
- residual = input_tensor
- out = self.block(input_tensor)
- out += residual
- return out
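
Note: the ResidualBlock above widens to 2x channels with the first convolution precisely so that nn.GLU(dim=1) can gate and halve them back, which lets the residual addition line up. A short shape check along the same lines (PyTorch only, channel count chosen arbitrarily):

import torch
from torch import nn

channels = 16
block = nn.Sequential(
    nn.Conv2d(channels, channels * 2, kernel_size=3, stride=1, padding=1, bias=False),
    nn.InstanceNorm2d(channels * 2),
    nn.GLU(dim=1),  # splits the doubled channels and gates, returning `channels` again
    nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1, bias=False),
    nn.InstanceNorm2d(channels),
)

x = torch.randn(1, channels, 32, 32)
out = block(x) + x   # residual connection keeps the shape unchanged
print(out.shape)     # torch.Size([1, 16, 32, 32])
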
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/linter.sh b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/linter.sh
deleted file mode 100644
index df2e17436d30e89ff1728109301599f425f1ad6b..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/linter.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-{
- black --version | grep -E "23\." > /dev/null
-} || {
- echo "Linter requires 'black==23.*' !"
- exit 1
-}
-
-ISORT_VERSION=$(isort --version-number)
-if [[ "$ISORT_VERSION" != 5.12* ]]; then
- echo "Linter requires isort==5.12.0 !"
- exit 1
-fi
-
-echo "Running isort ..."
-isort . --atomic
-
-echo "Running black ..."
-black -l 100 .
-
-echo "Running flake8 ..."
-if [ -x "$(command -v flake8)" ]; then
- flake8 .
-else
- python3 -m flake8 .
-fi
-
-echo "Running mypy..."
-
-mypy --exclude 'setup.py|notebooks' .
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/dataset/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/dataset/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Manmay/tortoise-tts/tortoise/read.py b/spaces/Manmay/tortoise-tts/tortoise/read.py
deleted file mode 100644
index e5839aa89522d4770ab3f53ef2aca5b7eb7eac84..0000000000000000000000000000000000000000
--- a/spaces/Manmay/tortoise-tts/tortoise/read.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import argparse
-import os
-from time import time
-
-import torch
-import torchaudio
-
-from api import TextToSpeech, MODELS_DIR
-from utils.audio import load_audio, load_voices
-from utils.text import split_and_recombine_text
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--textfile', type=str, help='A file containing the text to read.', default="tortoise/data/riding_hood.txt")
- parser.add_argument('--voice', type=str, help='Selects the voice to use for generation. See options in voices/ directory (and add your own!) '
- 'Use the & character to join two voices together. Use a comma to perform inference on multiple voices.', default='pat')
- parser.add_argument('--output_path', type=str, help='Where to store outputs.', default='results/longform/')
- parser.add_argument('--output_name', type=str, help='How to name the output file', default='combined.wav')
- parser.add_argument('--preset', type=str, help='Which voice preset to use.', default='standard')
- parser.add_argument('--regenerate', type=str, help='Comma-separated list of clip numbers to re-generate, or nothing.', default=None)
- parser.add_argument('--candidates', type=int, help='How many output candidates to produce per-voice. Only the first candidate is actually used in the final product, the others can be used manually.', default=1)
-    parser.add_argument('--model_dir', type=str, help='Where to find pretrained model checkpoints. Tortoise automatically downloads these to .models, so this '
- 'should only be specified if you have custom checkpoints.', default=MODELS_DIR)
- parser.add_argument('--seed', type=int, help='Random seed which can be used to reproduce results.', default=None)
- parser.add_argument('--produce_debug_state', type=bool, help='Whether or not to produce debug_state.pth, which can aid in reproducing problems. Defaults to true.', default=True)
- parser.add_argument('--use_deepspeed', type=bool, help='Use deepspeed for speed bump.', default=False)
- parser.add_argument('--kv_cache', type=bool, help='If you disable this please wait for a long a time to get the output', default=True)
- parser.add_argument('--half', type=bool, help="float16(half) precision inference if True it's faster and take less vram and ram", default=True)
-
-
- args = parser.parse_args()
- if torch.backends.mps.is_available():
- args.use_deepspeed = False
- tts = TextToSpeech(models_dir=args.model_dir, use_deepspeed=args.use_deepspeed, kv_cache=args.kv_cache, half=args.half)
-
- outpath = args.output_path
- outname = args.output_name
- selected_voices = args.voice.split(',')
- regenerate = args.regenerate
- if regenerate is not None:
- regenerate = [int(e) for e in regenerate.split(',')]
-
- # Process text
- with open(args.textfile, 'r', encoding='utf-8') as f:
- text = ' '.join([l for l in f.readlines()])
- if '|' in text:
-        print("Found the '|' character in your text, which I will use as a cue for where to split it up. If this was not "
- "your intent, please remove all '|' characters from the input.")
- texts = text.split('|')
- else:
- texts = split_and_recombine_text(text)
-
- seed = int(time()) if args.seed is None else args.seed
- for selected_voice in selected_voices:
- voice_outpath = os.path.join(outpath, selected_voice)
- os.makedirs(voice_outpath, exist_ok=True)
-
- if '&' in selected_voice:
- voice_sel = selected_voice.split('&')
- else:
- voice_sel = [selected_voice]
-
- voice_samples, conditioning_latents = load_voices(voice_sel)
- all_parts = []
- for j, text in enumerate(texts):
- if regenerate is not None and j not in regenerate:
- all_parts.append(load_audio(os.path.join(voice_outpath, f'{j}.wav'), 24000))
- continue
- gen = tts.tts_with_preset(text, voice_samples=voice_samples, conditioning_latents=conditioning_latents,
- preset=args.preset, k=args.candidates, use_deterministic_seed=seed)
- if args.candidates == 1:
- audio_ = gen.squeeze(0).cpu()
- torchaudio.save(os.path.join(voice_outpath, f'{j}.wav'), audio_, 24000)
- else:
- candidate_dir = os.path.join(voice_outpath, str(j))
- os.makedirs(candidate_dir, exist_ok=True)
- for k, g in enumerate(gen):
- torchaudio.save(os.path.join(candidate_dir, f'{k}.wav'), g.squeeze(0).cpu(), 24000)
- audio_ = gen[0].squeeze(0).cpu()
- all_parts.append(audio_)
-
- if args.candidates == 1:
- full_audio = torch.cat(all_parts, dim=-1)
- torchaudio.save(os.path.join(voice_outpath, f"{outname}.wav"), full_audio, 24000)
-
- if args.produce_debug_state:
- os.makedirs('debug_states', exist_ok=True)
- dbg_state = (seed, texts, voice_samples, conditioning_latents)
- torch.save(dbg_state, f'debug_states/read_debug_{selected_voice}.pth')
-
- # Combine each candidate's audio clips.
- if args.candidates > 1:
- audio_clips = []
- for candidate in range(args.candidates):
- for line in range(len(texts)):
- wav_file = os.path.join(voice_outpath, str(line), f"{candidate}.wav")
- audio_clips.append(load_audio(wav_file, 24000))
- audio_clips = torch.cat(audio_clips, dim=-1)
- torchaudio.save(os.path.join(voice_outpath, f"{outname}_{candidate:02d}.wav"), audio_clips, 24000)
- audio_clips = []
diff --git a/spaces/Matthijs/mms-tts-demo/uroman/lib/JSON/backportPP.pm b/spaces/Matthijs/mms-tts-demo/uroman/lib/JSON/backportPP.pm
deleted file mode 100644
index db4f8bbb3b741e95c5817edde612718af0f889e4..0000000000000000000000000000000000000000
--- a/spaces/Matthijs/mms-tts-demo/uroman/lib/JSON/backportPP.pm
+++ /dev/null
@@ -1,2806 +0,0 @@
-package # This is JSON::backportPP
- JSON::PP;
-
-# JSON-2.0
-
-use 5.005;
-use strict;
-use base qw(Exporter);
-use overload ();
-
-use Carp ();
-use B ();
-#use Devel::Peek;
-
-use vars qw($VERSION);
-$VERSION = '2.27204';
-
-@JSON::PP::EXPORT = qw(encode_json decode_json from_json to_json);
-
-# instead of hash-access, i tried index-access for speed.
-# but this method is not faster than what i expected. so it will be changed.
-
-use constant P_ASCII => 0;
-use constant P_LATIN1 => 1;
-use constant P_UTF8 => 2;
-use constant P_INDENT => 3;
-use constant P_CANONICAL => 4;
-use constant P_SPACE_BEFORE => 5;
-use constant P_SPACE_AFTER => 6;
-use constant P_ALLOW_NONREF => 7;
-use constant P_SHRINK => 8;
-use constant P_ALLOW_BLESSED => 9;
-use constant P_CONVERT_BLESSED => 10;
-use constant P_RELAXED => 11;
-
-use constant P_LOOSE => 12;
-use constant P_ALLOW_BIGNUM => 13;
-use constant P_ALLOW_BAREKEY => 14;
-use constant P_ALLOW_SINGLEQUOTE => 15;
-use constant P_ESCAPE_SLASH => 16;
-use constant P_AS_NONBLESSED => 17;
-
-use constant P_ALLOW_UNKNOWN => 18;
-
-use constant OLD_PERL => $] < 5.008 ? 1 : 0;
-
-BEGIN {
- my @xs_compati_bit_properties = qw(
- latin1 ascii utf8 indent canonical space_before space_after allow_nonref shrink
- allow_blessed convert_blessed relaxed allow_unknown
- );
- my @pp_bit_properties = qw(
- allow_singlequote allow_bignum loose
- allow_barekey escape_slash as_nonblessed
- );
-
- # Perl version check, Unicode handling is enable?
- # Helper module sets @JSON::PP::_properties.
- if ($] < 5.008 ) {
- my $helper = $] >= 5.006 ? 'JSON::backportPP::Compat5006' : 'JSON::backportPP::Compat5005';
- eval qq| require $helper |;
- if ($@) { Carp::croak $@; }
- }
-
- for my $name (@xs_compati_bit_properties, @pp_bit_properties) {
- my $flag_name = 'P_' . uc($name);
-
- eval qq/
- sub $name {
- my \$enable = defined \$_[1] ? \$_[1] : 1;
-
- if (\$enable) {
- \$_[0]->{PROPS}->[$flag_name] = 1;
- }
- else {
- \$_[0]->{PROPS}->[$flag_name] = 0;
- }
-
- \$_[0];
- }
-
- sub get_$name {
- \$_[0]->{PROPS}->[$flag_name] ? 1 : '';
- }
- /;
- }
-
-}
-
-
-
-# Functions
-
-my %encode_allow_method
- = map {($_ => 1)} qw/utf8 pretty allow_nonref latin1 self_encode escape_slash
- allow_blessed convert_blessed indent indent_length allow_bignum
- as_nonblessed
- /;
-my %decode_allow_method
- = map {($_ => 1)} qw/utf8 allow_nonref loose allow_singlequote allow_bignum
- allow_barekey max_size relaxed/;
-
-
-my $JSON; # cache
-
-sub encode_json ($) { # encode
- ($JSON ||= __PACKAGE__->new->utf8)->encode(@_);
-}
-
-
-sub decode_json { # decode
- ($JSON ||= __PACKAGE__->new->utf8)->decode(@_);
-}
-
-# Obsoleted
-
-sub to_json($) {
- Carp::croak ("JSON::PP::to_json has been renamed to encode_json.");
-}
-
-
-sub from_json($) {
- Carp::croak ("JSON::PP::from_json has been renamed to decode_json.");
-}
-
-
-# Methods
-
-sub new {
- my $class = shift;
- my $self = {
- max_depth => 512,
- max_size => 0,
- indent => 0,
- FLAGS => 0,
- fallback => sub { encode_error('Invalid value. JSON can only reference.') },
- indent_length => 3,
- };
-
- bless $self, $class;
-}
-
-
-sub encode {
- return $_[0]->PP_encode_json($_[1]);
-}
-
-
-sub decode {
- return $_[0]->PP_decode_json($_[1], 0x00000000);
-}
-
-
-sub decode_prefix {
- return $_[0]->PP_decode_json($_[1], 0x00000001);
-}
-
-
-# accessor
-
-
-# pretty printing
-
-sub pretty {
- my ($self, $v) = @_;
- my $enable = defined $v ? $v : 1;
-
- if ($enable) { # indent_length(3) for JSON::XS compatibility
- $self->indent(1)->indent_length(3)->space_before(1)->space_after(1);
- }
- else {
- $self->indent(0)->space_before(0)->space_after(0);
- }
-
- $self;
-}
-
-# etc
-
-sub max_depth {
- my $max = defined $_[1] ? $_[1] : 0x80000000;
- $_[0]->{max_depth} = $max;
- $_[0];
-}
-
-
-sub get_max_depth { $_[0]->{max_depth}; }
-
-
-sub max_size {
- my $max = defined $_[1] ? $_[1] : 0;
- $_[0]->{max_size} = $max;
- $_[0];
-}
-
-
-sub get_max_size { $_[0]->{max_size}; }
-
-
-sub filter_json_object {
- $_[0]->{cb_object} = defined $_[1] ? $_[1] : 0;
- $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0;
- $_[0];
-}
-
-sub filter_json_single_key_object {
- if (@_ > 1) {
- $_[0]->{cb_sk_object}->{$_[1]} = $_[2];
- }
- $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0;
- $_[0];
-}
-
-sub indent_length {
- if (!defined $_[1] or $_[1] > 15 or $_[1] < 0) {
- Carp::carp "The acceptable range of indent_length() is 0 to 15.";
- }
- else {
- $_[0]->{indent_length} = $_[1];
- }
- $_[0];
-}
-
-sub get_indent_length {
- $_[0]->{indent_length};
-}
-
-sub sort_by {
- $_[0]->{sort_by} = defined $_[1] ? $_[1] : 1;
- $_[0];
-}
-
-sub allow_bigint {
-    Carp::carp("allow_bigint() is obsoleted. use allow_bignum() instead.");
-}
-
-###############################
-
-###
-### Perl => JSON
-###
-
-
-{ # Convert
-
- my $max_depth;
- my $indent;
- my $ascii;
- my $latin1;
- my $utf8;
- my $space_before;
- my $space_after;
- my $canonical;
- my $allow_blessed;
- my $convert_blessed;
-
- my $indent_length;
- my $escape_slash;
- my $bignum;
- my $as_nonblessed;
-
- my $depth;
- my $indent_count;
- my $keysort;
-
-
- sub PP_encode_json {
- my $self = shift;
- my $obj = shift;
-
- $indent_count = 0;
- $depth = 0;
-
- my $idx = $self->{PROPS};
-
- ($ascii, $latin1, $utf8, $indent, $canonical, $space_before, $space_after, $allow_blessed,
- $convert_blessed, $escape_slash, $bignum, $as_nonblessed)
- = @{$idx}[P_ASCII .. P_SPACE_AFTER, P_ALLOW_BLESSED, P_CONVERT_BLESSED,
- P_ESCAPE_SLASH, P_ALLOW_BIGNUM, P_AS_NONBLESSED];
-
- ($max_depth, $indent_length) = @{$self}{qw/max_depth indent_length/};
-
- $keysort = $canonical ? sub { $a cmp $b } : undef;
-
- if ($self->{sort_by}) {
- $keysort = ref($self->{sort_by}) eq 'CODE' ? $self->{sort_by}
- : $self->{sort_by} =~ /\D+/ ? $self->{sort_by}
- : sub { $a cmp $b };
- }
-
- encode_error("hash- or arrayref expected (not a simple scalar, use allow_nonref to allow this)")
- if(!ref $obj and !$idx->[ P_ALLOW_NONREF ]);
-
- my $str = $self->object_to_json($obj);
-
- $str .= "\n" if ( $indent ); # JSON::XS 2.26 compatible
-
- unless ($ascii or $latin1 or $utf8) {
- utf8::upgrade($str);
- }
-
- if ($idx->[ P_SHRINK ]) {
- utf8::downgrade($str, 1);
- }
-
- return $str;
- }
-
-
- sub object_to_json {
- my ($self, $obj) = @_;
- my $type = ref($obj);
-
- if($type eq 'HASH'){
- return $self->hash_to_json($obj);
- }
- elsif($type eq 'ARRAY'){
- return $self->array_to_json($obj);
- }
- elsif ($type) { # blessed object?
- if (blessed($obj)) {
-
- return $self->value_to_json($obj) if ( $obj->isa('JSON::PP::Boolean') );
-
- if ( $convert_blessed and $obj->can('TO_JSON') ) {
- my $result = $obj->TO_JSON();
- if ( defined $result and ref( $result ) ) {
- if ( refaddr( $obj ) eq refaddr( $result ) ) {
- encode_error( sprintf(
- "%s::TO_JSON method returned same object as was passed instead of a new one",
- ref $obj
- ) );
- }
- }
-
- return $self->object_to_json( $result );
- }
-
- return "$obj" if ( $bignum and _is_bignum($obj) );
- return $self->blessed_to_json($obj) if ($allow_blessed and $as_nonblessed); # will be removed.
-
- encode_error( sprintf("encountered object '%s', but neither allow_blessed "
- . "nor convert_blessed settings are enabled", $obj)
- ) unless ($allow_blessed);
-
- return 'null';
- }
- else {
- return $self->value_to_json($obj);
- }
- }
- else{
- return $self->value_to_json($obj);
- }
- }
-
-
- sub hash_to_json {
- my ($self, $obj) = @_;
- my @res;
-
- encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)")
- if (++$depth > $max_depth);
-
- my ($pre, $post) = $indent ? $self->_up_indent() : ('', '');
- my $del = ($space_before ? ' ' : '') . ':' . ($space_after ? ' ' : '');
-
- for my $k ( _sort( $obj ) ) {
- if ( OLD_PERL ) { utf8::decode($k) } # key for Perl 5.6 / be optimized
- push @res, string_to_json( $self, $k )
- . $del
- . ( $self->object_to_json( $obj->{$k} ) || $self->value_to_json( $obj->{$k} ) );
- }
-
- --$depth;
- $self->_down_indent() if ($indent);
-
- return '{' . ( @res ? $pre : '' ) . ( @res ? join( ",$pre", @res ) . $post : '' ) . '}';
- }
-
-
- sub array_to_json {
- my ($self, $obj) = @_;
- my @res;
-
- encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)")
- if (++$depth > $max_depth);
-
- my ($pre, $post) = $indent ? $self->_up_indent() : ('', '');
-
- for my $v (@$obj){
- push @res, $self->object_to_json($v) || $self->value_to_json($v);
- }
-
- --$depth;
- $self->_down_indent() if ($indent);
-
- return '[' . ( @res ? $pre : '' ) . ( @res ? join( ",$pre", @res ) . $post : '' ) . ']';
- }
-
-
- sub value_to_json {
- my ($self, $value) = @_;
-
- return 'null' if(!defined $value);
-
- my $b_obj = B::svref_2object(\$value); # for round trip problem
- my $flags = $b_obj->FLAGS;
-
- return $value # as is
- if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV?
-
- my $type = ref($value);
-
- if(!$type){
- return string_to_json($self, $value);
- }
- elsif( blessed($value) and $value->isa('JSON::PP::Boolean') ){
- return $$value == 1 ? 'true' : 'false';
- }
- elsif ($type) {
- if ((overload::StrVal($value) =~ /=(\w+)/)[0]) {
- return $self->value_to_json("$value");
- }
-
- if ($type eq 'SCALAR' and defined $$value) {
- return $$value eq '1' ? 'true'
- : $$value eq '0' ? 'false'
- : $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ? 'null'
- : encode_error("cannot encode reference to scalar");
- }
-
- if ( $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ) {
- return 'null';
- }
- else {
- if ( $type eq 'SCALAR' or $type eq 'REF' ) {
- encode_error("cannot encode reference to scalar");
- }
- else {
- encode_error("encountered $value, but JSON can only represent references to arrays or hashes");
- }
- }
-
- }
- else {
- return $self->{fallback}->($value)
- if ($self->{fallback} and ref($self->{fallback}) eq 'CODE');
- return 'null';
- }
-
- }
-
-
- my %esc = (
- "\n" => '\n',
- "\r" => '\r',
- "\t" => '\t',
- "\f" => '\f',
- "\b" => '\b',
- "\"" => '\"',
- "\\" => '\\\\',
- "\'" => '\\\'',
- );
-
-
- sub string_to_json {
- my ($self, $arg) = @_;
-
- $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g;
- $arg =~ s/\//\\\//g if ($escape_slash);
- $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg;
-
- if ($ascii) {
- $arg = JSON_PP_encode_ascii($arg);
- }
-
- if ($latin1) {
- $arg = JSON_PP_encode_latin1($arg);
- }
-
- if ($utf8) {
- utf8::encode($arg);
- }
-
- return '"' . $arg . '"';
- }
-
-
- sub blessed_to_json {
- my $reftype = reftype($_[1]) || '';
- if ($reftype eq 'HASH') {
- return $_[0]->hash_to_json($_[1]);
- }
- elsif ($reftype eq 'ARRAY') {
- return $_[0]->array_to_json($_[1]);
- }
- else {
- return 'null';
- }
- }
-
-
- sub encode_error {
- my $error = shift;
- Carp::croak "$error";
- }
-
-
- sub _sort {
- defined $keysort ? (sort $keysort (keys %{$_[0]})) : keys %{$_[0]};
- }
-
-
- sub _up_indent {
- my $self = shift;
- my $space = ' ' x $indent_length;
-
- my ($pre,$post) = ('','');
-
- $post = "\n" . $space x $indent_count;
-
- $indent_count++;
-
- $pre = "\n" . $space x $indent_count;
-
- return ($pre,$post);
- }
-
-
- sub _down_indent { $indent_count--; }
-
-
- sub PP_encode_box {
- {
- depth => $depth,
- indent_count => $indent_count,
- };
- }
-
-} # Convert
-
-
-sub _encode_ascii {
- join('',
- map {
- $_ <= 127 ?
- chr($_) :
- $_ <= 65535 ?
- sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_));
- } unpack('U*', $_[0])
- );
-}
-
-
-sub _encode_latin1 {
- join('',
- map {
- $_ <= 255 ?
- chr($_) :
- $_ <= 65535 ?
- sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_));
- } unpack('U*', $_[0])
- );
-}
-
-
-sub _encode_surrogates { # from perlunicode
- my $uni = $_[0] - 0x10000;
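- # e.g. U+10437: $uni = 0x0437, giving the surrogate pair 0xD801, 0xDC37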
- return ($uni / 0x400 + 0xD800, $uni % 0x400 + 0xDC00);
-}
-
-
-sub _is_bignum {
- $_[0]->isa('Math::BigInt') or $_[0]->isa('Math::BigFloat');
-}
-
-
-
-#
-# JSON => Perl
-#
-
-my $max_intsize;
-
-BEGIN {
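- # Work out how many decimal digits an integer literal can have before Perl
- # starts rendering it in exponential notation; number() below returns anything
- # longer as a plain string (or as Math::BigInt when big numbers are allowed).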
- my $checkint = 1111;
- for my $d (5..64) {
- $checkint .= 1;
- my $int = eval qq| $checkint |;
- if ($int =~ /[eE]/) {
- $max_intsize = $d - 1;
- last;
- }
- }
-}
-
-{ # PARSE
-
- my %escapes = ( # by Jeremy Muhlich
- b => "\x8",
- t => "\x9",
- n => "\xA",
- f => "\xC",
- r => "\xD",
- '\\' => '\\',
- '"' => '"',
- '/' => '/',
- );
-
- my $text; # json data
- my $at; # offset
- my $ch; # current character
- my $len; # text length (changed according to UTF8 or NON UTF8)
- # INTERNAL
- my $depth; # nest counter
- my $encoding; # json text encoding
- my $is_valid_utf8; # temp variable
- my $utf8_len; # utf8 byte length
- # FLAGS
- my $utf8; # must be utf8
- my $max_depth; # max nest number of objects and arrays
- my $max_size;
- my $relaxed;
- my $cb_object;
- my $cb_sk_object;
-
- my $F_HOOK;
-
- my $allow_bigint; # using Math::BigInt
- my $singlequote; # loosely quoting
- my $loose; #
- my $allow_barekey; # bareKey
-
- # $opt flag
- # 0x00000001 .... decode_prefix
- # 0x10000000 .... incr_parse
-
- sub PP_decode_json {
- my ($self, $opt); # $opt is an effective flag during this decode_json.
-
- ($self, $text, $opt) = @_;
-
- ($at, $ch, $depth) = (0, '', 0);
-
- if ( !defined $text or ref $text ) {
- decode_error("malformed JSON string, neither array, object, number, string or atom");
- }
-
- my $idx = $self->{PROPS};
-
- ($utf8, $relaxed, $loose, $allow_bigint, $allow_barekey, $singlequote)
- = @{$idx}[P_UTF8, P_RELAXED, P_LOOSE .. P_ALLOW_SINGLEQUOTE];
-
- if ( $utf8 ) {
- utf8::downgrade( $text, 1 ) or Carp::croak("Wide character in subroutine entry");
- }
- else {
- utf8::upgrade( $text );
- }
-
- $len = length $text;
-
- ($max_depth, $max_size, $cb_object, $cb_sk_object, $F_HOOK)
- = @{$self}{qw/max_depth max_size cb_object cb_sk_object F_HOOK/};
-
- if ($max_size > 1) {
- use bytes;
- my $bytes = length $text;
- decode_error(
- sprintf("attempted decode of JSON text of %s bytes size, but max_size is set to %s"
- , $bytes, $max_size), 1
- ) if ($bytes > $max_size);
- }
-
- # Currently no effect
- # should use regexp
- my @octets = unpack('C4', $text);
- $encoding = ( $octets[0] and $octets[1]) ? 'UTF-8'
- : (!$octets[0] and $octets[1]) ? 'UTF-16BE'
- : (!$octets[0] and !$octets[1]) ? 'UTF-32BE'
- : ( $octets[2] ) ? 'UTF-16LE'
- : (!$octets[2] ) ? 'UTF-32LE'
- : 'unknown';
-
- white(); # remove head white space
-
- my $valid_start = defined $ch; # Is there a first character for JSON structure?
-
- my $result = value();
-
- return undef if ( !$result && ( $opt & 0x10000000 ) ); # for incr_parse
-
- decode_error("malformed JSON string, neither array, object, number, string or atom") unless $valid_start;
-
- if ( !$idx->[ P_ALLOW_NONREF ] and !ref $result ) {
- decode_error(
- 'JSON text must be an object or array (but found number, string, true, false or null,'
- . ' use allow_nonref to allow this)', 1);
- }
-
- Carp::croak('something wrong.') if $len < $at; # we won't arrive here.
-
- my $consumed = defined $ch ? $at - 1 : $at; # consumed JSON text length
-
- white(); # remove tail white space
-
- if ( $ch ) {
- return ( $result, $consumed ) if ($opt & 0x00000001); # all right if decode_prefix
- decode_error("garbage after JSON object");
- }
-
- ( $opt & 0x00000001 ) ? ( $result, $consumed ) : $result;
- }
-
-
- sub next_chr {
- return $ch = undef if($at >= $len);
- $ch = substr($text, $at++, 1);
- }
-
-
- sub value {
- white();
- return if(!defined $ch);
- return object() if($ch eq '{');
- return array() if($ch eq '[');
- return string() if($ch eq '"' or ($singlequote and $ch eq "'"));
- return number() if($ch =~ /[0-9]/ or $ch eq '-');
- return word();
- }
-
- sub string {
- my ($i, $s, $t, $u);
- my $utf16;
- my $is_utf8;
-
- ($is_valid_utf8, $utf8_len) = ('', 0);
-
- $s = ''; # basically UTF8 flag on
-
- if($ch eq '"' or ($singlequote and $ch eq "'")){
- my $boundChar = $ch;
-
- OUTER: while( defined(next_chr()) ){
-
- if($ch eq $boundChar){
- next_chr();
-
- if ($utf16) {
- decode_error("missing low surrogate character in surrogate pair");
- }
-
- utf8::decode($s) if($is_utf8);
-
- return $s;
- }
- elsif($ch eq '\\'){
- next_chr();
- if(exists $escapes{$ch}){
- $s .= $escapes{$ch};
- }
- elsif($ch eq 'u'){ # UNICODE handling
- my $u = '';
-
- for(1..4){
- $ch = next_chr();
- last OUTER if($ch !~ /[0-9a-fA-F]/);
- $u .= $ch;
- }
-
- # U+D800 - U+DBFF
- if ($u =~ /^[dD][89abAB][0-9a-fA-F]{2}/) { # UTF-16 high surrogate?
- $utf16 = $u;
- }
- # U+DC00 - U+DFFF
- elsif ($u =~ /^[dD][c-fC-F][0-9a-fA-F]{2}/) { # UTF-16 low surrogate?
- unless (defined $utf16) {
- decode_error("missing high surrogate character in surrogate pair");
- }
- $is_utf8 = 1;
- $s .= JSON_PP_decode_surrogates($utf16, $u) || next;
- $utf16 = undef;
- }
- else {
- if (defined $utf16) {
- decode_error("surrogate pair expected");
- }
-
- if ( ( my $hex = hex( $u ) ) > 127 ) {
- $is_utf8 = 1;
- $s .= JSON_PP_decode_unicode($u) || next;
- }
- else {
- $s .= chr $hex;
- }
- }
-
- }
- else{
- unless ($loose) {
- $at -= 2;
- decode_error('illegal backslash escape sequence in string');
- }
- $s .= $ch;
- }
- }
- else{
-
- if ( ord $ch > 127 ) {
- if ( $utf8 ) {
- unless( $ch = is_valid_utf8($ch) ) {
- $at -= 1;
- decode_error("malformed UTF-8 character in JSON string");
- }
- else {
- $at += $utf8_len - 1;
- }
- }
- else {
- utf8::encode( $ch );
- }
-
- $is_utf8 = 1;
- }
-
- if (!$loose) {
- if ($ch =~ /[\x00-\x1f\x22\x5c]/) { # '/' ok
- $at--;
- decode_error('invalid character encountered while parsing JSON string');
- }
- }
-
- $s .= $ch;
- }
- }
- }
-
- decode_error("unexpected end of string while parsing JSON string");
- }
-
-
- sub white {
- while( defined $ch ){
- if($ch le ' '){
- next_chr();
- }
- elsif($ch eq '/'){
- next_chr();
- if(defined $ch and $ch eq '/'){
- 1 while(defined(next_chr()) and $ch ne "\n" and $ch ne "\r");
- }
- elsif(defined $ch and $ch eq '*'){
- next_chr();
- while(1){
- if(defined $ch){
- if($ch eq '*'){
- if(defined(next_chr()) and $ch eq '/'){
- next_chr();
- last;
- }
- }
- else{
- next_chr();
- }
- }
- else{
- decode_error("Unterminated comment");
- }
- }
- next;
- }
- else{
- $at--;
- decode_error("malformed JSON string, neither array, object, number, string or atom");
- }
- }
- else{
- if ($relaxed and $ch eq '#') { # '#' comments are allowed in relaxed mode
- pos($text) = $at;
- $text =~ /\G([^\n]*(?:\r\n|\r|\n|$))/g;
- $at = pos($text);
- next_chr;
- next;
- }
-
- last;
- }
- }
- }
-
-
- sub array {
- my $a = $_[0] || []; # you can use this code to use another array ref object.
-
- decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)')
- if (++$depth > $max_depth);
-
- next_chr();
- white();
-
- if(defined $ch and $ch eq ']'){
- --$depth;
- next_chr();
- return $a;
- }
- else {
- while(defined($ch)){
- push @$a, value();
-
- white();
-
- if (!defined $ch) {
- last;
- }
-
- if($ch eq ']'){
- --$depth;
- next_chr();
- return $a;
- }
-
- if($ch ne ','){
- last;
- }
-
- next_chr();
- white();
-
- if ($relaxed and $ch eq ']') {
- --$depth;
- next_chr();
- return $a;
- }
-
- }
- }
-
- decode_error(", or ] expected while parsing array");
- }
-
-
- sub object {
- my $o = $_[0] || {}; # you can use this code to use another hash ref object.
- my $k;
-
- decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)')
- if (++$depth > $max_depth);
- next_chr();
- white();
-
- if(defined $ch and $ch eq '}'){
- --$depth;
- next_chr();
- if ($F_HOOK) {
- return _json_object_hook($o);
- }
- return $o;
- }
- else {
- while (defined $ch) {
- $k = ($allow_barekey and $ch ne '"' and $ch ne "'") ? bareKey() : string();
- white();
-
- if(!defined $ch or $ch ne ':'){
- $at--;
- decode_error("':' expected");
- }
-
- next_chr();
- $o->{$k} = value();
- white();
-
- last if (!defined $ch);
-
- if($ch eq '}'){
- --$depth;
- next_chr();
- if ($F_HOOK) {
- return _json_object_hook($o);
- }
- return $o;
- }
-
- if($ch ne ','){
- last;
- }
-
- next_chr();
- white();
-
- if ($relaxed and $ch eq '}') {
- --$depth;
- next_chr();
- if ($F_HOOK) {
- return _json_object_hook($o);
- }
- return $o;
- }
-
- }
-
- }
-
- $at--;
- decode_error(", or } expected while parsing object/hash");
- }
-
-
- sub bareKey { # doesn't strictly follow Standard ECMA-262 3rd Edition
- my $key;
- while($ch =~ /[^\x00-\x23\x25-\x2F\x3A-\x40\x5B-\x5E\x60\x7B-\x7F]/){
- $key .= $ch;
- next_chr();
- }
- return $key;
- }
-
-
- sub word {
- my $word = substr($text,$at-1,4);
-
- if($word eq 'true'){
- $at += 3;
- next_chr;
- return $JSON::PP::true;
- }
- elsif($word eq 'null'){
- $at += 3;
- next_chr;
- return undef;
- }
- elsif($word eq 'fals'){
- $at += 3;
- if(substr($text,$at,1) eq 'e'){
- $at++;
- next_chr;
- return $JSON::PP::false;
- }
- }
-
- $at--; # for decode_error report
-
- decode_error("'null' expected") if ($word =~ /^n/);
- decode_error("'true' expected") if ($word =~ /^t/);
- decode_error("'false' expected") if ($word =~ /^f/);
- decode_error("malformed JSON string, neither array, object, number, string or atom");
- }
-
-
- sub number {
- my $n = '';
- my $v;
-
- # According to RFC4627, hex or oct digits are invalid.
- if($ch eq '0'){
- my $peek = substr($text,$at,1);
- my $hex = $peek =~ /[xX]/; # 0 or 1
-
- if($hex){
- decode_error("malformed number (leading zero must not be followed by another digit)");
- ($n) = ( substr($text, $at+1) =~ /^([0-9a-fA-F]+)/);
- }
- else{ # oct
- ($n) = ( substr($text, $at) =~ /^([0-7]+)/);
- if (defined $n and length $n > 1) {
- decode_error("malformed number (leading zero must not be followed by another digit)");
- }
- }
-
- if(defined $n and length($n)){
- if (!$hex and length($n) == 1) {
- decode_error("malformed number (leading zero must not be followed by another digit)");
- }
- $at += length($n) + $hex;
- next_chr;
- return $hex ? hex($n) : oct($n);
- }
- }
-
- if($ch eq '-'){
- $n = '-';
- next_chr;
- if (!defined $ch or $ch !~ /\d/) {
- decode_error("malformed number (no digits after initial minus)");
- }
- }
-
- while(defined $ch and $ch =~ /\d/){
- $n .= $ch;
- next_chr;
- }
-
- if(defined $ch and $ch eq '.'){
- $n .= '.';
-
- next_chr;
- if (!defined $ch or $ch !~ /\d/) {
- decode_error("malformed number (no digits after decimal point)");
- }
- else {
- $n .= $ch;
- }
-
- while(defined(next_chr) and $ch =~ /\d/){
- $n .= $ch;
- }
- }
-
- if(defined $ch and ($ch eq 'e' or $ch eq 'E')){
- $n .= $ch;
- next_chr;
-
- if(defined($ch) and ($ch eq '+' or $ch eq '-')){
- $n .= $ch;
- next_chr;
- if (!defined $ch or $ch =~ /\D/) {
- decode_error("malformed number (no digits after exp sign)");
- }
- $n .= $ch;
- }
- elsif(defined($ch) and $ch =~ /\d/){
- $n .= $ch;
- }
- else {
- decode_error("malformed number (no digits after exp sign)");
- }
-
- while(defined(next_chr) and $ch =~ /\d/){
- $n .= $ch;
- }
-
- }
-
- $v .= $n;
-
- if ($v !~ /[.eE]/ and length $v > $max_intsize) {
- if ($allow_bigint) { # from Adam Sussman
- require Math::BigInt;
- return Math::BigInt->new($v);
- }
- else {
- return "$v";
- }
- }
- elsif ($allow_bigint) {
- require Math::BigFloat;
- return Math::BigFloat->new($v);
- }
-
- return 0+$v;
- }
-
-
- sub is_valid_utf8 {
-
- $utf8_len = $_[0] =~ /[\x00-\x7F]/ ? 1
- : $_[0] =~ /[\xC2-\xDF]/ ? 2
- : $_[0] =~ /[\xE0-\xEF]/ ? 3
- : $_[0] =~ /[\xF0-\xF4]/ ? 4
- : 0
- ;
-
- return unless $utf8_len;
-
- my $is_valid_utf8 = substr($text, $at - 1, $utf8_len);
-
- return ( $is_valid_utf8 =~ /^(?:
- [\x00-\x7F]
- |[\xC2-\xDF][\x80-\xBF]
- |[\xE0][\xA0-\xBF][\x80-\xBF]
- |[\xE1-\xEC][\x80-\xBF][\x80-\xBF]
- |[\xED][\x80-\x9F][\x80-\xBF]
- |[\xEE-\xEF][\x80-\xBF][\x80-\xBF]
- |[\xF0][\x90-\xBF][\x80-\xBF][\x80-\xBF]
- |[\xF1-\xF3][\x80-\xBF][\x80-\xBF][\x80-\xBF]
- |[\xF4][\x80-\x8F][\x80-\xBF][\x80-\xBF]
- )$/x ) ? $is_valid_utf8 : '';
- }
-
-
- sub decode_error {
- my $error = shift;
- my $no_rep = shift;
- my $str = defined $text ? substr($text, $at) : '';
- my $mess = '';
- my $type = $] >= 5.008 ? 'U*'
- : $] < 5.006 ? 'C*'
- : utf8::is_utf8( $str ) ? 'U*' # 5.6
- : 'C*'
- ;
-
- for my $c ( unpack( $type, $str ) ) { # emulate pv_uni_display() ?
- $mess .= $c == 0x07 ? '\a'
- : $c == 0x09 ? '\t'
- : $c == 0x0a ? '\n'
- : $c == 0x0d ? '\r'
- : $c == 0x0c ? '\f'
- : $c < 0x20 ? sprintf('\x{%x}', $c)
- : $c == 0x5c ? '\\\\'
- : $c < 0x80 ? chr($c)
- : sprintf('\x{%x}', $c)
- ;
- if ( length $mess >= 20 ) {
- $mess .= '...';
- last;
- }
- }
-
- unless ( length $mess ) {
- $mess = '(end of string)';
- }
-
- Carp::croak (
- $no_rep ? "$error" : "$error, at character offset $at (before \"$mess\")"
- );
-
- }
-
-
- sub _json_object_hook {
- my $o = $_[0];
- my @ks = keys %{$o};
-
- if ( $cb_sk_object and @ks == 1 and exists $cb_sk_object->{ $ks[0] } and ref $cb_sk_object->{ $ks[0] } ) {
- my @val = $cb_sk_object->{ $ks[0] }->( $o->{$ks[0]} );
- if (@val == 1) {
- return $val[0];
- }
- }
-
- my @val = $cb_object->($o) if ($cb_object);
- if (@val == 0 or @val > 1) {
- return $o;
- }
- else {
- return $val[0];
- }
- }
-
-
- sub PP_decode_box {
- {
- text => $text,
- at => $at,
- ch => $ch,
- len => $len,
- depth => $depth,
- encoding => $encoding,
- is_valid_utf8 => $is_valid_utf8,
- };
- }
-
-} # PARSE
-
-
-sub _decode_surrogates { # from perlunicode
- my $uni = 0x10000 + (hex($_[0]) - 0xD800) * 0x400 + (hex($_[1]) - 0xDC00);
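- # e.g. the hex pair ("D801", "DC37") maps back to U+10437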
- my $un = pack('U*', $uni);
- utf8::encode( $un );
- return $un;
-}
-
-
-sub _decode_unicode {
- my $un = pack('U', hex shift);
- utf8::encode( $un );
- return $un;
-}
-
-#
-# Setup for various Perl versions (the code from JSON::PP58)
-#
-
-BEGIN {
-
- unless ( defined &utf8::is_utf8 ) {
- require Encode;
- *utf8::is_utf8 = *Encode::is_utf8;
- }
-
- if ( $] >= 5.008 ) {
- *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii;
- *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1;
- *JSON::PP::JSON_PP_decode_surrogates = \&_decode_surrogates;
- *JSON::PP::JSON_PP_decode_unicode = \&_decode_unicode;
- }
-
- if ($] >= 5.008 and $] < 5.008003) { # join() in 5.8.0 - 5.8.2 is broken.
- package # hide from PAUSE
- JSON::PP;
- require subs;
- subs->import('join');
- eval q|
- sub join {
- return '' if (@_ < 2);
- my $j = shift;
- my $str = shift;
- for (@_) { $str .= $j . $_; }
- return $str;
- }
- |;
- }
-
-
- sub JSON::PP::incr_parse {
- local $Carp::CarpLevel = 1;
- ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_parse( @_ );
- }
-
-
- sub JSON::PP::incr_skip {
- ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_skip;
- }
-
-
- sub JSON::PP::incr_reset {
- ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_reset;
- }
-
- eval q{
- sub JSON::PP::incr_text : lvalue {
- $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new;
-
- if ( $_[0]->{_incr_parser}->{incr_parsing} ) {
- Carp::croak("incr_text can not be called when the incremental parser already started parsing");
- }
- $_[0]->{_incr_parser}->{incr_text};
- }
- } if ( $] >= 5.006 );
-
-} # Setup for various Perl versions (the code from JSON::PP58)
-
-
-###############################
-# Utilities
-#
-
-BEGIN {
- eval 'require Scalar::Util';
- unless($@){
- *JSON::PP::blessed = \&Scalar::Util::blessed;
- *JSON::PP::reftype = \&Scalar::Util::reftype;
- *JSON::PP::refaddr = \&Scalar::Util::refaddr;
- }
- else{ # This code is from Scalar::Util.
- # warn $@;
- eval 'sub UNIVERSAL::a_sub_not_likely_to_be_here { ref($_[0]) }';
- *JSON::PP::blessed = sub {
- local($@, $SIG{__DIE__}, $SIG{__WARN__});
- ref($_[0]) ? eval { $_[0]->a_sub_not_likely_to_be_here } : undef;
- };
- my %tmap = qw(
- B::NULL SCALAR
- B::HV HASH
- B::AV ARRAY
- B::CV CODE
- B::IO IO
- B::GV GLOB
- B::REGEXP REGEXP
- );
- *JSON::PP::reftype = sub {
- my $r = shift;
-
- return undef unless length(ref($r));
-
- my $t = ref(B::svref_2object($r));
-
- return
- exists $tmap{$t} ? $tmap{$t}
- : length(ref($$r)) ? 'REF'
- : 'SCALAR';
- };
- *JSON::PP::refaddr = sub {
- return undef unless length(ref($_[0]));
-
- my $addr;
- if(defined(my $pkg = blessed($_[0]))) {
- $addr .= bless $_[0], 'Scalar::Util::Fake';
- bless $_[0], $pkg;
- }
- else {
- $addr .= $_[0]
- }
-
- $addr =~ /0x(\w+)/;
- local $^W;
- #no warnings 'portable';
- hex($1);
- }
- }
-}
-
-
-# shamelessly copied and modified from JSON::XS code.
-
-unless ( $INC{'JSON/PP.pm'} ) {
- eval q|
- package
- JSON::PP::Boolean;
-
- use overload (
- "0+" => sub { ${$_[0]} },
- "++" => sub { $_[0] = ${$_[0]} + 1 },
- "--" => sub { $_[0] = ${$_[0]} - 1 },
- fallback => 1,
- );
- |;
-}
-
-$JSON::PP::true = do { bless \(my $dummy = 1), "JSON::PP::Boolean" };
-$JSON::PP::false = do { bless \(my $dummy = 0), "JSON::PP::Boolean" };
-
-sub is_bool { defined $_[0] and UNIVERSAL::isa($_[0], "JSON::PP::Boolean"); }
-
-sub true { $JSON::PP::true }
-sub false { $JSON::PP::false }
-sub null { undef; }
-
-###############################
-
-###############################
-
-package # hide from PAUSE
- JSON::PP::IncrParser;
-
-use strict;
-
-use constant INCR_M_WS => 0; # initial whitespace skipping
-use constant INCR_M_STR => 1; # inside string
-use constant INCR_M_BS => 2; # inside backslash
-use constant INCR_M_JSON => 3; # outside anything, count nesting
-use constant INCR_M_C0 => 4;
-use constant INCR_M_C1 => 5;
-
-use vars qw($VERSION);
-$VERSION = '1.01';
-
-my $unpack_format = $] < 5.006 ? 'C*' : 'U*';
-
-sub new {
- my ( $class ) = @_;
-
- bless {
- incr_nest => 0,
- incr_text => undef,
- incr_parsing => 0,
- incr_p => 0,
- }, $class;
-}
-
-
-sub incr_parse {
- my ( $self, $coder, $text ) = @_;
-
- $self->{incr_text} = '' unless ( defined $self->{incr_text} );
-
- if ( defined $text ) {
- if ( utf8::is_utf8( $text ) and !utf8::is_utf8( $self->{incr_text} ) ) {
- utf8::upgrade( $self->{incr_text} ) ;
- utf8::decode( $self->{incr_text} ) ;
- }
- $self->{incr_text} .= $text;
- }
-
-
- my $max_size = $coder->get_max_size;
-
- if ( defined wantarray ) {
-
- $self->{incr_mode} = INCR_M_WS unless defined $self->{incr_mode};
-
- if ( wantarray ) {
- my @ret;
-
- $self->{incr_parsing} = 1;
-
- do {
- push @ret, $self->_incr_parse( $coder, $self->{incr_text} );
-
- unless ( !$self->{incr_nest} and $self->{incr_mode} == INCR_M_JSON ) {
- $self->{incr_mode} = INCR_M_WS if $self->{incr_mode} != INCR_M_STR;
- }
-
- } until ( length $self->{incr_text} >= $self->{incr_p} );
-
- $self->{incr_parsing} = 0;
-
- return @ret;
- }
- else { # in scalar context
- $self->{incr_parsing} = 1;
- my $obj = $self->_incr_parse( $coder, $self->{incr_text} );
- $self->{incr_parsing} = 0 if defined $obj; # pointed by Martin J. Evans
- return $obj ? $obj : undef; # $obj is an empty string, parsing was completed.
- }
-
- }
-
-}
-
-
-sub _incr_parse {
- my ( $self, $coder, $text, $skip ) = @_;
- my $p = $self->{incr_p};
- my $restore = $p;
-
- my @obj;
- my $len = length $text;
-
- if ( $self->{incr_mode} == INCR_M_WS ) {
- while ( $len > $p ) {
- my $s = substr( $text, $p, 1 );
- $p++ and next if ( 0x20 >= unpack($unpack_format, $s) );
- $self->{incr_mode} = INCR_M_JSON;
- last;
- }
- }
-
- while ( $len > $p ) {
- my $s = substr( $text, $p++, 1 );
-
- if ( $s eq '"' ) {
- if (substr( $text, $p - 2, 1 ) eq '\\' ) {
- next;
- }
-
- if ( $self->{incr_mode} != INCR_M_STR ) {
- $self->{incr_mode} = INCR_M_STR;
- }
- else {
- $self->{incr_mode} = INCR_M_JSON;
- unless ( $self->{incr_nest} ) {
- last;
- }
- }
- }
-
- if ( $self->{incr_mode} == INCR_M_JSON ) {
-
- if ( $s eq '[' or $s eq '{' ) {
- if ( ++$self->{incr_nest} > $coder->get_max_depth ) {
- Carp::croak('json text or perl structure exceeds maximum nesting level (max_depth set too low?)');
- }
- }
- elsif ( $s eq ']' or $s eq '}' ) {
- last if ( --$self->{incr_nest} <= 0 );
- }
- elsif ( $s eq '#' ) {
- while ( $len > $p ) {
- last if substr( $text, $p++, 1 ) eq "\n";
- }
- }
-
- }
-
- }
-
- $self->{incr_p} = $p;
-
- return if ( $self->{incr_mode} == INCR_M_STR and not $self->{incr_nest} );
- return if ( $self->{incr_mode} == INCR_M_JSON and $self->{incr_nest} > 0 );
-
- return '' unless ( length substr( $self->{incr_text}, 0, $p ) );
-
- local $Carp::CarpLevel = 2;
-
- $self->{incr_p} = $restore;
- $self->{incr_c} = $p;
-
- my ( $obj, $tail ) = $coder->PP_decode_json( substr( $self->{incr_text}, 0, $p ), 0x10000001 );
-
- $self->{incr_text} = substr( $self->{incr_text}, $p );
- $self->{incr_p} = 0;
-
- return $obj || '';
-}
-
-
-sub incr_text {
- if ( $_[0]->{incr_parsing} ) {
- Carp::croak("incr_text can not be called when the incremental parser already started parsing");
- }
- $_[0]->{incr_text};
-}
-
-
-sub incr_skip {
- my $self = shift;
- $self->{incr_text} = substr( $self->{incr_text}, $self->{incr_c} );
- $self->{incr_p} = 0;
-}
-
-
-sub incr_reset {
- my $self = shift;
- $self->{incr_text} = undef;
- $self->{incr_p} = 0;
- $self->{incr_mode} = 0;
- $self->{incr_nest} = 0;
- $self->{incr_parsing} = 0;
-}
-
-###############################
-
-
-1;
-__END__
-=pod
-
-=head1 NAME
-
-JSON::PP - JSON::XS compatible pure-Perl module.
-
-=head1 SYNOPSIS
-
- use JSON::PP;
-
- # exported functions, they croak on error
- # and expect/generate UTF-8
-
- $utf8_encoded_json_text = encode_json $perl_hash_or_arrayref;
- $perl_hash_or_arrayref = decode_json $utf8_encoded_json_text;
-
- # OO-interface
-
- $json = JSON::PP->new->ascii->pretty->allow_nonref;
-
- $json_text = $json->encode( $perl_scalar );
- $perl_scalar = $json->decode( $json_text );
-
- $pretty_printed = $json->pretty->encode( $perl_scalar ); # pretty-printing
-
- # Note that JSON version 2.0 and above will automatically use
- # JSON::XS or JSON::PP, so you should be able to just:
-
- use JSON;
-
-
-=head1 VERSION
-
- 2.27200
-
-L<JSON::XS> 2.27 (~2.30) compatible.
-
-=head1 DESCRIPTION
-
-This module is a L<JSON::XS> compatible pure Perl module.
-(Perl 5.8 or later is recommended.)
-
-JSON::XS is the fastest and most capable JSON module on CPAN.
-It is written in C by Marc Lehmann, so it must be compiled and
-installed in the environment where it is used.
-
-JSON::PP is a pure-Perl module that is compatible with JSON::XS.
-
-
-=head2 FEATURES
-
-=over
-
-=item * correct unicode handling
-
-This module knows how to handle Unicode (depending on Perl version).
-
-See L<JSON::XS/A FEW NOTES ON UNICODE AND PERL> and
-L</UNICODE HANDLING ON PERLS>.
-
-
-=item * round-trip integrity
-
-When you serialise a Perl data structure using only data types
-supported by JSON and Perl, the deserialised data structure is
-identical on the Perl level. (e.g. the string "2.0" doesn't suddenly
-become "2" just because it looks like a number). There I<are> minor
-exceptions to this, read the MAPPING section below to learn about
-those. A small round-trip sketch is shown after this list.
-
-
-=item * strict checking of JSON correctness
-
-There is no guessing, no generating of illegal JSON texts by default,
-and only JSON is accepted as input by default (the latter is a
-security feature). But when some options are set, loose checking
-features are available.
-
-=back
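-
-As a minimal sketch of the round-trip behaviour described above (the data values
-here are made up for illustration):
-
- use JSON::PP;
- my $json = JSON::PP->new;
- my $out = $json->decode( $json->encode( { version => "2.0", count => 2 } ) );
- print $out->{version}; # still the string "2.0", not the number 2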
-
-=head1 FUNCTIONAL INTERFACE
-
-Some of this documentation is copied and modified from L<JSON::XS/FUNCTIONAL INTERFACE>.
-
-=head2 encode_json
-
- $json_text = encode_json $perl_scalar
-
-Converts the given Perl data structure to a UTF-8 encoded, binary string.
-
-This function call is functionally identical to:
-
- $json_text = JSON::PP->new->utf8->encode($perl_scalar)
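-
-A short usage sketch (the hash contents are made up for illustration):
-
- use JSON::PP; # exports encode_json and decode_json by default
- my $json_text = encode_json( { id => 1, tags => [ "a", "b" ] } );
- # $json_text is a UTF-8 encoded byte string, e.g. {"id":1,"tags":["a","b"]}
- # (hash key order is not guaranteed unless the canonical option is set)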
-
-=head2 decode_json
-
- $perl_scalar = decode_json $json_text
-
-The opposite of C<encode_json>: expects a UTF-8 (binary) string and tries
-to parse it as UTF-8 encoded JSON text, returning the resulting
-reference.
-
-This function call is functionally identical to:
-
- $perl_scalar = JSON::PP->new->utf8->decode($json_text)
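-
-For example, a minimal sketch:
-
- use JSON::PP;
- my $data = decode_json( '{"active":true,"ids":[1,2,3]}' );
- print $data->{ids}[0]; # 1
- print $data->{active} ? "yes" : "no"; # yes ($data->{active} is a JSON::PP::Boolean)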
-
-=head2 JSON::PP::is_bool
-
- $is_boolean = JSON::PP::is_bool($scalar)
-
-Returns true if the passed scalar represents either JSON::PP::true or
-JSON::PP::false, two constants that act like C<1> and C<0> respectively
-and are also used to represent JSON C<true> and C<false> values in Perl.
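-
-A minimal sketch:
-
- use JSON::PP;
- my $decoded = decode_json( '{"ok":true,"n":1}' );
- print JSON::PP::is_bool( $decoded->{ok} ) ? "boolean" : "plain"; # boolean
- print JSON::PP::is_bool( $decoded->{n} )  ? "boolean" : "plain"; # plain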
-
-=head2 JSON::PP::true
-
-Returns the JSON true value, which is a blessed object.
-It is a C<JSON::PP::Boolean> object.
-
-=head2 JSON::PP::false
-
-Returns the JSON false value, which is a blessed object.
-It is a C<JSON::PP::Boolean> object.
-
-=head2 JSON::PP::null
-
-Returns C<undef>.
-
-See L<MAPPING>, below, for more information on how JSON values are mapped to
-Perl.
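-
-A small encoding sketch using these helpers:
-
- use JSON::PP;
- my $json = JSON::PP->new->canonical; # canonical gives a stable key order
- print $json->encode( { yes => JSON::PP::true, no => JSON::PP::false, nothing => undef } );
- # {"no":false,"nothing":null,"yes":true}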
-
-
-=head1 HOW DO I DECODE EXTERNAL DATA AND ENCODE DATA FOR THE OUTSIDE WORLD
-
-This section assumes that your Perl version is 5.8 or later.
-
-If you know that a JSON text from the outside world - a network, a file, and so on -
-is encoded in UTF-8, you should use C<decode_json> or a C<JSON> module object
-with C<utf8> enabled. The decoded result will then contain Unicode characters.
-
- # from network
- my $json = JSON::PP->new->utf8;
- my $json_text = CGI->new->param( 'json_data' );
- my $perl_scalar = $json->decode( $json_text );
-
- # from file content
- local $/;
- open( my $fh, '<', 'json.data' );
- $json_text = <$fh>;
- $perl_scalar = decode_json( $json_text );
-
-If the external data is not encoded in UTF-8, you should first C<decode> it.
-
- use Encode;
- local $/;
- open( my $fh, '<', 'json.data' );
- my $encoding = 'cp932';
- my $unicode_json_text = decode( $encoding, <$fh> ); # UNICODE
-
- # or you can write the below code.
- #
- # open( my $fh, "<:encoding($encoding)", 'json.data' );
- # $unicode_json_text = <$fh>;
-
-In this case, C<$unicode_json_text> is of course a Unicode string.
-So you B<cannot> use C<decode_json> nor a C<JSON> module object with C<utf8> enabled.
-Instead, use a C<JSON> module object with C<utf8> disabled.
-
- $perl_scalar = $json->utf8(0)->decode( $unicode_json_text );
-
-Or use C<encode 'utf8'> and C<decode_json>:
-
- $perl_scalar = decode_json( encode( 'utf8', $unicode_json_text ) );
- # this way is not efficient.
-
-And now, you want to convert your C<$perl_scalar> into JSON data and
-send it to the outside world - a network, a file, and so on.
-
-If your data contains Unicode strings and you want the converted data to be encoded
-in UTF-8, you should use C<encode_json> or a C<JSON> module object with C<utf8> enabled.
-
- print encode_json( $perl_scalar ); # to a network? file? or display?
- # or
- print $json->utf8->encode( $perl_scalar );
-
-If C<$perl_scalar> does not contain Unicode but C<$encoding>-encoded strings
-for some reason, then its characters are regarded as B<latin1> by Perl
-(because Perl knows nothing about your $encoding).
-You B<cannot> use C<encode_json> nor a C<JSON> module object with C<utf8> enabled.
-Instead, use a C<JSON> module object with C<utf8> disabled.
-Note that the resulting text is a Unicode string, but printing it is not a problem.
-
- # $perl_scalar contains $encoding encoded string values
- $unicode_json_text = $json->utf8(0)->encode( $perl_scalar );
- # $unicode_json_text consists of characters less than 0x100
- print $unicode_json_text;
-
-Or C<decode> each string value with C<$encoding> and then C<encode_json>:
-
- $perl_scalar->{ foo } = decode( $encoding, $perl_scalar->{ foo } );
- # ... do this for each string value, then encode_json
- $json_text = encode_json( $perl_scalar );
-
-This approach is correct, but probably not efficient.
-
-See L<Encode> and L<perluniintro>.
-
-
-=head1 METHODS
-
-Basically, see L<JSON> or L<JSON::XS>.